• 3 Posts
  • 103 Comments
Joined 2 years ago
Cake day: June 18th, 2023



  • Not running any LLMs, but I do a lot of mathematical modelling, and my 32 GB M1 Pro MacBook is compiling code and crunching numbers like an absolute champ! After about a year, most of my colleagues had ditched their old laptops for a MacBook themselves, after noticing that my machine out-performed theirs and saved me a bunch of time day-to-day.

    Of course, be a bit careful when buying one: Apple cranks up the price like hell if you start speccing out the machine, especially for RAM.







  • I see you’ve chosen confidence over accuracy again

    This is honestly a great way of calling someone stupid, but you do realise that it can be very offensive to people with narcissistic personality disorder, right?

    Joke aside, what is really stupid about this is the idea of “insulting someone without hurting their feelings”, or as you wrote:

    insulting someone’s actions or reasoning can sometimes carry ableist implications if we’re not careful.

    When honestly insulting someone, there is typically an intent to be hurtful. The idea that you should be careful to “not use language that can offend X group” when doing so kind of overlooks the whole “insulting” part of the situation.



  • cmake comes to mind: I can find the docs for whatever function I want to use, but I honestly have such a hard time comprehending what they mean. It’s especially frustrating because I can tell that all the information is there, and it’s just me not being able to understand it. So I don’t want to ask others for help, because then I’m just bothering people with a problem that I’ve in principle already found the answer to; I’m just not able to apply that answer.

    Then again, I’ve heard plenty of other people complain that the cmake docs are hard to understand…





  • There’s a lot of good advice here already, especially that wool is the gold standard - nothing synthetic cuts it. I want to add that the absolute key is layering, not over-stuffing.

    What keeps you warm is primarily the air trapped between your layers, which means that three thin layers can be a lot better than one thick layer. This also means that you will be freezing if your layers are too tight. If you have two thin layers, and put on a sweater, and that sweater feels tight, that likely means you’re pushing out the air trapped in your inner layers, and they won’t be as effective. The same applies when putting on a jacket.

    So: You want a thin base layer (think light, thin wool shirt + long johns), then an optional medium layer or two (slightly thicker wool shirt, I have some in the range of 200 grams), and finally a thicker sweater for when you’re not moving. These should increase in size so that they can fit the thinner layers underneath, and you want your jacket big enough to fit all the underlying layers.

    Finally: When you’re moving around, you will get stupidly warm and sweaty unless you take off clothes. It’s better to take off some stuff and be a bit cold for the first 10 minutes of moving than to get sweaty and be cold for the rest of the day. If (when) you do get cold, running in a circle for 10 minutes will fix it. Run at a calm, steady pace; if you’re really cold it might take longer to get warm than you think, but you will get warm if you move.

    In short: Being in a cold climate is just as much about how you use your equipment, and how you activate yourself to stay warm, as it is about what equipment you have.



  • I have to be honest: while I think duck typing should be embraced, I have a hard time seeing how people actually manage large-scale pure Python projects, just because of the dynamic typing. To me, it makes reading code so much more difficult when I can’t just look at a function and immediately see the types involved.

    Because of this, I also have a small hangup with examples in some C++ libraries that use auto. Sure, I’m happy to use auto when writing code, but when reading an example I would very much like to immediately see what the return type of a function is. In general, I think the use of auto should be restricted to cases where it increases readability, and not used as a lazy way out of writing out the types; having the types written out is, I think, one of the benefits of C++ vs. Python in large projects.
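    To make the readability point concrete, here’s a toy Python sketch (the function and names are made up for illustration): without annotations the reader has to guess the types from call sites, while hints make them visible right at the definition, even though Python won’t enforce them.

```python
# Hypothetical example: the same function with and without type hints.

# Without annotations: is "record" a dict? What comes back?
def normalise(record):
    return {k.lower(): v for k, v in record.items()}

# With hints, the expected types are visible at the definition site,
# which is roughly what explicit C++ types give you for free when reading.
def normalise_hinted(record: dict[str, int]) -> dict[str, int]:
    return {k.lower(): v for k, v in record.items()}

print(normalise_hinted({"A": 1, "B": 2}))  # {'a': 1, 'b': 2}
```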


  • The number of people I’ve helped who have copied some code from somewhere and say “it doesn’t work”, and who are dumbfounded when I ask them to read the surrounding text aloud for me…

    Along the same lines: when something crashes, all I have to do is tell people to read the error message aloud and ask them what it means. It’s like so many people expect to be spoon-fed solutions, to the point where they don’t even stop to think about the problem if something doesn’t immediately work.


  • While I do agree with most of what is said here, I have a hangup on one of the points: thinking that “docstrings and variable names” are a trustworthy way to indicate types. Python is not a statically typed language, and never will be. You can have as much type hinting as you want, but you will never have a guarantee that a variable holds the type you think it does, short of checking the type at runtime. Also, code logic changes over time, and there is no guarantee that comments, docstrings and variable names will always be kept up to date.

    By all means, having good docstrings, variable names, and type hinting is important, but none of them should be treated as some kind of silver bullet that gets you around the fact that I can access __globals__ at any time and change any variable to whatever I want if I’m so inclined.

    This doesn’t have to be a bad thing though. I use both Python and C++ daily, and think that the proper way to use Python is to fully embrace duck typing. However, that also means my code should be written in such a way that it works as long as whatever input it receives conforms loosely to the type I’m expecting.
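    As a small illustration of both points (a hypothetical function, not from any real project): the annotation below is pure documentation, and the call succeeds as long as the argument quacks the right way.

```python
def mean(values: list[float]) -> float:
    """Annotated as list[float], but Python never checks this at runtime."""
    return sum(values) / len(values)

# A tuple of ints is not a list[float], yet this works fine:
# duck typing only requires that sum() and len() accept the input.
print(mean((1, 2, 3)))  # 2.0

# The flip side: nothing stops a caller from passing something that merely
# looks right until it fails, e.g. mean([]) raises ZeroDivisionError
# despite technically never violating any static check.
```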


  • Ok, I’ve done some double checking: The Bantu expansion is approximately what I thought it was. I believe the language group I was thinking about that survived the Bantu expansion was the Khoisan.

    My (very coarse) knowledge of this comes from a mixture of reading Jared Diamond (Guns, Germs and Steel) and following it up with some Wikipedia. In short: the genetic makeup in a lot of the world is relatively dominated by the groups that were the first to adopt agriculture in their respective regions. Before the Bantu expansion, phenotypes south of the Sahara were more varied, just like the phenotypes in the Americas were more varied before the corresponding “European expansion”, or the equivalent expansion that happened in South-East Asia (I don’t remember which society stood behind that one).

    According to Diamond, we can trace a lot of (most?) surviving human phenotypes and languages back to relatively few societies, which after adopting agriculture, more or less wiped out / displaced neighbouring cultures due to increased resistance to a lot of infectious diseases and massively increased food production / need for land. This mostly happened less than 10 000 years ago, i.e. far too recently for natural selection to have a major impact on things like skin colour, hair type, height, facial features, etc. afterwards.

    So: While major trends in phenotypes are of course a result of natural selection / evolutionary pressure in specific regions (resistance to skin cancer / sunburn vs. vitamin D production, or cooling down more efficiently with a wider nose vs. retaining heat with a slimmer one, or having an eye shape that lets in more light vs. provides more shade), a lot of what we see today is simply a result of what phenotype the first group in a given region to adopt agriculture happened to have. This means that looking at the dominant phenotype in a region today will not necessarily give a good impression of which phenotype is “optimally designed” to survive the conditions of that region.