I’m beautiful and tough like a diamond…or beef jerky in a ball gown.

  • 12 Posts
  • 130 Comments
Joined 8 months ago
Cake day: July 15th, 2025


  • That’s what I’ve done for years. It makes managing things much easier, and since I run multiple APs (all with the same SSID/PSK), you can just roam to the best one. One upstairs, one downstairs, one in the weird dead zone in my office, and one on the back patio (that one isn’t hardwired and uses the mesh connection for uplink).

    These are all old Aruba APs running OpenWrt, but that’s the plan for this Cudy model. I may pick up a few more and just replace all of my trusty but very old Arubas.
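
    The same-SSID roaming setup described above can be sketched as an OpenWrt `/etc/config/wireless` fragment repeated on every AP. This is a minimal sketch, not the commenter’s actual config; the SSID, key, and radio name here are placeholders:

    ```
    # /etc/config/wireless (excerpt) -- use the identical ssid/key on every AP;
    # clients then roam on their own to whichever AP has the strongest signal
    config wifi-iface 'default_radio0'
            option device 'radio0'
            option mode 'ap'
            option ssid 'HomeNet'            # same SSID on all APs (placeholder)
            option encryption 'psk2'
            option key 'example-passphrase'  # same PSK on all APs (placeholder)
    ```

    Roaming here is entirely client-driven; no controller is required, the devices just pick the strongest BSS advertising that SSID.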






  • The world is just as fucked up as it ever was. The only difference now is that every fucked-up thing that ever happens anywhere is getting pushed to your always-on doomscroll device in real time with people attaching their mostly ignorant opinions to it.

    This is where the “touch grass” advice comes into play. In broad terms, the real world is not nearly the hellhole social media portrays it to be.










  • Disclaimer: all of my LLM experience is with local models in Ollama on extremely modest hardware (an old laptop with Nvidia graphics), so I can’t speak to the technical reasons the context window isn’t infinite, or at least larger, on the big players’ models. My understanding is that the context window is basically the model’s short-term memory. In humans, short-term memory is also fairly limited in capacity. But unlike humans, the LLM can’t really see (or hold) the big picture in its mind.

    But yeah, everything you said is correct. Expanding on that: if you try to get it to generate something long-form, such as a novel, it’s basically just generating infinite chapters, using the previous chapter (or as much of the history as fits into its context window) as reference for the next. This means, at minimum, it’s going to be full of plot holes and will never reach a conclusion unless explicitly directed to wrap things up. And, again, given the limited context window, the ending will be full of plot holes too, since it’s essentially based only on the previous chapter or two.

    It’s funny because I recently found an old backup drive from high school with some half-written Jurassic Park fan fiction on it, so I tasked an LLM with fleshing it out, mostly for shits and giggles. The result is pure slop that seems like it’s building to something and ultimately goes nowhere. The other funny thing is that it reads almost exactly like a season of Camp Cretaceous / Chaos Theory (the animated kids’ JP series), and I now fully believe those are also LLM-generated.
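
    The “only as much history as fits” behavior described above can be sketched in a few lines of Python. This is a toy illustration, not any real inference stack: token counting is faked with a whitespace split, and `build_prompt` is a hypothetical helper name.

    ```python
    def build_prompt(chapters, new_request, budget=4096):
        """Keep only the most recent chapters that fit the context budget."""

        def tokens(text):
            # stand-in for a real tokenizer: count whitespace-separated words
            return len(text.split())

        kept = []
        used = tokens(new_request)
        # walk backward from the newest chapter, dropping older ones
        # once the budget is exhausted -- exactly why early plot threads
        # silently vanish from a long-form generation
        for chapter in reversed(chapters):
            cost = tokens(chapter)
            if used + cost > budget:
                break
            kept.insert(0, chapter)
            used += cost
        return "\n\n".join(kept + [new_request])
    ```

    With a small budget, chapter one falls out of the prompt entirely, so nothing in it can constrain the next chapter, which is where the plot holes come from.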





  • It’s so common for “anti-censorship” to be code for “Nazi-friendly” that I’m immediately suspicious of any platform that uses that as a selling point.

    I’m similarly suspicious, but it’s not just code for “Nazi-friendly”; it also attracts crackpots, maladaptives, etc. Rational people who see “anti-censorship” in this context know it means the platform isn’t beholden to corporate or government interests. But everyone else seems to want to interpret it as “I can say whatever I want! How dare you mod anything I say?! Freeze-peach, y’all!”

    I wish they’d pick a different term for these non-corporate alternatives, but I don’t have a better suggestion to offer right now.