• 1 Post
  • 136 Comments
Joined 29 days ago
Cake day: February 5th, 2026

  • I assume what you’re suggesting is that we can always “pull the plug” or smash the computer if it gets too smart and starts making threats.

    While technically true, I think that assumes a lot and overlooks even more. You might not be able to do that once the system has gained internet access and potentially made thousands of copies of itself. A sufficiently intelligent system might pretend to be dumber than it really is while you still have it air-gapped in a lab - and even if it doesn’t, we don’t really have the capacity to imagine just how convincing a true AGI could be. It could try to bribe you, or make threats more terrifying than any you could think of yourself.

    There’s also this one example (for which I unfortunately can’t find the source) where a journalist doubted that an AGI could really talk its way out like that. So they made a deal: the journalist played the AI researcher, and the other person played the AGI trying to escape. It didn’t take long before the journalist, by the rules of the game, posted on his social media that he had let the AGI escape. They even replayed the game, and he let it escape again. How it happened was never revealed, but suffice it to say that if even a human can pull it off, it won’t be an obstacle for an AGI.

  • Much of the outrage culture online is rather foreign to me. It’s not necessarily that I don’t “get” it; I simply can’t relate to the people who engage in it. Writing angry messages about certain people and events, having others pat me on the back for sharing views they agree with, and reading other people repeat those same views just seems like some kind of fart-smelling gathering that doesn’t appeal to me. It’s all just meaningless noise. That kind of “conversation” could go on forever without ever achieving anything.

  • I don’t need to tell an AI to scout the location, travel there, wait for optimal lighting, nail the composition, dial in the settings, etc. I don’t need to tell a sculptor to do that either - it’s a completely different artistic field. Nobody here is claiming AI-generated pictures are photography - they’re not. Photography is done with a camera. The discussion is whether generating pictures with AI counts as art or not - not whether it’s photography.

    I’m using photography as the example because people dismiss AI art on the grounds that “it doesn’t require any skill or effort,” but the exact same argument has been thrown at photography forever. Purists once said the same thing about digital photography, and before that about film photography itself, back when it was new and painting was still “the” way to make pictures.

  • I personally think the issue comes up when people say things behind each other’s backs that they wouldn’t dare say to their face. In my previous workplace there were a few people who always talked shit about our boss when he wasn’t around, but the second he showed up they’d act like everything was fine and they were best buddies.

    The problem isn’t that the criticism wasn’t valid - it’s that they showed me I can’t trust them to be genuine around me. They thought they were damaging our boss’s reputation, but it was their own reputation that took the biggest hit.

  • They are not willing to let their current models (Claude) be used in fully autonomous weapons right now, because they believe today’s frontier AI is still too unreliable and prone to errors. They explicitly say they “will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

    However, they have offered to work directly with the Department of Defense on R&D to improve the reliability of autonomous weapons technology in general (with their two requested safeguards in place) - so that in the future these systems might become safe and trustworthy enough to use.

    They’re not ideologically against autonomous weapons systems. They’re against ones that run on today’s AI models.