

Which is why my keyword-based list of content filters grows by the day. Trying to block the noise feels like a full-time job sometimes.


Hallucinations are a GenAI-specific problem - not something that applies to AI systems broadly. The people who worry about AI takeover are talking about Artificial General Intelligence (AGI), which isn’t the same thing as GenAI systems like LLMs. AGI is defined as being at least as intelligent as a human. If it’s not, then by definition it’s not AGI.
The reason people worry about AGI is that intelligence is what makes us the most powerful species on the planet. The moment something more intelligent shows up, we can’t outsmart it. It’s like stepping into an elevator with someone way stronger than you - whether you survive the ride depends on them, not you.


Much of the outrage culture online is rather foreign to me. It’s not necessarily that I don’t “get” it - I simply can’t relate to the people who engage in it. Writing angry messages about certain people and events, having people pat me on the back for sharing views they agree with, and reading other people repeat those same views just seems like some kind of fart-smelling gathering that doesn’t appeal to me. It’s all just meaningless noise. That kind of “conversation” could go on forever without ever achieving anything.


A 300k mortgage, a car payment, two kids, a dog, a job they hate, a week-long vacation abroad once a year, etc.


At some point I realized that when I look at the life of an average person, it’s not something I want for myself. So I probably shouldn’t model my life after theirs and then expect different results.


I don’t see any reason to assume that, given enough time, our VR and AI systems wouldn’t get advanced enough that a VR world becomes indistinguishable from reality - and the AI avatars in it impossible to tell apart from real humans. Hell, they could even be conscious. While I don’t think we’re there yet, it’s still conceivable that you could be living in exactly that kind of simulation right now and have no idea. We’re already completely fooled by imaginary worlds every single night when we fall asleep.


GenAI doesn’t generate anything on its own either - it too needs a human’s intentional effort, however minuscule, at its foundation. I don’t think the “effort” argument holds up here anyway. People happily accept as art a photograph that took me 30 minutes to capture and edit, but they reject a GenAI piece I spent 3 hours tweaking until I got exactly what I wanted.


I don’t need to tell an AI to scout the location, travel there, wait for optimal lighting, nail the composition, dial in the settings, etc. I don’t need to tell a sculptor to do that either - it’s a completely different artistic field. Nobody here is claiming AI-generated pictures are photography - they’re not. Photography is done with a camera. The discussion is whether generating pictures with AI counts as art or not - not whether it’s photography.
I’m using photography as the example because people dismiss AI art on the grounds that “it doesn’t require any skill or effort,” but the exact same argument has been thrown at photography forever. There was a time when purists said the same thing about digital photography, and before that about film photography back when it was new and painting was still “the” way to make pictures.


A camera can’t comprehend art either - it’s just a tool a human uses to create it. AI doesn’t generate anything on its own either; it needs a human to operate it too. The camera isn’t the artist, Photoshop isn’t, a canvas and brush aren’t, Illustrator isn’t. They’re just tools. The artist is the human behind them.


The discussion is whether it is art or not. It doesn’t matter how bad someone is at it - people still accept it as art. You’d be a massive dick telling a beginner that their photography is so terrible it doesn’t even qualify as art. You can also take a great picture completely by accident, just like you can put a ton of effort into one and still end up with garbage.


“Nobody looks at AI products and goes, wow, this is art.”
I’ve come across plenty of AI pieces that I genuinely like.


As a hobbyist photographer, I find it pretty amusing that when I point a device at a target and press a button, it counts as art - but when I spend 3 hours tweaking a prompt to get exactly the image I want, suddenly it doesn’t. Seems way more like an ideological stance than a logical one.


I disagree with the premise and find it borderline offensive to call Lego builds “minimal value”.


I have no issue with an online service knowing my age, as long as that’s all they know and will ever know about me.


I personally think the issue comes up when people say things behind each other’s backs that they wouldn’t dare say to their face. At my previous workplace there were a few people who always talked shit about our boss when he wasn’t around, but the second he showed up they’d act like everything was fine and they were best buddies.
The problem isn’t whether the criticism was valid - it’s that they showed me I can’t trust them to be genuine around me. They thought they were damaging our boss’s reputation, but it was their own reputation that took the biggest hit.
When someone starts to gossip, they’re basically letting you know that they’re the kind of person you shouldn’t share any sensitive personal information with. I never quite figured out how these people can be so oblivious to this, though. If someone talks shit about other people to me, I assume they talk shit about me to other people as well.


People. I do find things like group psychology interesting, but discussing individuals is mind-numbingly uninteresting - especially celebrity gossip, though political figures are a close second.


They are not willing to let their current models (Claude) be used in fully autonomous weapons right now, because they believe today’s frontier AI is still too unreliable and prone to errors. They explicitly say they “will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
However, they have offered to work directly with the Department of Defense on R&D to improve the reliability of autonomous weapons technology in general (with the two safeguards they requested in place) - so that in the future these systems might become safe and trustworthy enough to use.
They’re not ideologically against autonomous weapons systems. They’re against ones that run on our current AI models.


That’s your interpretation - not a direct quote.
I assume what you’re suggesting is that we can always “pull the plug” or smash the computer if it gets too smart and starts making threats.
While technically true, I think it both assumes and overlooks a lot. You might not be able to do that once the system has gotten internet access and potentially made thousands of copies of itself. A sufficiently intelligent system might pretend to be dumber than it really is while you still have it air-gapped in a lab - and even if not, we don’t really have the capacity to imagine just how convincing a true AGI could be. It could try to bribe you or make more terrifying threats than you can even think of.
There’s also this one example (for which I unfortunately can’t find the source) where a journalist questioned whether an AGI could truly escape like that. So they made a deal where the journalist acted as the AI scientist and the other person played the AGI. It didn’t take long until the journalist, per the rules of the game, posted on his social media that he had let the AGI escape. They even replayed the game, and he let it escape again. It was never revealed how this happened, but suffice it to say that if even a human can pull it off, it won’t be an issue for an AGI.