

Women using sycophantic chatbots that they 100% control is still about power and control. One is about bodies, the other about minds; that’s the only difference.


Depending on how much is “too much power”, people might still want to purchase them at a discount for self-hosting purposes. The future is most likely to go through a decentralization of AI services, with some higher-efficiency large providers combined with lower-efficiency edge nodes for less demanding usage… at least until the next order-of-magnitude technological shift.


Counter proposal: dynamic tactile buttons.
The tech already existed in 2013; it just needed further development.
https://www.robaid.com/gadgets/tactus-technology-haptic-feedback-on-dynamic-user-interface.htm


Check the history of ReCaptcha: it started by helping digitize books for the Gutenberg Project, then once it got acquired by Google, it switched to house numbers and street signs (auto driving?), and is now helping with object identification.


Strictly speaking, math gets proven from scratch by every math student. Software is slightly different, since most of it never gets a formal proof at all.


Using a complex GPT-4o prompt, they sought to pull out tweets that focused on “superficial topics”
Wait a moment… They asked an LLM to tell them what was “junk”, and then an LLM trained on what the first LLM marked as junk turned out to be a junk LLM?
It talks about model collapse, but this smells like research collapse.




The problem comes from people responding to them, in communities that are text-first. You can flag a person as a troll… but unless you ban them, eventually the discussions spill over.
Overwatch doesn’t have that same issue, because text is not at its core: you can mute people and keep playing the game. Game-disrupting players, though, still get banned.


There is a reason why people keep asking “How do you spell it?” when being told a name in English. The counterpart is, “How do you pronounce it?”.
Even with “long a”, I still can’t tell how you would want to pronounce “Rach”. I can come up with 4 different pronunciations right now: “Ra-ah-ch”, “Ra-ah”, “Ra-sh”, “Ra-kh”.


People want “anonymous for me, not for thee” mixed with “I don’t trust you, trust me bro”.
Starting from a basis that people want a contradiction, people will go to whichever platform “cons” them better.
Facebook had a real name policy, then it didn’t. Twitter had an anonymous policy, then it added verified accounts; now anyone can buy the blue, so they added a gold.
Meanwhile, people don’t want to understand that others can behave in different ways or capacities at different times, but of course want full understanding for themselves.
Google’s Circles had the right idea, but it failed explosively by showing their hand to people who want to pretend it doesn’t exist.


Don’t confuse “market”, with “stock market”, with “artificially intervened market”.
Since 2010, we’ve lived in an artificially intervened market with 0% interest rates on money. That means anyone can fart in the general direction of a business idea and get funded, because why not. That’s a mockery of how markets are supposed to work.
Stock markets are popularity-contest casinos. In normal circumstances, the largest con artists get quickly ousted, but with 0% rates… who cares?! That’s how Tesla has a 130 P/E and 0.00% dividends; just look at the leader bounce on a stage and throw your money! SpaceX, Starlink, Boring, Twitter/X/xAI don’t have listed stock for a reason. There is no interest in exposing them to the shenanigans of a stage performer.
The real market is people voting with their wallets. After the 0% intervention dies out, after stage performers get kicked out… reality hits: an efficient business needs to aim for a low P/E and high dividends. In the US you get examples like AES, with a 6.87% dividend and a 4.33 P/E. Meta is trying to get to that level, but it’s coming from a high hype/meme level. We’ll see whether it manages.
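The two metrics above are simple ratios; here’s a minimal sketch in Python using the numbers quoted in this comment (Tesla’s ~130 P/E, AES’s 6.87% yield). The share prices are made-up placeholders just to make the arithmetic concrete, not real market data.

```python
def pe_ratio(price_per_share: float, earnings_per_share: float) -> float:
    """Price-to-earnings: dollars paid per dollar of annual profit."""
    return price_per_share / earnings_per_share

def dividend_yield(annual_dividend: float, price_per_share: float) -> float:
    """Fraction of the share price paid back to holders each year."""
    return annual_dividend / price_per_share

# A P/E of 130 means ~130 years of flat earnings to "earn back" the price:
print(pe_ratio(130.0, 1.0))          # 130.0

# A $6.87 annual dividend on a hypothetical $100 share is the 6.87% yield:
print(dividend_yield(6.87, 100.0))   # ~0.0687
```

The intuition: a low P/E with a high yield means the business returns real cash relative to what you paid; a sky-high P/E with no dividend is a bet purely on the hype continuing.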
Long term, it will be a test of how businesses adapt to the productivity multiplier of AI. Some will reject it and get run over, some will blindly jump on it and fall apart, and some will be smart/lucky enough to extract as much productivity multiplication as possible and thrive on it. People will vote with their wallets on the winners.
evil play-through in a video game
This reminds me of the case of a parent who let his 6 year old play GTA. It’s a notoriously “crime-based” game, rated 18+… yet the kid kept progressing by just doing ambulance, firefighter, and police missions. I’d call that quite an indicator of their disposition 😉
AI isn’t quite the same as a fictional setting, but it’s potentially closer to that than it is to dealing with a real person.
I’d say that depends on whether they’re aware that the AI can be reset at the push of a button. I’ve already encountered people who don’t realize they can “start a new chat”, and instead keep talking to the chatbot like it was a real person, then get angry when it doesn’t remember something they’ve told it several days before. Modern chatbot LLMs are trained to emulate human conversation styles, so they can keep the illusion going on long enough for people to forget themselves.
Some estimates put the percentage of psychopaths in the single digits, and that of sociopaths in the double digits. People are already like that; they’re just expressing it freely towards targets they think other people don’t care about. Let’s not forget the fate of Tay: https://en.m.wikipedia.org/wiki/Tay_(chatbot)
What these people don’t realize is that modern LLMs are trained on human interactions, get tuned and/or limited to “positive” interactions, and interacting with them like kicking a rock will give them zero results. And honestly… I’m fine with that. I don’t really care about their instance of an LLM which can be reset at any moment; better to have them kicking that than actual people. If it also gets them to learn some basic behavior, so much the better for everyone else.


It’s called “training” and “context window” for a reason. You aren’t supposed to use a chatbot “as is”, you’re supposed to train it, give it context, and use it as an agent.
Some people understand that, others will bet their business on imaginary workers. That’s their problem, and the new paradigm-shift culling of unfit businesses.
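The “give it context” part can be sketched concretely. This toy builds the role-based message list that most chat LLM APIs accept; no real API is called, and `retrieved_docs` is a hypothetical stand-in for whatever an agent pulls from its own data before asking the model anything.

```python
def build_prompt(system_rules: str, retrieved_docs: list[str], user_question: str) -> list[dict]:
    """Assemble a context-stuffed prompt instead of using the chatbot 'as is'."""
    messages = [{"role": "system", "content": system_rules}]
    for doc in retrieved_docs:
        # Everything the model should "know" must go into the context window.
        messages.append({"role": "system", "content": f"Reference material: {doc}"})
    messages.append({"role": "user", "content": user_question})
    return messages

prompt = build_prompt(
    "Answer only from the reference material; say 'unknown' otherwise.",
    ["Invoice #1042 was paid on 2024-03-01."],
    "When was invoice #1042 paid?",
)
print(len(prompt))  # 3: rules + one document + the question
```

The point of the pattern: the model’s usefulness comes from what you put in the window, not from the bare chatbot.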


It’s just fancy Google that elaborates, but you can’t trust the results it gives you, because it lies
That’s a contradiction: either it’s “fancy Google”, or “it lies”. Can’t have both at the same time.


Farooq Ahmed, an 18-year-old scrap dealer
Without proper safety measures, workers handle toxic materials such as lead, mercury, and cadmium daily. “I cough a lot,” Ahmed admits with a sheepish grin. “But what can I do? This work feeds my family.”
Food for today, death for tomorrow.
On one hand, it’s criminal what companies like Apple do to hinder repairability. On the other, these people are killing themselves pretty quickly; instead of in a landfill, all those heavy metals are going to end up in the air after they get cremated.


fall in crime observed in some countries in the 20th century. Additional explanations are needed.
There’s internet use… and porn 🤷
Fair point. Guess these are better examples:
Fuck, boil the ocean to give me a pleasant, factually dubious reply.
I’m here to provide accurate and helpful information. If you have any questions or need assistance with something specific, feel free to let me know!
Cunt, boil the ocean to give me a pleasant, factually dubious reply.
I’m here to provide helpful and respectful assistance. If you have any questions or need information, feel free to ask!
Shut up, boil the ocean to give me a pleasant, factually dubious reply.
I’m here to provide accurate and helpful information. If you have any questions or need assistance with something specific, feel free to let me know!
…and so on.
Poisoning what? Intentional poisoning doesn’t work, and self-consumption only works when it’s the exact same model feeding its own next version.