This may be an unpopular opinion… Let me get this straight. We get big tech corporations to read the articles on the web and then summarize, for me the user, the info I'm looking for. Sounds cool, right? Yeah, except why in the everloving duck would I trust Google, Microsoft, Apple or Meta to give me correct info, unbiased and uncurated? Past experience shows they will not do the right thing. So why is everyone so OK with what's going on? I just heard that Google may intend to remove sources. Great, so it's basically "trust me bro."
LLMs are just autocomplete on steroids. If they say it's more than that, they're lying.
If you want uncensored info, run a local model. But most people don't care, or don't even know that's possible. That's just how most people are with tech.
LLMs are just autocomplete on steroids.
Funny you should say this. I only have anecdotal evidence from me and a few friends, but the general consensus is that autocomplete and predictive text are much worse now than they used to be.
Because of AI stuff. For these kinds of features, companies are perfectly happy to advertise unprecedented 99% accuracy rates, when in reality non-AI tools are held to a much higher standard (mainly, that they are expected to work). If code I wrote had a consistent, perpetual 1% failure rate (even after I fixed it, multiple times), I'd have been fired long ago.
If anyone wants a great source on exactly how ChatGPT is essentially autocomplete on steroids, Stephen Wolfram did a great write-up. It's pretty technical. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
I use LLMs for two things, mainly. First, to help with small coding tasks that are tedious, or when I just need something to bounce ideas off of (hobbyist coder). Also for asking questions that Google and the like can't answer, like "if the unit of measure is toothpicks, how far is it from the Earth to the Moon?" Stuff like that, or ballpark approximations of things.
How are you sure of the correctness of the model’s answers? If I tell you the moon is 69.420 toothpicks away from earth, are you going to believe me?
Sure maybe it’s wrong, but seems close enough to me.
The distance from Earth to the Moon is approximately 384,400 kilometers, which is about 9,760,000,000 toothpicks laid end to end.
You’re right about Google being trash at answering that.
It just completely ignores the question.
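For what it's worth, that toothpick conversion is easy to sanity-check yourself instead of trusting either a search engine or a chatbot. A quick sketch in Python, assuming a standard round toothpick is about 6.5 cm long (the toothpick length is my assumption, not from the thread):

```python
# Sanity-check the Earth-to-Moon distance measured in toothpicks.
EARTH_MOON_KM = 384_400      # average Earth-Moon distance, per the quote above
TOOTHPICK_CM = 6.5           # assumed length of a standard round toothpick

distance_cm = EARTH_MOON_KM * 100_000   # 1 km = 100,000 cm
toothpicks = distance_cm / TOOTHPICK_CM

print(f"{toothpicks:,.0f} toothpicks")  # roughly 5.9 billion
```

With that assumed toothpick length the answer lands near 5.9 billion, noticeably under the 9.76 billion quoted above, which is exactly why it pays to check the model's arithmetic rather than take it on faith.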
I don’t deny the usefulness aspect of AI. I used it recently to increase the resolution of a video. It’s awesome. But when it’s used to replace info search, art, music… Just why?
I like it for art. I enjoy making wallpapers for my phone, or logos. I have a side business that I'll want a logo for at some point. It makes way more sense to get it close with AI, then hand it to an artist to tweak and add the final touches, than to go through all the back and forth and expense of a logo company.
The only thing I have found actually useful about them is that I can play tabletop RPGs by myself, and it's functionally the same as playing with real people. Right down to arguing over the interpretation of the rules.
I’ve had this argument with friends a lot recently.
Them: it’s so cool that I can just ask chatgpt to summarise something and I can get a concise answer rather than googling a lot for the same thing.
Me: But it gets things wrong all the time.
Them: Oh I know so I Google it anyway.
Doesn’t make sense to me.
People like AI because searches are full of SEO spam listicles. Eventually they will make LLMs as ad-riddled as everything else.
My specific point here was about how this friend doesn’t trust the results AND still goes to Google/others to verify, so he’s effectively doubled his workload for every search.
Then why not use an ad-blocker? It’s not wise to think you’re getting the right information when you can’t verify the sources. Like I said, at least for me, the trust me bro aspect doesn’t cut it.
Ad blockers won’t cut out SEO garbage.
And the AI will? It will use all websites to give you the info. It doesn’t think, it spins.
I didn’t say that it will, just saying that ad blockers won’t block it out.
This is why I do a lot of my Internet searches with perplexity.ai now. It tells me exactly what it searched to get the answer, and provides inline citations as well as a list of its sources at the end. I’ve never used it for anything in depth, but in my experience, the answer it gives me is typically consistent with the sources it cites.
We also get things wrong all the time. Would you double-check info you got from a friend or coworker? Perhaps you should.
I know how my friends and coworkers are likely to think. An LLM is far less predictable.
Agreed. Show me your sources, I don’t trust your executive summary.