Mr. Torvalds is truly a generous man. Crediting the current AI market with 10% usefulness is probably a decimal place or two more than will end up panning out once the hype bubble pops.
Decided to say something popular after his snafu, I see.
"AI bad" gets them every time.
Copilot by Microsoft is completely and utterly shit but they’re already putting it into new PCs. Why?
Investors are saying they'll back out if there's no AI in the products. So tech leaders will talk the talk and all dabble in AI.
Copilot+ PCs, though…
Yup.
I don’t know why. The people marketing it have absolutely no understanding of what they’re selling.
Best part is that I get paid if it works as they expect it to, and I get paid if I have to decommission or replace it. I'm not the one developing the AI that they're wasting money on; they just demanded I use it.
That’s true software engineering folks. Decoupling doesn’t just make it easier to program and reuse, it saves your job when you need to retire something later too.
Their goal isn’t to make AI.
The goal of both the VCs and the startups is to make money. That’s why.
It's not even to make money; they already do that. They need GROWTH. More money this quarter than last, or the stockholders don't get paid.
Growth doesn’t mean revenue over cost anymore, it just means number go up. The easiest way to create growth from nothing is marketing tulips to venture capital and retail investors.
The people marketing it have absolutely no understanding of what they’re selling.
Has it ever been any different? Like, I’m not in tech, I build signs for a living, and the people selling our signs have no idea what they’re selling.
The worrying part is the implications of what they’re claiming to sell. They’re selling an imagined future in which there exists a class of sapient beings with no legal rights that corporations can freely enslave. How far that is from the reality of the tech doesn’t matter, it’s absolutely horrifying that this is something the ruling class wants enough to invest billions of dollars just for the chance of fantasizing about it.
Just like Furbys
No, AI is a very real thing… just not LLMs; those are pure marketing.
The latest LLMs get a perfect score on the South Korean SAT and can pass the bar. More than pure marketing, if you ask me. That's not to say 90% of businesses claiming AI aren't just marketing, or pretty much just a front end for GPT APIs. But LLMs like Claude even check their own work for hallucinations. Even if we limited all AI to LLMs, they would still be groundbreaking.
The Korean SAT is highly standardized in multiple-choice form, and there is an immense library of past exams that both test takers and examiners use. I would be more impressed if the LLMs could also show step-by-step working…
Claude 3.5 and o1 might be able to do that; if not, they're close. Still better than 99.99% of humans on Earth.
You seem to be in the camp of believing the hype. See this write-up of an Apple paper detailing how adding simple statements that should not impact the answer to a question severely disrupts many of the top models' abilities.
In Bloom's taxonomy of the six stages of higher-level thinking, I would say they reach the second stage, "understanding", only in a small number of contexts, but we give them so much credit because, as a society, our supposed intelligence tests for people have always been more like memory tests.
Exactly… People are conflating the ability to parrot an answer based on machine-level recall (which is frankly impressive) with the machine actually understanding something and being able to articulate how it arrived at a conclusion (which, in programming circles, would be similar to a form of "introspection"). LLMs are not there yet.
It's basically like how the self-improvement folks use "quantum".
And that 10% isn't really real; it's just a gabbier Dr. Sbaitso.
Idk man, my doctors seem pretty fucking impressed with AI's capabilities to make diagnoses by analyzing images like MRIs.
Then you are a fortunate rarity. Most posts about the tech complain about AI just rearranging what it is told and regurgitating it with added spice.
I think that is because most people are only aware of its use as what are, effectively, chat bots. Which, while the most widely used application, is one of its least useful. Medical image analysis is one of the big places it is making strides in. I am told, by a friend in aerospace, that it is showing massive potential for a variety of engineering uses. His firm has been working on using it to design, or modify, things like hulls, air frames, etc. Industrial uses, such as these, are showing a lot of promise, it seems.
That's good. It'd be nice if all the current AI developers would aim that way.
He is correct. It is mostly people cashing out on stuff that isn’t there.
Like with any new technology. Remember the blockchain hype a few years back? Give it a few years and we will have a handful of areas where it makes sense and the rest of the hype will die off.
Everyone sane probably realizes this. No one knows for sure exactly where it will succeed so a lot of money and time is being spent on a 10% chance for a huge payout in case they guessed right.
It has some application in technical writing, data transformation and querying/summarization but it is definitely being oversold.
There’s an area where blockchain makes sense!?!
Cryptocurrencies can be useful as currencies. Not very useful as investment though.
Git is a sort of proto-blockchain – well, it’s a ledger anyway. It is fairly useful. (Fucking opaque compared to subversion or other centralized systems that didn’t have the ledger, but I digress…)
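The "ledger" resemblance comes from the fact that every git commit includes its parent's hash, so history forms a hash chain, much like blocks do. A toy sketch of the idea (this is not git's actual object format):

```python
import hashlib

# Toy hash chain illustrating the git/blockchain resemblance: each
# entry's ID commits to its parent's ID, so rewriting any earlier
# message changes every later ID. (Real git hashes a structured
# commit object with tree, author, and timestamp fields.)
def entry_id(parent_id, message):
    return hashlib.sha1(f"{parent_id}\n{message}".encode()).hexdigest()

head = "0" * 40  # root entry has no parent
for msg in ["initial commit", "fix bug", "add feature"]:
    head = entry_id(head, msg)
    print(msg, "->", head[:8])
```

Tamper with "initial commit" and the final `head` no longer matches, which is exactly the integrity property both git and blockchains rely on.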
Yep, I know, AI should die someday.
I had a professor in college that said when an AI problem is solved, it is no longer AI.
Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.
I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
There's a name for the phenomenon: the AI effect.
LLMs without some sort of symbolic reasoning layer aren't actually able to hold a model of their context and its relationships. They predict the next token, but fall apart when you change the numbers in a problem or add some negation to the prompt.
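As a loose illustration of what "predict the next token" means, here is a toy bigram model (nothing like a real transformer; the tiny corpus is made up for the example). It only replays surface statistics, which is why a token it never saw leaves it with nothing sensible to say:

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which in a tiny
# corpus, then always emit the most frequent successor. There is no
# model of meaning here, only memorized co-occurrence.
corpus = "two plus two is four . two plus three is five .".split()

successors = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    successors[cur][nxt] += 1

def predict(token):
    counts = successors.get(token)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict("plus"))   # a successor it saw during "training"
print(predict("minus"))  # never seen, so it has no basis to answer
```

Real LLMs generalize far better than this, of course, but the failure mode the Apple paper probes is the same in spirit: the prediction is driven by seen patterns, not by an internal model of the problem.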
Awesome for protein research, summarization, speech recognition, speech generation, deep fakes, spam creation, RAG document summary, brainstorming, content classification, etc. I don’t even think we’ve found all the patterns they’d be great at predicting.
There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That's not from me; that's Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271.
I’ve often thought LLMs could replace all of the C-suites and upper and middle management.
Funny how no companies push that as a possibility.
I almost expect that we’ll see some company reveal it has been letting an AI control the top level decision making for the business itself, including if and when to reveal the AI.
But the funny thing will be that all the executives and board members still have jobs and huge stock awards. They will all pat each other on the back for getting paid more money to do less work, by being bold and taking a risk to let the computer do half their job for them.
What happened to Linus? He looks so old now…
If you find out what happened, let me know, because I think it’s happening to me too.
Time
He got old.
I guess having 3 kids will do that to you.
That, and developing software for 30+ years.
That and leading an open source project for 30 years.
THE open source project.
Whether you’re leading a project or not, time will have pretty much the same impact. He’s in his mid-50s, and he looks pretty good for that age.
I mean he’s aging quite well given his position… Many people burn out way earlier.
Not especially old, though; he looks like a 54yo dev. Reminds me of my uncles when they were 54yo devs.
As a 46 year old dev I’m starting to look that way too.
[citation needed]/s
I told him not to go to that beach.
What folks are seeing is an excessive amount of aging, not just that he's old.
He’s lost a lot of weight in 4 years so that’s probably exacerbating the wtf.
He’s 54, I think he looks pretty average for that age. He looks like an old dad, because he is.
He has a real Michael McKean vibe
Wow, yeah that’s a big difference from how I remember him
It’s like he aged 10 years in the past 2 years… damn
People age. You don’t look the same as in 2010 either, I know that without having any idea what you look like.
he aged
Source?
What happened to he is happening now to you.
Oxidative stress is a bitch
He’s 54 years old
So basically just like Linux. Except Linux has no marketing… So 10% reality, and 90% uhhhhhhhhhh…
That says more about your ignorance than anything about AI or Linux.
What
Some Linux bad Windows good troll
Did I fall into a 1999 Slashdot comment section somehow?
Never heard of Android I guess?
90% angry nerds fighting each other over what answer is “right”
You’re aware Linux basically runs the Internet, right?
You're aware Linux basically runs the ~~Internet~~ World, right? Billions of devices run Linux. It is an amazing feat!
So basically just like Linux. Except Linux has no marketing
Except for the most popular OS on the Internet, of course.
I am thinking of deploying a RAG system to ingest all of Linus’s emails, commit messages and pull request comments, and we will have a Linus chatbot.
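A minimal sketch of the retrieval half of such a system, assuming the emails have already been exported to plain text. The sample messages and the scoring are purely illustrative; a real RAG pipeline would use embeddings and a vector store, then stuff the retrieved text into an LLM's prompt:

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for the ingested emails,
# commit messages, and pull request comments.
DOCUMENTS = [
    "Please don't use BUG_ON for things that can legitimately happen.",
    "Merge branch 'sched/core': scheduler fixes for the merge window.",
    "Your pull request breaks userspace. We do not break userspace.",
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def retrieve(query, docs, k=1):
    """Return the k documents sharing the most tokens with the query
    (a crude stand-in for embedding similarity)."""
    q = Counter(tokenize(query))
    def score(doc):
        d = Counter(tokenize(doc))
        return sum(min(q[t], d[t]) for t in q)
    return sorted(docs, key=score, reverse=True)[:k]

# The retrieved snippet would be prepended to the chatbot's prompt
# so the model can answer "in Linus's voice" with real context.
print(retrieve("why did my pull request break userspace", DOCUMENTS))
```

Whether the resulting chatbot would review your patches or just flame you is left as an exercise.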
Hold on there Satan… let’s be reasonable here.
In a way he’s right, but it depends! If you take even a common example like Chat GPT or the native object detection used in iPhone cameras, you’d see that there’s a lot of cool stuff already enabled by our current way of building these tools. The limitation right now, I think, is reacting to new information or scenarios which a model isn’t trained on, which is where all the current systems break. Humans do well in new scenarios based on their cognitive flexibility, and at least I am unaware of a good framework for instilling cognitive flexibility in machines.