My understanding is that it’s a difficult feature to support and they can’t guarantee it works well. That’s the only explanation I’ve ever seen, because to me it’s almost critical for working on a laptop.
I don’t get why hibernate isn’t a more popular feature. I use it extensively, as I hate having to set everything back up after each restart.
It’s also one of my biggest issues with using Linux, as it’s usually broken there.
Fair point then about the argument around safety. For me the bigger issue is control. Cars with kill switches and conditions on use are a slippery slope. Just look at what’s happened with software and media. I don’t want to have to pirate my car or load custom firmware so I can use it the way I want.
I don’t think there is a car where the seat belt is tied to anything besides a little notification beep. Seems like a different situation if the “safety” feature dictates how the car is used.
Yeah that’s right, seems my link didn’t populate right.
Do you still use WASM? I’ve been exploring the space and wasn’t sure what the best tools are for developing in that space.
Isn’t that just the difference between weight and mass?
Definitely sounds like it could be real. If I had to guess, they’re mounting a drive (or another partition) and it’s defaulting to read-only. When they restart, it reverts to the original permissions because they only updated the file permissions, not the mount configuration.
Also reads like some of my frustrations when first getting into Linux (and the issues I occasionally run into still).
Yeah, that’s what I was thinking. Just need to throw some foil on it and you’ve got a very expensive new buddy.
These are just the estimates to train the model, so they don’t account for the cost of developing the training system, collecting the data, etc. This is pure processing cost, and the numbers are staggeringly large.
I think you’re missing the point. No LLM can do math; most humans can. No LLM can learn new information; all humans can and do (to varying degrees, maybe, but still).
And just to clarify what I mean by not being able to do math: there’s a lack of understanding of how numbers work, so combining numbers or values outside the training data can easily trip them up. Since it’s prediction based, exponents/trig functions/etc. will quickly produce errors with large values.
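As a rough illustration (my own example, not from any benchmark): values like these are exact and trivial to check with a calculator, but predicting them token by token tends to drift, especially in the trailing digits.

```python
import math

# Exact values like these are easy for a calculator but hard to "predict" digit by digit.
print(7 ** 23)         # 27368747340080916343 -- one wrong digit and the answer is off
print(math.sin(1e10))  # needs precise argument reduction mod 2*pi, not pattern matching
```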
Here’s an easy way we’re different: we can learn new things. LLMs are static models; that’s why OpenAI models list a knowledge cutoff date.
Another is that LLMs can’t do math. Deep learning models are limited to their input domain, so when you ask an LLM to do math outside of its training data, it’s almost guaranteed to fail.
Yes, they are very impressive models, but they’re a long way from AGI.
LLMs do suck at math. If you look into it, the o1 models actually escape the LLM output and write a Python function to calculate the result. I’ve been able to break their math functions by asking for ones that need math not in the standard Python library.
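For example (this is just my go-to test, not something from their docs): anything that needs the Lambert W function falls outside the stdlib math module, so the generated code has to pull in SciPy or approximate it itself.

```python
import math
from scipy.special import lambertw  # not available in the stdlib 'math' module

# Solve x * e**x = 10 for x using the Lambert W function
x = lambertw(10).real
print(x)                # ~1.7455
print(x * math.exp(x))  # ~10.0, confirming the solution
```

If the execution environment only has the standard library, a request like this is where the generated code stops working.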
I know someone also wrote a Wolfram integration to help solve LLMs’ math problems.
Not sure if you’re serious, but they were making a joke: Intel, which makes chips, is a competitor to TSMC, the chip manufacturer from the article.
So they played on that relationship by treating the word “intel” in your “thanks for the intel” comment as meaning the company.
Just read up more on these systems. I always thought they charged you more; I didn’t realize that, for the time being, they’re zero-interest loans.
Seems unsustainable, but it sounds like they’re using the credit card technique of charging the storefront. It’ll be interesting to see where the BNPL industry goes.
This is why I hate the way the media and people talk about these issues. Here you say Lebanon, but the title is talking about Hezbollah. But honestly, I’m sure Israel sees Hezbollah as just a part of Lebanon. Why isn’t Israel allowed to defend itself from missiles being launched from Lebanon?
I mean, it’s a legitimate political group in Lebanon that’s firing missiles at Israel. Why is that considered okay, and what is Israel supposed to do?
Why be the bad guy when you can just enable them?
All the evolution in AI right now is just trying different model designs and/or data. It’s not one model that’s being continuously refined or modified. Each iteration is just a new set of static weights/numbers that defines its calculations.
If the models were changing/updating through experience, maybe what you’re writing would make sense, but that’s not the state of AI/ML development.
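As a quick sketch of what “static” means here (using PyTorch purely as an illustration, with a tiny stand-in layer rather than a real LLM): running a model at inference time doesn’t touch its weights at all.

```python
import torch

# A deployed model is just a fixed set of numbers; running it changes nothing.
model = torch.nn.Linear(4, 2)           # stand-in for any trained network
weights_before = model.weight.clone()

with torch.no_grad():                   # inference mode: no gradients, no updates
    _ = model(torch.randn(8, 4))

print(torch.equal(weights_before, model.weight))  # True -- the "experience" left no trace
```

Learning only happens during a separate training run, which produces a brand-new set of weights.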
The download feature is always in some state of broken, but it has gotten a lot better over the past couple of years. If you haven’t tried it in a year or so, you may have better luck now.