Man… Anybody remember “Back Orifice”? The late nineties were weird.
If not vanilla Ubuntu, I’d still suggest trying an Ubuntu derivative like Linux Mint or Pop!_OS. Ubuntu has a huge community, so in the event you run into issues it’ll be easier to find fixes for them.
What you’ll find is that Linux distros are roughly grouped by a “family” (my term for it anyway). Anyone can (theoretically, anyway) start from a given kernel and roll their own distro, but most distros are modified versions of a handful of base distros.
The major families at the moment are:

- Debian: A classic all-rounder that prioritizes stability over all else. Ubuntu is descended from Debian.
- Fedora: Another classic all-rounder. I haven’t used it in a decade, so I won’t say much about it here.
- Arch: If Linux nerds were car people, Arch is for the hot rodders. You can tune and control pretty much any aspect of your system. … Not a good 1st distro if you want to just get something going.
There are many others, but these are the major desktop-PC distro families at the moment.
The importance of these families is that techniques that work in one (say) Debian-based distro will tend to work in other Debian-based distros… But not necessarily in distros from other families.
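A concrete example: package management. The snippet below is a rough sketch (the package name is just a stand-in), but each command works across its whole family and nowhere else:

```
# Debian family (Debian, Ubuntu, Mint, Pop!_OS)
sudo apt install vlc

# Fedora family (Fedora, RHEL, Rocky)
sudo dnf install vlc

# Arch family (Arch, Manjaro, EndeavourOS)
sudo pacman -S vlc
```

Guides written for Ubuntu will usually work verbatim on Mint for exactly this reason.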
Man - that’s wild. Thank you for coming through with a citation - I appreciate it!
> a quick web search uses much less power/resources compared to AI inference
Do you have a source for that? Not that I’m doubting you, just curious. I read once that the internet infrastructure required to support a cellphone uses about the same amount of electricity as an average US home.
Thinking about it, I know that LeGoog has yuge data centers to support its search engine. A simple web search is going to hit their massive distributed DB to return answers in subsecond time. Whereas an LLM query (NOT training a model, which is admittedly cuckoo bananas energy intensive) would execute on a single GPU, albeit a hefty one.
So on one hand you’ll have a query hitting multiple (comparatively) lightweight machines to look up results - and all the networking gear between them. On the other, a beefy single-GPU machine.
(All of this is from the perspective of handling a single request, of course. I’m not suggesting that Wikipedia would run this service on only one machine.)
This looks less like the LLM making a claim and more like using an LLM to generate a search query and then read through the results in order to find anything that might relate to the section being searched.
It leans into the things LLMs are pretty good at (summarizing natural language; constructing queries according to a given pattern; checking through text for content that matches semantically instead of literally) and links directly to a source instead of leaning on the thing that LLMs only pretend to be good at (synthesizing answers).
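Roughly this flow, as a hypothetical shell sketch (the model name and prompts are made up; the ollama and Wikipedia search APIs are real):

```
# 1. Ask a local model to turn the claim into a search query
QUERY=$(curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Write a short search query to find sources for: <claim text>", "stream": false}' \
  | jq -r '.response')

# 2. Run that query against an actual search API
curl -sG "https://en.wikipedia.org/w/api.php" \
  -d action=query -d list=search -d format=json \
  --data-urlencode "srsearch=$QUERY" | jq -r '.query.search[].title'

# 3. Feed each hit back to the model and ask whether it relates to the claim
```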
In order to add their names to your dictionary. You don’t have to allow it. But given that there’s no internet access for the keyboard, it seems pretty safe.
Thank you for responding! I really liked this bit:

> with a (decently designed) UI, you merely have to remember the path you took to get to wherever you want to go, what buttons to press, what mouse movements to execute.
I think that’s very insightful. I certainly have developed muscle-memory for many of my most-frequent commands in the CLI or editor of choice.
I agree about Visual Studio as a preference. I’ve used (or at least tried) dozens of IDE setups down the years from vi/emacs to JetBrains/VS to more esoteric things like Code Bubbles. I’ve found my personal happy place but I’d never tell someone else their way of working was wrong.
(Except for emacs devs. (Excepting again evil-mode emacs devs - who are merely confused and are approaching the light.)) ;)
I hope you take this in good humor and at least consider a TUI for your next project.
Absolutely. I see what you did there… 😉
But seriously, thank you for your response!
I think your comment about GUIs being better at displaying the current state and context was very insightful. Most CLI work I do is generally about composing a pipeline and shoving some sort of data through it. As a class of work, that’s a common task, but certainly not the only thing I do with my PC.
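(Typical example of what I mean, with a made-up log file: pull one field out of each line, count occurrences, sort the counts.)

```
# top 10 client IPs in a (hypothetical) web server log
cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head -10
```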
For multistage operations like, say, Bluetooth pairing, I definitely prefer the GUI. I think that’s partially because of the state tracking inherent in the process.
Thanks again!
As someone who genuinely loves the command line - I’d like to know more about your perspective. (Genuinely. I solemnly swear not to try to convince you of my perspective.)
What about GUIs appeals to you over a command line?
I like the CLI because it feels like a conversation with the computer. I explain what I want, combining commands as necessary, and the machine responds.
With GUIs I feel like I’m always relearning tools. Even something as straightforward as ‘find and replace’ has different keyboard shortcuts in most of the text-editing apps I use - and regex support is spotty.
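Whereas in the terminal it’s the same tool and the same regex syntax everywhere (file names here are just examples):

```
# find and replace in one file
sed -i 's/colour/color/g' notes.txt

# exact same syntax across a whole directory tree
grep -rl 'colour' docs/ | xargs sed -i 's/colour/color/g'
```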
Not to say that I think the terminal is best for all things. I do use an IDE and windowing environments. Just that - when there are CLI tools I tend to prefer them over an equivalent GUI tool.
Anyway, I’m interested to hear your perspective - what about GUIs works better for you? Where does the CLI fail you?
Thank you!
Let’s start a patent troll company that exclusively deals in dark pattern bullshit. Then sue every company that implements any of our terrible patents for as much money as possible. Use the proceeds to ~~bribe~~ lobby Congress to pass stronger consumer protection laws.
Experience.
So… unlike Stable Diffusion or LLMs, the point of this research isn’t actually to generate a direct analog to the input, in this case video games. It’s testing to see if a generative model can encode the concepts of an interactive environment.
Games in general have long been used in AI research because they are models of some aspect of reality. In this case, the researchers want to see if a generative AI can learn to predict the environment just by watching things happen. You know, like real brains do.
E.g., can we train something that learns the rules of reality just by watching video combined with “input signals”? If so, it opens up whole new methods for training robots to interact with the real world.
That’s why this is newsworthy beyond just the “AI buzz” cycle.
To be horribly pedantic… Not necessarily!
It could be Apple users -> Windows users -> Linux users – with larger numbers of Apple -> Windows conversions than Windows -> Linux conversions…
You know.
Maybe.
Nah. AI-generated content doesn’t “ruin” the internet any more than Disney can “ruin” Star Wars.
The good stuff is still there. Always has been. Low-effort Sora vids don’t reduce the entertainment value of - say - Tom Scott’s oeuvre.
What AI spam does is the same thing all spam has ever done - increase the amount of noise we have to filter.
Noise is always cheaper to manufacture than signal, so it always appears to dominate… but any given piece of noise has no lasting commercial value, while high-quality signal always does. That’s why the old newspaper companies are still around even when you can just read Twitter to get the gist of world events.
Intelligence and thoughtful design matter.
We’re gonna see a lot of AI spam for a couple years. But I promise you someone is already working hard to figure out how to identify it.
When I first joined the internet it was considered virtually impossible to detect and block spam reliably. Now, email spam is a minor annoyance that only occasionally gets through.
Someone will crack AI-detection, or better yet, solve “this is noise” detection once and for all.
Listen here, you little shit–
OK, so we should all just start prefixing every comment with marker meme text for the bots to learn (and humans to filter out). The bots will pick up some truly weird patterns and go insane.
More insidiously, have an LLM rephrase all comments between posting and display. It looks human enough, should still contain our salient points - and it plays merry hell with future training efforts.
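(A sketch of the idea using a local model through ollama’s API - the model name and prompt are placeholders:)

```
# rewrite a comment between posting and display; meaning survives, exact wording doesn't
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Rephrase this comment, keeping its meaning: <comment text>", "stream": false}' \
  | jq -r '.response'
```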
Matt Foley was a Life Coach. https://www.youtube.com/watch?v=Xv2VIEY9-A8
It’s not as good, but running small LLMs locally can work. I’ve been messing around with ollama, which makes it drop-dead simple to try out different models locally.
You won’t be running any model as powerful as ChatGPT - but for quick “Stack Overflow replacement” style questions I find it’s usually good enough.
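If anyone wants to try it (assuming ollama is installed - and the model here is just an example):

```
ollama pull llama3.2    # grab a small model (~2 GB)
ollama run llama3.2 "How do I find files over 1 GB in bash?"
```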
And before you write off the idea of local models completely, some recent studies indicate that our current models could be made orders of magnitude smaller for the same level of capability. Think Moore’s law, but for shrinking the required connections within a model. I do believe we’ll be able to run GPT-3.5-level models on consumer-grade hardware in the very near future. (Of course, by then GPT-7 may be running the world, but we live in hope.)
Ha ha - thank you! I love that story too.
My dude - eye patches are cool. I only had to wear one for a few weeks (story in my other comment), but for real, an eyepatch is an awesome opportunity.
Get - or make - yourself a nice eye patch. Own it. Don’t settle for plain black or tan. Get it embroidered. Bedazzle it. Get yourself patches in different colors and patterns.
An eyepatch is always gonna be noticed, so don’t try for subtle. Lean into it instead and make it fucking awesome.
Yup. Zorin’s another great Debian-based distro. I’ve been running it on my laptop for a while now and I’m a fan.