I came across this article in another Lemmy community that dislikes AI. I’m reposting instead of cross-posting so that we can have a conversation about how “work” might be changing with advancements in technology.

The headline is clickbaity: Altman was referring to how farmers who lived decades ago might look at the work “you and I do today” (Altman included) and conclude that it doesn’t look like work.

The fact is that most of us work many levels of abstraction away from basic human survival. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back-breaking labor that earlier generations were forced to do.

In my first job, IT support, it was not lost on me that all day long I pushed buttons to make computers beep in friendlier ways. There was no physical result to see, no produce to harvest, no pile of wood transformed from its natural state to a chopped one, nothing tangible to step back and enjoy at the end of the day.

Bankers, fashion designers, artists, video game testers, software developers and countless other professions experience something quite similar. Yet, all of these jobs do in some way add value to the human experience.

As humanity’s core needs have been met with technology requiring fewer human inputs, our focus has been able to shift to creating value in less tangible, but perhaps not less meaningful ways. This has created a more dynamic and rich life experience than any of those previous farming generations could have imagined. So while it doesn’t seem like the work those farmers were accustomed to, humanity has been able to shift its attention to other types of work for the benefit of many.

I postulate that AI - as we know it now - is merely another technological tool that will allow new layers of abstraction. At one time bookkeepers had to write in books, now software automatically encodes accounting transactions as they’re made. At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

These days we have fewer bookkeepers - most companies don’t need armies of clerks anymore. But now we have more data analysts who work to understand the information and make important decisions. In the future we may need fewer software coders, and in turn, there will be many more software projects that seek to solve new problems in new ways.

How do I know this? I think history shows us that innovations in technology always bring new problems to be solved. There is an endless reservoir of challenges to be worked on that previous generations didn’t have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.

At the end of the day, I suspect we humans are biologically wired with a deep desire to produce rewarding and meaningful work, and much of the result of our abstracted work is hard to see and touch. Perhaps that is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn mowers become.

  • 6nk06@sh.itjust.works · 10 days ago

    At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

    No and no. Have you ever coded anything?

    • kescusay@lemmy.world · 10 days ago

      Yeah, I have never spent “days” setting anything up. Anyone who can’t do it without spending “days” struggling with it is not reading the documentation.

      • HarkMahlberg@kbin.earth · 10 days ago

        Ever work in an enterprise environment? Sometimes a single talented developer cannot overcome the calcification of hundreds of people over several decades who care more about the optics of work than actual work. Documentation cannot help if it’s nonexistent or 20 years old. Documentation cannot make teams that don’t believe in automation adopt Docker.

        Not that I expect Sam Altman to understand what it’s like working in a dumpster fire company, the only job he’s ever held is to pour gasoline.

        • killeronthecorner@lemmy.world · 9 days ago

          Dumpster fire companies are the ones he’s targeting, because they’re the most likely to look for quick and cheap ways to fix the symptoms of their problems, and the most likely to want to replace their employees with automation.

        • kescusay@lemmy.world · 9 days ago

          Well, if I’m not, then neither is an LLM.

          But for most projects built with modern tooling, the documentation is fine, and they mostly have simple CLIs for scaffolding a new application.

          • galaxy_nova@lemmy.world · 9 days ago

            I mean, if you use the codebase you’re working in as context, it’ll probably learn the codebase faster than you will. I’m not saying that’s a good strategy, though - I’d never personally do that.

            • kescusay@lemmy.world · 9 days ago

              The thing is, it really won’t. The context window isn’t large enough, especially for a decently sized application, and that seems to be a fundamental limitation. Make the context window too large, and the LLM gets massively off track very easily, because there’s too much in it to distract it.

              And LLMs don’t remember anything. The next time you interact with it and put the whole codebase into its context window again, it won’t know what it did before, even if the last session was ten minutes ago. That’s why they so frequently create bloat.
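
              As a rough back-of-the-envelope sketch (assuming ~4 characters per token and a 200,000-token window, both purely illustrative numbers rather than properties of any particular model), you can see why a decently sized codebase simply doesn’t fit:

```python
# Rough estimate: how big is a codebase compared with an LLM context window?
# The ~4 characters/token figure and the 200k window are illustrative assumptions.
from pathlib import Path

CHARS_PER_TOKEN = 4                 # common rule of thumb, not exact
CONTEXT_WINDOW_TOKENS = 200_000     # varies by model; illustrative only

def estimate_tokens(repo_root: str, suffixes=(".py", ".ts", ".cs", ".java")) -> int:
    """Sum source-file sizes under repo_root and convert characters to rough tokens."""
    total_chars = sum(
        p.stat().st_size
        for p in Path(repo_root).rglob("*")
        if p.is_file() and p.suffix in suffixes
    )
    return total_chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    tokens = estimate_tokens(".")
    print(f"~{tokens:,} tokens vs a {CONTEXT_WINDOW_TOKENS:,}-token window "
          f"({tokens / CONTEXT_WINDOW_TOKENS:.1f}x the window)")
```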

      • Bo7a@piefed.ca · 9 days ago

        I know this was aimed at someone else. But my response is “Every day.” What is your follow-up question?

    • nucleative@lemmy.world (OP) · 10 days ago

      If your argument attacks my credibility, that’s fine, you don’t know me. We can find cases where developers use the technology and cases where they refuse.

      Do you have anything substantive to add to the discussion about whether LLMs are anything more than just a tool that allows workers to abstract further, advancing all of the professions they touch toward any of: better / faster / cheaper / easier?

      • HarkMahlberg@kbin.earth · 9 days ago

        Yeah, I’ve got something to add. The ruling class will use LLMs as a tool to lay off tens of thousands of workers to consolidate more power and wealth at the top.

        LLMs also advance no profession at all while they can still hallucinate and be manipulated by their owners, producing more junk that requires a skilled worker to fix. Even my coworkers have said, “if I have to fix everything it gives me, why didn’t I just do it myself?”

        LLMs also have dire consequences outside the context of labor. Because of how easy they are to manipulate, they can be used to manufacture consent and warp public consciousness around their owners’ ideals.

        LLMs are also a massive financial bubble, ready to pop and send us into a recession. Nvidia is shoveling money into companies so they can shovel it back into Nvidia.

        Would you like me to continue on about the climate?

      • finitebanjo@lemmy.world · 8 days ago

        I’ve got something to add: in every practical application, AI has increased liabilities and created a vastly inferior product, so it’s not “more than just a tool that allows workers to further abstract,” because it is less than that. This is in addition to the fact that AI companies can’t turn a profit, so it’s not better, not faster, not cheaper - but it is certainly easier (to do a shit job).

  • Telorand@reddthat.com · 10 days ago

    Cool, know what job could easily be wiped out? Management. Sam Altman is a manager.

    Therefore, Sam Altman doesn’t do real work. Fuck you, asshole.

  • m-p{3}@lemmy.ca · 9 days ago

    CEO isn’t an actual job either; it’s just the 21st century’s titre de noblesse.

  • Leon@pawb.social · 9 days ago

    At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

    I’d not put an LLM in charge of developing a framework that is meant to be used in any sort of production environment. If we’re talking about them setting up the skeleton of a project, then templates have already been around for decades at this point. You also don’t really set up new projects all that often.

    • Passerby6497@lemmy.world · 9 days ago

      Fuck, I barely let AI make functions in my code because half the time the fuckin idiot can’t even guess the correct method name and parameters when it can pull up the goddamned help page like I can or even Google the basic syntax.

      • MangoCats@feddit.it · 9 days ago

        A year ago, AI answers were only successfully compiling for me about 60% of the time. Now they’re up over 80%, and I’m no longer in the loop when they screw up: they get it right on the first try 80% of the time, then 96% of the time by the 2nd try, 99% by the third try, 99.84% by the 4th try - and the beauty is, they retry for themselves until they get something that actually compiles.
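
        (Those figures line up with treating each attempt as an independent ~80% chance of compiling - an assumption for illustration, not a measured property of any particular model. A quick sanity check:)

```python
# Cumulative probability of at least one successful compile after N attempts,
# assuming each attempt independently succeeds ~80% of the time (illustrative assumption).
per_try_success = 0.80

for attempts in range(1, 5):
    cumulative = 1 - (1 - per_try_success) ** attempts
    print(f"by attempt {attempts}: {cumulative:.2%}")

# Output:
# by attempt 1: 80.00%
# by attempt 2: 96.00%
# by attempt 3: 99.20%
# by attempt 4: 99.84%
```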

        Now we can talk about successful implementation of larger feature sets…

    • CeeBee_Eh@lemmy.world · 9 days ago

      I tried to demo an agentic AI in JetBrains to a coworker, just as a “hey, look at this neat thing that can make changes on its own.” As an example, I told it to convert a constructor in C# to a primary constructor.

      So it “thought” and made the change, “thought” again and reverted the change, “thought” once again and made the change again, then it “thought” for a 4th time and reverted the changes again. I stopped it there and just shook my head.

      • MangoCats@feddit.it · 9 days ago

        I had similar experiences a few months back, like 6-8 months ago. Since Anthropic’s Sonnet 4.0, things have changed significantly. 4.5 is even a bit better. Competing models have been improving similarly.

    • kent_eh@lemmy.ca · 9 days ago

      If we’re talking about them setting up the skeleton of a project, then templates have already been around for decades at this point.

      That’s what LLMs are good at - taking old work (without consent) and regurgitating it while pretending it’s new and unique.

    • MangoCats@feddit.it · 9 days ago

      Most of what LLMs present as solutions has been around for decades - that’s how they learned them: from the source material they train on.

      So far, AI hasn’t surprised me with anything clever or new, mostly I’m just reminding it to follow directions, and often I’m pointing out better design patterns than what it implements on the first go around.

      Above all: you don’t trust what an LLM spits out any more than you’d trust a $50/hr “consultant” from the local high school computer club to give you business-critical software - you test it, and if you have the ability, you review it at the source level, line by line. But there ARE plenty of businesses out there running “at risk” with sketchier software developers than the local computer club, and OF COURSE they are going to trust AI-generated code further than they should.

      Get the popcorn, there will be some entertaining stories about that over the coming year.

    • aesthelete@lemmy.world · 9 days ago

      This is my take with it too. They seem to be good at creating “high fidelity” mock-ups, and creating a basic framework for something, but try to even get them to change a background color or something and they just lie to you.

      They’re basically a good tool for stubbing stuff out for a web application…which, it’s insane that we had to jump through all of these hoops and spend unknown billions in order to get that. At this point, I would assume that we have a rapid application development equivalent for web apps…but maybe not.

      All of the “frameworks” involved in front-end application delivery certainly don’t seem to provide any benefit of speeding up development cycles. Front-end development seems worse today than when I used to be a full-time full stack engineer (and I had fucking IE6 to contend with at the time).

    • nucleative@lemmy.world (OP) · 10 days ago

      From the article:

      “The thing about that farmer,” Altman said, is not only that they wouldn’t believe you, but “they very likely would look at what you do and I do and say, ‘that’s not real work.'”

      I think he pretty much agrees with you.

      • Korhaka@sopuli.xyz · 9 days ago

        You drive a tractor up and down a field - is that really any more work than what the rest of us do?

  • SapphironZA@sh.itjust.works · 8 days ago

    Executive positions are probably the easiest to replace with AI.

    1. AI will listen to the employees
    2. They will try to be helpful by providing context and perspective based on information the employee might not have.
    3. They will accept being told they are wrong and update their advice.
    4. They will leave the employee to get the job done, trusting that the employee will get back to them if they need more help.

  • Curious Canid@lemmy.ca · 9 days ago

    Sam Altman is a huckster, not a technologist. As such, I don’t really care what he says about technology. His purpose has always been to transfer as much money as possible from investors into his own pocket before the bubble bursts. Anything else is incidental.

    I am not entirely writing off LLMs, but very little of the discussion about them has been rational. They do some things fairly well and a lot of things quite poorly. It would be nice if we could just focus on the former.

    • Tollana1234567@lemmy.today · 8 days ago

      He’s probably afraid it’s going to burst too fast and he’ll be left holding the bag. That’s why Gates, Musk, MS, and Google are trying to stem the bleeding.

  • MonkderVierte@lemmy.zip · 9 days ago

    Speaking of psychology: please stop calling it AI. The term raises unrealistic expectations. They are Large Language Models.

    • jungle@lemmy.world · 9 days ago

      In computer science, machine learning and LLMs are part of AI. Before that, other algorithms were considered part of AI. You may disagree, probably because of all the hype around LLMs, but they are AI.

        • jungle@lemmy.world · 9 days ago

          No, I saw it, but I was replying to the “please stop calling it AI” part. This is a computer science term, not a psychology term. Psychologists have no business discussing what computer scientists call these systems

          • MonkderVierte@lemmy.zip · 9 days ago

            What do I even answer here…

            Who is even talking about computer scientists? It’s the public, and especially company bosses, who get the wrong expectations about “intelligence.” It’s about psychology, not about scientifically correct names.

            • jungle@lemmy.world · 9 days ago

              Ah, I see. We in the software industry are no longer allowed to use our own terms because outsiders co-opted them.

              Noted.

            • sugar_in_your_tea@sh.itjust.works · 9 days ago

              The solution to the public misusing technical terms isn’t to change the technical terms, but to educate the public. All of the following fall under AI:

              • pathing algorithms for computer opponents (a minimal sketch follows at the end of this comment), but probably not the decisions that computer opponents make (e.g. whom to attack; that’s usually based on manually specified logic)
              • the speech-to-text your phone used before Gemini, or whatever it’s called now, on Android (Gemini is also AI, just a different type of AI)
              • home camera systems that can detect people vs. animals, and sometimes classify those animals by species
              • DDOS protection systems and load balancers for websites probably use some type of AI

              AI is a broad field, and you probably interact with non-LLM variants every day, whether you notice or not. Here’s a Wikipedia article that goes through a lot of it. LLMs/GPT are merely one small subfield in the larger field of AI.

              I don’t understand how people went from calling the computer player in their game “AI” (or, even older, “CPU”), which nobody mistook for actual intelligence, to now believing AI means something is sentient. Maybe it’s because LLMs are more convincing since they do a much better job at language, idk, but it’s the same category of thing under the hood.

              ChatGPT isn’t “thinking,” and when it claims to “think,” it’s basically turning a prompt into a set of things to “think” about (it generates and answers related prompts), and then uses that set of things in its context to provide an answer. It’s not actually “thinking” as people do; it’s merely following a set of statistically motivated steps based on your prompt to generate a relevant answer. It’s a lot more complex than that Warcraft 2 bot you played against as a kid, but it’s still following steps a human designed, along with some statistical methods to adapt to things the developer didn’t encounter.
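
              To make the first bullet above concrete, here’s a minimal sketch of the kind of grid pathfinding a game opponent might use - plain breadth-first search, with the grid and all names made up purely for illustration:

```python
# Minimal grid pathfinding of the kind a game "AI" opponent might use.
# Plain breadth-first search; the grid and all names are illustrative.
from collections import deque

def shortest_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None if unreachable.
    grid is a list of strings where '#' is a wall and anything else is walkable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while queue:
        current = queue.popleft()
        if current == goal:
            path = []                  # walk back through came_from to rebuild the path
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and nxt not in came_from:
                came_from[nxt] = current
                queue.append(nxt)
    return None

if __name__ == "__main__":
    level = ["....",
             ".##.",
             "...."]
    print(shortest_path(level, (0, 0), (2, 3)))
```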

              • MangoCats@feddit.it · 8 days ago

                The problem with AI in a “popular context” is that it has been a forever moving target. Old mechanical adding machines were better at correctly summing columns of numbers than humans; at the time, they were considered a limited sort of artificial intelligence. It continues all along the spectrum. Five years ago, image classifiers that could sit and watch video feeds 24-7, accurately identifying things that happen in the feed with better-than-human accuracy (accounting for human lapses of attention, coffee breaks, distracting phone calls, etc.) - those were amazing feats of AI at the time, and now they’re “just image classifiers,” much as AlphaZero “just plays games.”

                • sugar_in_your_tea@sh.itjust.works · 8 days ago

                  The first was never “AI” in a CS context, and the second has always and will always be “AI” in a CS context. The definition has been pretty consistent since at least Alan Turing, if not earlier.

                  I don’t know how to square that circle. To me it’s pretty simple: a solution or approach is AI if it simulates (or creates) intelligence, and an intelligent system is one that uses data from (learns from) its environment to achieve its goals. Anything from an A* pathing algorithm to actual general AI is “AI,” yet people assume the most sophisticated end of the spectrum.

      • MangoCats@feddit.it · 9 days ago

        Granting them AI status, we should recognize that they “gained their abilities” by training on the rando junk that people post on the internet.

        I have been working with AI for computer programming, semi-seriously for 3 months, pretty intensively for the last two weeks. I have also been working with humans for computer programming for 35 years. AI’s “failings” are people’s failings. They don’t follow directions reliably, and if you don’t manage them they’ll go down rabbit holes of little to no value. With management, working with AI is like an accelerated experience with an average person, so the need for management becomes even more intense - where you might let a person work independently for a week then see what needs correcting, you really need to stay on top of AI’s “thought process” on more of a 15-30 minute basis. It comes down to the “hallucination rate” which is a very fuzzy metric, but it works pretty well - at a hallucination rate of 5% (95% successful responses) AI is just about on par with human workers - but faster for complex tasks, and slower for simple answers.

        Interestingly, for the past two weeks, I have been having some success with applying human management systems to AI: controlled documents, tiered requirements-specification-details documents, etc.

        • Passerby6497@lemmy.world · 9 days ago

          It comes down to the “hallucination rate” which is a very fuzzy metric, but it works pretty well - at a hallucination rate of 5% (95% successful responses) AI is just about on par with human workers - but faster for complex tasks, and slower for simple answers.

          I have no idea what you’re doing, but based on my own experience, your error/hallucination rate is like 1/10th of what I’d expect.

          I’ve been using an AI assistant for the better part of a year, and I’d laugh at the idea that they’re right even 60% of the time without CONSTANTLY reinforcing fucking BASIC directives or telling it to provide sources for every method it suggests. Like, I can’t even keep the damned thing reliably in the language framework I’m working on without it falling back to the raw vendor CLI in project conversations. I’m correcting the exact same mistakes week after week because the thing is braindead and doesn’t understand that you cannot use reserved keywords for your variable names. It just makes up parameters to core functions based on the question I ask it, regardless of documentation, until I call out its bullshit and it gets super conciliatory and then actually double-checks its own work instead of authoritatively lying to me.

          You’re not wrong that AI makes human style mistakes, but a human can learn, or at least generally doesn’t have to be taught the same fucking lesson at least once a week for a year (or gets fired well before then). AI is artificial, but there absolutely isn’t any intelligence behind it; it’s just a stochastic parrot that somehow comes up with plausible answers the algorithm expects you want to hear.

          • aesthelete@lemmy.world · 9 days ago

            You’re not wrong that AI makes human style mistakes, but a human can learn, or at least generally doesn’t have to be taught the same fucking lesson at least once a week for a year (or gets fired well before then).

            This is the point nobody seems to get. Especially people that haven’t worked with the technology.

            It just does not have the ability to learn in any meaningful way. A human can learn a new technique and move toward mastering it in a couple of hours. AI just keeps falling back on its training data no matter how many times you tell it to stop. It has no other option. It would need to be re-trained with better material in order to consistently do what you want it to do, but nobody is really re-training these things… they’re using the “foundational” models and at most “fine-tuning” them… and fine-tuning only provides a quickly punctured facade… it eventually falls back to the bulk of its learning material.

          • MangoCats@feddit.it · 9 days ago

            your error/hallucination rate is like 1/10th of what I’d expect. I’ve been using an AI assistant for the better part of a year,

            I’m having AI write computer programs, and when I tried it a year ago I laughed and walked away - it was useless. It has improved substantially in the past 3 months.

            CONSTANTLY reinforcing fucking BASIC directives

            Yes, that is the “limited context window” - in my experience people have it too.

            I have given my AIs basic workflows to follow for certain operations, simple 5 to 8 step processes, and they do them correctly about 19 times out of 20 - but in that other 5%, they’ll be executing the same process and just skip a step, like many people tend to as well.

            but a human can learn

            In the past week I have been having my AIs “teach themselves” these workflows and priorities: prioritizing correctness over speed, respecting document hierarchies when deciding which side of a conflict needs to be edited, etc. It seems to be helping somewhat. I had it research current best practices on context window management and apply them to my projects, and that seems to have helped a little too. But while I was typing this, my AI ran off and started implementing code based on old downstream specs that should have been updated to reflect top-level changes we had just made. I interrupted it and told it to go back and do it the right way, as its work instructions already tell it to. After the reminder it did it right: limited context window.

            The main problem I have with computer programming AIs is: when you have a human work on a problem for a month, you drop by every day or two to see how it’s going, clarify, course correct. The AI does the equivalent work in an hour and I just don’t have the bandwidth to keep up at that speed, so it gets just as far off in the weeds as a junior programmer locked in a room and fed Jolt cola and Cheetos through a slot in the door would after a month alone.

            An interesting response I got from my AI recently regarding this phenomenon: it provided “training seminar” materials for our development team telling them how to proceed incrementally with the AI work and carefully review intermediate steps. I already do that with my “work side” AI project, so it didn’t suggest the seminar there. My home-side project, where I normally approve changes without review, is the one that suggested the training seminar.

    • lechekaflan@lemmy.world · 9 days ago

      What do we need the mega rich for anyway?

      Supposedly the creation of, and investment in, industries, and then the management of those businesses, which also supposedly provide employment for the thousands who make the things for them. Except they’ll find ways to cut costs and maximize profit, like looking for cheaper labor while thinking of building the next megayacht to flex at Monte Carlo next summer.

  • TheFogan@programming.dev · 10 days ago

    You know what, he actually wouldn’t be horrifically wrong if he were actually pushing for something there. Let’s say, hypothetically, our jobs aren’t real work and it’s no big deal that they are replaced… the original intent of technological progress was that when the ratio of work needing to be done to people shifts… we’d work less for more pay, etc… but no, we just capitalism it and say “labor is in high supply, so we need to cut its price until people can find use for it.”

    • ZoteTheMighty@lemmy.zip · 10 days ago

      I feel like he’s really onto something about real work, but he’s missing the point of society. The purpose of our economy is to employ everyone, thus minimizing the negative societal effects of supporting unemployed people, and enabling people to improve their lives. If you optimize a society to produce more GDP by firing people, you’re subtracting value, not adding it.

      • squaresinger@lemmy.world · 9 days ago

        I think you are a step further down in the a/b problem tree.

        The purpose of society is that everyone can have a safe, stable and good life. In our current setup this requires that most people are employed. But that’s not a given.

        Think of a hypothetical society where AI/robots do all the work. There would be no need to employ everyone to do work to support unemployed people.

        We are slowly getting to that direction, but the problem here is that our capitalist society isn’t fit for that setup. In our capitalist setup, removing the need for work means making people unemployed, who then “need to be supported” while the rich who own/employ robots/AI benefit without putting in any work at all.

  • billwashere@lemmy.world · 9 days ago

    Sam, I say this with all my heart…

    Fuck you very kindly. I’m pretty sure what you do is not “a real job” and should be replaced by AI.

    • SugarCatDestroyer@lemmy.world · 9 days ago

      People worked to survive, like an engine that needs oil to run. When our civilization collapses, people will accept reality.

  • sobchak@programming.dev · 8 days ago

    The problem is that the capitalist investor class, by and large, determines what work will be done, what kinds of jobs there will be, and who will work those jobs. They are becoming increasingly out of touch with reality as their wealth and power grow, and they seem to be trying to mold the world into something along the lines of what Curtis Yarvin advocates for - something most people would consider very dystopian.

    This discussion is also ignoring the fact that currently, 95% of AI projects fail, and studies show that LLM use hurts the productivity of programmers. But yeah, there will almost surely be breakthroughs in the future that will produce more useful AI tech; nobody knows what the timeline for that is though.

    • lemmeLurk@lemmy.zip · 8 days ago

      But isn’t the investment still driven by consumption in the end? They invest in what makes money, but in the end things people are willing to spend money on make money.

      • Ogy@lemmy.world · 8 days ago

        You’d think so, but unfortunately not. Venture capital is completely illogical, designed around boom-or-bust “moonshot” ideas that are supposed to completely change everything. So this money isn’t driven by actual consumption, but rather by speculation. I can’t really speak to other forms of investment, but I suspect it doesn’t get a whole lot better. The economy has become far too financialised, with a fiat currency that is completely separate from actual intrinsic value. That’s why a watch can cost more than a family home, which isn’t true consumption - just this weird concept of “wealth.”

      • sobchak@programming.dev · 8 days ago

        They invest in things they think they will be able to sell later for a higher price. Expected consumption is sometimes part of their calculations, but they are increasingly out of touch with reality (see blockchain, the metaverse, Tesla, etc.). Sometimes they knowingly take a loss to gain power over the masses (Twitter, the Washington Post). They are also powerful enough to induce consumption (bribing governments for contracts, laws, bailouts, and regulations that ensure their investments will be fruitful). They are powerful enough to heavily influence which politicians get elected, choosing whom they want to bribe. They are powerful enough to force the businesses they are invested in to buy from and sell to each other. The largest, most profitable companies produce nearly nothing; they use their positions as near-monopolies to extract rent (i.e. enshittification/technofeudalism).