• melpomenesclevage@lemmy.dbzer0.com · 3 hours ago

    I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.

    The fact that nothing got optimized, and it still didn’t collapse, after Deepseek? Kind of gave the whole game away. There’s something else going on here. This isn’t about the technology, because there is no meaningful technology here.

    I have been called a killjoy luddite by reddit-brained morons almost every time.

  • iAvicenna@lemmy.world · 3 hours ago

    The funny thing is, with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.

  • brucethemoose@lemmy.world · 6 hours ago

    It’s ironic how conservative the spending actually is.

    Awesome ML papers and ideas come out every week. Low-power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable, steerable, and grounded. This is all getting funded, right?

    No.

    Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

    Deepseek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.
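
    For a taste of what “fundamental changes in the math” means here, a rough sketch of BitNet-style ternary weight quantization, assuming the absmean scheme described in the BitNet b1.58 paper (an illustration, not the authors’ code):

    ```python
    # Rough sketch of BitNet b1.58-style quantization: every weight is forced
    # to {-1, 0, +1} plus one scale per tensor, so matrix multiplies reduce to
    # additions/subtractions. Assumes the paper's "absmean" scheme.
    import numpy as np

    def absmean_quantize(W, eps=1e-8):
        scale = np.abs(W).mean() + eps                  # one scalar per tensor
        W_ternary = np.clip(np.round(W / scale), -1, 1)
        return W_ternary, scale

    W = np.random.randn(4, 4)
    W_q, s = absmean_quantize(W)
    print(W_q)                           # entries are only -1, 0, or 1
    print(np.abs(W - W_q * s).mean())    # average quantization error
    ```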

    • tetris11@lemmy.ml · 8 hours ago

      I like my project manager: they find me work, ask how I’m doing, and talk straight.

      It’s when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.

      • killeronthecorner@lemmy.world · 6 hours ago

        COs are corporate politicians, media-trained to only say things that are completely unrevealing and lacking any substance.

        This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.

        I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you’re having on a personal project or what toy to buy for your cat’s birthday.

      • spooky2092@lemmy.blahaj.zone · 5 hours ago

        The number of times my CTO says we’re going to do THING, only to have to be told that this isn’t how things work…

      • MonkderVierte@lemmy.ml · 7 hours ago

        Right, that sweet spot between too few stimuli, where your brain just wants to sleep or run away, and enough stimuli that you can’t just zone out (or sleep).

  • Not_mikey@lemmy.dbzer0.com · 11 hours ago

    The actual survey result:

    Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.

    So they’re not saying the entire industry is a dead end, or even that the newest phase is. They’re just saying they don’t think this current technology will make AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren’t betting this will turn into AGI; they’re betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.

    This would be like asking a researcher in the ’90s whether, if we scaled up the bandwidth and computing power of the average internet user, we’d see a vastly connected media-sharing network. They’d probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.

    • Prehensile_cloaca @lemm.ee · 6 hours ago

      The bigger loss is the ENORMOUS amounts of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.

      AI data centers also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55-year-old human has produced since birth.

      Complete waste.

    • 10001110101@lemm.ee · 6 hours ago

      I think most people agree, including the investors pouring billions into this.

      The same investors that poured (and are still pouring) billions into crypto, invested in sub-prime loans, and valued pets.com at $300M? I don’t see any way the companies will be able to recoup the costs of their investment in “AI” data centers (e.g. the $500B Stargate project, or Microsoft’s $80B; probably upwards of a trillion dollars invested globally in these data centers).

    • Pennomi@lemmy.world · 6 hours ago

      Right, simply scaling won’t lead to AGI; there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs, like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.

      • relic_@lemm.ee · 50 minutes ago

        No, there are some ideas out there. Concepts like hierarchical reinforcement learning, with its creation of foundational policies, are more likely to lead to AGI. The problem is that, as it stands, it’s a really difficult technique to use, so it isn’t used often. And LLMs have sucked all the research dollars away from every other idea.
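
        For anyone unfamiliar: the classic formulation is the “options” framework, where a high-level policy learns over reusable sub-policies instead of raw actions. A toy sketch (my own illustration of the general idea, not code from any particular paper):

        ```python
        # Toy sketch of hierarchical RL in the "options" framework (Sutton,
        # Precup & Singh 1999): a meta-policy does SMDP Q-learning over two
        # hand-coded sub-policies ("walk left", "walk right") in a 1-D corridor.
        import random

        N_CELLS, GOAL, START = 9, 8, 4
        GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

        def run_option(state, direction):
            """Low-level 'foundational policy': walk one way until the goal
            or a wall. Returns (final_state, primitive_steps_taken)."""
            steps = 0
            while state != GOAL:
                nxt = state + direction
                if nxt < 0 or nxt >= N_CELLS:   # hit a wall
                    break
                state, steps = nxt, steps + 1
            return state, steps

        Q = {(s, o): 0.0 for s in range(N_CELLS) for o in (-1, +1)}
        for _ in range(500):                    # episodes
            s = START
            while s != GOAL:
                o = (random.choice((-1, +1)) if random.random() < EPS
                     else max((-1, +1), key=lambda a: Q[(s, a)]))
                s2, k = run_option(s, o)
                r = 1.0 if s2 == GOAL else 0.0
                best_next = 0.0 if s2 == GOAL else max(Q[(s2, -1)], Q[(s2, +1)])
                # SMDP update: discount by the k primitive steps the option took
                Q[(s, o)] += ALPHA * (r + GAMMA**k * best_next - Q[(s, o)])
                if s2 == s:                     # option made no progress
                    break
                s = s2

        print("best option from start:",
              "right" if Q[(START, +1)] > Q[(START, -1)] else "left")
        ```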

    • cantstopthesignal@sh.itjust.works · 9 hours ago

      It’s becoming clear from the data that more error correction needs exponentially more data. I suspect that pretty soon we will realize that what’s been built is a glorified homework cheater and a better search engine.
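
      A rough way to see why, borrowing the data-scaling exponent from Kaplan et al. (2020) as an assumption: if loss falls as a power law in dataset size with a small exponent, every constant factor of error reduction multiplies the data required.

      ```latex
      L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095
      \;\;\Longrightarrow\;\;
      D \approx D_c\, L^{-1/\alpha_D}, \qquad
      \frac{D_{\text{half error}}}{D} = 2^{1/\alpha_D} \approx 1.5 \times 10^{3}
      ```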

      • Sturgist@lemmy.ca · 9 hours ago

        what’s been built is a glorified homework cheater and an ~~better~~ unreliable search engine.

    • stormeuh@lemmy.world · 10 hours ago

      I agree that it’s editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.

      They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. It’s often implied (e.g. “you’ll be able to replace workers with this”) or they are vaguer on the timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).

      With that context, I think it’s fair to editorialize this as a dead end, because even with billions of dollars being poured into it, they won’t be able to deliver AGI on the timeline they are promising.

      • morrowind@lemmy.ml · 47 minutes ago

        Part of it is that we keep realizing AGI is a lot broader and more complex than we think.

      • jj4211@lemmy.world · 6 hours ago

        Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or a realistic extrapolation of it; it is for the sort of product OpenAI is promising, equivalent to a full-time research assistant for $20k a month. Which is way more expensive than an actual research assistant, but that’s not stopping them from making the pitch.

  • ABetterTomorrow@lemm.ee · 8 hours ago

    Current big tech is going to keep pushing limits, have social media influencers/YouTubers do the marketing, and let their consumers pick up the R&D bill. Emotionally I want to say stop innovating, but really: cut your speed by 75%. We are going to witness an era of optimization and efficiency. Most users just need a Pi 5 16GB, an Intel NUC, or a base-model Apple Air; those are easy 7-10 year computers. No need to rush and get the latest and greatest. I’m talking about everything in computing in general. Case in point, gaming: more people are waking up and realizing they don’t need every new GPU, studios are burnt out, IPs are dying because there’s no lingering core base left to keep the franchises afloat, and consumers can’t keep opening their wallets. Hence studios like Square Enix are going to start supporting all platforms instead of doing the late-stage-capitalism thing of launching their own launcher with a store. It’s over.

  • Korhaka@sopuli.xyz · 10 hours ago

    There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.

  • Ledericas@lemm.ee · 11 hours ago

    It’s because customers don’t want it or care for it; it’s only the corporations themselves that are obsessed with it.

  • TommySoda@lemmy.world · 15 hours ago

    Technology in most cases progresses on a logarithmic scale when innovation isn’t prioritized. We’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what they claim it is. These days we’re in the “bells and whistles” phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars: things that seem fancy because buzzwords and features nobody needs get slapped on, without actually changing anything except the price.

    • Balder@lemmy.world · 6 hours ago

      I remember listening to a podcast about explaining things according to what we know today (scientifically). The host is just so knowledgeable about this stuff; he does his research and talks to experts when the subject involves something he isn’t himself an expert in.

      There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you’re doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct by other applications.

      So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments that expand the knowledge we have. Sometimes those insights are completely random; often you need to have had a whole career in that field; and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

      Even the current wave of LLMs is simply a product of the Google paper that showed we could parallelize language models, leading to the creation of “larger language models.” That was Google doing science. But you can’t control when some new breakthrough is discovered, and LLMs are subject to this constraint.
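
      (A minimal sketch of the core mechanism from that paper, scaled dot-product attention, assuming it’s “Attention Is All You Need”, Vaswani et al. 2017; illustrative NumPy, not the paper’s code. Every position attends to every other in one matrix multiply, which is what makes training parallelizable:)

      ```python
      import numpy as np

      def scaled_dot_product_attention(Q, K, V):
          """Q, K, V: (seq_len, d) arrays. Returns (seq_len, d)."""
          d = Q.shape[-1]
          scores = Q @ K.T / np.sqrt(d)        # all pairwise similarities at once
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
          return weights @ V                   # weighted mix of values

      # Toy usage: 4 tokens, 8-dim embeddings, attending to themselves
      x = np.random.randn(4, 8)
      out = scaled_dot_product_attention(x, x, x)
      print(out.shape)  # (4, 8)
      ```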

      In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.

      • morrowind@lemmy.ml · 45 minutes ago

        There have been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.

  • deegeese@sopuli.xyz · 16 hours ago

    Optimizing AI performance by “scaling” is lazy and wasteful.

    Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.

  • Tony Bark@pawb.social · 16 hours ago

    They’re throwing billions upon billions into a technology with extremely limited use cases, a novelty at best. My god, even drones fared better in the long run.

    • Snot Flickerman@lemmy.blahaj.zone · 15 hours ago

      I mean it’s pretty clear they’re desperate to cut human workers out of the picture so they don’t have to pay employees that need things like emotional support, food, and sleep.

      They want a workslave that never demands better conditions, that’s it. That’s the play. Period.

      • CosmoNova@lemmy.world · 11 hours ago

        And the tragedy of the whole situation is that they can’t win, because if every worker is replaced by an algorithm or a robot, then who’s going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories, until you can’t expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire, and we start from scratch.

        • thatKamGuy@sh.itjust.works · 8 hours ago

          producing war machines that fight each other for territory to build more war machine factories until you can’t expand anymore for one reason or another.

          As seen in the retro-documentary Z!

      • TommySoda@lemmy.world · 14 hours ago

        If this is their way of making AI, brute-forcing the technology without innovation, the AI will probably cost these companies more in infrastructure than just hiring people would. These AI companies are already not making a lot of money for how much they cost to maintain, and unless they charge companies millions of dollars just to use their services, they will never make a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kind of pointless if they aren’t willing to prioritize efficiency.

        It’s basically the same argument they have with people. They don’t want to treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Likewise, they don’t want to spend money to make AI more efficient, yet increasing efficiency would make it less expensive to run. It’s the never-ending cycle of cutting corners only to eventually make less money than you would have if you’d done things the right way.

        • Snot Flickerman@lemmy.blahaj.zone · 14 hours ago

          Absolutely. It’s maddening that I’ve had to go from “maybe we should make society better somewhat” in my twenties to “if we’re gonna do capitalism, can we do it how it actually works instead of doing it stupid?” in my forties.

        • z3rOR0ne@lemmy.ml · 14 hours ago

          The oligarchs running these companies have suffered a psychotic break. What exactly the cause is, I don’t know, but the game they’re playing is a lot less about profits now. They care about control and power over people.

          I theorize it has to do with desperation over what they see as an inevitable collapse of the United States, and they are hedging their bets on holding onto the reins of power for as long as possible, until they can fuck off to their respective bunkers while the rest of humanity eats itself.

          Then, when things settle, they can peek their heads out of their hidey-holes and start their new utopian civilization or whatever.

          Whatever’s going on, profits are not the focus right now. They are grasping at ways to control the masses… and failing pretty miserably, I might add… though something tells me that scarcely matters to them.

    • NoiseColor @lemmy.world · 15 hours ago

      I don’t think any designer does work without heavily relying on AI. I bet that’s not the only profession.

    • 0x01@lemmy.ml · 16 hours ago

      Nah, generative AI is pretty remarkably useful for software development. I’ve written dozens of product updates with tools like Claude Code and Cursor; dismissing it as a novelty is reductive and straight-up incorrect.

        • tias@discuss.tchncs.de · 14 hours ago

          As an experienced software dev, I’m convinced my software quality has improved by using AI. More time for thinking and less time for execution means I can make more iterations of the design and don’t have to skip as many nice-to-haves or unit tests on account of limited time. It’s not like I don’t go through every line of code multiple times anyway; I don’t just blindly accept it. As a bonus, I can ask the AI to review the code and produce documentation. By the time I’m done, there’s little left of what was originally generated.

          • _cnt0@sh.itjust.works · 13 hours ago

            As an experienced software dev I’m convinced my software quality has improved by using AI.

            Then your software quality was extreme shit before. It’s still shit, but an improvement. So, yay “AI”, I guess?

            • tias@discuss.tchncs.de · 4 hours ago

              That seems like just wishful thinking on your part, or maybe you haven’t learned how to use these tools properly.

              • _cnt0@sh.itjust.works · 4 hours ago

                Nah, the tools suck. I’m not using a rubber hammer to drive wood screws into concrete, and I’m not using “AI” for something that requires a brain. I’ve looked at “AI” suggestions for coding and they were >95% garbage. If “AI” makes someone a better coder, that says more about that someone than about “AI”.

                • tias@discuss.tchncs.de · 3 hours ago

                  Then try writing the code yourself and ask ChatGPT’s o3-mini-high to critique your code (be sure to explain the context).

                  Or ask it to produce unit tests; even if they’re not perfect from the get-go, I promise you will save time by having a starting skeleton.

                  Another thing I often use it for is ad hoc transformations. For example, I wanted to generate constants for all the SQLSTATE codes in the PostgreSQL documentation. I just pasted the table directly from the documentation and got symbolic constants with the appropriate values and documentation comments.
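
                  (To give a flavor, a hypothetical excerpt of that kind of generated output; the code values below are from PostgreSQL’s error-codes appendix, but this is not the actual file I got:)

                  ```python
                  # Hypothetical excerpt: symbolic constants for PostgreSQL SQLSTATE codes.
                  # Values taken from the "PostgreSQL Error Codes" appendix.

                  # Class 22 - Data Exception
                  DIVISION_BY_ZERO = "22012"        # division_by_zero

                  # Class 23 - Integrity Constraint Violation
                  FOREIGN_KEY_VIOLATION = "23503"   # foreign_key_violation
                  UNIQUE_VIOLATION = "23505"        # unique_violation

                  # Class 40 - Transaction Rollback
                  SERIALIZATION_FAILURE = "40001"   # serialization_failure
                  DEADLOCK_DETECTED = "40P01"       # deadlock_detected
                  ```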

          • SpaceNoodle@lemmy.world · 11 hours ago

            As an experienced software dev, I know better than to waste my time writing boilerplate that can be vomited up by an LLM, since somebody else has already written it and I should just use that instead.

        • 0x01@lemmy.ml · 16 hours ago

          They’re all pretty fired up at the update velocity tbh 🤷

            • 0x01@lemmy.ml · 6 hours ago

              Unit tests and good architecture are still foundational requirements, and so far there have been no bug reports with any of these updates. In fact, a huge chunk of these AI updates were addressing bugs. Not sure why you’re so mad at what you imagine is happening and making so many broad assumptions!

            • NoiseColor @lemmy.world · 15 hours ago

              Don’t be an ass, and realize that AI is a great tool for a lot of people. Why is that so hard to comprehend?

              • Snot Flickerman@lemmy.blahaj.zone · 15 hours ago

                It’s not hard to comprehend. It’s that we literally have jackasses like Sam Altman arguing that if they can’t commit copyright violations at an industrial scale and pace, their business model falls apart. Yet we’re still nailing regular people for piracy on an individual scale. As always, individuals pay the price and are treated like criminals, but as long as you commit crime big enough and fast enough on an industrial scale, we shake our heads, go “wow,” and treat you like a fucking hero.

                If the benefits of this technology were evenly distributed the argument might have a leg to stand on, but it is never evenly distributed. It is always used as a way to pay professionals less for work that is “just okay.”

                When a business buys the tools to use generative AI and shitcans employees to afford it, it has effectively used those employees’ labor against them to replace them with something lesser. Their labor was exploited to replace them. The people who actually deserve the bonus of generative AI are losing out, or being expected to be ten times more productive, instead of being allowed to cool their heels because they worked hard enough to have this doohickey work for them. No, it’s always “line must go up, rich must get richer, fuck the laborers.”

                I’ll stop being an ass about it when people stop burning employees out who already work hard or straight up fire them and replace them with this bullshit when their labor is what allowed the business to afford this bullshit to begin with. No manager or CEO can do all this labor on their own, but they get the fruits of all the labor their employees do as though they did do it all on their own, and it is fucked up.

                I don’t have a problem with technology that makes our lives easier. I don’t have a problem with copyright violations (copyright as it exists is broken. It still needs to exist, just not in its current form).

                What I have a problem with is businesses using this as an excuse to work their employees like slaves or replacing the employees that allowed them to afford these tools with these tools.

                When everyone who worked hard to afford this stuff gets a paid vacation for helping to afford the tools and then comes back to an easier workload because the tools help that much, I’ll stop being a fucking ass about it.

                Like I said elsewhere, the bottom line is business owners want a slave that doesn’t need things like sleep, food, emotional support, and never pushes back against being abused. I’m tired of people pretending like it’s not what businesses want. I’m tired of people pretending this does anything except make already overworked employees bust even more ass.

      • neon_nova@lemmy.dbzer0.com · 16 hours ago

        As someone starting a small business, it has helped tremendously. I use a lot of image generation.

        If that didn’t exist, I’d either have to use crappy-looking clip art or pay a designer, which I literally can’t afford.

        Now my projects actually look good. It makes my first projects look like a highschooler did them at the last minute.

        There are many other uses, but I rely on it daily. My business can exist without it, but the quality of my product is significantly better and the cost to create it is much lower.

  • LostXOR@fedia.io · 16 hours ago

    I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).

    • devfuuu@lemmy.world · 10 hours ago

      Like all the previous scam bubbles that were kind of interesting or fun as a novelty, until money came pouring in and it became absolute chaos and maddening.

    • Sheridan@lemmy.world · 15 hours ago

      The peak of AI for me was generating images of Muppet versions of the Breaking Bad cast; it’s been downhill since.

    • torrentialgrain@lemm.ee · 14 hours ago

      AGI models will enter the market in under 5 years according to experts and scientists.

      • morgunkorn@discuss.tchncs.de · 14 hours ago

        trust me bro, we’re almost there, we just need another data center and a few billions, it’s coming i promise, we are testing incredible things internally, can’t wait to show you!

          • LostXOR@fedia.io · 3 hours ago

            Around a year ago I bet a friend $100 we won’t have AGI by 2029, and I’d do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that’s still dumber than the average human. In comparison humans are “trained” with maybe ten thousand “tokens” and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.

            • pixxelkick@lemmy.world · 3 hours ago

              Humans are “trained” with maybe ten thousand “tokens” per day

              Uhhh… you may wanna rerun those numbers.

              It’s waaaaaaaay more than that lol.

              and take only a couple dozen watts for even the most complex thinking

              Mate’s literally got smoke coming out of his ears lol.

              A single Wh is 860 calories…

              I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.

              1. Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.

              2. A human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more, depending on activity levels.

              3. While yes, an AI costs substantially more Wh, it also is done in weeks, so it’s obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months, it’d prolly require way WAY more than 13,000 Wh during the process, for similar reasons.

              4. Once trained, a single model can be duplicated infinitely. So it’d be more fair to compare what millions of people cost to raise against what a single model costs to train, because once trained, you can make millions of copies of it…

              5. Operating costs are continuing to go down and down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty-times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.

              • LostXOR@fedia.io · 2 hours ago

                True, my estimate for tokens may have been a bit low. Assuming a 7-hour school day where someone talks at 5 tokens/sec, you’d encounter about 120k tokens. You’re off by 3 orders of magnitude on your energy consumption though; 1 watt-hour is 0.86 food Calories (kcal).
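
                (For anyone who wants to check that arithmetic, a quick sketch; the school-day length and talking rate are this thread’s assumptions, not measurements:)

                ```python
                # Sanity check of the numbers above (assumptions from this thread).
                SECONDS_PER_HOUR = 3600

                # Tokens heard in a 7-hour school day at ~5 tokens/sec of speech
                tokens_per_day = 7 * SECONDS_PER_HOUR * 5
                print(f"tokens/day = {tokens_per_day:,}")    # 126,000 -> "about 120k"

                # Energy units: 1 Wh = 3600 J, 1 food Calorie (kcal) = 4184 J
                kcal_per_wh = 3600 / 4184
                print(f"1 Wh = {kcal_per_wh:.2f} kcal")      # 0.86 kcal, not 860
                ```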