Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

  • eestileib@sh.itjust.works · 1 year ago

    Deep Mind is actually delivering shit like an estimate of the entire human proteome structure and creating the transcendently greatest go player of all time.

    Meanwhile these chucklefucks are using the same electricity demand as Belgium to replicate a math solver that could probably be assigned as a half-term project in an undergraduate class, and are pissing themselves about threatening humanity.

    The Valley has lost its goddamn mind.

    • AngrilyEatingMuffins@kbin.social · 1 year ago

      The computer self corrected based on its understanding of math principles that it learned through text. It’s not about the math. It used reason.

      The computer had a thought. A rudimentary one, yes. But an actual thought.

      I don’t really know what to say if you don’t see why that’s an amazing discovery.

      Also, the Belgium figure assumed usage kept growing at its current rate while the technology stayed the same. The technology has already improved by two generations since that paper was written. It’s a crappy talking point and nothing else.

    • Redex@lemmy.world · 1 year ago

      You are missing the very crucial part about how this is generalised. That’s like saying we don’t need to teach math to people anymore, we have calculators now. The AI isn’t too capable currently, but dismissing it would be like dismissing consumer PCs, because what are people gonna do with computers?

    • sincle354@kbin.social · 1 year ago

      Valley bullshit aside, I do have to defend the expensive exploration of the generalized AI space purely because it’s embarrassingly parallel. That is, it just gets so much better the more money and resources you throw at it. It couldn’t solve math without a few million dollars’ worth of supercomputer training time. We didn’t know it would create valid VHDL-to-csv-to-VBA scripts, but I got phind(.com) to make me one. And I certainly can’t tell Wolfram Alpha to package the math solution it generated as a JavaScript function.
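      The “embarrassingly parallel” claim above can be sketched concretely: each run is independent, so workers need no coordination and the parallel results match the serial ones exactly. Everything here (the evaluate_candidate name, the toy scoring function) is an illustrative stand-in, not anything any lab actually runs.

```python
# Minimal sketch of an embarrassingly parallel workload: every task is
# independent, so tasks can be farmed out with zero coordination and the
# results are identical to a serial run. Names are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def evaluate_candidate(seed: int) -> int:
    """Stand-in for one independent training/evaluation run."""
    return (seed * 2654435761) % 97  # arbitrary pure function of the seed

seeds = range(1000)

# Serial baseline.
serial = [evaluate_candidate(s) for s in seeds]

# Parallel version: same inputs, no shared state, order preserved by map().
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(evaluate_candidate, seeds))

assert serial == parallel  # adding workers changes speed, not output
```

The point the comment makes is exactly this shape: because no task waits on any other, throughput scales with however much hardware you buy.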

    • P03 Locke@lemmy.dbzer0.com · 1 year ago

      Deep Mind is actually delivering shit like an estimate of the entire human proteome structure and creating the transcendently greatest go player of all time.

      Not to mention the huge advances in Chess AI. LeelaChessZero is the open-source implementation of the original AlphaZero idea Google came out with, and is rivaling Stockfish 15. Meanwhile, Torch is a new AI being developed that is now kicking Stockfish’s ass.

      Grandmasters and novices alike are learning a lot from chess AI, figuring out better ways to improve themselves, either by playing the bots outright, using them for post-game analysis, or watching two bots play and seeing the kinds of creative strategies they come up with.

      • Sparking@lemm.ee · 1 year ago

        Not really. The implementation of LLMs is mostly the same; they just run continuously on a per-word (token) basis.

    • NounsAndWords@lemmy.world · 1 year ago

      A calculator does most of it too, but this is an LLM that can do lots of other things as well, which is a big piece of the “general” part of AGI.

      Richard Feynman said, “You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, ‘How did he do it? He must be a genius!’”

      We are close to a point where a computer that can hold all the problems in its “head” can test all of them against all of the tricks. I don’t know what math problems that starts to solve but I bet a few of them would be applicable to cryptology.

      But then again, I have no idea what I’m talking about and just making bold guesses based on close to no information.
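      The “every trick against every problem” idea above is just a cross product, which a toy sketch makes concrete. The problems and tricks below are trivial stand-ins (lists of numbers and predicates), purely to show the shape of the loop.

```python
# Toy version of the Feynman heuristic: keep all open problems in memory
# and, whenever a new "trick" appears, test it against every problem.
problems = {
    "even_sum": [2, 4, 6],
    "mixed":    [1, 2, 3],
    "odd_only": [1, 3, 5],
}

def trick_all_even(xs):
    """A 'trick' is any predicate that might crack a problem."""
    return all(x % 2 == 0 for x in xs)

def trick_sum_over_5(xs):
    return sum(xs) > 5

tricks = {"all_even": trick_all_even, "sum_over_5": trick_sum_over_5}

# The cross product: every trick applied to every problem; keep the hits.
hits = {(t, p) for t, f in tricks.items()
        for p, xs in problems.items() if f(xs)}

print(sorted(hits))
```

A machine that can hold every problem “in its head” is just one that can afford to run this loop at scale.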

      • malijaffri@feddit.ch · 1 year ago

        Even so, I think I’ll hold off on calling anything AGI until it can at least solve simple calculus problems with a 90% success rate, reproducibly. That seems like a fair criterion to me.
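        A hedged sketch of how that 90% bar could be checked reproducibly: grade a solver’s claimed derivatives with finite-difference spot checks at a fixed random seed. The cases list below stands in for a hypothetical model’s answers; none of this is a real benchmark.

```python
# Sketch of a reproducible "90% success rate" grader for derivative
# problems: compare each claimed derivative g against a central
# finite difference of f at seeded random sample points.
import random

def numeric_match(f, g, trials=50, eps=1e-5, tol=1e-3, seed=0):
    """True if g(x) approximates f'(x) at every sampled x."""
    rng = random.Random(seed)  # fixed seed -> reproducible grading
    for _ in range(trials):
        x = rng.uniform(-2.0, 2.0)
        deriv = (f(x + eps) - f(x - eps)) / (2 * eps)
        if abs(deriv - g(x)) > tol:
            return False
    return True

# (problem f, solver's claimed derivative g); the last answer is wrong.
cases = [
    (lambda x: x**2,     lambda x: 2 * x),
    (lambda x: x**3 - x, lambda x: 3 * x**2 - 1),
    (lambda x: 5 * x,    lambda x: 5.0),
    (lambda x: x**2 + x, lambda x: 2 * x),  # misses the +1 term
]

score = sum(numeric_match(f, g) for f, g in cases) / len(cases)
print(f"success rate: {score:.0%}")  # 75% for these toy cases
```

The fixed seed is what makes the “reproducibly” part hold: rerunning the grader always samples the same points and returns the same rate.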

  • NotTheOnlyGamer@kbin.social · 1 year ago

    I think it’s time to shut it down, hard. That’s the start of something that will not end well for human beings.

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

    The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing.

    According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board’s actions.

    The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters.

    Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company.

    Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.


    The original article contains 293 words, the summary contains 169 words. Saved 42%. I’m a bot and I’m open source!