Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

  • kaffiene@lemmy.world · 1 year ago

    LLMs are definitely not intelligent. If you understand how they work, you’ll realise why that is. LLMs reflect the intelligence in the work they are trained on. No more, no less.
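    To illustrate "how they work": a toy sketch of the core training objective, next-token prediction (hypothetical minimal code, not any real model's implementation; it assumes PyTorch and a `model` and `optimizer` you supply):

    ```python
    # Toy sketch of the LLM training loop: the model is only ever rewarded
    # for predicting the next token of its training corpus, so whatever
    # "intelligence" the corpus contains is what gets mirrored back.
    import torch.nn.functional as F

    def training_step(model, optimizer, tokens):
        # tokens: (batch, seq_len) integer IDs drawn from the training corpus
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        logits = model(inputs)                    # (batch, seq_len-1, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),  # flatten tokens for the loss
            targets.reshape(-1),
        )
        optimizer.zero_grad()
        loss.backward()                           # reduce next-token prediction error
        optimizer.step()
        return loss.item()
    ```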

    • SlopppyEngineer@lemmy.world · 1 year ago

      That’s especially fun when you ask the same question in two different languages and get different answers, or even outright gibberish in the non-English one. The model clearly has more training data in English than in many other languages.

    • Spzi@lemm.ee · 1 year ago

      That very much depends on what you define as “intelligent”. We lack a clear definition.

      I agree: These early generations of specific AIs are clearly not on the same level as human intelligence.

      And still, we can already have more intelligent conversations with them than with most humans.

      It’s not a fair comparison, though. It’s as if we compared the language region of a toddler’s brain with the complete brain of an adult. Let’s see what the next few years bring.

      I’m not making that point, just mentioning it can be made on an academic level: there’s a paper about the surprising emergent capabilities of GPT-4, titled “Sparks of AGI”.

    • SkepticalButOpenMinded@lemmy.ca · 1 year ago

      That might seem plausible until you read deeply into the latest cognitive science. The growing consensus nowadays centres on the “predictive coding” theory of cognition, the idea that human cognition also works by minimizing prediction error: we carry models in our brains that reflect the input we’ve been trained on. I don’t think anyone who understands both human cognition and LLMs can yet confidently say that LLMs are, or are not, intelligent.
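      For the curious, a toy illustration of that “minimize prediction error” loop (deliberately minimal, with made-up numbers; real predictive-coding models are hierarchical and far richer):

      ```python
      # Toy predictive-coding loop: an internal model predicts the incoming
      # signal, compares it with what actually arrives, and updates itself
      # to shrink the prediction error -- structurally the same objective an
      # LLM is trained on, just over sensory input instead of tokens.
      import random

      belief = 0.0        # the model's current estimate of the signal
      learning_rate = 0.1

      for step in range(50):
          signal = 5.0 + random.gauss(0, 0.5)         # noisy "sensory" input around 5.0
          prediction_error = signal - belief          # surprise: what the model got wrong
          belief += learning_rate * prediction_error  # update to reduce future surprise

      print(f"final belief: {belief:.2f}")            # settles near the true signal, 5.0
      ```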