• onion@feddit.de · 5 months ago

    The AI is confidently wrong; that’s the whole problem. If there were an easy way to know when it could be wrong, we wouldn’t be having this discussion.

    • Ashyr@sh.itjust.works · 5 months ago

      While it can’t “know” its own confidence level, it can distinguish between general knowledge (12” in 1’) and specialized knowledge that requires supporting sources.

      At one point, I had a ChatGPT memory designed to make it automatically provide sources for specialized knowledge, and it did a pretty good job.
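
      Not the commenter’s actual setup, but a minimal sketch of the same idea using the OpenAI Python client instead of a ChatGPT memory: a standing system instruction that tells the model to cite sources for specialized claims and answer plainly for general knowledge. The prompt wording, model name, and sample question are all illustrative assumptions.

      ```python
      # Sketch: ask the model to attach sources to specialized claims,
      # mirroring the "memory" described above as a system prompt.
      from openai import OpenAI

      client = OpenAI()  # expects OPENAI_API_KEY in the environment

      SYSTEM_PROMPT = (
          "If a statement is common knowledge (e.g. 12 inches in 1 foot), answer plainly. "
          "If a statement is specialized knowledge, name a supporting source, "
          "and say so explicitly when you cannot identify one."
      )

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model choice, not the commenter's
          messages=[
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": "What is the half-life of carbon-14?"},
          ],
      )
      print(response.choices[0].message.content)
      ```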