• DessertStorms@kbin.social · 1 year ago

    I’m pretty sure it’d be a way nicer experience for the customers.

    Lmfao, in what universe? As if trained humans reading off a script they’re not allowed to deviate from isn’t frustrating enough, imagine doing that with a bot that doesn’t even understand what frustration is.

      • cley_faye@lemmy.world · 1 year ago

        de facto instant reply

        Not with a good enough model, no. Not without some ridiculous expense, which is not what this is about.

        if trained right, way more knowledgeable than the human counterparts

        Support is not only a question of knowledge. Sure, in some support services they’re basically useless, but that’s not necessarily the humans’ fault; lack of training and lack of means of action are also part of it. And that’s not going away by replacing the “human” part of the equation.

        At best, the first few iterations will be faster at brushing you off, and further down the line, once you hit something outside the expected range of issues, it’ll either spout nonsense or just make you go in circles until you’re put through to someone actually able to do something.

        Both “properly training people” and “properly training an AI model” cost money, and this is all about cutting costs, not improving user experience. You can bet we’ll see LLMs better trained to politely turn people away way before they’re able to handle random unexpected stuff.

        • testfactor@lemmy.world · 1 year ago

          While properly training a model does take a lot of money, it’s probably a lot less money than paying 1.6 million people for any number of years.