• Xeelee@kbin.social · 1 year ago

    Interesting. Is there an evidence-based way to look at the trolley problem too, or is it just too removed from reality for that to be possible? I always feel the trolley problem gets far too much attention relative to its actual applicability.

    • Uriel-238@lemmy.fmhy.ml · 1 year ago

      At its core, the Trolley Problem is a paradox of deontological ethics, that is, ethics based on codes and creeds.

      Note that Batman is always framed as having to choose ("You can save Robin, or these five innocents, but you don't have time for both") and then he usually finds a third option. (He never has to kill Robin to save anyone and then process breaking his code.) It'd be super neat to see Batman in a situation where he has to make a harsh choice and to watch how he processes it. Comics are not often that brave.

      Note that deontological ethicists struggle with lying to Nazi Jew-hunters to protect Jewish refugees ("Once upon a time in Nazi-occupied France…"). Kant, writing long before the German Reich, confronted the murderer at the door, but his justification for going ahead and directing the killer to his victim didn't feel entirely sound to his contemporaries.

      But the Trolley Problem is less about a right answer and more about how the answer changes with variations. Most people find it easy enough to pull the lever in the basic scenario, but will find it more challenging to, say:

      - Carve up a stranger to harvest his organs so that five transplant patients can live

      - Take up the offer of militants in an undeveloped country to spare a band of refugees from summary execution: if you will personally choose and kill one of them, they'll let the rest go free

      The scenarios are meant to illustrate that our moral choices are informed by how we feel rather than by any formula, code, or ideology. Only when the stakes get super high (e.g. averting nuclear holocaust, or considering eating our dead at Donner Pass) do we actually invoke intellectual analysis to make moral decisions.

      Edit: Completed a thought. Fixed markup.

      • Fushuan [he/him]@lemm.ee · 1 year ago

        The way I was taught it, the trolley problem was used to illustrate the vulnerabilities of AI decision making when the rival's strategy isn't considered. The issue is that, regardless of what the rival does, betraying is more profitable on average.

        If the rival stays silent, betraying is best for us, and if they betray us, betraying is again the best move; looking at the average scores, betraying always wins. However, if both players cooperate by staying silent, that is the best outcome overall, but recognizing this requires a more sophisticated AI.
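
        To make that payoff reasoning concrete, here is a minimal sketch using an assumed, textbook-style payoff matrix (the specific point values are not from the comment; only their ordering matters). It shows that betraying scores higher whichever move the rival makes, even though mutual silence beats mutual betrayal for the pair as a whole.

        ```python
        # Assumed prisoner's-dilemma payoffs; higher numbers are better.
        # payoffs[(my_move, rival_move)] = (my_score, rival_score)
        payoffs = {
            ("silent", "silent"): (3, 3),   # mutual cooperation
            ("silent", "betray"): (0, 5),   # I stay silent, the rival betrays me
            ("betray", "silent"): (5, 0),   # I betray, the rival stays silent
            ("betray", "betray"): (1, 1),   # mutual betrayal
        }

        # Compare my score for each of my moves, holding the rival's move fixed.
        for rival_move in ("silent", "betray"):
            my_silent = payoffs[("silent", rival_move)][0]
            my_betray = payoffs[("betray", rival_move)][0]
            print(f"rival {rival_move}: silent={my_silent}, betray={my_betray}")

        # Output:
        #   rival silent: silent=3, betray=5
        #   rival betray: silent=0, betray=1
        # Betraying dominates in both cases, yet mutual silence (3, 3) beats
        # mutual betrayal (1, 1) for both players, which is the dilemma.
        ```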

        • Uriel-238@lemmy.fmhy.ml · 1 year ago

          The trolley problem and the prisoner’s dilemma are both thought experiments from before we were talking about AI programming. Yes, some people who are not familiar with AI try to contemplate how these experiments might inform AI. German officials have suggested that vehicle AI should perhaps be regulated to regard all lives as equal when choosing whom to hit, but such code would inform so few situations that it’s a waste of time and effort to focus on it.

          As for the prisoner’s dilemma, the question is not which is the right choice or how to beat it. Consider the US investigation into Trump’s retention of national security material, the investigation into the January 6th raid on the US Capitol, and the related efforts by Trump to retain power despite losing the election: we’re watching plenty of incidents in which people choose to betray their fellow conspirators for personal leniency.

          But what is curious, and the reason the Prisoner’s Dilemma is regarded as a paradox, is that most humans will not betray their fellow heisters, even when it benefits them more to do so than not. The current leading theory as to why has to do with an evolved sense of loyalty to one’s tribemates. It served us when we were small predators fighting off larger predators: when a buddy was cornered, rather than flee to save our own skin, we have an instinct to harass the attacker and save our buddy, even when it risks our own lives.

          Infamously, Frank Gusenberg, a contract killer and victim of the St. Valentine’s Day Massacre, had been shot fourteen times in the incident, and yet when asked who shot him, he replied, “Nobody shot me.” He died from his wounds shortly thereafter. It’s a common problem when investigating inter-gang crimes that witnesses won’t even betray members of rival gangs, let alone members of their own crew.