I want to let people know why I strictly refuse to use AI in anything I do, without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and adopt the same stance.

Any sources I find to cite for my viewpoint are either so mild they could pass for AI-generated themselves or are filled with the author’s extremist views. I want to explain the situation in an objective manner that is simple to understand, yet alarming enough for them to take action.

  • solomonschuler@lemmy.zip · 5 minutes ago

    I just explained to a friend of mine why I don’t use AI. My hatred of AI stems from people making it seem sentient, from these companies’ business models, and, of course, from privacy.

    First off, to clear up a misconception: AI is not a sentient being. It cannot think critically, and it is incapable of forming thoughts beyond the data it was trained on. Technically speaking, an LLM behaves like a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere ~40 GB. When you query it, it doesn’t decompress those petabytes back; it reconstructs a response from the patterns it retained during training.
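
    As a rough sanity check on that ratio, here’s the arithmetic (both figures are illustrative assumptions, not published numbers):

```python
# Back-of-the-envelope check of the "lossy compression" framing.
# Both figures are assumptions for illustration, not official numbers.
training_corpus_bytes = 2 * 10**15   # assume ~2 PB of raw training text
model_size_bytes = 40 * 10**9        # the ~40 GB model file from above

ratio = training_corpus_bytes / model_size_bytes
print(f"compression ratio ~ {ratio:,.0f}:1")  # -> ~50,000:1

# Lossless text compression manages roughly 3:1 to 10:1, so a ratio this
# extreme is only possible by discarding information: the model keeps
# statistical patterns, not the documents themselves.
```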

    There are several reasons I can think of why an LLM does poorly at its job. Remember, LLMs are trained almost exclusively on the internet, and as large as the internet is, it doesn’t have everything: your codebase with a skip list implementation is probably not identical to any skip list on the internet. Suppose you have a logic error in that implementation and you ask ChatGPT “what’s the issue with my codebase?” It will notice that the code you provided differs from what it was trained on and will actively try to “fix” it toward what it has seen, digging you into a deeper rabbit hole than the one you started in.

    On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will essentially never be correct unless the case is heavily documented (e.g., the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they were trained on. The model will produce a solution, but it will consistently fail.
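
    For contrast, the task itself is mechanical; a deterministic program gets it right every time. A minimal sketch (the function and the example minterms are mine, just for illustration):

```python
# Derive a truth table from a sum-of-minterms specification.
# A minterm index's binary expansion gives the input row where output = 1.

def truth_table(num_vars: int, minterms: set[int]) -> None:
    header = " ".join(chr(ord("A") + i) for i in range(num_vars))
    print(f"{header} | F")
    for row in range(2 ** num_vars):
        bits = format(row, f"0{num_vars}b")
        output = 1 if row in minterms else 0
        print(f"{' '.join(bits)} | {output}")

# Example: F(A, B, C) = sum of minterms m(1, 2, 4, 7) -- a 3-input XOR.
truth_table(3, {1, 2, 4, 7})
```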

    This leads me to my first reason for refusing to use LLMs: they fabricate a great deal of information and present it as true. When I was using ChatGPT to fix my codebases or work through problems like these, it sowed a lot of doubt in the knowledge and intelligence I’d built up over my years in college.

    The second reason I dislike LLMs is the business model of these companies. These tech billionaires build a bubble of delusion and fearmongering to keep their user base hooked. Headlines like “ChatGPT-5 is terrifying” or “OpenAI has fired 70,000 employees over AI improvements” work in their favor: people see the headline and reinvest money into the company, and employees whose heads are up these tech giants’ asses will of course keep working with OpenAI. It is a fucking money-making loop for these giants, built on exactly how far up their employers’ asses those employees are. If I ever accept a job at OpenAI, I want my family to put me in a goddamn psych ward; that’s how much I frown on these unethical practices.

    I often pose this to people who don’t believe any of it, and it’s becoming more and more valid a point in this fucked-up mess: if AI companies say they’ve fired X employees because of “AI improvements,” why hasn’t this been adopted by defense companies, contractors, or other parts of industry? It’s a rhetorical question, but it leads them to a better conclusion than “those X employees really were fired because of AI improvements.”


  • SoftestSapphic@lemmy.world · 36 minutes ago

    There isn’t a way to use AI in good faith.

    Either you are ignorant of the tech and its negative effects, or you aren’t.

  • PeriodicallyPedantic@lemmy.ca · 2 hours ago

    Depending on how hardcore you are about it, you can’t.

    Are you getting up in people’s face to tell them not to use it, or are you answering why you choose not to use it?
    Are you extremely strict in your adherence? Or are you more forgiving based on the application or user?

    There are two general points I like to make:

    1. Big companies are using it to steal the work of the powerless, en masse. It is making copyright strictly the tool of the powerful to use against the powerless.
    2. If these companies aren’t lying and will actually deliver what they say they’re going to deliver in the timeline they stated, then it’s going to cause mass unemployment, because even if (IF) this creates new jobs for every job it destroys, the market can’t move fast enough to invent these new careers in the timeline described. So either they’re lying or they’re going to cause great suffering, and a massive increase in wealth inequality.

    Energy usage honestly never seems to be a concern for people, so I don’t even try to make that argument.

    • AA5B@lemmy.world · 2 hours ago

      While I understand that new AI data centers are increasing power usage, that mostly highlights an existing problem: decades of insufficient investment in infrastructure.

      You can’t get enough power to run a new data center? Where were you when I complained we needed additional transmission lines to keep bringing more renewable energy online? Where were you when I wanted the huge infrastructure project to import huge amounts of Canadian hydro? I bet you wish you had that now.

      • Frezik@lemmy.blahaj.zone · 53 minutes ago

        Where were you when I complained we needed additional transmission lines to keep bringing more renewable energy online?

        I’ve strongly argued for this in the past.

        All these tech bros with AI datacenters are putting their spare couch change together to build HVDC lines across the continent, right?

  • AA5B@lemmy.world · 2 hours ago

    Maybe part of the answer is not to be so strictly against it. AI is starting to be used in a variety of tools, and not all your criticisms are valid for all of them. Being able to see where it is useful, and maybe even desirable, helps show that you’re not against the technology per se.

    For example, Zoom has an AI tool that can generate meeting summaries. It’s pretty accurate with discussions, although it sometimes gets confused about who said what. That AI likely uses far less power and may not have been trained on copyrighted content.

  • canofcam@lemmy.world · 13 hours ago

    A discussion in good faith means treating the person you are speaking to with respect. It means not having ulterior motives. If you are having the discussion with the explicit purpose of changing their minds or, in your words, “alarming them to take action” then that is by default a bad faith discussion.

    If you want to discuss with a pro-AI person in good faith, you HAVE to be open to changing your own mind. That is the whole point of a good-faith discussion. Instead, you already believe you are correct, and you want to enter these discussions with objective ammunition to defeat somebody.

    How do you actually discuss in good faith? You ask for their opinions and are open to them, then you share your own in a respectful manner. You aren’t trying to ‘win’; you are just trying to understand and, in turn, help others understand your own POV.

    • krooklochurm@lemmy.ca · 4 hours ago

      Chiming in here:

      Most of the arguments against AI - the most common ones being plagiarism and the ecological impact - are not things the people making those arguments give a flying fuck about in any other area.

      Having issues with the material the model is trained on isn’t an issue with AI - it’s an issue with unethical training practices, copyright law, capitalism. These are all valid complaints, by the way, but they have nothing to do with the underlying technology, only with the way it has been developed.

      For the ecological side of things, sure, AI uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

      I’ve never heard anyone say “we need fewer data centers” until AI came along. What, all the other data centers are totally fine, but the ones being used for AI are evil? If you have an issue with the drastically increased power consumption of AI, you should be able to argue a stance that covers all data centers - assuming it’s something you give a fuck about. Which you don’t.

      If a model, once trained, is run entirely locally on someone’s personal PC, do you have an issue with the ecological footprint of that? The power has already been spent. The model is trained.
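
      Back-of-the-envelope, with assumed numbers (the wattage and generation time are guesses for a typical consumer setup):

```python
# Marginal energy for one locally-run response, using assumed figures.
gpu_watts = 300          # assume a consumer GPU drawing ~300 W under load
seconds_per_reply = 10   # assume ~10 s of generation per response

watt_hours = gpu_watts * seconds_per_reply / 3600
print(f"~{watt_hours:.2f} Wh per reply")  # -> ~0.83 Wh, a few minutes of laptop use

# The training energy is a sunk, one-time cost; local use only adds
# this per-query draw on top of it.
```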

      It’s absolutely valid to take issue with the increased power consumption used to train AI models and everything else, but these are all objections to the HOW - not the ontological arguments against the tech that people think they are.

      It doesn’t make any of these criticisms invalid, but if you refuse to understand the nuance at work then you aren’t arguing in good faith.

      If you enslave children to build a house, the issue isn’t that you’re building a house, and it doesn’t mean houses are evil; the issue is that YOU’RE ENSLAVING CHILDREN.

      Like any complicated topic, there’s nuance to it, and anyone who refuses to engage with that and instead relies on dogmatic thinking isn’t being intellectually honest.

      • Frezik@lemmy.blahaj.zone · 3 hours ago

        I’ve never heard anyone say “we need fewer data centers” until AI came along. What, all the other data centers are totally fine, but the ones being used for AI are evil? If you have an issue with the drastically increased power consumption of AI, you should be able to argue a stance that covers all data centers - assuming it’s something you give a fuck about. Which you don’t.

        AI data centers draw substantially more power than regular ones. Nobody was talking about spinning up nuclear reactors, or buying out the next several years of turbine manufacturing, for non-AI data centers. Hell, Microsoft gave money to a fusion startup to build a reactor - they’ve already broken ground - but it’s far from proven that they can actually make net power with fusion. They think they can supply power by 2028. This is delusion, driven by an impossible goal of reaching AGI with current models.

        Your whole post misses the difference in scale involved. GPU power consumption isn’t comparable to standard web servers at all.
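
        Rough numbers (assumptions, but the right ballpark) make that gap concrete:

```python
# Order-of-magnitude comparison: typical web server vs. a GPU training node.
# Both wattages are assumed figures for illustration.
web_server_watts = 400    # assume a typical 1U web server under load
gpu_node_watts = 10_000   # assume an 8-GPU training node, incl. cooling overhead

print(f"one GPU node ~ {gpu_node_watts / web_server_watts:.0f}x a web server")
# A training cluster runs thousands of such nodes flat-out for months,
# which is why AI datacenters get planned in gigawatts, not megawatts.
```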

      • aesthelete@lemmy.world · 3 hours ago

        For the ecological side of things, sure, AI uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

        There are many, many differences between AI data centers and ones that don’t have to run $500k GPU clusters. The latter require a lot less power, a lot less space, and a lot less cooling.

        Also, you’re implying here that your debate opponents are being intellectually dishonest while using the same weaselly arguments that people who argue in bad faith constantly employ.

        • krooklochurm@lemmy.ca · 3 hours ago

          The fact that a GPU data center uses more power than one without GPUs does not matter at all.

          You’re completely missing the point.

          The sum total of power usage of all non-AI data centers is an ecological issue whether AI data centers use more power, the same, or less.

          All data centers have an ecological footprint, all use shitloads of power, and it doesn’t matter whether one kind is worse than another.

          This is exactly what I was trying to point out in my comment.

          If I take a shit in a canoe, that’s a problem. Not an existential one, but a problem. And if I then dump ten more pounds of shit in the canoe, it doesn’t mean the first pound goes away.

          The first pound is still in the canoe. It doesn’t stop being an issue because more shit arrived on top of it.

          You can have an issue with shit in the canoe on principle, which is fine. Then it’s all problematic.

          But if you’re fine with one pound of shit in the canoe, and fine with three, but not okay with eleven, then your issue isn’t shit in the canoe, it’s the amount of shit in the canoe. Those are distinct issues.

          And it’s NOT intellectually honest to be okay with the first pound of shit in the canoe while condemning the rest. You can’t point at the new shit and say “this is abominable!” while ignoring the shit that was already there. Because it’s all shit.

          • aesthelete@lemmy.world · 2 hours ago

            And it’s NOT intellectually honest to be okay with the first pound of shit in the canoe while condemning the rest. You can’t point at the new shit and say “this is abominable!” while ignoring the shit that was already there. Because it’s all shit.

            Sure, because that’s a terrible analogy.

            Gen AI data centers don’t just require more power and space; they require so much more power and space that they’re driving up energy costs in the surrounding areas and are becoming nearly impossible to build.

            People didn’t randomly become “anti-data center”. Many of them are watching their energy bills go up. I’m watching as they talk about building new coal plants to power “gigawatt” data centers.

            And it’s all so you can have more fucking chat bots.

          • Frezik@lemmy.blahaj.zone · 2 hours ago

            When a family in the global south uses coal to cook their food, they release CO2. When a billionaire flies around the continent on a private jet, they also release CO2.

            Do you consider the two to be equivalent in need or output?

    • 🔍🦘🛎@lemmy.world · 8 hours ago

      Once you realize you can change your opinion about something after you learn about it, it’s like a super power. So many people only have the goal of proving themselves right or safeguarding their ego.

      It’s okay to admit a mistake. It’s normal to be wrong about things.

    • krooklochurm@lemmy.ca · 4 hours ago

      If nothing is taken from anyone and no profit is made from a model trained on publicly accessible data - can you elaborate on how that constitutes theft?

      Actually - if 100% copyrighted content is used to train a model, which is released for free and never monetized - is that theft?

      • Treczoks@lemmy.world · 4 minutes ago

        Publicly accessible does not mean free of copyright. Yes, copyright law in its current form sucks and is in dire need of reform, preferably back to something close to the original duration (14+14 years). But as the law currently stands, those LLM parrots are built on illegally acquired data.

      • Frezik@lemmy.blahaj.zone · 3 hours ago

        People downloading stuff for personal use vs making money off of it are not the same at all. We don’t tend to condone people selling bootleg DVDs, either.

      • Katana314@lemmy.world · 2 hours ago

        Publicly accessible does not mean publicly reusable. You can find a lot of classic songs on YouTube and in libraries. You can’t edit them into your Hollywood movie without paying royalties.

        Showing them to an AI so it can repeat the melody with 90% similarity is not a free cheat to get around that.

        This is in part why the GPL and other licenses exist. Linus didn’t just put up Linux and say “Do whatever!” He explicitly said “You MAY copy and modify this work, but the result must keep this license and its attribution, and you may NOT close off the transformed work under your own terms.” That is a critical part of many free licenses, to ensure people don’t abuse them.

        • krooklochurm@lemmy.ca · 2 hours ago

          If nothing is taken from anyone and no profit is made from a model trained on publicly accessible data - can you elaborate on how that constitutes theft?

          Actually - if 100% copyrighted content is used to train a model, which is released for free and never monetized - is that theft?

    • krooklochurm@lemmy.ca · 4 hours ago

      Cool. So you’re in support of developing a model that financially compensates all of the rights holders whose work is in its training data, then?

      • Katana314@lemmy.world · 2 hours ago

        File this one with the girlfriend’s “would you still love me if I was a worm” philosophy. It’s so far outside of reality that it’s not worth considering.

    • hansolo@lemmy.today · 12 hours ago

      You mean commercial LLMs.

      AI as a term includes machine learning systems that go back decades.

  • captainlezbian@lemmy.world · 15 hours ago

    I want my creations to be precisely what I intend to create. Generative AI makes it easier to make something, at the expense of building skills and seeing their results.

  • FlashMobOfOne@lemmy.world · 20 hours ago

    Very simple.

    It’s imprecise, and for your work you want to be sure the product you deliver is top quality.

  • MourningDove@lemmy.zip · 21 hours ago

    Just do what I do and say that you think it’s hot garbage that dehumanizes everything and everyone that uses it.

    Then go on to not give a shit what they think about it.

  • Jhex@lemmy.world · 23 hours ago

    I’m just honest about it… “I don’t find it useful enough and do find it too harmful for the environment and society to use it”

    • runner_g@lemmy.blahaj.zone · 22 hours ago

      And then you spend longer verifying the information it’s given you than you would have spent just looking it up to begin with.

  • LuigiMaoFrance@lemmy.ml · 1 day ago

    If you want to explain your reasons ‘in good faith’ you should be honest, and not adopt other people’s reasons to argue the position you’ve already assumed.

    • aesthelete@lemmy.world · 3 hours ago

      Yeah the wording on this is wrong. The closest adjacent (honest) question would be “how can I appear to be arguing in good faith when I have a predetermined position on this technology?”.

      EDIT:

      I don’t even like GenAI myself and that’s how this comes off.

      If you’re looking for reasons: (1) sustainability / ecology, (2) market concentration, (3) intellectual theft, (4) mediocre output, (5) lack of guardrails, (6) vendor lock-in, (7) appears to drive some people insane, (8) drives down the quality of the Internet overall, (9) de-skills the people that use it, (10) produces probabilistic outputs and yet is used in applications that require deterministic outputs…I could go on for a while.
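
      On (10), here’s a toy sketch of why the same prompt can come back different on every run (the vocabulary and probabilities are invented for the example):

```python
import random

# Toy next-token sampling: same context, different runs, different outputs.
# The vocabulary and probabilities are invented for illustration.
vocab = ["4", "5", "four", "IV"]
probs = [0.85, 0.05, 0.07, 0.03]  # model's distribution for "2 + 2 ="

for run in range(5):
    print(random.choices(vocab, weights=probs)[0])
# Most runs print "4", but nothing guarantees it - a problem wherever
# the caller needs the same answer every single time.
```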

    • MajorasTerribleFate@lemmy.zip · 22 hours ago

      It’s possible their intent is to solicit more concise, well-packaged versions of their existing position(s) that others have spent time honing.

      • dream_weasel@sh.itjust.works · 10 hours ago

        Ah yes. AI is just dressed up exploitation and thievery of other people’s ideas; a mashed up and uncreative slop. By the way, can I just aggregate prepackaged ideas about it from strangers to make my own argument? I don’t want to spend time crafting or refining it myself.

        Pretty wild position if you ask me.

        • MajorasTerribleFate@lemmy.zip · 4 hours ago

          The main difference is that OP would be asking for this, whereas AI just took it without permission. Humanity has always had wiser folks who can package ideas, and some folks who agree with the message but don’t have the same skill to craft their own version. Division of labor has value :)

          • dream_weasel@sh.itjust.works · 3 hours ago

            Now hang on a minute lol. AI is just stolen garbage, but obviously we expect that the “wiser folks” here in the agora of Lemmy will give reasonable / acceptable answers? This is like the McDonald’s of philosophy here.

            So I see here two conjectures:

            1. AI is bad because it is creating a mashup of information (which may or may not be accurate) from sources it took without permission.
            2. People are within their rights to outsource the articulation of their opinions to experts (see also “wiser folks”). After all, this “division of labor” has always existed.

            1q.) So let’s say I take my 2-GPU workhorse PC and train a basic language model (without obvious lines of reasoning, guardrails, multiple languages, or anything like that) on a library of articles and professional documents I own or control. Then, by way of something like retrieval-augmented generation (or similar, idc), it gives me a well-articulated argument for why AI is bad. Is that reasonable? I would think this is a BETTER setup than 2q below (a toy sketch of the retrieval part is at the end of this comment).

            2q.) In what way is mining the totally anonymous, unverifiable posts of literally any person with a keyboard on Lemmy MORE valuable than a reasonable-sounding argument from any generative AI, or than just pressing the middle button on your phone over and over? This sounds totally stupid. “Division of labor” has probably made all of us dumber. I (coincidentally) build language models as part of my job, and somewhere in this thread is an “AI expert” who has read one newspaper article and is “training” on information from other Lemmy comments.

            At the same time we say “Holy shit, AI bad, AI hallucinates, AI lies!”, we’re also saying it’s totally cool and reasonable to shout into the internet box where rando people can say anything they want - and that’s better?

            I mean, I do like the smug argument and the smiley face, but the premise “Gen AI sucks, so hone your argument against AI using ask-fucking-lemmy” borders on content for c/selfawarewolves. It’s so ridiculous I half expect you’re just trolling the thread.
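
            And to be concrete about the 1q setup, which is usually called retrieval-augmented generation (RAG): the retrieval half really is simple. A toy sketch, where the documents are made up and local_llm() is a hypothetical stand-in for whatever model you host yourself:

```python
# Minimal retrieval-augmented generation over documents you own.
# Hypothetical sketch: local_llm() stands in for a locally-hosted model,
# and the documents are invented; everything else is plain Python.
import math
from collections import Counter

documents = [
    "Our 2021 whitepaper: skip lists give O(log n) expected search time.",
    "Internal style guide: prefer explicit error handling over exceptions.",
    "Meeting notes: the adder/subtractor truth table was verified by hand.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real setup would use a vector model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank owned documents by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

query = "what is the search complexity of a skip list?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# response = local_llm(prompt)   # hypothetical call to your own model
print(prompt)
```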

            • MajorasTerribleFate@lemmy.zip · 17 minutes ago

              1. AI is bad because, in its current state, it takes up way too many resources and contributes heavily to climate change. On top of that, its current output is often unreliable and/or displaces human labor.

              2. As with the use of AI, someone asking other people for information should verify what they are seeing. Assuming OP already has their beliefs more or less set, they’re potentially just looking for some more well-crafted arrangement of the ideas, and they have a preference to ask humans for that rather than AI.

              Re: division of labor, it likely contributes to people having less broad and more specialized knowledge. The benefit, however, is that we don’t need everyone to learn every single skill needed for self-sufficient living. I’d rather my surgeon be a specialist in surgery, not someone spending much of their time growing their own food, maintaining their home and clothing, and so on.

  • s@piefed.world · 1 day ago

    “It’s a machine made to bullshit. It sounds confident and it’s right enough of the time that it tricks people into not questioning when it is completely wrong and has just wholly made something up to appease the querent.”