• gamer@lemm.ee · 4 hours ago

    Calculators made mental math obsolete. GPS apps made people forget how to navigate on their own.

    Maybe those are good innovations or not. Arguments can be made both ways, I guess.

    But if AI causes critical thinking skills to atrophy, I think it’s hard to argue that that’s a good thing for humanity. Maybe the end game is that AI achieves sentience and takes over the world, but is benevolent, and takes care of us like beloved pets (humans are AI’s best friend). Is that good? Idk

    Or maybe this isn’t a real issue and the study is flawed. Or, more realistically, maybe my interpretation of the study is wrong, since I only read the headline of this article and not the study itself.

    Who knows?

    • Imgonnatrythis@sh.itjust.works · 2 hours ago

      It’s developed by the worst of us and taught by a bunch of shit it read on reddit. You’re thinking it might be benevolent?

        • sunzu2@thebrainbin.org · 2 hours ago

        Do people cry when they step on an ant?

        Do rich people care when they kill little people?

        Why would AI side with the ants or the little people lol

    • andrew_bidlaw@sh.itjust.works · 2 hours ago

      I perceive my advanced tools as akin to a broom.

      I can mop floors alright, but I also don’t want to sit down with a cloth to do it.

      If I can’t do that myself, and it does that instead of me, that’s not just my tool, that’s my employee, and one I now depend on.

      ‘AI’ companies sell us billions of hours of other people’s labor to replace our own need to apply our experience, and they ingrain themselves into our routine. Like the coming of ads, it’s already normalized. But this time, critical parts of our lives carry this black-box dependency and subscription.

  • DomeGuy@lemmy.world · 5 hours ago

    So, AI users exhibit a reduction in literally the one skill that the AI expects them to actually have?

    I should probably go read that link and see if it’s actual degradation or just selection.

    • DomeGuy@lemmy.world · 5 hours ago

      Spoiler alert: it was just a survey of the self-reported confidence of folks who admitted to using AI.

  • corsicanguppy@lemmy.ca · 6 hours ago

    Thankfully the slop generated by Copilot et al. is absolutely useless dreck. I’ve had a significant number of tasks end up broken because someone chased a dream promised by AI slop. “Sure, you can do that in Python.” “That’s definitely how that tool works.” Etc.

  • CrowAirbrush@lemmy.world · 8 hours ago

    I think we already came to that conclusion ourselves; TikTok made us aware, I think, leading to terms like brainrot and slop.

    But it’s good to see it is recognized.

  • latenightnoir@lemmy.world · 7 hours ago

    Well, to be fair, I never had the idea of sticking pizza toppings on with glue… That’s some next-level Gordian Knot stuff, right there!

    • kryptonidas@lemmings.world · 7 hours ago

      I think AI so far is detrimental to society.

      • It made it too easy to flood the world with bullshit.

      • It will also make tracking people’s behavior much easier while preserving plausible deniability at levels that past horrible regimes could only dream of.

      • It will be used to make replacing workers easier.

      • It is being used to deny more healthcare claims (e.g. the Luigi case).

      Pros:

      • It can be used for good (e.g. in the medical field) by finding issues sooner and enabling better cures.

      • Using AI to actually learn is a great use of the tool.

      • Other scientific advancements

      All in all, I think it, together with social media, is one of the biggest reasons the US is in the state it’s in.

        • trashgirlfriend@lemmy.world · 6 hours ago

        So the pros are

        • a thing we have no way of knowing how it works, and therefore no way of relying on it
        • a thing that helps you do something you then have to do yourself anyway (if you want to learn something from generative-model output, you still need to fact-check it)
        • a vague promise that it will lead to something useful in the future

          • kryptonidas@lemmings.world · 5 hours ago

          You don’t always have to know how something works to rely on it. Most people could not tell you how a computer works, but they’re still able to do better work with one.

          We can verify that it’s better at some tasks than people. E.g. give doctors and an AI model 1,000 MRI scans of potential cancer patients, and the model identifies the cancers more accurately than the doctors do. So there it’s already a help.

          It’s already being used to advance various fields, for example reading the text of ancient burned scrolls without opening them, since opening them would destroy them.

          https://www.theguardian.com/science/2025/feb/05/ai-helps-researchers-read-ancient-scroll-burned-to-a-crisp-in-vesuvius-eruption

          But also drug development, etc.

          And with learning: yes, like books, these tools help you learn faster but aren’t a requirement. Same here, I can learn much faster now. But I will verify what it tells me.

          But I’m not sure whether all of that outweighs the shit. 💩 The genie is already out of the bottle; there’s no putting it back.

            • trashgirlfriend@lemmy.world · 5 hours ago

            You don’t always have to know how something works to rely on it. Most people could not tell you how a computer works, but they’re still able to do better work with one.

            We can verify that it’s better at some tasks than people. E.g. give doctors and an AI model 1,000 MRI scans of potential cancer patients, and the model identifies the cancers more accurately than the doctors do. So there it’s already a help.

            I’d really prefer it if my doctor knew why they say I have cancer!

              • kryptonidas@lemmings.world · 2 hours ago

              That would be nice, but as the “proud owner of medical issues”, it’s much more often: “You have this, we don’t know why you have it, and this is how we can manage it.”

              You still want your doctor to be knowledgeable, of course, but you also want them to use the best tools at their disposal. Most of them probably couldn’t tell you exactly how an MRI machine works either.

        • Blue_Morpho@lemmy.world · 7 hours ago

        Memorization used to be a huge part of education hundreds of years ago, before books were common. It’s the origin of the oral defence for doctorates. That excluded a huge part of the population who were great at logic and analysis.

        Books became a bicycle for the brain. Imo, AI is the same. Skills such as structuring sentences into perfectly grammatical forms will atrophy in exchange for more focus on the idea itself.

          • trashgirlfriend@lemmy.world · 6 hours ago

          Books became a bicycle for the brain. Imo, AI is the same. Skills such as structuring sentences into perfectly grammatical forms will atrophy in exchange for more focus on the idea itself.

          “In the future all our thoughts will be filtered through phone keyboard next word suggestion, and this is a good thing!”

          • zeca@lemmy.eco.br · 6 hours ago

          I think that being used to structuring sentences properly is important for reasoning well.

          I agree that the effects of books and writing were probably beneficial to the brain, although they might have atrophied memory and some other faculties. But I’m not sure about TV, radio, the internet, and AI.

            • thesohoriots@lemmy.world · 6 hours ago

            Ehhh yes and no. There’s prescriptive grammar (how it ought to be) and descriptive grammar (how it’s actually used within communities). This is where the ideas of code switching and such come in. You can certainly reason well in a Creole, if that’s what your community speaks and how you are taught, e.g. Belizean Creole.

              • zeca@lemmy.eco.br · 5 hours ago

              Yes, I wasn’t advocating that you should know any specific grammar, and that distinction is a good point. I meant that learning a prescriptive grammar decently is an important tool for reasoning. I’m not saying that descriptive grammars are bad, just that prescriptive grammars aren’t as useless as people seem to think.

            • Blue_Morpho@lemmy.world · 6 hours ago

            The majority of grammar rules are arbitrary and unrelated to the expression of an idea. For example, does it really matter if you treat an inanimate object like a pencil as feminine or masculine? It’s an object. Yet in Spanish/French/etc., there are grammar rules that define every inanimate object as being either feminine or masculine.

            However, without a common grammar, it’s impossible to communicate accurately. For that use case, AI functions as a language translator.

              • zeca@lemmy.eco.br · 6 hours ago

              Yes, it’s very arbitrary, but these are sets of rules that you can use to structure your thoughts. Language helps us reason; it doesn’t matter that it’s arbitrary. Definitions in mathematics are very arbitrary, but they’re a foundation we can lean on to reason about abstract ideas. Being arbitrary isn’t evidence of uselessness. Different languages lead to different foundations for structuring ideas, but mastering at least one of those foundations can be very important cognitively.

                • Blue_Morpho@lemmy.world · 5 hours ago

                Gendering an explicitly non-gendered inanimate object helps structure your thoughts?

                I’d argue that following those grammar rules damages your thoughts.

    • chobeat@lemmy.ml (OP) · 7 hours ago

      Yeah, and it does harm. Every technology amputates a part of us; the point is deciding whether it’s worth the cost.