• gaja@lemm.ee · 2 days ago

    I am educated on this. When an AI learns, it passes an input through a series of functions that are joined at the output. The sets of functions that produce the best outputs are developed further. Individuals do not process information like that. With poor exploration and biasing, the output of an AI model could look identical to its input. It did not “learn” any more than a downloaded video run through a compression algorithm did.
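To make the "input through a series of functions joined at the output" description concrete, here's a minimal forward-pass sketch. The weights are made-up illustrative values, not the result of any training:

```python
import math

# Toy forward pass: an input pair flows through two simple functions
# (weighted sums squashed by a sigmoid) that are joined at the output.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    h1 = sigmoid(0.5 * x1 - 0.2 * x2)    # hidden function 1 (made-up weights)
    h2 = sigmoid(-0.3 * x1 + 0.8 * x2)   # hidden function 2
    return sigmoid(1.0 * h1 + 1.0 * h2)  # results joined at the output

print(forward(1.0, 0.0))  # some value strictly between 0 and 1
```

Training would then adjust those weights so that outputs closer to the target are "developed further".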

    • Enkimaru@lemmy.world · 2 days ago

      You are obviously not educated on this.

      It did not “learn” anymore than a downloaded video ran through a compression algorithm. Just: LoLz.

      • gaja@lemm.ee · 2 days ago

        I’ve hand-calculated forward propagation (neural networks). AI does not learn; it’s statistically optimized. AI “learning” is curve fitting. Human learning requires understanding, which AI is not capable of.
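The "curve fitting" point can be illustrated directly. The sketch below fits a line to four points by gradient descent, the same optimization loop (loss, gradient, parameter update) that neural-network training runs at much larger scale; the data and learning rate are invented for the example:

```python
# "Learning" as curve fitting: gradient descent on mean squared error
# nudges slope m and intercept b toward the line that generated the data.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # generated by y = 2x + 1
m, b, lr = 0.0, 0.0, 0.05

for _ in range(2000):
    grad_m = sum(2 * (m * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (m * x + b - y) for x, y in data) / len(data)
    m -= lr * grad_m
    b -= lr * grad_b

print(round(m, 2), round(b, 2))  # converges to 2.0 1.0
```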

        • nednobbins@lemmy.zip · 1 day ago

          Human learning requires understanding, which AI is not capable of.

          How could anyone know this?

          Is there some test of understanding that humans can pass and AIs can’t? And if there are humans who can’t pass it, do we consider them unintelligent?

          We don’t even need to set the bar that high. Is there some definition of “understanding” that humans meet and AIs don’t?

          • gaja@lemm.ee · 1 day ago

            It’s literally in the phrase “statistically optimized.” This is like arguing about your preferred deity: it will never be proven, but we have evidence to draw our own conclusions from. As it is now, AI doesn’t learn or understand the same way humans do.

            • nednobbins@lemmy.zip · 22 hours ago

              So you’re confident that human learning involves “understanding” which is distinct from “statistical optimization”. Is this something you feel in your soul or can you define the difference?

              • gaja@lemm.ee · 21 hours ago (edited)

                Yes. You learned not to touch a hot stove either from experience or a warning. That fear was immortalized by your understanding that it would hurt. An AI will tell you not to touch a hot stove (most of the time) because the words “hot” “stove” “pain” etc… pop up in its dataset together millions of times. As things are, they’re barely comparable. The only reason people keep arguing is because the output is very convincing. Go and download pytorch and read some stuff, or Google it. I’ve even asked deepseek for you:

                Can AI learn and understand like people?

                AI can learn and perform many tasks similarly to humans, but its understanding is fundamentally different. Here’s how AI compares to human learning and understanding:

                1. Learning: Similar in Some Ways, Different in Others

                • AI Learns from Data: AI (especially deep learning models) improves by processing vast amounts of data, identifying patterns, and adjusting its internal parameters.
                • Humans Learn More Efficiently: Humans can generalize from few examples, use reasoning, and apply knowledge across different contexts—something AI struggles with unless trained extensively.

                2. Understanding: AI vs. Human Cognition

                • AI “Understands” Statistically: AI recognizes patterns and makes predictions based on probabilities, but it lacks true comprehension, consciousness, or awareness.
                • Humans Understand Semantically: Humans grasp meaning, context, emotions, and abstract concepts in a way AI cannot (yet).

                3. Strengths & Weaknesses

                AI Excels At:

                • Processing huge datasets quickly.
                • Recognizing patterns (e.g., images, speech).
                • Automating repetitive tasks.

                AI Falls Short At:

                • Common-sense reasoning (e.g., knowing ice melts when heated without being explicitly told).
                • Emotional intelligence (e.g., empathy, humor).
                • Creativity and abstract thinking (though AI can mimic it).

                4. Current AI (Like ChatGPT) is a “Stochastic Parrot”

                • It generates plausible responses based on training but doesn’t truly “know” what it’s saying.
                • Unlike humans, it doesn’t have beliefs, desires, or self-awareness.

                5. Future Possibilities (AGI)

                • Artificial General Intelligence (AGI)—a hypothetical AI with human-like reasoning—could bridge this gap, but we’re not there yet.

                Conclusion:

                AI can simulate learning and understanding impressively, but it doesn’t experience them like humans do. It’s a powerful tool, not a mind.

                Would you like examples of where AI mimics vs. truly understands?

                • nednobbins@lemmy.zip · 8 hours ago

                  That’s a very emphatic restatement of your initial claim.

                  I can’t help but notice that, for all the fancy formatting, that wall of text doesn’t contain a single line which actually defines the difference between “learning” and “statistical optimization”. It just repeats the claim that they are different without supporting that claim in any way.

                  Nothing in there precludes the alternative hypothesis: that human learning is entirely (or almost entirely) an emergent property of “statistical optimization”. Without some definition of what the difference would be, we can’t even theorize a test.

      • hoppolito@mander.xyz · 2 days ago

        I am not sure what your contention, or gotcha, is with the comment above, but they are quite correct. They also chose quite an apt example with video compression, since in most ways current ‘AI’ effectively functions as a compression algorithm, just for our language corpora instead of video.

        • nednobbins@lemmy.zip · 2 days ago

          They seem pretty different to me.

          Video codec developers go to a lot of effort to make codecs deterministic. We don’t necessarily care that a particular video stream compresses to a particular bit sequence, but we very much care that decompression gets you as close to the original as possible.

          AIs will rarely produce exact replicas of anything. They synthesize outputs from heterogeneous training data. That sounds like learning to me.

          The one area where there’s some similarity is dimensionality reduction. It’s technically a form of compression, since it makes your files smaller, but it would also be an extremely expensive way to get extremely bad compression: it would take orders of magnitude more hardware resources, and the images would likely be unrecognizable.
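As a rough illustration of dimensionality reduction acting as (very lossy) compression, the sketch below projects 2-D points onto their principal axis, storing one number per point instead of two; the points are invented for the example:

```python
import math

# Dimensionality reduction as lossy compression: project 2-D points
# onto their principal axis, keeping one coordinate per point.
points = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9)]
mx = sum(x for x, _ in points) / len(points)
my = sum(y for _, y in points) / len(points)

# 2x2 covariance terms (unnormalized; only the direction matters)
sxx = sum((x - mx) ** 2 for x, _ in points)
syy = sum((y - my) ** 2 for _, y in points)
sxy = sum((x - mx) * (y - my) for x, y in points)

# closed-form angle of the principal axis in 2-D
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
ux, uy = math.cos(theta), math.sin(theta)

compressed = [(x - mx) * ux + (y - my) * uy for x, y in points]  # 1 number each
restored = [(mx + c * ux, my + c * uy) for c in compressed]      # lossy decode
```

The restored points only approximate the originals, which is the "extremely bad compression" part.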

          • gaja@lemm.ee · 2 days ago

            Google search results aren’t deterministic either, but I wouldn’t say Google “learns” like a person. An algorithm with pattern detection isn’t the same thing as human learning.

            • nednobbins@lemmy.zip · 2 days ago

              You may be correct but we don’t really know how humans learn.

              There’s a ton of research on it and a lot of theories but no clear answers.
              There’s general agreement that the brain is a bunch of neurons; there are no convincing ideas on how consciousness arises from that mass of neurons.
              The brain also has a bunch of chemicals that affect neural processing; there are no convincing ideas on how that gets you consciousness either.

              We modeled perceptrons after neurons, and we’ve been working to make them more like neurons ever since. Neurons don’t have any obvious capabilities that perceptrons lack.
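For reference, a perceptron is just a weighted sum pushed through a threshold. A minimal sketch, with hand-picked (not learned) weights wired up as an AND gate:

```python
# A perceptron: weighted sum of inputs pushed through a step function,
# loosely modeled on a neuron firing once its inputs cross a threshold.
def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights that make it behave as an AND gate
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0
```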

              That’s the big problem with any claim that “AI doesn’t do X like a person”; since we don’t know how people do it we can neither verify nor refute that claim.

              There’s more to AI than just being non-deterministic, though anything that’s too deterministic definitely isn’t an intelligence, natural or artificial. Video compression algorithms are very far removed from AI.

              • hoppolito@mander.xyz · 2 days ago

                One point I would refute here is determinism. AI models are, by default, deterministic. They are made from deterministic parts and “any combination of deterministic components will result in a deterministic system”. Randomness has to be externally injected into e.g. current LLMs to produce ‘non-deterministic’ output.

                There is the notable exception of newer models like ChatGPT4 which seemingly produces non-deterministic outputs (i.e. give it the same sentence and it produces different outputs even with its temperature set to 0) - but my understanding is this is due to floating point number inaccuracies which lead to different token selection and thus a function of our current processor architectures and not inherent in the model itself.

                • nednobbins@lemmy.zip · 1 day ago

                  You’re correct that a collection of deterministic elements will produce a deterministic result.

                  LLMs produce a probability distribution over next tokens and then randomly select one of them. That’s where the non-determinism enters the system. Even if you set the temperature to 0 you can still get some randomness: GPU floating-point arithmetic isn’t associative, so parallel operations can run in different orders and produce slightly different scores. When two candidate tokens end up effectively tied, which one wins becomes a hardware-level coin toss.
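That decoding step can be sketched as follows: softmax with a temperature turns scores into probabilities, then a token is drawn at random; at temperature 0 it collapses to a deterministic argmax. The logit values here are made up:

```python
import math
import random

# Last step of LLM decoding: softmax with temperature turns logits into
# probabilities, then the next token index is drawn at random.
def sample_next(logits, temperature=1.0):
    if temperature == 0:  # greedy decoding: deterministic argmax
        return max(range(len(logits)), key=lambda i: logits[i])
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    r, acc = random.random() * total, 0.0
    for i, e in enumerate(exps):
        acc += e
        if acc >= r:
            return i
    return len(logits) - 1

logits = [2.0, 1.9, 0.1]  # made-up scores; two tokens nearly tied
print(sample_next(logits, temperature=0))    # always 0
print(sample_next(logits, temperature=1.0))  # varies: usually 0 or 1
```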

                  You can test this empirically. Set the temperature to 0 and ask it, “give me a random number”. You’ll rarely get the same number twice in a row, no matter how similar you try to make the starting conditions.