LOOK MAA I AM ON FRONT PAGE

  • auraithx@lemmy.dbzer0.com (+14/-3) · 2 months ago

    The paper doesn’t say LLMs can’t reason; it shows that their reasoning abilities are limited and collapse under increasing complexity or novel structure.
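
    To make the collapse claim concrete: it is measured empirically, by scoring a model on puzzle instances of growing size and watching where accuracy falls off. A minimal sketch of that kind of harness, using Tower of Hanoi as a stand-in task and a toy query_model (hypothetical; neither the paper’s actual setup nor any real LLM API is reproduced here):

    ```python
    # Minimal sketch of a complexity-scaling evaluation. `query_model` is a
    # hypothetical stand-in; swap in a real LLM call to test a real model.

    def hanoi(n, src="A", aux="B", dst="C"):
        """Ground truth: the optimal Tower of Hanoi solution, 2**n - 1 moves."""
        if n == 0:
            return []
        return (hanoi(n - 1, src, dst, aux)
                + [(src, dst)]
                + hanoi(n - 1, aux, src, dst))

    def query_model(n):
        # Toy stand-in so the script runs: it "knows" only small instances,
        # which is the failure mode being debated in this thread.
        return hanoi(n) if n <= 5 else None

    def accuracy_by_complexity(max_n=10):
        return {n: float(query_model(n) == hanoi(n)) for n in range(1, max_n + 1)}

    print(accuracy_by_complexity())
    # With a real model one would expect scores near 1.0 for small n, then a
    # sharp drop past some threshold: that drop is the collapse at issue.
    ```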

    • snooggums@lemmy.world (+7/-4) · 2 months ago

      I agree with the author.

      If these models were truly “reasoning,” they should get better with more compute and clearer instructions.

      The fact that they only work up to a certain point despite increased resources is proof that they are just pattern matching, not reasoning.

      • auraithx@lemmy.dbzer0.com (+12/-5) · 2 months ago

        Performance eventually collapses due to architectural constraints; this mirrors cognitive overload in humans. Reasoning isn’t just about adding compute; it requires mechanisms like abstraction, recursion, and memory. The models’ collapse doesn’t prove “only pattern matching”; it shows that today’s models simulate reasoning in narrow bands but lack the structure to scale it reliably. That is a limitation of implementation, not a disproof of emergent reasoning.
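
        One way to see the implementation-versus-capability distinction (a loose analogy, not a claim about transformer internals): a provably correct recursive procedure also “collapses” once it hits a resource bound, and nobody takes that as proof the procedure cannot recurse.

        ```python
        # Loose analogy for "limitation of implementation": the same correct
        # recursive procedure collapses when its resource bound is tightened.

        import sys

        def depth(n):
            """Trivially correct recursion: descends n levels, returns n."""
            return 0 if n == 0 else 1 + depth(n - 1)

        print(depth(500))              # inside the band: works fine
        sys.setrecursionlimit(100)     # tighten the architectural constraint
        try:
            print(depth(500))          # the identical procedure now fails
        except RecursionError:
            print("collapse: resource bound hit; the algorithm never changed")
        ```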

    • technocrit@lemmy.dbzer0.com (+5/-2) · 2 months ago

      “The paper doesn’t say LLMs can’t reason”

      Authors gotta get paid. This article is full of pseudo-scientific jargon.