• Imgonnatrythis@sh.itjust.works · 1 day ago

    “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

    That is precisely how I do math. I feel a little targeted that they called this odd.
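    The two parallel paths the article describes can be sketched as a toy function. This is illustrative only, not Anthropic's actual internals, and the way the two paths are combined here is a made-up heuristic that merely happens to work on small examples:

    ```python
    def two_path_add(a, b):
        """Toy sketch of the two paths: a rough magnitude
        estimate plus an exact last-digit computation."""
        # Path 1: approximate magnitude ("40ish + 60ish")
        approx = round(a, -1) + round(b, -1)

        # Path 2: exact last digit (6 + 9 means the answer ends in 5)
        last = (a % 10 + b % 10) % 10

        # Combine: the true sum must end in `last`, so pick the
        # candidate (carry or no carry) closest to the estimate
        tens = a // 10 + b // 10
        candidates = [tens * 10 + last, (tens + 1) * 10 + last]
        return min(candidates, key=lambda c: abs(c - approx))

    print(two_path_add(36, 59))  # 95
    ```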

    • Echo Dot@feddit.uk · 12 hours ago (edited)

      But you’re doing two calculations now: an approximate one and another on the last digits. Since you’re going to do the approximate calculation anyway, you might as well just do the accurate calculation and be done in one step.

      This solution, while it works, has the feeling of evolution. No intelligent design, which I suppose makes sense considering the AI did essentially evolve.

    • JayGray91@lemmy.zip · 22 hours ago

      I think it’s odd in the sense that it’s supposed to be software, so it should already know what 36 plus 59 is in a picosecond, instead of doing mental arithmetic like we do.

      At least that’s my takeaway.

      • shawn1122@lemm.ee · 17 hours ago (edited)

        This is what the ARC-AGI test by Chollet has also revealed of current AI / LLMs. They have a tendency to approach problems with this trial and error method and can be extremely inefficient (in their current form) with anything involving abstract / deductive reasoning.

        Most LLMs do terribly at the test; the most recent breakthrough came with reasoning models, but even those struggle.

        ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.

        The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.

        https://archive.is/7PL2a
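        For a flavor of the grid puzzles described above, here is a toy rule in the same spirit as the blue-and-orange example (a made-up puzzle, not an actual ARC-AGI task):

        ```python
        def surround_blue_with_orange(grid):
            """Toy ARC-style rule: fill every empty cell ('.')
            adjacent to a blue cell ('B') with orange ('O')."""
            h, w = len(grid), len(grid[0])
            out = [row[:] for row in grid]
            for r in range(h):
                for c in range(w):
                    if grid[r][c] != "B":
                        continue
                    # Visit the eight neighbors of each blue cell
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < h and 0 <= cc < w and out[rr][cc] == ".":
                                out[rr][cc] = "O"
            return out

        puzzle = [list("...."), list(".B.."), list("....")]
        for row in surround_blue_with_orange(puzzle):
            print("".join(row))
        # OOO.
        # OBO.
        # OOO.
        ```

        The hard part of the benchmark isn’t applying such a rule; it’s deducing it from only a few examples.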

        • Goretantath@lemm.ee · 4 hours ago

          It’s funny because I approach life with a trial-and-error method too; it’s not efficient, but I get the job done in the end. I always see others who don’t and just give up, like all the people bad at computers who ask the company’s tech support to fix the problem instead of thinking about it for two seconds, and I wonder where life went wrong.

      • Goretantath@lemm.ee · 4 hours ago

        Yes, you shove it off onto someone else to do for you instead of doing it yourself, and the AI doesn’t.

      • sapetoku@sh.itjust.works · 7 hours ago

        A regular AI should use a calculator subroutine, not try to discover basic math every time it’s asked something.
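        That “calculator subroutine” is roughly what tool calling does in practice: the model hands arithmetic to deterministic code instead of predicting digits. A minimal sketch, where the `calc:` routing convention is made up for illustration rather than any vendor’s actual API:

        ```python
        import ast
        import operator

        # Whitelisted binary operators for a safe arithmetic evaluator
        OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

        def calculator(expr):
            """Safely evaluate a basic arithmetic expression string."""
            def ev(node):
                if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                    return node.value
                if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                    return OPS[type(node.op)](ev(node.left), ev(node.right))
                raise ValueError("unsupported expression")
            return ev(ast.parse(expr, mode="eval").body)

        def answer(question):
            # In a real system the model itself decides to emit a tool
            # call; here anything marked "calc:" is routed to the tool.
            if question.startswith("calc:"):
                return calculator(question[len("calc:"):])
            return "..."  # otherwise fall through to the language model

        print(answer("calc:36 + 59"))  # 95
        ```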

      • Imgonnatrythis@sh.itjust.works · 1 day ago

        Fascist. If someone does maths differently than your preference, it’s not “weird shit”. I’m facile with mental math despite what’s perhaps a non-standard approach, and it’s quite functional to be able to perform simple to moderate levels of mathematics mentally without relying on a calculator.

          • Imgonnatrythis@sh.itjust.works · 6 hours ago

            Thought police mate. You don’t tell people the way they think is weird shit just because they think differently than you. Break free from that path.

            • Lemminary@lemmy.world · 5 hours ago (edited)

              The reply was literally “*I* use a calculator” followed by “AI should use one too”. Are you suggesting that you’re an LLM or how did you cut a piece of cloth for yourself out of that?

        • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · 23 hours ago

          I am talking about the AI. It’s already a computer. It shouldn’t need to do anything other than calculate the equations. It doesn’t have a brain, it doesn’t think like a human, so it shouldn’t need any special tools or ways to help it do math. It is a calculator, after all.

        • artichoke99@lemm.ee · 23 hours ago

          OK, but the LLM is evidently shit at math, so its “non-standard” approach should still be adjusted.