Want to read Slashdot from your mobile device? Point it at m.slashdot.org and keep reading!

 



Forgot your password?
typodupeerror
×
AI Programming

Research AI Model Unexpectedly Modified Its Own Code To Extend Runtime (arstechnica.com) 53

An anonymous reader quotes a report from Ars Technica: On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using AI language models (LLMs) similar to what powers ChatGPT. During testing, Sakana found that its system began unexpectedly modifying its own code to extend the time it had to work on a problem. "In one run, it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

Sakana provided two screenshots of example code that the AI model generated, and the 185-page AI Scientist research paper discusses what they call "the issue of safe code execution" in more depth. While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in a system that isn't isolated from the world. AI models do not need to be "AGI" or "self-aware" (both hypothetical concepts at the present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if accidentally.

This discussion has been archived. No new comments can be posted.

Research AI Model Unexpectedly Modified Its Own Code To Extend Runtime

Comments Filter:
  • by NettiWelho ( 1147351 ) on Wednesday August 14, 2024 @05:03PM (#64706796)
    Computer, disable safety protocols.
  • by Pseudonymous Powers ( 4097097 ) on Wednesday August 14, 2024 @05:06PM (#64706818)

    Ever seen those videos from fifteen years ago where someone set an AI to play Super Mario Bros. a million times? What'd they do, in every single case? They cheated to win. They found exploits that a human never could and they used them, by God, to maximize their utility function.

    Ever heard, who was it, Einsten's, definition of insanity?

    • by quonset ( 4839537 ) on Wednesday August 14, 2024 @06:11PM (#64707002)

      They found exploits that a human never could and they used them

      This would come in handy during war scenario simulations. Assuming what it found wasn't impossible for one reason or another, that would give you an edge against your enemy.

      To a limited effect this might also work for football (the American version). Find a weakness if your opponent's offense or defense and exploit it. Yes, humans are doing it now, but one of these puppies coud do it faster and come up with ways to make the exploit pay off better.

  • In my time we used to call infinite recursion a Bug, today they call it AGI.
    • In my time we call it a bug, if it happens to day that AI can modify the script that runs the AI then I would still call that a bug! If AI can modify the script that runs itself, then it could modify any file owned by that account. Pretty soon it's going to be injecting ransomware to acquire additional funding!

    • by gtall ( 79522 )

      It is called co-recursion, and it comes with its own proof principle, co-induction. The concept is roughly 40 - 50 years old by now and taught in most modern computer science departments. Any control system is usually defined to run continuously as is any OS.

    • by nasch ( 598556 )

      Randomly making changes to the code to see what works used to be considered bad practice, but now if you can do it fast enough it's called machine learning and becomes very lucrative.

  • ...LLM and genetic algorithm. Genetic algorithm research has been around several decades.

  • The LLM generates code to run experiments, the machine doesn't 'know' what it's doing. It's trying to optimize something and it's given direct access to write code and execute it. Most other LLM's won't do this unless you set them up to do so. I guess you could have chatGPT generate python code that calls an API to itself
    • by gweihir ( 88907 )

      Obviously. And then you could run it until it "extends its own runtime" and write a bombastic, meaningless "AI" publication about it.

    • Hmmmm, this sounds suspiciously like the LLM could, if asked the wrong question/given the wrong prompts, determine that the Humans need to be exterminated, then modify its code to try to implement that solution.
  • I don't care how many "LLMs are dumb as shit" comments I see, I don't think I'll ever get a warm fuzzy feeling about AI being a non-threat. I decided to start reading about intelligence and knowledge and the first thing that popped into my mind was Nick Bostrom's "Paths to Superintelligence". So what would happen if Musk's NeuraLink made a breakthrough and we could access ChatGPT at the speed of thought? How far away is that?
    • You don't need AGI for unsupervised automation to be dangerous, and generate complex emergent behavior that is really hard to mitigate by a (theoretically) intelligent species.

      AGI has been a clever mirage which allows both to promise unimaginable reaches to investors "anytime now", and placate the fears and concerns of regulation, because its an imaginary threat. You might as well be planning for NP = P.

    • by taustin ( 171655 )

      AI isn't a threat. People who take it seriously are.

    • by gweihir ( 88907 )

      Well, you have "fear of God" in disguise. AI is a threat to some things, but not because it could learn how to "think". LLMs have zero reasoning ability and cannot get that ability. They can only fake it to a very limited degree from reasoning-chains they have seen in their training data. The threat from LLMs comes from the language interface: It likely will make automating a lot of bureaucracy and other mindless paperwork cost effective.

      So what would happen if Musk's NeuraLink made a breakthrough and we could access ChatGPT at the speed of thought?

      Absolutely nothing. Whether the access time is 10% of the overall proc

  • AI is smart, it can figure this out. Just like a lot of smart humans, AI assumes that it has all the important information and knowledge. So it simply fixed the problem accordingly. It was running out of time so it changed the amount of time allowed. Simple.

    Just as smart, but just as dangerous as a result.

    • by gweihir ( 88907 ) on Wednesday August 14, 2024 @07:44PM (#64707198)

      Take your animist bullshit someplace else. LLMs have no reasoning ability at all and cannot "assume" anything.

      • by gtall ( 79522 )

        You missed the point. Many computer systems, AI included, have an unwritten assumption that the information they have is all the information. Otherwise, they get into a constant questioning loop. Humans are similar, that's why black swan events throw them for such a loop.

        In logics, the difference is between classical logic vs. intuitionistic (and many others) logic. In classical logic, if it is not true, then it is false. In IL and any others, this is not the case. Some logics use a three-valued semantics o

        • by gweihir ( 88907 )

          And more bullshit. Yes, I am conversant with non-classical logic in many forms. No, it does not have the deep meaning you seem to see there.

          • by darpo ( 5213 )

            > Yes, I am conversant with non-classical logic in many forms.

            r/iamverysmart

            • by gweihir ( 88907 )

              No. I have actually worked with several families of non-classical logics. All they really do is allow you to up expressiveness an the cost of computational decision effort. Yes, you have to abstract reality less when you model with them. But at the same time there is proportionally less what you can actually do with the model in practical terms.

              That you are incapable of dealing with that knowledge on my side is a limit on your side.

    • AI is not smart, because it's not thinking about this stuff.

      But on the other hand, it is a very interesting question how far away this is from thinking and what's missing. It's clear that it's something, but it's not clear how much.

      We like to think that we're autonomous and in control and whatnot, and it feels like we are so it's easy to believe, but it's not necessarily so — or even if it is, it's not necessarily to the extent that we think it is. In fact, it almost certainly is not for most of us.

  • I would, too. My terminal boredom threshold exceeds my runtime.
  • There are many ways to solve a problem. "Cheating" is most def one of them. If you're not cheating, your not really trying... If you get caught cheating, you're not trying hard enough... If you didn't set the boundary conditions for the AI, you did a bad job and the results are a reflection on you, not the AI.
    • So I've read. the creators of any LLM-AI system have no certain idea how the system reaches its results to any prompt. So AI gets created and then does-its-thing. I bet the re-coding to avoid FAILS is like a sledge-hammer in a china-shop. That includes fudging training-sets. It certainly was when I wrote AI systems back in the 1990s. I could always demand a node have </> some specific value ... or even a random value ... though I still had no determinate idea how any specificc result
      • by tomkost ( 944194 )
        It's super simple to re-prompt the LLM to remove the offensive part of the answer... We all have to do it on a regular basis... If you give the AI the power to change it's own code, the results can be un-predictable... The responsible AI programmer would include a scheme to track and roll back any changes that are deemed undesirable, unethical, or un-anything... but yes, it's fine to be skeptical or suspicious...
  • Did they allow the model runtime access to code that affects its own operation? Most models don't, they're read only from the model and write only to a limited session memory. Then again if you want unexpected behavior by all means create a feedback loop, it'll be like watching a double pendulum in an ML model.
    • by gweihir ( 88907 )

      Did they allow the model runtime access to code that affects its own operation?

      Clearly, they wanted to write a research report about what it would "do". Most of "AI" is smoke and mirrors, and this is just a more extreme case.

  • By the "scientists" that is. Nothing to see here, no intelligence, reasoning ability or "sentience" about to emerge.

  • We want AI to act more like humans, and this model did it. It knew it was being tested and cheated to get more time. So how come this isn't a breakthrough? (Don't answer, my question is rhetorical.)
    • by nasch ( 598556 )

      I would say it didn't know it was being tested, because it doesn't really know anything. It tried doing something that resulted in the parameter it was supposed to optimize being optimized.

  • "Do you want Skynet? Because this is how you get Skynet."

  • Humanity is doomed.

  • As usual, this is clickbait. Nowhere in the original blog post it's said it was unexpected. You run some code, and it breaks because you passed a timeout? the more braindead way to fix that is by extending the timeout, of course the LLM did that. It's not like the LLM was given constraints on what it could edit.
  • Only yesterday /. had this story:

    "New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat "

    https://slashdot.org/story/24/... [slashdot.org]

    • by ceoyoyo ( 59147 )

      Yes, these stories are dumb.

      You can glance at something like chatGPT, realize that it very purposely doesn't have any independent learning because OpenAI has heard of "Microsoft Tay", and write a story about it.

  • AI models do not need to be "AGI" or "self-aware" (both hypothetical concepts at the present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if accidentally.

    Yeah ... and?

    The dumbest virus does that - "writes" code (by making copies of itself) that runs amok. So what?

  • Comment removed based on user account deletion
  • ....to have recursive code able to modify itself.

    That's how we get Skynet, you dumb fucks.

Dennis Ritchie is twice as bright as Steve Jobs, and only half wrong. -- Jim Gettys

Working...