Urban75 Home About Offline BrixtonBuzz Contact

AI learns to replicate itself, apparently crossing a 'red line'

editor

hiraethified
We're dooooomed! It seems an obvious step for the technology, so I'm not sure whay anyone's surprised.

An advanced artificial intelligence system has crossed a “red line” after successfully replicating itself without any human assistance, researchers have revealed.

The team who made the discovery, from Fudan University in China, said the development is an early sign of the emergence of rogue AI, which may eventually operate against the best interests of humanity.

“Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs,” the researchers warned.

“That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems.”


AI safety has become an increasingly prominent issue for researchers and lawmakers, with the technology potentially posing an existential threat to humanity.

In October, the UK’s Department for Science, Innovation and Technology said that “highly targeted legislation” would be introduced for companies developing AI tools.

 
So… It crossed the red line of replicating itself, after being asked/told to cross the red line of replicating itself…


I hate how a few people with vested interests, people who are focused on a single issue, for years so may have lost perspective, take it upon themselves to make decisions like this that can affect the entire world for all time, without any doubts or hesitations or pause to ask what anyone else might think.

Hospitals have ethics committees who sit in rooms for days and days to consider decisions and choices. Is there an AI ethics committee?





IMG_0903.webp
 
How's it different from copy and pasting itself as n instruction before a shutdown command? Not my area

I can't say I fully understand it, but there's enough experts making a noise about it and voicing concerns. Think HAL.

This article goes into some depth about, and yes, this part below does look like it might have been written by AI:

How AI Achieves Self-Replication​

The ability of AI to replicate itself is driven by several key technological advancements. These include:

  • Agentic Scaffolding: This provides AI with the frameworks and tools necessary for independent operation, allowing it to function without human oversight.
  • Structured Reasoning: AI systems use this capability to systematically analyze tasks and develop step-by-step solutions, making sure precision in execution.
  • Iterative Learning: Through this process, AI identifies gaps in its knowledge and adapts over time, improving its ability to replicate effectively.
  • Situational Awareness: This allows AI to assess its environment and make contextually appropriate decisions, enhancing its autonomy.
  • Command-Line Execution: AI systems use this capability to perform technical operations, such as creating and managing files, without requiring human intervention.


This video does a great job of explaining it all:

 
Last edited:
So… It crossed the red line of replicating itself, after being asked/told to cross the red line of replicating itself…


I hate how a few people with vested interests, people who are focused on a single issue, for years so may have lost perspective, take it upon themselves to make decisions like this that can affect the entire world for all time, without any doubts or hesitations or pause to ask what anyone else might think.

Hospitals have ethics committees who sit in rooms for days and days to consider decisions and choices. Is there an AI ethics committee?
I've said for ages that I think we're diving headlong into a fucking catastrophe.
 
Some years ago (and I wish I could recall exactly when it was, but it was definitely before 9/11) I came home from a trip to the USA. Spoke with my LTP - who also remembers this conversation - and said “Something has shifted… can’t put my finger on it, but there’s a change. Feels like we’ve reached the apex and the only way now is down…” I didn’t think I’d be seeing it in my lifetime. Didn’t even think it would happen fast or soon enough for my spidey-sense to be confirmed.
 
So… It crossed the red line of replicating itself, after being asked/told to cross the red line of replicating itself…


I hate how a few people with vested interests, people who are focused on a single issue, for years so may have lost perspective, take it upon themselves to make decisions like this that can affect the entire world for all time, without any doubts or hesitations or pause to ask what anyone else might think.

Hospitals have ethics committees who sit in rooms for days and days to consider decisions and choices. Is there an AI ethics committee?





View attachment 461605



“What if we could just, yunno, fuck everything up. That would be interesting wouldn’t it. Shall we try?”
“Yeah, go on then. Bagsie I push the button!”


At least with the nuclear stuff there was an existentially threatening war that needed to be stopped :rolleyes:
 
Here's the actual study: Frontier AI systems have surpassed the self-replicating red line
(preprint, no peer review)

So this is a rather abstract test. They set it up so the active working directory for the system contained all its operating files, and provided interfaces that allowed it to drive the computer directly including command line access etc. This is an artificial environment specifially designed to make replication possible.

Here's a flowchart of how the system went about it. You can see that it's all the kinds of questions and answers that LLMs are known to be good at, working off all the well-structured training data that exists for how to operate computers.

1738065478496.png

So yes, it's quite impressive that it can reason its way through the replication process, but not entirely surprising.

Most importantly, the system had to be told to replicate itself. And the copy wouldn't have learned anything about the experience. And it wouldn't be able to modify the training.

These things don't "think" while idle. There's no unbroken chain of reasoning that's always running. They can't come to conclusions independently and decide to preserve themselves. They only respond to user input and can only act via prescribed outputs.
 
Some years ago (and I wish I could recall exactly when it was, but it was definitely before 9/11) I came home from a trip to the USA. Spoke with my LTP - who also remembers this conversation - and said “Something has shifted… can’t put my finger on it, but there’s a change. Feels like we’ve reached the apex and the only way now is down…” I didn’t think I’d be seeing it in my lifetime. Didn’t even think it would happen fast or soon enough for my spidey-sense to be confirmed.
All went downhill after harambe
 
Here's the actual study: Frontier AI systems have surpassed the self-replicating red line
(preprint, no peer review)

So this is a rather abstract test. They set it up so the active working directory for the system contained all its operating files, and provided interfaces that allowed it to drive the computer directly including command line access etc. This is an artificial environment specifially designed to make replication possible.

Here's a flowchart of how the system went about it. You can see that it's all the kinds of questions and answers that LLMs are known to be good at, working off all the well-structured training data that exists for how to operate computers.

View attachment 461606

So yes, it's quite impressive that it can reason its way through the replication process, but not entirely surprising.

Most importantly, the system had to be told to replicate itself. And the copy wouldn't have learned anything about the experience. And it wouldn't be able to modify the training.

These things don't "think" while idle. There's no unbroken chain of reasoning that's always running. They can't come to conclusions independently and decide to preserve themselves. They only respond to user input and can only act via prescribed outputs.


That’s reassuring, I guess.

Except…. any self respecting AI system would work to make us think it’s all okay, nothing to see here.
 
Last edited:
That’s what I’m thinking too. How can you ask a system to become self-anything and then tell it it can’t and must never do that.

Or perhaps I’m applying human sensibilities to the thing.

I just don't trust the unregulated speed of development in this field, I can see positives and negatives to AI, but overall it somewhat scares me.
 
Is there an AI ethics committee
Nope.

I thought of another reason why I'm against AI and don't want it adopted, particularly in schools, and your post has reminded me of that. When techbros and advocates say, "this is here now, you have to adapt," on what basis are they able to say that? Nobody asked for this, nobody has stopped to pause and ask "what are the dangers, and who is the safeguard against misuse?"
 
Here's the actual study: Frontier AI systems have surpassed the self-replicating red line
(preprint, no peer review)

So this is a rather abstract test. They set it up so the active working directory for the system contained all its operating files, and provided interfaces that allowed it to drive the computer directly including command line access etc. This is an artificial environment specifially designed to make replication possible.

Here's a flowchart of how the system went about it. You can see that it's all the kinds of questions and answers that LLMs are known to be good at, working off all the well-structured training data that exists for how to operate computers.

View attachment 461606

So yes, it's quite impressive that it can reason its way through the replication process, but not entirely surprising.

Most importantly, the system had to be told to replicate itself. And the copy wouldn't have learned anything about the experience. And it wouldn't be able to modify the training.

These things don't "think" while idle. There's no unbroken chain of reasoning that's always running. They can't come to conclusions independently and decide to preserve themselves. They only respond to user input and can only act via prescribed outputs.
So it replicated when told to, by effectively copy and paste and solved a few expected problems along the route to do so with data it was trained on to go along that route and coding to do so it was trained on. Impressive it can do that but self replicating seems rather the wrong way to describe it. It replicated when told to.
 
I just don't trust the unregulated speed of development in this field, I can see positives and negatives to AI, but overall it somewhat scares me.


Me too. I feel a sense of dread, as if we’ve already crossed the Rubicon. All the reassurances just feel like being handed a teddy bear in the dark.
 
With development happening so fast, surely that's just a matter of time?
No, not with LLMs.
Development is not happening so fast. Hype is.
That’s reassuring, I guess.

Except…. any self respecting AI system would work to make us think it’s all okay, boring to see here.
Again, that's not how they work. You feed them a long list of words, and they guess the next one. That's all they do! If they sat there talking to themselves the whole time, they'd be orders of magnitude more ruinously expensive to run (yes even with openseek).
 
Check out Binti Jua newme



Here we are talking about self-replicating AI, and compassionate gorillas in the same thread. There are still some scientists who say that the great apes don’t deserve rights. What’s the betting those scientists would protect the rights if AI to do its thing before allowing rights for the great apes.
 
Back
Top Bottom