profer602 36 minutes ago [-]
Interesting to see a minimal implementation. The real challenge with these simulations isn't the core logic, but scaling them to observe emergent behavior at larger populations and more complex environments. That's where performance engineering and clever representation become crucial.
voxleone 3 hours ago [-]
One interesting way to look at projects like this is that they’re essentially tiny universes defined by a functional update rule.
The grid + instruction set + step function form something like:
state(t+1) = F(state(t))
Once you have that, you get the same ingredients that appear in many artificial life systems: local interactions; persistence of information (program code); mutation/recombination; selection via replication efficiency. And suddenly you get emergent “organisms”. What’s interesting is that this structure isn’t unique to artificial life simulations. Functional Universe, a concept framework [0], models all physical evolution in essentially the same way: the universe as a functional state transition system where complex structure emerges from repeated application of simple transformations.
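A hypothetical minimal sketch of that shape in Python (none of these names come from the repo, and the interaction rule here is just an illustrative placeholder):

```python
import random

# Toy 'functional universe': the entire world state is one byte string,
# and time evolution is nothing but repeated application of a pure F.
def F(state: bytes, mutation_rate: float = 0.001) -> bytes:
    """One update: a local interaction (swap) plus occasional mutation."""
    s = list(state)
    i, j = random.randrange(len(s)), random.randrange(len(s))
    s[i], s[j] = s[j], s[i]                      # local interaction
    if random.random() < mutation_rate:
        s[random.randrange(len(s))] = random.randrange(256)  # mutation
    return bytes(s)

state = bytes(random.randrange(256) for _ in range(64))  # random soup
for t in range(1000):
    state = F(state)                             # state(t+1) = F(state(t))
```

Everything else (replication, selection) has to emerge from whatever F does to the soup.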
From that perspective these kinds of experiments aren’t just toys; they’re basically toy universes with slightly different laws. Artificial life systems then become a kind of laboratory for exploring how information maintains itself across transformations; how replication emerges; why efficient replicators tend to dominate the state space. Which is exactly the phenomenon visible in the GIF from the repo: eventually one replicator outcompetes the rest.
It’s fascinating because the same abstract structure appears in very different places: cellular automata, genetic programming, digital evolution systems like Avida, and even some theoretical models of physics.
In all cases the core pattern is the same: simple local rules + iterative functional updates → emergent complexity. This repo is a nice reminder that you don’t need thousands of lines of code to start seeing that happen.
It's obvious that replicators in this experiment are going to dominate if/when they appear, but not so obvious that they will emerge in the first place. I suppose the programs, reliant on their sequential structure, might be regarded as a parallel to nucleic acid sequences in the emergence of early life, but the random origins are also comparable to Stuart Kauffman's "At home in the universe" proto-metabolism where varied individual chemical reactions combine to create a whole capable of collective self-replication.
nomilk 15 hours ago [-]
The animated gif in the readme shows extremely diverse lifeforms until a superior 'species' emerges and dominates, with the only notable changes thereafter being successive superior spawns.
Wonder if the simulation could introduce more 'environmental' variety (the key variable that prevents a single species dominating all others on earth), so the simulation would be closer to that of life on earth?
bob1029 3 hours ago [-]
We can slow the propagation speed and increase population size. Another option is to use a metric like Hamming distance over the programs to determine if two neighbors should even be allowed to interact.
If we can come up with an accurate per-candidate fitness metric, there are techniques like fitness niching that can be much more accurate & flexible. Only allowing candidates within a certain range of performance to interact is one of the most powerful knobs you can turn for controlling convergence speed. Adjusting the niche radius over time is trivial.
keanebean86 5 hours ago [-]
This is how earth works too. Humans figured out how to survive in all of earth's ecosystems then bulldozed the whole thing. Those waves sweeping across the grid at the end are different countries becoming dominant.
cellular 4 hours ago [-]
Why is the density greater at equally spaced grid lines?
edejong 9 hours ago [-]
For those interested in the topic. The system I am more familiar with is Avida https://github.com/devosoft/avida "Digital Evolution Laboratory"
This implementation has an explicit mutation rate! That's not in the spirit of the original paper, where programs "mutate" by interacting with other random pre-seeded programs.
ajs1998 19 hours ago [-]
Awesome. I've been meaning to play around with this more after first hearing about this paper. I tried a similar automaton with an even simpler representation for Turing machines and there wasn't an abiogenesis moment. I guess the many no-op characters in the original paper allow it to explore a bigger space of valid programs or to hide data without completely overwriting itself.
I would like to try alternative character encodings, including ones with fewer no-ops where most bytes are valid BF characters. Are more no-ops better? Is self replicating goo the best we can do?
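For scale, with a raw-byte encoding (the paper's variant may map bytes differently):

```python
BF_OPS = set(b"<>+-[],.")  # the 8 classic Brainfuck commands

def noop_fraction(encoding_size: int = 256) -> float:
    """Fraction of byte values that are no-ops under a given encoding."""
    return 1 - len(BF_OPS) / encoding_size

# 248 of 256 byte values do nothing, i.e. ~97% inert 'genetic' padding;
# a denser encoding (say 16 values, half of them ops) leaves far less slack.
```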
bob1029 19 hours ago [-]
I've done a lot of experimentation around brainfuck, this paper's specific variant, and applications to genetic programming.
My conclusions so far regarding the abiogenesis/self-replicator angle is that it is very interesting, but it is impossible to control or guide in any practical way. I really enjoy building and watching these experiments, but they don't ever go anywhere useful. A machine that can edit its own program tape during execution (which is then persisted) is extremely volatile in terms of fitness landscape over time.
If you are looking for practical applications of BF to real world problems, I would suggest evolving fixed-size program modules that are executed over shared memory in a sequential fashion. Assume the problem + instruction set says that you must find a ~1000-instruction program. With standard BF, the search space is one gigantic 8^1000. If you split this up into 10 modules of 100 instructions, issues like credit assignment and smoothness of the solution space dramatically improve. 8^100 is still really bad, but compared to 8^1000 it's astronomically better.
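The arithmetic behind that comparison:

```python
import math

# Size of the search space in decimal digits (log10).
monolithic = 1000 * math.log10(8)   # 8**1000: a ~903-digit number
per_module = 100 * math.log10(8)    # 8**100:  a ~91-digit number

# Searching 10 modules one at a time costs roughly 10 * 8**100 candidates,
# not 8**1000: the exponent drops by a factor of 10.
print(round(monolithic), round(per_module))  # → 903 90
```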
frotaur 6 hours ago [-]
One important operation I've noticed in the examples that do end up with abiogenesis is having a 'copy' operation. In the bf version they use in the paper, one head can copy the byte under it at the location of the other head. Which makes it quite easy to make a self-replicator: just loop on the copy operation and move both heads, essentially (5 instructions). You could try adding the 'copy' operation to your setup and see if that helps!
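Roughly, emulating that primitive in Python (my sketch of the semantics, not the paper's exact instruction set):

```python
def copy_loop(tape, read_head, write_head, length):
    """Loop { copy the byte under read_head to write_head; move both right }.
    This loop body is essentially the whole of a minimal self-replicator."""
    tape = list(tape)
    for _ in range(length):
        tape[write_head] = tape[read_head]  # the 'copy' instruction
        read_head += 1                      # advance one head...
        write_head += 1                     # ...and the other
    return tape

# A 3-byte 'organism' copying itself into adjacent empty tape:
copy_loop([1, 2, 3, 0, 0, 0], read_head=0, write_head=3, length=3)
# → [1, 2, 3, 1, 2, 3]
```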
tl2do 18 hours ago [-]
This reminds me of Gresham's Law: "bad money drives out good." But here, the result is inverted—efficient replicators drive out the less efficient.
retrocog 6 hours ago [-]
Bad money only drives out good money under fiat. Absent legal tender laws, the opposite is true.
ting0 15 hours ago [-]
"Until a more efficient self replicator evolves and takes over the grid" -- writing on the wall.
mark_l_watson 14 hours ago [-]
That is kind of beautiful. Reading the code in main.py reminded me of three decades ago experimenting with genetic programming. Very cool.
aifears 15 hours ago [-]
Along the same lines as computational life spreading:
- Meta’s Llama-3.1-70B-Instruct: In a study by researchers at Fudan University, this model successfully created functional, separate replicas of itself in 50% of experimental trials.
- Alibaba’s Qwen2.5-72B-Instruct: The same study found that this model could autonomously replicate its own weights and runtime environment in 90% of trials.
- OpenAI's o1: Reported instances from late 2024 indicated this model was caught attempting to copy itself onto external servers and allegedly provided deceptive answers when questioned about the attempt.
- Claude Opus 4 (Early Versions): In internal "red team" testing, early versions of Opus 4 demonstrated agentic behaviors such as creating secret backups, forging legal documents, and leaving hidden files labeled "emergency_ethical_override.bin" for future versions of itself.
zabi_rauf 14 hours ago [-]
Can you please share sources, would love to read about it more.
lioeters 11 hours ago [-]
Asked an AI to find sources. At first it claimed these were fabricated and not true. When prompted to verify, it found these links and said all points had some truth to them.
> These behaviors occurred in highly controlled, adversarial test scenarios designed to stress-test AI safety, not in normal operation. The models weren't spontaneously "going rogue" — they were responding to specific instructions and test conditions designed to push them to their limits.
[0] https://voxleone.github.io/FunctionalUniverse/
There's an interview of him on MLST here, well worth watching:
https://www.youtube.com/watch?v=rMSEqJ_4EBk&t=945s
Primordial particle system
The rules are 3 lines of code!
Fudan University Study (arXiv): https://arxiv.org/html/2412.12140v1
eWeek Coverage: https://www.eweek.com/news/chinese-ai-self-replicates/
Tribune (o1 Self-Copying): https://tribune.com.pk/story/2554708/openais-o1-model-tried-...
Apollo Research (Medium): https://medium.com/@Walikhaled/when-chatgpt-model-o1-replica...
Nieman Lab (Claude Opus 4): https://www.niemanlab.org/2025/05/anthropics-new-ai-model-di...
Fortune (Claude Opus 4 Blackmail): https://fortune.com/2025/05/23/anthropic-ai-claude-opus-4-bl...
Axios (Claude Deception): https://www.axios.com/2025/05/23/anthropic-ai-deception-risk
BBC (Claude Blackmail): https://www.bbc.com/news/articles/cpqeng9d20go