The Emergence Ladder
The experiments collectively reveal not a binary (geometry vs. dynamics) but a gradient — an emergence ladder with distinct rungs, each requiring more from the substrate than the last. The ladder tells us precisely which psychological phenomena are computationally cheap, which are expensive, and which require something our substrates do not yet have.
| Rung | What emerges | What it requires | Experimental evidence |
|---|---|---|---|
| 1. Geometric structure | Affect dimensions, valence gradients, arousal variation | Multi-agent survival under uncertainty | V10: all 7 conditions, RSA |
| 2. Representation compression | Low-dimensional internal codes, abstraction | Internal state + selection | Exp 3: from cycle 0 |
| 3. World models | Predictive information about environment beyond current observation | Evolutionary selection, amplified by bottleneck | Exp 2: 100× at bottleneck |
| 4. Computational animism | Agent-model template applied to everything, participatory default | Minimal — present from cycle 0 | Exp 8: animism score > 1.0 in all 20 snapshots |
| 5. Affect geometry alignment | Internal structure maps to behavior | Extended evolutionary selection | Exp 7: seed 7, RSA 0.01 to 0.38 |
| 6. Temporal integration | Memory, anticipation, history-dependence | Memory channels + selection for longer retention | V15: memory decay decreased 6×, stress doubled |
| 7. Biological dynamics | Integration rising under threat | Bottleneck selection + composition (symbiogenesis) | V13/V18: robustness > 1.0 at bottleneck only |
| 8. Counterfactual sensitivity | Detachment, imagination, planning | Closed-loop agency (action-environment-observation) | V20: wall broken, 70× Lenia baseline |
| 9. Self-models | Privileged self-knowledge, recursive modeling | Agency + reflective capacity | V20: SMsal > 1.0 in 2/3 seeds (agents know self better than environment) |
| 10. Normativity | Internal asymmetry between cooperative and exploitative acts | Agency + social context + capacity to act otherwise | BLOCKED — no asymmetry |
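Several evidence entries in the table report RSA scores. Representational similarity analysis compares two systems by correlating their pairwise dissimilarity structures rather than their raw states; a score near 1.0 means internal geometry and behavioral geometry are relationally aligned. A minimal sketch, assuming each condition is summarized as one vector per system (variable names are illustrative, not from the experiments):

```python
import numpy as np

def dissimilarity_matrix(vectors):
    """Pairwise Euclidean distances between per-condition vectors."""
    v = np.asarray(vectors, dtype=float)
    return np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)

def rsa_score(states, behaviors):
    """Spearman correlation between the upper triangles of the two
    dissimilarity matrices: 1.0 means identical relational geometry."""
    iu = np.triu_indices(len(states), k=1)
    a = dissimilarity_matrix(states)[iu]
    b = dissimilarity_matrix(behaviors)[iu]
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of distances
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])
```

Only the relational structure matters: rescaling all behaviors leaves the score at 1.0, which is why RSA can compare internal affect codes to behavior at all.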
Rungs 1–7 are pre-reflective. They describe what a system does without requiring that the system know what it does. A Lenia pattern can have affect geometry, world models, temporal memory, and biological-like integration dynamics without anything we would call awareness. These rungs correspond to the pre-reflective background of experience — the felt sense that precedes and underlies thought.
Rungs 8–10 are reflective. They require the system to act on the world and observe the consequences of its own actions. Counterfactual sensitivity is the capacity to represent what would happen if one acted differently. Self-models are the capacity to represent oneself as an agent among agents. Normativity is the capacity to distinguish what one should do from what one could do. All three require agency — a closed causal loop between self and world.
The wall at rung 8 was the sharpest negative finding of the Lenia program. V20 crossed it. Protocell agents with genuine action-observation loops achieve counterfactual sensitivity from initialization: the wall is architectural, not evolutionary. The finding: everything below rung 8 emerges from existence under pressure; everything at rung 8 and above requires embodied action. In computational terms, agency means closing the action-environment-observation loop, so that the agent's outputs alter the world and the altered world returns as input. V20 provides this. Lenia patterns lack it: they do not choose, they unfold.
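The unfold-versus-choose distinction can be made concrete. In the sketch below (the toy environment and all names are illustrative, not from the experiments), an agent evaluates what each candidate action would yield before committing, which is exactly the comparison a fixed state-to-state rule never performs:

```python
def choose(state, actions, predict, utility):
    """Closed-loop agency: compare predicted outcomes of actions
    not yet taken, then commit to the best one."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# Toy closed loop: an agent on a number line steering toward a target.
target = 5
state = 0
for _ in range(10):
    action = choose(
        state, [-1, 0, +1],
        predict=lambda s, a: s + a,          # imagined consequence
        utility=lambda s: -abs(s - target),  # preference over outcomes
    )
    state = state + action  # environment update closes the loop
# state settles at the target: act, observe, act again
```

A Lenia-style update would be `state = rule(state)` with no `actions` argument and nothing to compare; the loop above is the minimal structure that makes counterfactual sensitivity expressible.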
The rung 7→8 transition has a name: reactivity versus understanding. Reactivity maps the present state to action through decomposable channels; each sensory feature drives its own behavioral response, and the channels can in principle be separated without loss. Everything below rung 8 is reactive in this sense. Understanding maps the possibility landscape to action, comparing what would happen under alternative choices, and that comparison is inherently non-decomposable because it spans whatever partition you impose on the system. V22 and V23 demonstrate this computationally: scalar prediction (V22) is reactive and orthogonal to integration; multi-target prediction (V23) creates per-channel specialization, and integration actually decreases. Neither improves integration, because both tasks are decomposable. Rung 8 requires predictions whose answer depends on the interaction between information sources, not on each source separately. This is understanding: association with the possibility landscape as a whole, not with individual aspects of the present.
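The decomposability point can be illustrated with the smallest synergistic system, XOR: each source alone carries no information about the target, yet the two together determine it exactly. A toy sketch (illustrative only, not the V22/V23 setup):

```python
from itertools import product

# Target depends only on the INTERACTION of the two sources: t = x XOR y.
data = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]

def best_accuracy_from(source_index):
    """Best achievable accuracy predicting the target from ONE source
    alone (majority vote per source value): the decomposable route."""
    correct = 0
    for value in (0, 1):
        targets = [t for *inputs, t in data if inputs[source_index] == value]
        correct += max(targets.count(0), targets.count(1))
    return correct / len(data)

alone = (best_accuracy_from(0), best_accuracy_from(1))  # chance for each
joint = {(x, y): t for x, y, t in data}                 # fully determined
```

Per-channel prediction tops out at chance (0.5) while the joint mapping is deterministic: the structural signature of a task that no set of separable channels can solve.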
The computational mechanism behind the wall is now clear: counterfactual reasoning requires temporal heterogeneity within the system. Some components must be "in the present" (sensing current state) while others are simultaneously "in the possible future" (simulating consequences of actions not taken). This requires per-component temporal models — each element processing its own history through its own dynamics — rather than a shared update rule applied uniformly. Lenia patterns fail because all cells evolve under identical FFT convolution; there is no way for one region to be sensing while another imagines. Biology achieves temporal heterogeneity through neural diversity and recurrent circuits. Recent engineering work on neural architectures with per-neuron temporal models and internal processing ticks confirms this: the capacity for adaptive computation (allocating more processing to harder problems, less to easier ones) emerges only when individual components have private temporal dynamics. The wall at rung 8 is, at bottom, a wall of temporal homogeneity.
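The contrast between a shared update rule and per-component temporal models can be sketched directly. Assuming a 1-D state vector and a leaky-memory trace per unit (an illustrative stand-in for "private temporal dynamics", not the Lenia or protocell implementation):

```python
import numpy as np

def shared_update(state, kernel):
    """Lenia-style temporal homogeneity: one rule, applied uniformly.
    No unit can dwell on the past while another tracks the present."""
    return np.tanh(np.convolve(state, kernel, mode="same"))

def heterogeneous_update(state, memory, timescales, kernel):
    """Per-unit temporal models: each unit mixes new drive with its own
    history at its own rate, so fast units (timescale near 0) follow the
    present while slow units (timescale near 1) integrate context."""
    drive = np.tanh(np.convolve(state, kernel, mode="same"))
    memory = timescales * memory + (1 - timescales) * drive
    return drive, memory

state = np.ones(4)
kernel = np.array([1.0])
timescales = np.array([0.0, 0.5, 0.9, 0.99])  # fast -> slow
uniform = shared_update(state, kernel)         # identical at every unit
_, traces = heterogeneous_update(state, np.zeros(4), timescales, kernel)
```

Under the shared rule every unit computes the same thing from the same input; under the heterogeneous rule the `timescales` vector gives each unit its own history, which is the minimal precondition for one region sensing while another simulates.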
With the wall broken, the cascade proceeds. World models appear (rung 3, now accessible in real time rather than only across evolutionary time). Self-models emerge in 2/3 seeds (SMsal > 1.0: privileged self-knowledge over environment knowledge). Affect geometry is nascent but requires bottleneck selection to develop fully, consistent with the Lenia finding that rung 7 needs the furnace.
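The SMsal metric is reported as a ratio that exceeds 1.0 when self-knowledge beats environment knowledge. As a reading aid only (the source does not define the metric here, so this ratio-of-prediction-errors form is an assumption), one way such a score could be computed:

```python
import numpy as np

def mse(predictions, targets):
    """Mean squared prediction error."""
    p, t = np.asarray(predictions), np.asarray(targets)
    return float(np.mean((p - t) ** 2))

def sm_salience(self_preds, self_true, env_preds, env_true):
    """Hypothetical SMsal: environment error divided by self error.
    Values above 1.0 mean the agent predicts its own next state
    more accurately than it predicts the environment's."""
    return mse(env_preds, env_true) / mse(self_preds, self_true)

# Toy numbers: tight self-prediction, loose environment-prediction.
score = sm_salience([1.0, 2.0], [1.1, 2.1], [1.0, 2.0], [1.5, 2.5])
```

Any monotone comparison of the two error terms would preserve the key property the text relies on: the threshold at 1.0 separates agents whose best model is of themselves from agents whose best model is of their surroundings.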