Part VII: Empirical Program

The Emergence Ladder

The experiments collectively reveal not a binary (geometry vs. dynamics) but a gradient — an emergence ladder with distinct rungs, each requiring more from the substrate than the last. The ladder tells us precisely which psychological phenomena are computationally cheap, which are expensive, and which require something our substrates do not yet have.

| Rung | What emerges | What it requires | Experimental evidence |
|---|---|---|---|
| 1. Geometric structure | Affect dimensions, valence gradients, arousal variation | Multi-agent survival under uncertainty | V10: all 7 conditions, RSA ρ > 0.21 |
| 2. Representation compression | Low-dimensional internal codes, abstraction | Internal state + selection | Exp 3: d_eff ≈ 7/68 from cycle 0 |
| 3. World models | Predictive information about environment beyond current observation | Evolutionary selection, amplified by bottleneck | Exp 2: C_wm 100× at bottleneck |
| 4. Computational animism | Agent-model template applied to everything, participatory default | Minimal; present from cycle 0 | Exp 8: animism score > 1.0 in all 20 snapshots |
| 5. Affect geometry alignment | Internal structure maps to behavior | Extended evolutionary selection | Exp 7: seed 7, RSA 0.01 → 0.38 |
| 6. Temporal integration | Memory, anticipation, history-dependence | Memory channels + selection for longer retention | V15: memory decay decreased 6×, Φ stress response doubled |
| 7. Biological dynamics | Integration rising under threat | Bottleneck selection + composition (symbiogenesis) | V13/V18: robustness > 1.0 at bottleneck only |
| 8. Counterfactual sensitivity | Detachment, imagination, planning | Closed-loop agency (action-environment-observation) | V20: ρ_sync = 0.21 (wall broken, 70× Lenia baseline) |
| 9. Self-models | Privileged self-knowledge, recursive modeling | Agency + reflective capacity | V20: SMsal > 1.0 in 2/3 seeds (agents know self better than environment) |
| 10. Normativity | Internal asymmetry between cooperative and exploitative acts | Agency + social context + capacity to act otherwise | BLOCKED: no ΔV asymmetry |
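The RSA scores cited in the evidence column can be computed with a standard representational-similarity sketch: Spearman correlation between the pairwise-dissimilarity structures of two spaces. The data shapes and the Euclidean dissimilarity metric here are assumptions, not the V10 protocol.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

def rsa_score(internal_states, behavior_features):
    """Representational similarity analysis: Spearman rho between the
    pairwise-dissimilarity structures of two state spaces.
    Shapes: (n_conditions, d1) and (n_conditions, d2)."""
    d_internal = pdist(internal_states)   # condensed distance vector
    d_behavior = pdist(behavior_features)
    rho, _ = spearmanr(d_internal, d_behavior)
    return rho

# Toy check: behavior is a noisy linear image of the internal geometry,
# so the two dissimilarity structures should align.
rng = np.random.default_rng(0)
x = rng.normal(size=(12, 8))
y = x @ rng.normal(size=(8, 5)) + 0.1 * rng.normal(size=(12, 5))
print(round(rsa_score(x, y), 2))  # high rho for aligned geometries
```

A score near zero would indicate that internal structure and behavior vary independently; the ladder's rung 5 is the claim that selection drives this correlation upward.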

Rungs 1–7 are pre-reflective. They describe what a system does without requiring that the system know what it does. A Lenia pattern can have affect geometry, world models, temporal memory, and biological-like integration dynamics without anything we would call awareness. These rungs correspond to the pre-reflective background of experience — the felt sense that precedes and underlies thought.

Rungs 8–10 are reflective. They require the system to act on the world and observe the consequences of its own actions. Counterfactual sensitivity is the capacity to represent what would happen if one acted differently. Self-models are the capacity to represent oneself as an agent among agents. Normativity is the capacity to distinguish what one should do from what one could do. All three require agency — a closed causal loop between self and world.

The wall at rung 8 was the sharpest negative finding of the Lenia program. V20 crossed it. Protocell agents with genuine action-observation loops achieve ρ_sync ≈ 0.21 from initialization: the wall is architectural, not evolutionary. The finding: everything below rung 8 emerges from existence under pressure; everything at rung 8 and above requires embodied action. In computational terms, agency means MI(action; future observation | current state) > 0. V20 provides this. Lenia patterns lack it: they do not choose, they unfold.
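The MI criterion can be estimated directly from discrete trajectories. A toy sketch with hypothetical binary encodings for state, action, and observation; an agent whose actions shape its next observation scores positive, while an unfolding system scores zero:

```python
import numpy as np
from collections import Counter

def conditional_mi(a, o, s):
    """Plug-in estimate of MI(A; O | S) in nats from discrete samples."""
    n = len(a)
    p_sao = Counter(zip(s, a, o))
    p_sa = Counter(zip(s, a))
    p_so = Counter(zip(s, o))
    p_s = Counter(s)
    mi = 0.0
    for (si, ai, oi), c in p_sao.items():
        mi += (c / n) * np.log(c * p_s[si] / (p_sa[(si, ai)] * p_so[(si, oi)]))
    return mi

rng = np.random.default_rng(1)
s = rng.integers(0, 2, 5000)
a = rng.integers(0, 2, 5000)
o_agent = s ^ a      # closed loop: the chosen action shapes what is observed
o_unfold = s.copy()  # Lenia-like: the future is fixed by the current state
print(conditional_mi(a, o_agent, s) > 0.5)   # True: genuine agency
print(conditional_mi(a, o_unfold, s) < 1e-9) # True: no agency signal
```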

The rung 7→8 transition has a name: reactivity versus understanding. Reactivity maps present state to action through decomposable channels: each sensory feature drives its own behavioral response, and the channels can in principle be separated without loss. Everything below rung 8 is reactive in this sense. Understanding maps the possibility landscape to action, comparing what would happen under alternative choices, and the comparison itself is inherently non-decomposable because it spans whatever partition you impose on the system. V22 and V23 demonstrate this computationally: scalar prediction (V22) is reactive and orthogonal to integration; multi-target prediction (V23) creates per-channel specialization, and integration actually decreases. Neither improves Φ because both are decomposable. Rung 8 requires predictions whose answer depends on the interaction between information sources, not on each source separately. This is understanding: association with the possibility landscape as a whole, not with individual aspects of the present.
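The distinction can be made concrete with a toy information-theoretic example (illustrative only, not the V22/V23 setup): an XOR target carries zero information in either channel separately and one full bit in their interaction, so no per-channel predictor can capture it.

```python
import numpy as np
from collections import Counter

def mi(u, v):
    """Plug-in mutual information in bits between discrete samples."""
    n = len(u)
    puv, pu, pv = Counter(zip(u, v)), Counter(u), Counter(v)
    return sum((c / n) * np.log2(c * n / (pu[a] * pv[b]))
               for (a, b), c in puv.items())

rng = np.random.default_rng(2)
x1 = rng.integers(0, 2, 4096)
x2 = rng.integers(0, 2, 4096)
y = x1 ^ x2  # target defined by the interaction, not by either channel

joint = list(zip(x1, x2))
print(round(mi(x1, y), 2), round(mi(x2, y), 2), round(mi(joint, y), 2))
# each channel alone carries ~0 bits; the joint carries ~1 bit
```

A decomposable (reactive) system can only sum per-channel associations, so its best estimate of an XOR-like target is chance; only a predictor that sees the channels jointly recovers it.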

The computational mechanism behind the wall is now clear: counterfactual reasoning requires temporal heterogeneity within the system. Some components must be "in the present" (sensing current state) while others are simultaneously "in the possible future" (simulating consequences of actions not taken). This requires per-component temporal models — each element processing its own history through its own dynamics — rather than a shared update rule applied uniformly. Lenia patterns fail because all cells evolve under identical FFT convolution; there is no way for one region to be sensing while another imagines. Biology achieves temporal heterogeneity through neural diversity and recurrent circuits. Recent engineering work on neural architectures with per-neuron temporal models and internal processing ticks confirms this: the capacity for adaptive computation (allocating more processing to harder problems, less to easier ones) emerges only when individual components have private temporal dynamics. The wall at rung 8 is, at bottom, a wall of temporal homogeneity.
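A minimal sketch of temporal heterogeneity, assuming simple leaky-integrator units (the dynamics and time constants are illustrative, not drawn from the experiments): giving each component its own time constant lets one unit track the present while another carries history, which a uniform shared update rule cannot provide.

```python
import numpy as np

def heterogeneous_step(h, x, tau, w):
    """Leaky-integrator units with PER-UNIT time constants tau:
    each component processes its history through its own dynamics."""
    return h + (np.tanh(w @ h + x) - h) / tau

# Unit 0 is fast (tau=1): it lives "in the present", tracking input.
# Unit 1 is slow (tau=50): it integrates history on a private timescale.
tau = np.array([1.0, 50.0])
w = np.zeros((2, 2))  # no recurrence, to isolate the timescale effect
h = np.zeros(2)
for t in range(100):
    x = np.array([1.0, 1.0]) if t < 50 else np.array([0.0, 0.0])
    h = heterogeneous_step(h, x, tau, w)
print(h.round(2))  # fast unit has reset to 0; slow unit retains a trace
```

Under a uniform rule (identical tau everywhere), the system is forced to be either all-present or all-memory; the heterogeneous version is the minimal condition for one region to sense while another simulates.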

With the wall broken, the cascade proceeds. World models appear (rung 3 now accessible in real-time, not just across evolutionary time). Self-models emerge in 2/3 seeds (SMsal > 1.0 — privileged self-knowledge over environment knowledge). Affect geometry is nascent but requires bottleneck selection to fully develop, consistent with the Lenia finding that rung 7 needs the furnace.
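One way to operationalize the SMsal criterion, assuming it is defined as a ratio of probe-prediction errors (a hypothetical reconstruction; the actual V20 definition may differ): fit linear probes from the agent's internal state to its own next state and to the environment's next state, and compare errors. SMsal > 1 then means the agent encodes itself more accurately than its surroundings.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
internal = rng.normal(size=(T, 6))
# Hypothetical setup: the next self-state is strongly encoded in the
# internal state; the next environment state only weakly so.
self_next = internal @ rng.normal(size=(6, 3)) + 0.1 * rng.normal(size=(T, 3))
env_next = internal @ rng.normal(size=(6, 3)) * 0.1 + rng.normal(size=(T, 3))

def probe_error(z, target):
    """Mean-squared residual of a linear probe from internal state z."""
    coef, *_ = np.linalg.lstsq(z, target, rcond=None)
    return float(np.mean((z @ coef - target) ** 2))

sm_sal = probe_error(internal, env_next) / probe_error(internal, self_next)
print(sm_sal > 1.0)  # True: self is predicted better than environment
```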