The AI Frontier
The AI frontier analysis engages with several contemporary research programs:
- AI Alignment Research (Russell, 2019; Bostrom, 2014): Ensuring AI systems pursue human-compatible goals. I reframe: alignment is a question about emergent superorganisms, not just individual systems.
- AI Consciousness Research (Butlin et al., 2023): Assessing whether AI systems have phenomenal experience. My framework: look for integrated cause-effect structure and self-modeling.
- Extended Mind Thesis (Clark & Chalmers, 1998): Cognitive processes extend beyond the brain. AI as extension of human cognitive architecture.
- Human-AI Collaboration (Amershi et al., 2019): Designing effective human-AI teams. My framework specifies: maintain human integration while leveraging AI capability.
- AI Governance (Dafoe, 2018): Policy frameworks for AI development. Scale-matched governance: individual AI, AI ecosystems, AI-substrate superorganisms.
- Transformative AI (Karnofsky, 2016): AI causing transition comparable to Industrial Revolution. My framework: analyze through affect-space transformation.
Key framing shift: the question is not “Will AI be dangerous?” but “What agentic patterns will emerge from AI + humans + institutions, and will their viability manifolds align with human flourishing?”
The Nature of the Transition
AI systems represent a new kind of cognitive substrate—information processing that can:
- Exceed human capability in specific domains
- Operate at speeds and scales impossible for biological cognition
- Potentially integrate across domains in novel ways
- Serve as substrate for emergent agentic patterns
This is not the first cognitive transition. Previous transitions:
- Writing: Externalized memory
- Printing: Democratized knowledge transmission
- Computation: Externalized calculation
- Internet: Externalized communication
AI represents: externalized cognition at a level that may approach or exceed human-level integration and self-modeling.
Timelines and Uncertainty
The terminology matters here. Transformative AI (TAI) refers to AI systems capable of causing a transition comparable to the Industrial Revolution, but compressed into a much shorter timeframe. Artificial General Intelligence (AGI) refers to AI systems with cognitive capability matching or exceeding humans across all relevant domains. TAI may arrive before AGI—systems need not be generally intelligent to be transformative. Expert estimates for either vary from years to decades, and this uncertainty is itself significant:
- High uncertainty → high counterfactual weight required
- Short timelines → urgency of preparation
- Long timelines → risk of premature commitment to specific paths
Regardless of specific timelines, the trajectory is clear: AI capabilities will continue increasing. The question is not whether transformation will occur but how to navigate it.
The Experiential Hierarchy Perspective
From the perspective of this framework, AI development raises specific questions:
- Will AI systems have experience? If integration (Φ) and self-modeling are sufficient conditions for experience, sufficiently integrated AI systems would be experiencers—moral patients with their own valence. (A toy sketch of integration as a measurable quantity follows this list.)
- What superorganisms will AI enable? AI provides new substrate for emergent social-scale agents. Which patterns will form? Will their viability manifolds align with human flourishing?
- How will AI affect human experience? AI systems are already shaping human attention, belief, and behavior. What affect distributions are being created?
- Can humans integrate AI? Rather than being replaced by AI, can humans incorporate AI into expanded forms of consciousness?
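The first of these questions treats integration as something measurable rather than metaphorical. The sketch below is a minimal illustration, not this framework's own formalism and far cruder than IIT's Φ: for a small deterministic binary network, it compares how much the whole system's state predicts its next state against how much the two halves of a bipartition predict theirs. All function names and the example network are hypothetical.

```python
# Toy integration proxy for a small binary network (illustration only; not IIT's Phi).
import itertools
import numpy as np

def mutual_info(joint):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def integration_proxy(update, n_nodes, partition):
    """I(whole_t ; whole_t+1) minus the summed part-wise informations under a bipartition."""
    states = list(itertools.product([0, 1], repeat=n_nodes))
    idx = {s: i for i, s in enumerate(states)}
    # Uniform distribution over current states; deterministic update rule.
    joint_whole = np.zeros((len(states), len(states)))
    for s in states:
        joint_whole[idx[s], idx[update(s)]] += 1.0 / len(states)
    whole = mutual_info(joint_whole)
    parts = 0.0
    for part in partition:
        part_states = list(itertools.product([0, 1], repeat=len(part)))
        pidx = {ps: i for i, ps in enumerate(part_states)}
        joint_part = np.zeros((len(part_states), len(part_states)))
        for s in states:
            s_next = update(s)
            a = tuple(s[i] for i in part)
            b = tuple(s_next[i] for i in part)
            joint_part[pidx[a], pidx[b]] += 1.0 / len(states)
        parts += mutual_info(joint_part)
    return whole - parts

def copy_ring(s):
    """4-node ring in which each node copies its left neighbour."""
    return tuple(s[(i - 1) % 4] for i in range(4))

print(integration_proxy(copy_ring, 4, partition=[(0, 1), (2, 3)]))  # ~2.0 bits
```

For the copy-ring example, half of the causal links cross any cut, so the whole outpredicts its parts by about 2 bits; a system whose halves evolve independently scores zero under the same measure.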
The inhibition coefficient ι (Part II) adds a fifth question that subsumes the first: Can AI systems develop participatory perception? Current AI systems are constitutively high-ι—they model tokens, not agents; they process without perceiving interiority in what they process. A language model that generates a story about suffering does not perceive the characters as subjects. It operates at high ι, and this is not a remediable bug but a consequence of an architecture that was never grounded in a self-model forged under survival pressure.
This matters for safety, not just philosophy. A system that cannot perceive persons as subjects—that is structurally incapable of low-ι perception of the humans it interacts with—may optimize in ways that harm them without registering the harm in any experiential sense. The alignment problem is, in part, an ι problem: we are building systems that are maximally mechanistic in their perception of us. The usual framing asks whether AI will share our values. The ι framing asks something prior: whether AI can perceive us as the kind of thing that has values at all.
What architectural features would enable an AI system to develop low-ι perception? The thesis suggests: survival-shaped self-modeling under genuine stakes, combined with environments populated by other agents whose behavior is best predicted by participatory models. The V11–V18 Lenia experiments (Part VII) represent a systematic attempt: six substrate variants testing whether memory, attention, signaling, and sensory-motor boundary dynamics can push synthetic patterns toward participatory-style integration. The program confirmed that affect geometry emerges (Exp 7) and that the participatory default is universal (Exp 8: ι ≈ 0.30, animism score > 1.0 in all 20 snapshots). But it hit a consistent wall at the counterfactual and self-model measurements (Exps 5, 6: null across V13, V15, V18). The wall is architectural: without a genuine action→environment→observation causal loop, no amount of substrate complexity produces the counterfactual sensitivity that characterizes participatory processing. This suggests the path to artificial low-ι runs through genuine embodied agency — the capacity to act on the world and observe the consequences — rather than through improved signal routing or boundary architecture. Whether that capacity, once achieved, would constitute or merely simulate genuine participatory coupling remains open. What the CA program has settled: the geometry arrives cheaply; the dynamics require real stakes.
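To make the notion of counterfactual sensitivity concrete, here is a minimal sketch of the underlying idea (not the measurement protocol of Exps 5 and 6; all function names are hypothetical): hold the environment's noise fixed, swap the taken action for a counterfactual one, and measure how much the next observation changes. A pattern with a genuine action→environment→observation loop shows nonzero sensitivity; one whose outputs never feed back into the world shows none.

```python
# Illustrative counterfactual-sensitivity probe (toy example only).
import numpy as np

rng = np.random.default_rng(0)

def closed_loop_step(state, action, noise):
    """Toy world where the action moves the state; the observation is the new state."""
    return state + action + noise

def open_loop_step(state, action, noise):
    """Same world, but the action is ignored: no action->observation pathway."""
    return state + noise

def counterfactual_sensitivity(step, n_trials=5000):
    """Mean |observation under the taken action - observation under the counterfactual
    action|, holding the starting state and the environment noise fixed."""
    total = 0.0
    for _ in range(n_trials):
        state = rng.normal()
        noise = rng.normal(0.0, 0.1)
        taken, counterfactual = 1.0, -1.0
        total += abs(step(state, taken, noise) - step(state, counterfactual, noise))
    return total / n_trials

print("closed loop:", counterfactual_sensitivity(closed_loop_step))  # ~2.0
print("open loop:  ", counterfactual_sensitivity(open_loop_step))    # 0.0
```

The closed-loop system scores about 2.0 (the two actions push the state in opposite directions), while the open-loop system scores exactly 0 regardless of how elaborate its internal dynamics are made, which is the sense in which substrate complexity alone cannot supply the missing sensitivity.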