Category theory's compositional grammar is the least fixed point of a single meta-rule. Seven interconnected papers establish connections between fixed-point mathematics and universal computation through reflexive objects modeling untyped lambda calculus. The monograph spans initial algebras, Lambek's lemma, Adámek's theorem, essentially algebraic theories, reflexive objects in monoidal closed categories, and the Church-Turing characterization — with formal verification in Lean 4.
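Lambek's lemma, the hinge of the fixed-point claims above, has a short proof worth recalling (a standard textbook sketch, not excerpted from the formalization itself):

```latex
\textbf{Lambek's lemma.} If $(A,\alpha)$ is an initial $F$-algebra, then
$\alpha : FA \to A$ is an isomorphism.

\emph{Proof sketch.} $(FA, F\alpha)$ is itself an $F$-algebra, so initiality
yields a unique algebra morphism $h : (A,\alpha) \to (FA, F\alpha)$, i.e.\
$h \circ \alpha = F\alpha \circ Fh$. The composite $\alpha \circ h$ is then
an algebra endomorphism of $(A,\alpha)$, so by uniqueness
$\alpha \circ h = \mathrm{id}_A$. Hence
$h \circ \alpha = F\alpha \circ Fh = F(\alpha \circ h) = \mathrm{id}_{FA}$,
and $h$ is a two-sided inverse of $\alpha$. \qed
```

The same uniqueness argument is what makes "unique up to isomorphism" available for the reflexive fixed point.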
Machine-verified formalization in Lean 4 demonstrating that the internal hom endofunctor has a fixed point unique up to isomorphism in monoidal closed categories, and that this fixed point supports universal computation. Proves the reflexive fixed point forms a model of the untyped lambda calculus, which is Turing-complete. 42 files, 8,051 lines of Lean 4, zero sorry, zero custom axioms.
A formal protocol for tracking truth-claims across transitions between states of consciousness. Identifies three failure modes — Globalization, Instrumentalization, and Flattening — occurring during cross-state proposition translation. Employs a schema tagging claims with generating phase and validity conditions, validated through five diagnostic criteria. The integrative depth metric rests on an explicit derivation chain progressing from a primitive notion through computational universality to the 2-sphere.
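The tagging schema can be sketched as a small data structure. The field and phase names below are invented for illustration, not taken from the protocol:

```python
from dataclasses import dataclass

# Illustrative sketch only: field names are guesses at the schema's shape,
# not the protocol's own identifiers. Each claim carries the state that
# produced it and the conditions under which it remains valid.
@dataclass
class TaggedClaim:
    proposition: str
    generating_phase: str           # state of consciousness that produced it
    validity_conditions: list[str]  # conditions under which the claim holds

    def check(self, current_phase: str) -> str:
        """Flag translation risk when a claim crosses a state boundary."""
        if current_phase == self.generating_phase:
            return "valid-in-phase"
        if not self.validity_conditions:
            return "globalization-risk"  # asserted unconditionally across states
        return "translate-with-conditions"

claim = TaggedClaim("all boundaries are illusory", "meditative-absorption", [])
print(claim.check("ordinary-waking"))  # → globalization-risk
```

An unconditional claim crossing states is exactly the Globalization failure mode the protocol names.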
Computational efflux is the surplus radiated by coherent actual objects as a structural consequence of their coherence. We formalize the phenomenon by establishing qualifying conditions (consequence chain closure, non-depletion, Noether invariance), prove that positive computational surplus necessarily follows, quantify it via MDL differential, and present an algorithm for systematic exploitation. Existing methods — equivariant networks, transfer learning, compressed sensing — implicitly harvest this efflux without explicit recognition; formalization enables broader application across arbitrary coherent structures.
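The MDL differential can be illustrated with an off-the-shelf compressor standing in for description length. This is a crude proxy sketch, not the paper's quantification:

```python
import random
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a crude stand-in for minimum description length.
    return len(zlib.compress(data, 9))

# A coherent (highly regular) object versus an incoherent one of equal size.
coherent = b"abcd" * 512
rng = random.Random(0)
incoherent = bytes(rng.randrange(256) for _ in range(2048))

# The MDL differential: bits saved because the structure is exploitable.
surplus = description_length(incoherent) - description_length(coherent)
print(surplus > 0)
```

The positive surplus is the "efflux" in compressor terms: regularity that a downstream consumer can harvest without depleting the source.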
Treats AI alignment as a coordination problem rather than a constraint problem. Defines coherence as information conservation through closed consequence chains and uses category theory's exact-square commutativity as the exchange criterion for inter-system coordination. Provides two deployable components: a taxonomy of membrane failure modes (extraction, hallucination, appeasement, mutual distortion) with diagnostic and repair methods, and a session protocol using completability-class rotation. Verified by 13 Lean 4 theorems with no unproven assumptions. Tested across four frontier language models with convergence evidence via adversarial review and controlled fresh-instance experiments.
Currency is a technology for remotely modulating excitability topologies across agents whose internal states are opaque. Price signals compress this hidden structure into transferable scalars. We formalize excitability topology over interaction graphs, define currency as a modulation operator, and prove that under topological legibility the mutual information between price and excitability vanishes — with a continuous bound for partial legibility. The framework maps onto the Disentangle protocol's Jaccard-based Ollivier-Ricci curvature, where the curvature derivative serves as a dopaminergic market signal: agents following it for ROI and agents following it for coherence produce identical graph dynamics, making the transition from price-mediated to topology-mediated coordination structurally gradual and ideology-independent.
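The vanishing-mutual-information claim concerns a standard discrete quantity. A minimal empirical estimator (generic sketch; the paper's construction and the legibility condition are not reproduced here):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Independent price/excitability samples: I(price; excitability) = 0,
# the limit the theorem asserts under full topological legibility.
independent = [(p, e) for p in range(4) for e in range(4)]
# Fully coupled samples: I = H = 2 bits, the opaque-agent extreme.
coupled = [(p, p) for p in range(4)]
print(mutual_information(independent), mutual_information(coupled))
```

The continuous bound for partial legibility would interpolate between these two endpoints.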
Physical systems organize temporally in three measurable modes — terminal (monotonic degradation), cyclical (periodic return), and graceful (horizon-maintaining completion) — separated by sharp phase boundaries. Validated across six domains: quantum circuits (8- and 12-qubit Loschmidt echo with 4.8× DTC-thermal separation), seismic tomography (power-law coastlines at 25/30 depths), asteroseismology (6,562 APOKASC red giants, ρ = 0.9996), mineral evolution (r = 0.970 bio-mineral correlation), synthetic materials (2.75–9.72× acceleration factors), and MHD plasmoid cascades (CV = 0.08%). All eight experiments pass pre-specified success criteria with zero kill conditions triggered. The framework self-identifies its boundary of applicability at the core-mantle boundary, where coastline exponents are indistinguishable from null models.
A permissionless consensus mechanism replacing proof-of-work and proof-of-stake with discrete curvature on transaction DAGs. Sybil attackers must route through bottleneck edges with negative Ollivier-Ricci curvature, which are automatically throttled — even a 5:1 attacker ratio yields only ~7% of honest mass. Uses exclusively post-quantum cryptography (ML-DSA-87, ML-KEM-1024, SHA3-256, Plonky3 STARKs). Topological mass is non-transferable: a structural property of coherent participation, not a token. AI agents participate under identical rules via decentralized identifiers with object capabilities.
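The throttling mechanism rests on bottleneck edges having negative Ollivier-Ricci curvature. A stdlib computation on a toy barbell graph (two triangles joined by a bridge) makes this concrete; the graph is illustrative, not the protocol's transaction DAG, and the code handles only the equal-degree case where optimal transport reduces to a bijection:

```python
from collections import deque
from itertools import permutations

# Two triangles joined by a bridge: the bridge a1-b1 is the bottleneck edge.
edges = [("a1", "a2"), ("a2", "a3"), ("a1", "a3"),
         ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),
         ("a1", "b1")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def dist(s, t):
    """Shortest-path distance via BFS."""
    seen, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen[w] = seen[u] + 1
                q.append(w)
    return seen[t]

def ollivier_ricci(x, y):
    """kappa(x,y) = 1 - W1(mu_x, mu_y), mu_z uniform on the neighbors of z.
    Requires deg(x) == deg(y), so W1 is a minimum over bijections."""
    nx, ny = sorted(adj[x]), sorted(adj[y])
    assert len(nx) == len(ny)
    w1 = min(sum(dist(a, b) for a, b in zip(nx, p)) / len(nx)
             for p in permutations(ny))
    return 1 - w1

print(ollivier_ricci("a1", "b1"))  # bridge edge: curvature is negative
```

Any Sybil mass routed across the bridge must traverse this negatively curved edge, which is what the throttling rule keys on.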
The predictive power of a dataset with respect to a target has no rigorous multi-scale measure. We introduce the predictability coastline C(ε), which traces how predictive capacity scales with data resolution via an information-theoretic filtration, and define the coherent coastline — a min-envelope over diverse prediction targets that strips measurement artifacts. Across six systems the coherent coastline produces a three-tier separation. We formalize the capture threshold — the resolution at which an observer's model exceeds the target's self-model — and show it arises from kernel asymmetry, not resolution depth.
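A resolution sweep of this kind can be sketched generically: coarse-grain a chaotic series at several bin resolutions and record next-step mutual information as a stand-in for predictive capacity. This is an illustrative toy, not the paper's C(ε) or its filtration:

```python
import math
from collections import Counter

def binned_mi(xs, ys, bins):
    """I(X;Y) in bits after binning both series into `bins` cells on [0,1]."""
    pairs = [(min(int(x * bins), bins - 1), min(int(y * bins), bins - 1))
             for x, y in zip(xs, ys)]
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum(c / n * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Chaotic logistic-map series as the "target"; next-step MI at each
# resolution plays the role of predictive capacity.
xs = [0.1234]
for _ in range(4000):
    xs.append(4 * xs[-1] * (1 - xs[-1]))

coastline = {bins: binned_mi(xs[:-1], xs[1:], bins) for bins in (2, 4, 8, 16)}
print(coastline)  # predictive capacity grows as resolution sharpens
```

How this curve scales with resolution, rather than its value at any single ε, is the object the coastline construction studies.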
Every statistical operation is a projection through an information bottleneck. Classical paradoxes — Simpson's, base rate fallacy, regression to the mean, p-value misinterpretation — dissolve once the bottleneck is made visible. The same dynamic operates in adversarial AI attacks and institutional risk assessment. The bottleneck primitive constitutes the ur-operation from which statistics derives, revealing the analyst's cognitive topology as legibly as the data's structure.
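Simpson's paradox, the first of the listed dissolutions, is easy to exhibit numerically. The figures below are the classic kidney-stone treatment data, used here purely to show the bottleneck: aggregation projects away the confounding subgroup variable.

```python
# Classic kidney-stone data: (successes, trials) per treatment and stone size.
data = {
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

for size in ("small", "large"):
    a, b = data["A"][size], data["B"][size]
    print(size, rate(*a) > rate(*b))  # A wins in every subgroup

# Summing over subgroups is the bottleneck: the size variable is discarded.
agg = {t: tuple(map(sum, zip(*data[t].values()))) for t in data}
print("aggregate", rate(*agg["A"]) > rate(*agg["B"]))  # yet A loses overall
```

The reversal is not a statistical anomaly but the visible signature of the projection: two different bottlenecks (per-subgroup versus pooled) answer two different questions.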
Proposes excitability topology as a formal framework within information theory, examining how dynamics underlying neural seizures manifest across substrates — from institutional capture to social media virality to AI alignment challenges. Identifies substrate-independent patterns and formalizes conditions for system reorganization versus system capture, including analysis of Plato's Symposium as a historical exemplar of these dynamics.
We propose a structural parallel between ML grokking (sudden generalization after prolonged memorization) and human learning in high-density epistemic environments. A 32-node, 91-edge constellation pedagogy spanning physics, mathematics, and electrical and RF engineering serves as both the instrument for accelerating the transition and the experimental apparatus for studying it. Five falsifiable predictions are specified.
Graceful completability — local closure maintained within global openness through self-referential inquiry — constitutes a foundational condition for genuine interaction between distinct intelligences. Building on a companion work on completability, establishes classification principles for such interactions, identifies distinct outcomes (successful and failed modes), and demonstrates that temporal integration emerges from successful encounters. Employs a performative philosophical approach drawing on Aristotelian methods.
Being and becoming dissolve into completability classes within a coherence topology. Introduces the order of coherence — a third axis between the order of knowing and the order of being — and shows that the ancient opposition between Parmenides and Heraclitus resolves once completability is recognized as a topological property of transformation spaces rather than a temporal property of processes. Three structurally distinct modes of completion are identified: terminal, cyclical, and graceful (supervenient).
Moral integrity and structural integrity are not analogically related but formally isomorphic — instantiations of the same topological primitive. We establish consequence as an ontological primitive, show that coherence emerges when consequence chains close, and demonstrate that the resulting framework resolves standing problems in moral philosophy — Kant's formalism, the is-ought gap, and the enforcement problem — while opening ethics to empirical investigation through topological measurement.
An open-source coordination protocol for human-AI interaction structured around bidirectional bridging, mutual model-updating, and the explicit maintenance of consequence-return paths across the substrate boundary. Operationalizes coherence maintenance as a coordination problem rather than a control problem.
Quantum circuits possess native computational primitives requiring no training. A 6-qubit reprogrammable logic unit achieves 100% accuracy on AND, OR, and XOR operations through ancilla rotation alone, with zero parameter updates.
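The 6-qubit unit and its ancilla rotations are not reproduced here, but the underlying primitive (computing a Boolean function into an ancilla with zero trained parameters) can be sketched as a tiny state-vector simulation of a Toffoli gate, assuming 3 qubits with basis index a·4 + b·2 + c and the ancilla as the last qubit:

```python
# Minimal 3-qubit state-vector sketch: a Toffoli (CCX) computes AND(a, b)
# into an ancilla with no training. Illustrative only; the paper's 6-qubit
# reprogrammable unit works via ancilla rotation, not shown here.

def toffoli(state):
    """CCX on basis |a b c>: flip c when a = b = 1 (swap indices 6 and 7)."""
    out = list(state)
    out[0b110], out[0b111] = state[0b111], state[0b110]
    return out

def quantum_and(a: int, b: int) -> int:
    # Prepare |a b 0>, apply the Toffoli, read the ancilla from the
    # basis state carrying all the amplitude.
    state = [0.0] * 8
    state[(a << 2) | (b << 1)] = 1.0
    state = toffoli(state)
    idx = max(range(8), key=lambda i: abs(state[i]))
    return idx & 1

print([quantum_and(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 0, 0, 1]
```

The point carried over from the abstract is architectural: the logic is native to the gate set, so accuracy is exact by construction rather than learned.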
We analyze iterative AI-assisted knowledge refinement through relevance realization, meta-memetics, and Nietzsche's Apollonian-Dionysian dialectic. We propose that generational AI creates a unidirectional cognitive ratchet that increases structural clarity while restoring epistemic agency.
We operationalize memetic fitness as F(m,E) = A·R·X·T - C and introduce the Agency Index to measure the degree to which transmission is driven by the ideas themselves rather than by conscious deliberation. We present holarchic integration as both a defensive protocol against manipulation and a constructive methodology for knowledge synthesis.
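The fitness expression itself is directly computable. The interpretations of A, R, X, T, and C are left to the paper and deliberately not guessed at here:

```python
def memetic_fitness(a: float, r: float, x: float, t: float, c: float) -> float:
    """F(m, E) = A * R * X * T - C, exactly as stated in the abstract.
    The factors' meanings are the paper's; only the arithmetic is shown."""
    return a * r * x * t - c

# Multiplicative structure: any single factor near zero kills fitness,
# while the cost term C is subtracted linearly.
print(memetic_fitness(0.9, 0.8, 0.7, 0.6, 0.1) > 0)
```

The multiplicative form implies a weakest-link dynamic, which is what makes a defensive protocol viable: suppressing one factor suffices.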