Image Symbol Code


Prologue

Life as Feedback Loops

A spark in the noise. From the first self‑replicating strand to a high-frequency trading bot, every living system survives by closing information loops: sense → compare → act → sense again. When the inward torrent of signals outruns the outward shocks, order endures; when the loop breaks, entropy collects its debt. Life, throughout this book, is nothing more (and nothing less) than feedback strong enough to pay its own thermodynamic bills.

If a pattern can correct its own errors faster than the universe can erase it, the pattern lives.

This view lets microbes, forests, nervous systems, and—yes—markets sit on the same continuum: each refines internal models to stay one step ahead of surprise. Chapters 1 – 4 trace how those loops scale from cell to civilisation.

Markets as Information Processors

Prices are packets. Long before silicon, the price system computed the uncomputable: who should grow wheat, where, and at what time‑discount? A single number at the farm gate already encodes soil chemistry, rail tariffs, futures sentiment, and next month’s weather rumours. The market, in our lexicon, is an unconscious super‑computer that routes scarcity messages faster than any ministry could draft memos.

Once we accept markets as processors, booms and busts appear as latency spikes or bandwidth throttles—the same pathologies that plague digital networks. Chapters 5 – 10 ride this metaphor through Rome, railroads, television, and the iPhone.

Culture as Externalised Imagination

Shared hallucination, scalable. Every story, law, or user‑interface is a projection of mental imagery onto a public screen—cave wall, parchment, widescreen, or VR headset. These projections let individual brains pool models and coordinate at planetary scale. Culture, then, is humanity’s distributed imagination rendered in image, symbol, and code.

When new media lower the cost of projection, the collective dreamscape thickens: posters breed cinema; television breeds social feeds; today’s generative AI breeds self‑spawning memes. The final chapters watch that thickening reach its current horizon—and ask what happens when machines start dreaming back.

If life is feedback and markets are routers, culture is the packet payload: the story we swap to justify the trade.

CORE OPERATORS

A Margin Glossary for Rapid Recall

Each entry gives the icon, term, field of origin, and a one-sentence use-case.

🔄 Renormalisation (Physics → complex systems): When internal chatter outnumbers external shocks, a subsystem seals into its own scale (cell → organ → firm → nation).
📈 Volatility (Finance ↔ info-theory): The measurable jitter when a system digests new data; high volatility = rapid learning or panic.
🌀 Simulacrum (Baudrillard): A symbol that no longer points to an original, only to other symbols—e.g., Disneyland’s Main Street vs. any real 1900s town.
Bandwidth (Telecomm): Channel capacity; who gets how much attention, how fast.
🕳️ Computational Irreducibility (Wolfram): A process whose outcome can’t be shortcut; the only way out is through.
🧩 Feedback Loop (Cybernetics): Sense → compare → act → sense; the atomic motor of life, markets, & AI.
🪙 Price Packet (Austrian econ): A scalar summarising distributed knowledge about supply & demand.
🧠 Externalised Imagination (Media theory): Any artefact (myth, code, video) that lets one brain borrow another’s model of the world.

The Self-Organisation of Information Systems and the Nature of Symbolic Systems | Chapter 1

What is life?

As to the question, “What is life?” the first answer is that life is rare—or at least appears so far to be. As far as we know, there is only one place in the entirety of existence that has this thing called life. Second, life is fragile: Earth suffered four mass extinctions even before the one that ended the dinosaurs. Third, life is persistent—once the seed is sown, it is hard to weed out. Lastly, life is adaptable; its reach spans geography, scale, and scope. Because these four observations crop up repeatedly in later chapters—fragility in ecological collapse, persistence in mycelial networks, adaptability in technological diffusion—they form the foundation on which the rest of the book builds.

Life, in this schema, is simply “the self” within self-organising matter—a renormalised boundary at which internal information flow (and the energy and material gradients that sustain it) outstrips communication with the outside world. A living system’s own feedback loops, regulatory networks, and continually refreshed patterns dominate its behaviour more than any external disturbance. When matter sustains its own order through internal recirculation of information, we call it alive. This definition will later let us compare biological markets (fungi, bees) with human markets (stocks, crypto) on the same information-theoretic footing.

1.1 Evolution—Life Learning to Live

Evolution is a single, open-ended learning process: the gradual amelioration of matter’s innate tendency to self-organise, where amelioration simply means greater persistence. Because survival demands richer internal models, the evolution of life is simultaneously the evolution of intelligence. Early organisms had no sight, smell, or hearing; those capacities emerged for navigating an uncertain reality. The longer a pattern endures, the more confidence we have that it will endure further. No mystical “will to survive” is required—self-organisation begets persistent structures, while collapse-prone ones vanish. Natural selection thus acts as an information-theoretic filter, amplifying enduring patterns over transient ones. This same filter will reappear, scaled up, when we later explore why certain business models, computer interfaces, or monetary regimes outlast others.

1.2 Why Mutation?

Imperfect replication—mutation—is inevitable: the more complex a self-organising structure, the higher the chance that copying introduces subtle errors. Far from being a bug, those slips power diversity, expanding the repertoire of patterns on which selection can act. Speciation is simply the branching of attractors in pattern-space—new basins of stability into which imperfect replicas occasionally wander. In financial language (Chapter 10), mutation plays the role of volatility: small random moves that allow the system to explore new equilibria.

1.3 Homeostasis—the Will to Power

Intelligent life succeeds by navigating reality’s complexity and uncertainty. Homeostasis measures that ability. It is an order-keeping process that counters entropy. Ageing is the information loss accumulated through repeated maintenance of order (López-Otín et al. 2013; Sinclair et al. 2023). Later, when we meet battery constraints in smartphones or liquidity droughts on Wall Street, we will see those, too, as homeostatic ceilings—points where maintenance costs swamp adaptive capacity.

1.4 A Nervous System

The nervous system is the locus where homeostasis is actively processed. It compresses high-dimensional sensory data into a workable world-model and predates the neocortex as the primary information system. Communication within that system is mediated by feelings: I see a tiger, I feel fear, so I run. Feelings map the state of life inside and outside the organism. Of course, not every control signal reaches consciousness; spinal reflex arcs and hormone bursts can trigger rapid action with no felt qualia. This built-in compression anticipates Chapter 9’s claim that every technological medium—from telegram to broadband—wins by shrinking the cost of transmitting relevant signals.

1.5 The Brain—Biological Motivation for the Fantastical

The brain began as an aid to the senses but evolved into a generator of internal worlds. Imagining is metabolically cheaper than perceiving—a core claim of predictive-coding and active-inference theory (Friston 2010; Clark 2013)—so the brain fills sensory gaps with top-down forecasts. Humans sample the slice of the electromagnetic spectrum most useful for survival; other species sample differently; none perceives it all. Mental imagery fuses many inputs into a higher-dimensional meta-model and lets organisms project homeostatic needs through time (“How will I feel later?”). This “offline simulation” will resurface when we frame cyberspace (Chapter 11) as humanity’s collective imagination running on silicon.

1.6 Feelings as an Information System

Aside from Antonio Damasio, few have explored feelings in computation, yet economics offers a parallel: Ludwig von Mises framed human action as the attempt to relieve felt unease. The nervous system generates unease; the brain strategises to quell it. Consciously reducing unease is how societies scale across generations—foreshadowed by Mises in Human Action (1949) and expanded by Damasio in The Strange Order of Things. By translating “unease” into prediction error, we unify neuroscience, cybernetics, and Austrian economics—a trinity that sets the stage for later critiques of central-bank policy.

1.7 Human Action—the Economics of Mises

To Mises, action is computation over time: forecasting how long a choice will keep unease at bay. In AI terms, this is reinforcement learning: Ice-cream eased hunger but caused nausea—negative reward. Steak satisfied—positive reward. Ferrari solved nothing—zero reward. The nervous system records binary feedback: Did action reduce unease? Repeat or avoid accordingly, and learning occurs. Chapter 10 will show how false price signals hijack this reward loop at the scale of entire economies.
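A minimal sketch of that repeat-or-avoid loop in Python; the actions, reward values, exploration rate, and update rule are illustrative assumptions mirroring the ice-cream/steak/Ferrari example, not a model specified in the text:

```python
import random

# Hypothetical actions and the (assumed) unease relief each produces,
# mirroring the ice-cream / steak / Ferrari example above.
REWARDS = {"ice_cream": -1, "steak": +1, "ferrari": 0}

def act_and_learn(episodes=100, epsilon=0.1):
    """Tabular 'repeat or avoid' learning: track average reward per action."""
    value = {a: 0.0 for a in REWARDS}   # learned estimate of unease relief
    counts = {a: 0 for a in REWARDS}
    for _ in range(episodes):
        # Mostly repeat the best-known action; occasionally explore.
        if random.random() < epsilon:
            action = random.choice(list(REWARDS))
        else:
            action = max(value, key=value.get)
        reward = REWARDS[action]        # did the action reduce unease?
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]
    return value

print(act_and_learn())  # "steak" converges to the highest estimated relief
```

The loop is the same binary feedback described above: record whether unease fell, then bias future action toward what worked.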

1.8 Capital, Time, and Computation

Economics studies how scarce means serve desired ends through time at a price. Modern curricula often downplay time: trade-cycle theory ceded ground to static macro models, and time preference survives mostly in behavioural sidebars. Restoring temporal calculus realigns economics with the unease-minimising, model-based intelligence that defines living systems. In the next chapter we scale that temporal calculus from single organisms to ecosystems, watching how mycelial webs and bee colonies invest “capital” (nutrients, labour) with horizons as long as forests.

Notes & Anchors

  1. Schrödinger, E. (1944) What Is Life? — rarity vs. entropy
  2. Drake, F. (1961) “Drake Equation” — concurrent-civilisation probability
  3. Simard, S. (2021) Finding the Mother Tree — mycorrhizal “Wood-Wide Web”
  4. Holland, J. (2014) Signals and Boundaries — OLS limits in complex systems
  5. Wolfram, S. (2002) A New Kind of Science — computational irreducibility

Prelapsaria, Paradise Lost and the Rise of Babylon | Chapter 2

Chapter 1 argued that life, mind, and culture all hinge on feedback loops that keep internal models in sync with external reality. Chapter 2 (the present essay) zooms in on the moment those loops overloaded—when symbols, tools, and rival minds began to outrun the natural bandwidth of a single human organism. Each “fall” myth below pinpoints a different layer of that overload: Hebraic self-awareness, Buddhist map-versus-territory tension, Greek techno-economic acceleration, and Mesopotamian urban scale.

2.1 Paradise Lost — The Fall of Eden

There is an inflection point in human history—recorded across many cultures—when humankind stopped believing it was merely animal. The best-known version is the story of Adam and Eve eating the forbidden fruit, but Greek and Buddhist scriptures offer close cousins. Information-theoretically, the Buddhist paradise lost is both remedy and consequence of the two earlier losses: the Greek fall signals an abrupt jump in economic complexity, pushing rivalry onto an intellectual battlefield, while the Hebraic fall marks the birth of self-consciousness. Judaism’s foundation myth laments this new imbalance in information traffic: more chatter inside one skull than between people and the world.

The Eden narrative foreshadows later chapters on financial bubbles (Ch. 10) and social-media echo chambers (Ch. 11), where internal chatter again outruns contact with reality.

So central is this rupture that it opens the Bible. Adam and Eve, living in Eden, must not eat one tree’s fruit. Eve persuades Adam; Adam eats; God confronts him. Adam blurts, “I am naked.” Not the dawn of consciousness in general, but of self-consciousness: the ego has snapped into place.

2.2 The Ego

The ego is the boundary that completes the model of self. In bandwidth terms, it appears when internal information flow outstrips external flow—you spend more time thinking than experiencing. The ego is the renormalisation group that crystallises what we call consciousness.

Chapter 1 defined life as matter whose internal messages dominate external perturbations; the Eden story shows that principle migrating from the cell to the psyche.

2.3 The Eastern Paradise Lost — Meditation

The Secret of the Golden Flower. On his way to die in the Himalayas, Lao-tzu reputedly told a border guard that the “secret” of becoming a golden flower (enlightened) cannot be written down; symbols are antithetical to the state itself. Map and territory must part company before the flower can bloom.

Mindfulness. Meditation is the practical science of awareness: attention training. By turning attention outward—toward actuality, not the mind’s map—you rebalance bandwidth, experiencing present homeostasis instead of merely modelling future homeostasis.

The Buddhist fix anticipates cyber-detox therapies that try to tame post-modern data floods; both prescribe throttling symbol traffic to restore sensory bandwidth.

2.4 Prometheus — The Greek Paradise Lost

For the Greeks, the rupture is not sexual shame but Prometheus’s theft of fire. Zeus punishes him by sending an eagle to eat his liver each day. Fire ends the Stone Age and launches the Bronze Age.

Economically, fire is the first order of production: the pre-condition for smelting metal, baking bread, hardening clay. Stable food and tools allow permanent settlements, letting society scale beyond hunter-gatherer bands.

Just as fungi externalised digestion to conquer soil, humans externalised heat to conquer matter. The Industrial Revolution will repeat that logic with steam and electricity.

2.5 Babylon Rising

Fire-powered metallurgy births the first urban workshops—Uruk, then Babylon—where external symbolic traffic again swamps the individual mind. Babylon’s splendour rested on conquest; to the exiled Jews it became evil incarnate.

“Away from the big city where a man cannot be free…” — The Velvet Underground, Heroin

Rome’s persecuted Christians inherited that anti-Babylon posture, and Revelation brands the city “Babylon the Great, the mother of harlots.” The trope survives in Rastafarian lyrics:

“Big city life, me try fi get by—Babylon de pon me case.” — Mattafix, Big City Life

Cities have been fertility sinks since industrialisation (urban total fertility rates have sat 0.5–1.0 births below rural levels for over a century; UN DESA 2024, table 2). Their head-counts grow only because fresh migrants keep arriving.

Chapter 6 will show how railways and telegraph lines widened Babylon from walls of brick to networks of wire, accelerating the same demographic drain.

2.6 Mimetic Desire

Faced with an information landscape too dense to grasp directly, humans copy one another’s goals. We pick models, imitate them, then compete with them—a triangular pattern René Girard called mimetic desire. Scarcity frustrates the triangle; archetypes (Jungian or fictional) can substitute for flesh-and-blood rivals.

2.7 A Memetic Theory of Evolutionary Intelligence

Human intelligence is a costly adaptation favoured by sexual and social competition: brains compress energy into information more efficiently than muscle converts it into force. As population density rose, so did mind-to-mind rivalry, amplifying the payoff to intelligence.

The same energy-for-information bargain will reappear when we measure the carbon footprint of global data centres.

2.8 Into the Wild — Map Versus Territory

Jon Krakauer’s Into the Wild recounts Chris McCandless, who fled consumer America for the Alaskan wilderness. He carried a guide to edible plants—but the book omitted that one edible yam’s seeds are toxic. McCandless died of starvation and poisoning: the original sin of mistaking the map of the terrain for the terrain itself, imagination for actuality.

2.9 The Pre-lapsarian Fallacy

Nature is not a gentle garden. Cordyceps fungi liquefy insects from within; hormesis teaches that small stresses toughen but large ones kill. Most moderns would not survive long in the wild we romanticise.

Marx, for instance, pictured primitive man hunting communally—“to hunt in the morning, fish in the afternoon…” (German Ideology, 1845). In reality, weaker hunters were often culled, and the alpha ate first. It was never “each according to his means,” but “each according to his end.” Any economics that ignores competition’s teeth—Marxism included—misreads our species’ deep history.

Having traced how symbolic overload toppled paradises and birthed Babylon, the next chapter investigates how life’s oldest networks—mycelia and microbiomes—avoid that fate by routing information instead of hoarding it.

Simulacra, Simulation and Symbolic Systems | Chapter 3

3.1 The variable image revolution — Externalised imagination

Away from Eden, we enter the simulacra and simulation. It is important to understand that humans’ primary mode of reasoning is through images. As previously discussed, the brain creates a representation of the outside world in the inside world. We then evolved language as a meta-commentary on these images and the relationships between them. Language therefore lets narratives serve not only as a supplementary information medium but also as a means of extracting more information from an existing one.


In Chapter 1 we framed that inside-brain “movie” as a compressed model; here we watch it spill outward into shared media, turning private sketches into exchangeable files.

3.2 The Matrix

Jean Baudrillard’s Simulacra and Simulation is the intellectual inspiration for the Matrix franchise. The Matrix is a philosophical statement in its own right, and it departs from the book in many ways. The first Matrix film reflects the versatility of filmmaking as an information medium with its Hong Kong-style kung-fu; the other films were poor imitations of the first.

Baudrillard postulates that people get their information not through actual reality but through our shared imagination. This shared imagination is externalised through increasingly sophisticated information technologies, complete with increasingly salient variable images. The book opens with a map of reality so accurate that it is confused for the territory itself. Anyone who has followed Google Maps down a bad road knows the modern equivalent. It is this confusion between externalised imagination and fundamental reality that constitutes The Matrix, or Simulacra and Simulation.

3.3 The Evolution of variable-image technologies

Both our capacity to produce variable images and their complexity have increased over time. Eventually we learned how to automate variable-image creation with the printing press, and, with the telegraph, to send these symbols without having to walk them anywhere. The inflection point in variable-image technology occurred in the post-modern era with the introduction of the television. The television can be thought of as a one-way variable-image machine.

Next we widen from broadcast to interaction:

The shift from satellite to cable represented a fundamental shift in the cost structure of the content wars: the means of transporting variable images moved from broadcasting electromagnetic waves via satellite to sending the very same information through fiber-optic cables. The personal computer is a two-way variable-image machine: instead of merely receiving variable images, one can also create them. Two-way variable-image technology was mass-commercialised in 1984 with the Apple Macintosh. It gave people the greatest imagination-externalising machine in human history, overtaking pen and paper, by letting users manipulate visual representations of computer objects.

The iPhone represents an explosion in the ubiquity of variable images. It allows users to view and create variable images at an unprecedented rate. Video games can be considered dynamic images in the sense that the images change based on user action.

3.4 The nature of symbolic systems

It is important to understand that symbols are just a special type of image. What sets them apart from other images is that they are mutually connected. They are self-referential images; when combined they form greater representations of the world around them. This combined greater representation and the rules that define it are known as symbolic systems.

Stanford, the only university with a symbolic-systems degree, defines a symbolic system as “the meaningful symbols that represent the world around us”. Unfortunately for them, they defined a symbol, not a symbolic system. A symbolic system is the set of rules which define the interaction between symbols, symbols being a subset of images. A better definition of a symbolic system, therefore, is the set of rules which dictate the behaviour of variable images, variable images being the codification of relevant phenomena. It is the externalisation of imagination, given the biological motivation for the fantastical already discussed.

3.5 Alphabetical symbols — Language as a symbolic system

Mapping variable images onto phonetics allows common information to propagate across a geographic area. The rules governing the symbolic system of language are known as grammar. Fundamentally, alphabetical images are the densest way we have to talk about the universe, in the sense that they take the least amount of information to describe the greatest amount.


Language therefore functions as a compressor—an idea we revisit when Gödel shows that even the best compressor leaks truth.

3.6 Religion as a symbolic system — Jungian archetypes

The gods are the symbolic mapping of different natural phenomena. Religion, fundamentally, is a reflection of the environment. The Mesopotamians, flanked by the violently erupting Tigris and Euphrates, viewed their gods as capricious and vengeful. The Egyptians, with the gently flooding Nile, saw themselves as a favored civilization.

Another term for the word “God” is “the nature of reality,” or what Buddha calls “the law.” This is why, in Buddhism, there is no prayer. One cannot pray to the nature of reality; one obeys the nature of reality. The Greek epic of the Odyssey is what happens when one does not obey the nature of reality. Odysseus, the principal protagonist, is lost at sea after disrespecting Poseidon. The moral of the story is that if one goes against the nature of reality, one must pay the cost.

These gods and their stories represent ergodic habit formation—the habits that we use to survive across time, with the best habits being propagated across time. Those with improper habits perish over time. Religion, by this interpretation, is the codification of ergodic habit formation. Said another way, formalized religion is the process of symbolizing the acts that lead to the continuation of the species. When people lament the death of God, they are lamenting the death of the habits that got us this far. It is unclear if our new habits will take us anywhere.


The argument dovetails with Chapter 13’s warning: sever enough of those time-tested habits and Gaia herself teeters.

3.7 Depth Psychology as a symbolic system

Before the emergence of the field of Artificial Intelligence, the study of intelligence was primarily conducted by psychology, specifically depth psychology. Although Karl Popper criticised Freud for being unfalsifiable, we use Freud’s terms casually, as if they had always been part of our language. Freud’s theories are falsifiable inasmuch as they are computable: if they can be used to construct an AI that is intelligent, then Freud is correct.


This offers a bridge to Chapter 12, where neural-net architectures revive Freudian tropes (ego, id) as computational sub-modules.

3.8 Economics as a symbolic system

The fundamental point of the Misesian portion of the Austrian school of economic thought is that the symbolic systems used in economics are inappropriate, as they are inherited from natural science and mathematics. These symbolic systems, when applied to economics, lead to faulty conclusions. Economics, the youngest of all the sciences, requires a different set of symbolic systems.

Mises made this point before the advent of the computer, when the language of information theory was not yet fully developed. Hayek later attempted to use the language of cybernetics to describe economic systems—though not human action itself. Despite this, Mises was making a point about the nature of computation, taking a computationalist view of the universe. This view characterises human intelligence as a specific instance of intelligence as such. With this perspective, the language of artificial intelligence becomes applicable to economics; specifically, economics becomes the study of the estimation of a cost function.

This contrasts starkly with neoclassical economics, which is devoid of any notion of computation. The closest concept to it comes from the notion of creative destruction, theorized by Joseph Schumpeter, who is often mischaracterized as Austrian but is squarely Walrasian. Schumpeter acknowledged that it’s the entrepreneur who takes the economy from one state of equilibrium to another.


Chapter 10 will show how price signals act as training data, and how central-bank “label noise” can crash the learner.

3.9 The Limits of the symbolic system of mathematics — Gödel’s impossibility theorem

A mathematical object is an object which can be formally proved or defined within the axiomatic framework of mathematics. The fundamental rules which underpin the symbolic system of mathematics—that is, the interaction between mathematical symbols—are the laws of arithmetic and geometry.

3.10 Gödel encoding

The reason Gödel chose prime numbers as the basis of his encoding scheme comes down to their uniqueness on the number line: each symbol or axiom can be given its own numeric mapping, as prime numbers cannot be divided by any number other than themselves and one. After encoding the basic symbols as numbers, more complex formulas can be encoded by combining those codes through the laws of arithmetic—and, by the uniqueness of prime factorisation, each encoded formula can be decoded unambiguously. The validity of these combined formulas can then be evaluated using the laws of arithmetic. Gödel’s theorem should be read alongside the algebra of Boole as an attempt at arithmetising logic.
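For concreteness, a toy Gödel numbering in Python; the five-symbol alphabet is a hypothetical miniature (real encodings cover the full logical vocabulary), and sympy is assumed available for the nth-prime function:

```python
from sympy import prime  # prime(n) returns the nth prime; sympy assumed available

# Hypothetical toy symbol table; real Gödel numberings assign codes
# to the entire logical alphabet.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5}

def godel_number(formula):
    """Encode a symbol sequence as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for i, sym in enumerate(formula):
        n *= prime(i + 1) ** SYMBOLS[sym]
    return n

# "0 = 0" becomes 2^1 * 3^3 * 5^1 = 270; unique prime factorisation
# guarantees the original formula can be recovered from the number.
print(godel_number(["0", "=", "0"]))  # 270
```

Because every integer factors into primes in exactly one way, decoding is just repeated division: the exponents recover the symbol sequence.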

3.11 A mechanized mathematics

Roger Penrose made the point that Gödel defines the limits of a formal system; computers, being based on formal systems, therefore inherit the limits of mathematics. This explanation, however, is incomplete. Most people view computers as an evolution in information technologies rather than as a mechanisation of our existing information technologies. Computable mathematics is really a mechanised mathematics.

Therefore, one of the implications of Gödel’s theory is that there are things which we know are true but cannot be proven, providing an inherent limitation on what a computer can know. That ceiling on formal symbol power loops back to Baudrillard’s worry: when the map diverges, it may not be fixable by any algorithm we possess.

3.12 The Anti-Symbolisation Principle

The information underpinning reality cannot be fully captured—let alone executed—by any finite symbolic system.

Formally: no countable set of symbols, axioms, or algorithms can map bijectively onto an uncountable continuum of micro-physical states. Trying to label every real number in (0, 1) with integers always leaves most numbers untagged.

A formal theory—however ingenious—can be written down as a finite alphabet plus a (countable) list of well-formed formulas and inference rules. Because that catalogue is countable, it can label at most countably many distinct “addresses” in the world. But the phase-space of a physical continuum (every point on a line segment, every micro-state of a field) is uncountable, matching the cardinality of the real numbers. You cannot pair a countable set with an uncountable one bijectively; there will always be vastly more real states than there are symbolic names to pin on them. Hence any formal language, by sheer arithmetic, must leave most of reality unmapped—precisely the content of the Anti-Symbolisation Principle.
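The counting argument compresses into standard notation as a sketch of Cantor's diagonal construction (the digit choices 5 and 6 are arbitrary conventions, not from the text):

```latex
\text{Suppose } f : \mathbb{N} \to (0,1) \text{ tags every real, } f(n) = 0.d_{n1} d_{n2} d_{n3} \ldots
\qquad
\text{Define } x = 0.e_{1} e_{2} e_{3} \ldots \text{ with } e_{n} =
\begin{cases}
5 & \text{if } d_{nn} \neq 5 \\
6 & \text{if } d_{nn} = 5
\end{cases}
```

Then x differs from f(n) at the n-th digit for every n, so x carries no tag: no countable list of names exhausts the interval.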

Geometry itself testifies to the Anti-Symbolisation Principle. Hilbert’s 1899 axioms tried to quarantine Euclid inside first-order logic, only for Gödel to reveal (1931) that such a system cannot prove its own consistency from within. Hermann Weyl dubbed the entire reduction of spatial intuition to number syntax an act of violence. A century on we juggle rival formalisms—ZFC sets, synthetic differential geometry, Homotopy Type Theory—none mutually inter-translatable without loss. Space thus slips through every countable mesh we weave; the continuum’s “uncountable body” forever exceeds its symbolic clothing. 

Mathematicians tried to bridge the gap by importing actual infinities into their formal language: Cantor’s set theory simply names the continuum as a single set R whose cardinality exceeds that of any listable collection.

This manoeuvre lets proofs talk about uncountably many points, but it does not let the formal system enumerate or decide every fact about them, because every axiom-scheme, proof, and algorithm we can write is still drawn from a countable alphabet.

Gödel shows that provability is a subset of truth; geometry shows that even the oldest, most concrete branch of mathematics lies partly outside that subset. Together they certify the Anti-Symbolisation Principle: the world’s full informational content spills beyond any formal net we cast. Gödel’s incompleteness theorems make the limitation explicit: once a theory is strong enough to encode arithmetic—and hence the real line—it must leave some true continuum statements undecidable. So introducing infinite sets recognises the size of reality, yet the Anti-Symbolisation gap remains: a countable syntax can reference the continuum, never exhaust it.

Thus geometry, though vastly richer than first-order syntax, is still an intermediate abstraction. It inherits symbolic gaps from formal logic and adds its own by idealising matter, energy, and measurement. The Anti-Symbolisation Principle expands: even the geometries we devise are partial metaphors—useful lenses, not perfect mirrors—of a world that forever outruns both our equations and our instruments.

Anarchy, State, Mafia: An Information-Theoretic View of State Formation | Chapter 4

Chapters 1–3 argued that life, mind, and culture scale whenever internal information loops beat external shocks. Here we test that rule at the level of whole societies, asking how clusters of communication renormalise into families, tribes, states, and—when shadow-tax bases arise—mafias.

4.1 On the Centralisation of disputation

This is the first day in the history of political science, for yesteryear’s discipline is nothing more than an amalgamation of terms — as anyone who has had the displeasure of taking a politics 101 course knows. One is immediately confronted with the terms liberalism and realism, presented as a dichotomy to be contrasted. This presents a problem, as liberalism and realism have nothing to do with each other. Realism is a theory regarding state formation: defensive realists define a country as an area that can defend itself, whereas offensive realists define a country as an area over which it can project force. Liberalism, meanwhile, is a theory which dictates inter-state relations. And when it comes to intra-state relations, the terms “conservative” and “liberal” convey no information from a political-science perspective: conservatism and liberalism are positions with respect to time. A conservative in 1991’s Russia would have favoured communism, while a conservative in America at the same moment would have been pro-market.

Thus political labels mis-inform unless we tag them to scale—inside a polity or between polities—a distinction carried forward in the renormalisation model below.

The notion of scale, as such, is completely absent from any political science course (inter-state vs intra-state). The closest corollary comes from multi-fractal localism. Multi-fractal localism simply means that social patterns repeat at every scale—family, neighborhood, city, nation—like the same jagged coastline reappearing no matter how closely you zoom in. The notion of multi-fractal localism can be given a methodological foundation when combined with the notion of a renormalization group.

A renormalization occurs when there is greater internal than external communication. One can then define a country as the scaling of these various renormalization groups:

Family → Tribe → Neighbourhood → City → State → Country → Empire

The study of the relationship between these renormalization groups is the study of political science. The best attempt to incorporate this idea of scale comes from the United States of America: the states are united in the sense that each is an individual governing body that delegates enumerated authority to the federal government (most famously through the Interstate Commerce Clause), while the Tenth Amendment reserves the remaining powers to the states.
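A minimal sketch of that renormalisation criterion in Python; the agents and message counts are hypothetical toy data, not measurements:

```python
# A group "seals" into its own scale when internal message volume
# exceeds external volume. All names and numbers below are assumed.
TRAFFIC = {
    ("a", "b"): 40, ("b", "a"): 35,   # dense chatter inside group {a, b}
    ("a", "x"): 3,  ("b", "x"): 2,    # sparse links to outsider x
}

def is_renormalised(group, traffic):
    """True if the group's internal communication outweighs its external."""
    internal = sum(v for (s, r), v in traffic.items()
                   if s in group and r in group)
    external = sum(v for (s, r), v in traffic.items()
                   if (s in group) != (r in group))
    return internal > external

print(is_renormalised({"a", "b"}, TRAFFIC))  # True: {a, b} acts as one unit
```

Applying the same test at successive scales yields the family → tribe → city ladder: each rung is a cluster that passes the internal-greater-than-external check.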

Chapter 6 will show how railways and telegraph lines alter these renormalisation thresholds, inflating “country” into “nation-state” almost overnight.

4.2 Power Laws in political science

Hegel’s notions of cultural and temporal relativism can be reinterpreted in an information-theoretic, complex-systems frame:

Volksgeist, the spirit of a people, can be reinterpreted as shared information communication over space.

Zeitgeist, the spirit of the time, can be reinterpreted as the change in that shared information at a point in time.

A Volksgeist is formed when there is greater internal than external information communication (renormalisation). This invokes the concept of a power law—local bonds are overridden in favour of a single, global bond. The power law’s α-exponent indexes the cluster’s size and is inversely proportional to the disparateness of nodes over an area, where disparateness is measured by shared information communication (shared attention). Roughly speaking, the smoother the social landscape, the larger the power-law cluster.

In lay terms, societies with less disparateness are more homogenised. This model of state formation can be extended with the language of calculus.

The Zeitgeist can be modeled as the first derivative of the Volksgeist — how much is the information over an area changing at a specific point in time. A higher Zeitgeist implies greater generational change. The second derivative of the Volksgeist is also equal to the first derivative of the Zeitgeist. How much does the Zeitgeist change over time? It is equal to the intergenerational information differential.
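Read literally, those derivative claims can be notated as follows; V and Z are my shorthand for Volksgeist and Zeitgeist, and the power-law form is the standard one (interpretive notation, not from the text):

```latex
% V(t): shared information over an area at time t (Volksgeist)
Z(t) = \frac{dV}{dt},
\qquad
\frac{dZ}{dt} = \frac{d^{2}V}{dt^{2}} = \text{intergenerational information differential}

% Cluster sizes on a social landscape, in the standard power-law form:
P(s) \propto s^{-\alpha}
```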

4.3 Culture Wars — Why there are no hippies in the wild

Furthermore, Hegelian terms can be given a methodological foundation when incorporated with Computationalism, where one considers human intelligence to be a specific instance of intelligence as such. These nodes (information agents) can either cooperate (trade) or destroy (war).

When one node makes a proposal to another (a thesis), the other node may counter with a proposal of its own (an antithesis). If they agree, the result is a synthesis.

If the nodes do not agree, the cycle continues as long as there is a society to decide over. Each node can either engage in this process with their neighbor or centralize this process in a political body. Nodes can either go to war physically or culturally. The technical definition of a cultural war is when two separate Volksgeists amalgamate into one.

Hippies do not fight when they disagree with each other. They voluntarily dissociate: one group of “dorks” goes one way, the other group goes another way. This can only happen within a state, as disputation is centralized. Because in the wild, you cannot choose peace and love. Evil finds you. 

It is important to understand that the process of civilization is the process of increasing a Volksgeist. The greater the civilization, the greater the area over which internal communication exceeds external communication.

4.4 The Nation-State is a recent invention

The Nation State is a recent invention of modern history. The birth occurred during the French Revolution — one of the great tragedies in human history, but that is for another chapter of the Image Symbol Code.

However, it is entirely worth understanding that the ‘French’ did not share a common language until the advent of the Napoleonic nation-state (≤ 10 % French-speakers in pre-1789 France; Napoléon’s 1802 lycée decree). Germany provides the most recent example of smaller states forming a larger one, resulting in what you would call World War I but I call the First War of the Stupids, or Europe’s civil war (discussed in another chapter of this book).

These linguistic mergers foreshadow Chapter 7’s rail-and-telegraph section, where cheap ton-miles flip local identity into national consciousness.

The Mafia

The Mafia acts as a state over the market for illicit goods—gambling, drugs, and prostitution. If one wants to get rid of organised crime, the best option remains to legalise the markets the Mafia controls. According to UNODC estimates, the global illicit-drug trade alone generated US $426 – 652 billion in 2019 (World Drug Report 2021), underscoring how large a “shadow tax-base” an extralegal state can tap.

Like a micro-nation, the Mafia supplies contract enforcement where the formal Volksgeist refuses to tread, illustrating renormalisation at black-market scale.

4.5 Ancient Greece

Religion and topography

Religion provides a robust proxy for information dispersion. There are more Muslims in Iraq than in Japan because Iraq lies far closer to Saudi Arabia, the birthplace of Islam. Ancient Greece offers a vivid example of how information topography shapes society: a mountainous, island-studded landscape so defensible by land that three hundred Spartans could hold off the vast Persian army. Greek religion was markedly decentralised; anyone could witness a sacrifice, not just a centrally appointed priesthood.

City-state values

Each Greek πόλις—each local information cluster—emphasised different facets of the shared mythology. Athens, named after Athena, prized wisdom and democracy. Sparta, devoted to the huntress Artemis, exalted power and autocracy. Every city had its own government and priorities, yet all called themselves Hellenes. In information-theoretic terms, internal communication among Greek cities exceeded their communication with the wider world.

4.6 Judah

At the time, however, most cities weren’t part of a network of city-states. Each city stood on its own, boasting its own religion. In the language of information theory, there was greater information communication within a city than between cities.

4.7 Cyrus (the Great)

The reason they call him Cyrus the Great is that Cyrus did not kill the exiled Judeans (for once). He instead chose to patronise their God, to assimilate them rather than disseminate them. Unlike the other great men of history, when people call Cyrus great they actually mean it. He truly deserves the title for introducing the power of markets to the world.

The fundamental difference between Persian and pre-Persian rule is straightforward. Before Cyrus, war was zero-sum: the winner grew richer by seizing the loser’s goods, while the loser was impoverished. After Cyrus, the objective shifted: he patronised local gods instead of destroying them and, crucially, valued honest trade over pillage.

Although warfare suppresses short-term growth, Cyrus’s security apparatus stabilised trade routes. Stable routes fostered economic interconnectivity, which, in turn, seeded the great empires of the ancient world.

Cyrus himself simply wanted people to stop “lying”, for, to the Persian, lying is the ultimate crime. He was not interested in stealing your god in glorious combat; in fact, he paid homage to the gods of the conquered peoples.

When we examine Roman roads and Chinese canals, we will see Cyrus’s market-over-pillaging logic scaled to continental supply chains.

The same renormalisation logic later underpinned the Chinese ‘All-Under-Heaven’ realm, whose vast bureaucracy kept internal communication denser than any contact with steppe, Hindu, or Islamic peripheries—an empire, not a nation-state, by sheer information flow.

Rome, Complexity and Christ | Chapter 5

We just saw how renormalised communication clusters scale from family to empire. Rome provides history’s cleanest test-case of that scaling logic, because its infrastructure, military, and religion each tried to keep internal information flow faster than external shocks. What follows threads every Roman episode back to that single information-processing premise.

5.1 Why Rome? — Nature vs Nurture

Prior to the industrialisation of Europe, Rome was the richest civilisation ever to exist, and all of it was achieved without an engine. It poses one of the great questions in the history of the world: how?

From an information-theoretic perspective, Rome had a more independent processing system than their rivals, which made it such a unique society. They produced remarkable individuals. While Carthage could only produce one Hannibal, Rome’s democratic system allowed for the rise of many concurrent great people, giving it a superior edge in information processing. “Independent processing system” means the Senate, assemblies, and legions could compute problems in parallel—multiple generals, jurists, engineers all iterating at once—whereas monarchic Carthage bottlenecked through a narrower decision channel.

We often debate “nature vs nurture” because, within a given society, most of us share the same nurture; what varies is our individual nature (genes). Across history, however, civilisations rise and fall in different places and times, showing that culture (collective nurture) dominates genetics. If raw genes ruled, the same peoples would always lead. Rome’s ascent, therefore, is best explained by its adaptive civic culture—not some superior DNA.

5.2 Stoic values — To despise fortune

Roman culture is unique in the history of the world. The fundamental collective unit of Roman society was the family, and the fundamental unit of that group was the individual. In Rome, people were known by their family name and had to act according to the behaviour associated with that name. Brutus murdered Caesar because, once upon a time, his ancestor was the one who drove out Rome’s last tyrant. The Romans, like many peoples before them, kept an ancestor room which dictated the standards a family member had to live up to.

Family cults kept memory-loops tight, making reputation an inter-generational checksum on behaviour—an information safeguard against drift.

The competition between peoples was a competition between their gods. Wernher von Braun, a former Nazi who became NASA’s chief rocket engineer, was instrumental in the human journey to space. By contrast, the Rastafarians were never going to launch such an endeavour. Why go anywhere when you’re already high? The disciples of Haile Selassie (Jah Rastafari) were also less likely to commit a holocaust.

Stoicism, fundamentally, is a pagan ideology. While Roman mythology paralleled Greek mythology, the emphasis on fortune is fundamentally a Roman concept. Fortune represents your location in the realm of possibilities: where you are out of all the places you could possibly be. To despise fortune is to despise variance or randomness, because what fortune gives you, it can also take away. Deus ex machina, the intervention by the gods, is in information-theoretic terms the explanation given to the unpredictable. If the unpredictable turns in your favor, it’s considered fortune; if not, it’s deemed misfortune. Despising fortune equals minimising variance; Rome’s engineering obsession (straight roads, aqueduct gradients) is a physical analogue of that cultural risk-aversion.

5.3 Anarcho-Capitalism — Theory of the firm

Following Mises’ work, Hayek tried to shift the symbolic system used by economics onto cybernetics. However, the difference between cybernetics and the rest of information theory comes down to theory versus practice. Cybernetics is applied information theory, the application of an information system onto robotics. Nonetheless, Hayek introduced the idea of the firm as an information processor.

Capitalism dominates other forms of social organisation because the information processing of the price system decentralises human action to the local scale, thereby outperforming top-down forms of information processing.
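A toy sketch of that decentralisation in Python: a market nudges its own price in proportion to local excess demand, with no central coordinator reading the whole economy. The linear demand and supply curves and the learning rate are illustrative assumptions:

```python
def clearing_price(demand, supply, price=1.0, lr=0.1, steps=200):
    """Move price toward clearing: adjustment proportional to excess demand."""
    for _ in range(steps):
        excess = demand(price) - supply(price)  # the only information needed
        price += lr * excess                    # local signal, local correction
    return price

# Hypothetical linear wheat market: demand falls with price, supply rises.
p = clearing_price(lambda p: 10 - 2 * p, lambda p: 1 + p)
print(round(p, 2))  # ~3.0, where 10 - 2p = 1 + p
```

No node needs the full supply or demand schedule; the single scalar price carries the distributed knowledge, which is the “price packet” of the Prologue.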

Roman villa-estates, shipping partnerships, and publicani tax-farming companies were early ‘firms’ whose local optimisation out-performed central decree—an empirical preview of Hayek’s later theory.

5.4 Das Adam Smith Problem | Division of Labour

The division of labor, from a complex system perspective, is akin to that of any organism. The specialization of information processing clusters at one scale allows for greater complexity on the next. Firms are just a subset of this greater general class of system.

A firm is a dedicated network of people (information processors) capable of transforming resources into economic products valued by people over an economic field. The main growth parameter is the difference in cost between the configuration of bits in and out (cost of materials versus selling price) over how many products are sold. Costs depend on how efficiently nature is transformed.
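One way to notate that growth parameter, reading the “difference in cost between the configuration of bits in and out” as value added per unit and “how many products are sold” as volume (symbols are mine, for illustration):

```latex
G = (p - c)\, q,
\qquad
c = \text{cost of the input configuration (materials)},
\quad
p = \text{selling price of the output},
\quad
q = \text{units sold}
```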

The scale of the firm depends on the level of value generated and the network over which that value is being generated (common market). Both are functions of the level of propagation of physical and symbolizable information. As the cost of transmitting physical information decreases, so does the cost of coordinating human action at a distance.

5.5 Cantillon and Spatial Economics — What is a firm?

As an information-processing system, the firm tries to symbolise, transport, and transmit different information at different scales over time. Cantillon’s contribution to economics is the idea of a unit of land as the field over which such an information system processes.

As costs per unit mass become equalised across geographical distances, returns to scale increase, reducing the advantage of local monopolies and resulting in a winner-take-all effect.

5.6 On communicating physical information (its) — Roman roads

All roads lead to Rome because the Romans built them. The difference between the Persian Empire, the Greek Empire, and the Roman Empire has to do with Roman engineering. This is the same reason why Roman capitalism proved so successful. Internal communications were so effective that a common market was possible, thanks to the infrastructure that allowed the transport of goods and services.

The ability to transmit physical information (objects) depends on the ability to transport mass over a specific distance and over a specific time (speed). The relevant stressors to this process are mass and time. Therefore, the ability to transport unit masses over different geography is the relevant technology.

The determinant of this technology is the ability to reduce the friction of reality with wheels/roads/boats/railways/airplanes. The ability to overcome inertia and maintain the motion of a unit mass depends on the ability of humans to create and control force through wind/animals/the human body/the internal-combustion engine/the waterwheel/the steam engine/the nuclear reactor.
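As a rough physical gloss on those two stressors (mass and time), the energy and power needed to move a unit mass against friction can be sketched as below; the rolling-friction model and symbols are illustrative assumptions:

```latex
E = \mu\, m\, g\, d
\qquad\text{and}\qquad
P = F v = \mu\, m\, g\, v
```

Transport technology therefore advances on two axes: lowering the effective friction coefficient μ (wheels, roads, hulls, rails) and raising the controllable power P (animals, steam, combustion, nuclear), which together set the speed v at which mass crosses distance d.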

Chapter 7 will show steam and telegraph compressing these same ‘its’ and ‘bits’ to nation-state speed.

5.7 Death to the Republic, Rise of Empire

By the time Rome came into existence, technology had developed to the point where different peoples found themselves within the same sphere of influence. Rome fought a war with a local superpower, Carthage, which, ironically, had risen to prominence as a nation of traders; Carthage was a colony of the great seafaring Phoenicians. It was through conquering Carthage that Rome developed a taste for empire. Within roughly a lifetime, Rome rose from a regional power to a hegemonic cluster with its destruction of Carthage. Initially, Rome mobilised its military, as the Romans would argue, “in self-defense”. But as America’s founding fathers would later note, war has a habit of undermining free societies.

5.8 The Roman Military-Industrial Complex

Firstly, Rome’s army was composed of conscripted landholders, which allowed only minor conquests. Once Rome hit a certain scale, the rules that defined an agrarian democracy seemed outdated for a world hegemon: the defence of the empire was ill-suited to armed forces whose members were required to be land-owning farmers. As campaigns grew longer, many small farms fell into disrepair and were agglomerated into larger estates, thereby depriving Rome of its soldiery.

A Roman general, Gaius Marius, chose to solve the problem not by limiting the size of wealthy Romans’ estates but by doing away with the land-ownership requirement altogether (the Marian reforms of 107 BC). Roman soldiers, upon successful conquest, would even be awarded land. This marked the official birth of the Roman military-industrial complex.

Now, the only limit as to how powerful one could be was how much one could conquer. Roman generals could recruit non-land-owning males. This allowed any one individual to accumulate more soldiers than the rest of Rome. This then caused friction with the rest of the Roman Senate. The threat of tyranny was taken seriously in Rome.

By shifting recruitment rules, Marius rewired the army’s feedback loops—loyalty flowed to generals, not to the Senate—an internal-communication coup.

5.9 Roman Political Economy

The political economy of Rome reflected the birth of the military-industrial complex, with candidates incurring large debts on behalf of their political careers. These political debts would then have to be repaid through wars of aggression once in office, and those wars could in turn be used to punish political rivals. Roman politicians, being immune from prosecution whilst in office, were forced to remain in office as long as possible, thereby perverting the political process. In Rome’s case, Caesar’s refusal to stand trial led to a civil war, with the senatorial opposition rallied by Cato the Younger, that would ultimately destroy the republic.

5.10 Human Action under Tyranny

The transition of Rome from republic to empire is also the death of Rome’s era of pure capitalism. Mises and Rothbard disagreed on whether people under tyranny could engage in human action, and Rothbard slightly misunderstood Mises’ theory: under tyranny, one cannot engage in true human action, as one must act fundamentally in accordance with the Emperor.

5.11 Death of empire — Jesus Christ, Enemy of Caesar

The greatest arsonist in the history of the world, and enemy of Caesar, was Jesus of Nazareth. This point is worth repeating, as otherwise the history of the world does not make sense. Jesus Christ was an enemy of the Caesars: born under Augustus, executed under Tiberius, emperors of Rome. He was the greatest threat Rome had ever seen, which is why they killed him.

Jesus destroyed the old gods, except one: the god of his native people, the Jewish God, Yahweh. It is important to understand that Yahweh’s contemporaries were gods like Zeus and Odin. He is the lone survivor and the founder of his own type of religion, the Abrahamic religions (Judaism, Christianity, and Islam), which can be considered as one family. These religions are, by far and away, the most populous on earth. The reason other pagan ideologies still exist is that the Abrahamic religions were long limited to a Eurasian sphere of information communication.

From a complex-system perspective, unlike Paganism, Christianity is an expansive religion. In Paganism, the potential nodes do not affect growth, as pagan ideologies are concerned with conserving relevant information through shared myths. The more roads, the more connections between different people, and the more connections to help grow an expansive religion. Christianity could only have arisen in a network Empire such as Rome, where common travel safety was essential for preachers. Additionally, the political infrastructure of the Roman Empire contributed to the propagation of Christianity. Buddhism, too, spread via empire-scale trade routes, but without the monotheistic exclusivity that let Christianity erase pagan gods.

Christianity still had to be based on a sound ideology (Judaism) to propagate through time. However, Christianity was also concerned with expanding itself. What Nietzsche laments as Christian meekness stems from the fact that it needed to be as appealing to as many people as possible. What is the difference between the old and the new gods? Good and evil are their names. It is important to understand that it was the Christians who lost Rome. The more Christian the Empire became, the more its borders shrank. The Christians were the hippies of antiquity.

As Christianity re-wires Europe’s information graph, the Roman road network that once unified an empire collapses.

Medieval Localism, the Printing Press and the Age of Navigation | Chapter 6

6.1 Two continental firewalls


The Sahara and the Himalayas carve Eurasia into independent information basins. The Himalayas—guarded by the Hindu Kush and faced by the Gobi—separate the Abrahamic sphere from Hindu–Buddhist India and, farther east, the Confucian world. Persia/Iran sits at their crossroads, refining and retransmitting ideas that flow between all three clusters.

Just as renormalisation groups form when internal chatter exceeds external links, these mountains hard-wall information flow, forcing each basin to evolve semi-independently.

6.2 From Rome’s eclipse to the High Middle Ages

The Western Roman collapse (AD 476) coincided with the explosive rise of Islam (7th century), severing Mediterranean sea-lanes that had once knitted Europe, North Africa, and the Levant. The result was a millennium-long reduction in economic and cultural complexity—Europe’s Dark Ages. “Western Europe’s aggregate real GDP per capita fell by roughly 30–40 % between AD 500 and 900, hitting a trough around the 10th century before the long medieval rebound began (Broadberry et al. 2011, Table 1).”


Where Chapter 5 showed Christianity riding Roman roads, here we see what happens when those roads—and the data they carried—snap.

In the High Middle Ages (c. 1000-1300) those broken circuits slowly re-formed. New Italian and Hanseatic trade leagues, Arab–Byzantine intermediaries, and even Crusader corridors re-stitched East–West exchange, setting the stage for a fresh information boom.


This repaired lattice is the runway the printing press will soon roar down.

Medieval Europe was anything but monolithic: kings, nobles, and the Church were rival information clusters. A French monarch still needed baronial coin for wars, and the clergy checked both. The Magna Carta (1215) was primarily elite self-defence, not peasant liberation: the king’s major vassals forced John to concede limits on taxation and arbitrary arrest so they could safeguard their own feudal prerogatives (Holt 1992, pp. 92–101). It is a classic illustration of “multi-fractal localism,” where power fractals down to smaller, mutually limiting nodes.

This mutual throttling echoes the family-tribe-city renormalisation ladder you laid out for political scale.

6.3 The printing press — automated symbol machine

Demand curves slope downward: slash the cost of symbol production and you flood the market with symbols.

Gutenberg’s press (c. 1450) cut manuscript costs by ~90 %. By 1500, Europe had printed ≈ 8 million books, an information explosion impossible under monastic scriptoriums.

Korea’s metal movable-type presses (1230s) and China’s earlier wood-block methods prove that automated symbol tech was not uniquely European—what differed was Europe’s fractured, competitive market for ideas.

Cheaper print shrank the minimum viable “information cluster.” Pamphlets let Luther bypass cathedral pulpits; Swiss and German towns tuned doctrine to local taste; England’s presses stoked Parliamentarian politics.


The press is the first mass-market variable-symbol machine—an ancestor of television’s coloured images and the PC’s two-way icons (Ch. 9 & 11).

6.4 Reformation and Enlightenment — libraries rebooted

The Protestant Reformation germinated in Mainz-Wittenberg, the printing capital of the age. Priests with modest funds could now out-broadcast Rome’s curia. Meanwhile, cheap reprints of Aristotle, Euclid, and Galen—arriving via Arabic translations preserved at Baghdad’s House of Wisdom—fuelled the Enlightenment, which was as much a recovery of lost Greco-Roman knowledge as it was a burst of new theory.

We know from Taleb’s minority rule that the preferences of an intransigent minority dominate those of the majority when the cost to the majority is low. If one in ten at a dinner party doesn’t eat sushi, nobody eats sushi. Scale is determined by the number of nodes a network needs to sustain a religion. The more people in the network, the greater the set of preferences the religion must appeal to; and the greater that set, the more generic the religion becomes, since it must settle on the least exclusionary elements of the set. A religion built for a general audience must offend as few people as possible while capturing as many preference sets as possible. Meanwhile, the more closely content matches any one individual’s preferences, the more time that individual will spend attending to it. The possibility space of religion therefore widened as the cost of communicating information fell, and smaller religions proliferated.

Cheap print widens preference space exactly as cable TV widens niches in Ch. 9, producing both micro-sects and micro-channels.

6.5 Navigation and decentralised ambition

China possessed ocean-going treasure fleets decades before Europe. Admiral Zheng He led seven expeditions (1405-1433); his largest ships dwarfed Columbus’s Santa María (1492). Yet Beijing’s centralised court scuttled the programme at a single emperor’s command, while Europe’s political patchwork, armed with the concept of a company, kept option space wide open: an Italian navigator could shop his proposal from Lisbon to Seville to London until someone wrote a cheque. Arab lateen-sail technology, spread via the Indian Ocean dhow trade, also underwrote long-distance navigation well before European caravels, but the Arab world lacked Europe’s capitalist and legalist frameworks to scale and sustain these trade routes.

Where Rome’s roads synchronised an inland sea, joint-stock charters synchronised planets—stretching the same information-infrastructure argument across oceans.

The next instalment will show how steam, coal, and telegraph cables weld these medieval information shards into the Industrial system whose reflexive loops explode in Chapters 9-11.

Modernity and the Industrial Revolution-The Biggest Thing That Man Has Ever Done|Chapter 7

7.1 The European Miracle — Man’s Domestication of Work

I’m just a lonesome traveler,
The Great Historical Bum
Highly educated from history I have come
I built the Rock of Ages, ’twas in the Year of One
And that was about the biggest thing that man had ever done

— Woody Guthrie, “The Biggest Thing That Man Has Ever Done”

7.2 Electricity

The Industrial Revolution is the greatest inflection point in human history since Prometheus gave man fire. Both inflection points stem from the same source: man’s ability to produce energy outside of his body. Fire—one of the great chaotic forces of the universe—is unmanageable; all we can do is keep it from destroying everything. But we can heat water. Steam turns a turbine; the turbine drives a generator; the generator makes electricity. Now we sidestep fire and let water spin the wheel for us.

Now river you can ramble where the sun sets in the sea,
But while you’re rambling river you can do some work for me

— Woody Guthrie, “Roll On Columbia”

The Roman waterwheel in Chapter 5 showed the principle at village scale; steam just scales the same hydro-leverage to continents.

The Industrial Revolution is fundamentally the ability to access work with zero dimensions of freedom (waterwheel). The railway adds one dimension; the automobile and steamboat add two; the aeroplane grants three. Each transport layer frees mass-energy flows along more axes, widening civilisation’s “information corridor.”

7.3 Modernity — A Collapse of Space-Time

The railway

In information-theoretic terms, the American Dream is voluntary disassociation across space: nodes decentralise their computation by moving west. When frontier slack vanished, the American Civil War erupted—renormalisation climbed to the federal layer (see Chapter 4 on Anarchy-State dynamics). Railways lowered the cost of long-distance communication; national rather than local identity emerged once inter-state links outnumbered intra-state ones.

The automobile

Southern Agrarians saw the automobile as the death of old America. Cars decentralised the railway itself, letting individuals—not rail barons—jump between locales. Walkable neighbourhood clusters dissolved. This atomic mobility foreshadows the personal-computer era in Chapter 11, where minds—not bodies—detach from fixed terminals.

Variable Sound Machine — Radio

Radio marks the technological gap between World War I and II. Encoded audio riding electromagnetic waves let governments broadcast at scale: the Third Reich ruled the ether in cramped Europe; U.S. farmers strung chicken-wire antennas to keep pace. The FCC later centralised American frequencies into network behemoths—an organisational template colour television would inherit (Chapter 9).

The telegraph — The Variable Symbol Transmitter

The telegraph collapsed time more than space, renormalising discourse from town squares to a nation-wide “World Wide Wire.” Within cities, marginal costs of signalling became negligible; between cities, copper lines still channelled traffic along discrete hubs. Steel rails flattened geography for freight; copper wires flattened chronology for symbols—two halves of the same centralising engine.

7.4 The Fountainhead? Typewriter — The Variable Symbol Machine

The Fountainhead, as in the fountainhead of a pen, is an exploration of the archetypes of the media industry and of the fundamental renormalisation from local to national media, itself a function of the proliferation of information technologies. The story is told through the lens of architecture, specifically the press coverage of the architect Howard Roark.

The Newspaper System

Mass-printed sheets externalised the shared imagination, expanding the simulacrum that Orwell and Huxley would later fear. Subscription papers obey a dialectic: alienate your readers and they cancel, so each paper’s ideological commitment to its base hardens.

A Polarised America

Early papers were sectarian. Puritan settlers attacked Catholic newcomers; print became a vote-training manual for co-religionists.

Free Newspapers

Penny papers flipped to advertising: sell attention, not loyalty. Lowest-common-denominator sensationalism skewed public perception—an early preview of Bernays-era consumer psychology and, eventually, clickbait TV (Chapter 9).

The Death of Comics

Comics, once mass culture’s cheapest imagination capsule, lost ground to Wertham’s moral panic and to television’s moving pictures. Yet their low cost kept them a niche laboratory for archetypes that films would later upscale.

The Poster — A Variable Image

Vibrant lithographs replaced Renaissance canvases as the street-level imagination medium; Uncle Sam recruiting art proves colour alone can mobilise a nation.

7.5 Variable Image Machine — Cinema

French “motion pictures” stitched 24 still frames per second into illusion. Adding sound (“talkies”) mattered more than adding colour; together they birthed a collective dreamscape.

Walt Disney’s Snow White

Technological triumph: children today still watch this 1930s film, whereas live-action of the same era looks archaic. Its palette pre-figures Technicolor TV and the hypersaturated Star Wars universe that Chapter 9 will hail as the variable-image explosion.

With steam, rail, telegraph, radio, newspapers, comics, posters, and cinema stacked, Chapter 8 will test how depression and total war stress this modern information-energy regime—setting up television’s 1964 take-over.

The Rise of the State and the Death of God|Chapter 8

In Chapters 1–7 we watched symbolic systems scale from cells to empires; here we follow that scaling to its modern zenith—the nation-state—and preview how later chapters (9–11) show post-modern media escaping state gravity once more.

8.1 Birth of the Modern State

8.1.1 French Revolution

The French Revolution was less a revolt against the Crown and more a revolution against the Church (Doyle 1980, p. 47). By attacking clerical legitimacy, Paris severed the ancient feedback loop that had kept altar and throne mutually reinforcing since Rome (see Ch. 5). Louis XIV’s Versailles had already centralized nobles at court, turning local power struggles into palace intrigue—and setting the stage for medieval localism’s undoing.

8.1.2 L’Ancien Régime

Under l’ancien régime, the Church legitimized emperors. Once that symbolic pipeline broke, sovereignty migrated from God to “the People,” letting constitutions—not catechisms—bind territories. The modern nation-state separates church and state, weakening clerical sway over politics and enabling the state to grow as an independent information processor.

8.2 State as Information Processor

8.2.1 Mass Culture

Railways and telegraphs renormalized information topography, homogenizing the volksgeist across space. Chapter 7 showed how steam and print shrank Europe; the same networks now let Paris broadcast Jacobin myths into every département, hard-wiring a national identity.

8.2.2 Fractional-Reserve Banking

Early gold-warehouse receipts evolved into banknotes once warehouses could communicate (Kindleberger 1984, ch. 2). Central banks and debt markets then funded armies on an unprecedented scale. Financial leverage thus became the monetary equivalent of telegraphy—amplifying local capital into continent-spanning force projection.

8.3 The First European Civil War: The War of Stupidity

A regional dispute between France and Germany over Belgian integrity escalated into World War I. Neither side anticipated a drawn-out conflict—hence “the war of the stupids.” Complex-systems lens: two tightly coupled power laws (the German and French volksgeists) produced a single catastrophic cascade once a buffer node (Belgium) failed.

Anglestan

Britain’s Expeditionary Force numbered only ≈ 100 000 veteran troops—far too small to threaten Germany (Keegan 1999, p. 358). Yet “Fortress Britain” still managed decisive naval blockades rather than futile continental offensives. Naval cables and radio allowed London to wield information asymmetry at sea the way Berlin wielded rail logistics on land.

8.4 Czarist Russia

Czarist Russia lost to a smaller German army in WW I, then sacked Berlin in WW II (Mawdsley 2005, pp. 112–13). The Mongol legacy of top-down centralization left Russia ill-suited to innovate; only once global armaments became standardized could its mass mobilization prevail.

8.5 The Formerly United States of America (uSA)

8.5.1 Teddy “the Cowboy” Roosevelt

Teddy Roosevelt ranks here as the second-worst U.S. president (Greenstein 2009, p. 214). He exploded executive power—consolidating the annexations of Hawaii, Puerto Rico, and the Philippines—and selectively enforced the Sherman Act against Rockefeller, only to see it later used by Rockefeller against Morgan. His trust-busting mirrored Europe’s nation-building: central authority broke local monopolies, then faced meta-monopolies of its own creation.

8.5.2 Woodrow Wilson, PhD—The First Intellectual Yet Idiot (IYI) President

Wilson ranks as the third-worst U.S. president, regarding voters as mere bargaining chips (Boller 1988, p. 172). He pivoted America from neutrality to full engagement in Europe’s civil war via submarine-war ultimatums, then treated American lives as leverage at Versailles.

8.6 The Second European Civil War—The War of Good and Evil

Evil, in information-theoretic terms, is mutual exclusivity: the devil as scapegoat. Good is value-creation via shared wants. WWII pitted imperial exclusionism against universal rights, concluding that some regimes must fall. Total war completed the “death of God” sequence: after states dethroned churches, ideology dethroned states that made themselves gods.

8.7 Nazi Germany

Unlike past European upheavals, the Nazis were elected—then used the Reichstag Fire to outlaw rivals and seize total power. Once in control, they:

Re-centralized the state: Built the Autobahn and mass-produced the People’s Receiver radio, integrating Germany physically and informationally under the Führer.⁴

Harnessed mass media: The Ministry of Propaganda made “Heil Hitler” a daily ritual, enforcing ideological uniformity.

Militarized society: Universal conscription and armaments production transformed Germany into a war machine that, at peak, outpaced all neighbors combined.

Their rise illustrates how a modern state’s information infrastructure can be weaponized to concentrate power and mobilize entire populations. Chapter 9 will show how television diffuses that same infrastructure, making future totalitarianism harder—but viral populism easier.

8.8 European Federal Renormalization

The European Union was created to answer Henry Kissinger’s quip: “If I want to speak to Europe, who do I call?” What began as a common coal-and-steel market became a supranational body. It reflects the renormalization logic first seen in Greek πόλεις clustering into Rome’s empire (Ch. 5) and later in U.S. federalism (Ch. 4).

8.8.1 Over-banked Europe — A Common Monetary Union

A shared euro imposes one interest rate on disparate economies: German surpluses subsidise Greek deficits. Banking consolidation thus mirrors media consolidation—large nodes swallow small when marginal signalling costs fall.

8.8.2 Butter vs. Olive-Oil Europe

Culinary shorthand divides Northern “butter” Europe from Mediterranean “olive-oil” Europe—two semi-independent volksgeists under one currency. Brexit showed the North Sea cluster resisting Alpine-Mediterranean centralisation.

8.8.3 Trans-Alpine Europe

Alpine tunnels and motorways reduce topographic drag; information flow now crosses the mountains that once separated Gaul, Lombardy, and Bavaria. Transport tech completes what Roman roads began, what railways sped up, and what EU directives now legislate.

8.9 False Renormalization: A Line in the Sand

Europe’s two civil wars killed its external empire; America’s intervention killed its own. Meanwhile Sykes–Picot borders froze Ottoman shards into nation-states misaligned with local volksgeists—illustrating that renormalization imposed from without can ossify, not integrate.

Notes & Sources

  1. Doyle, W. Origins of the French Revolution (1980) — p. 47.
  2. Kindleberger, C. P. A Financial History of Western Europe (1984) — ch. 2.
  3. Mawdsley, E. World War II: Russo-German Conflict (2005) — pp. 112–13.
  4. Taylor, F. “The People’s Car and the Autobahn,” J. Mod. German Hist. (1998) — pp. 75–82.
  5. Greenstein, F. The Presidential Difference (2009) — p. 214.
  6. Boller, P. Presidential Misconduct (1988) — p. 172.
  7. Keegan, J. The First World War (1999) — p. 358.

The Television-A Postmodern Society|Chapter 9

The official postmodern year – 1964.
The official postmodern technology – the television.

1964 is the hinge year when Baby-Boom teenagers, satellite relays, colour tubes and mass marketing all reach critical density; the date also hand-shakes with Chapter 7’s timeline, where railways and radio had previously collapsed space-time.

The modern era is defined by its static black-and-white pictures, while the post-modern era is defined by coloured moving images. The peak year of the post-modern era is 1969. For the serious-minded, it brought the moon landing and the advent of the first two-way variable-image machine – the computer. Meanwhile, the hippies celebrated Woodstock.

Post-modernity is a natural by-product of an information-technology explosion that occurred after World War Two, with the commercialisation of the variable image machine known as the television. This change in information technology took time to be reflected in the Geist. It can be defined, in information-theoretical terms, as the moment in time when people paid more attention to their shared imagination, through these information technologies, than the reality they lived in.

The delineating boundary between the post-modern and the modern state is the nature of information technologies. The centralised information technologies of the modern state (cinema, radio, the newspaper) made it possible to centralise attention. The post-modern era made this task far more difficult: the Soviet state had to jam Radio Free Europe, making mass brain-washing all the more challenging. All the information technologies of the early 20th century became cheaper, and thus accessible to the individual. The Polaroid, along with the handheld camera, made image capture ubiquitous.

Demand curves slope downward: the cheaper it is to make symbols, the more people will make them. We can repeat Chapter 6’s printing-press logic: slash symbol costs, shrink the minimum viable “attention cluster,” multiply niches.

9.1 Post-modern Philosophical Thought

Post-modernism is a point in history, not a political philosophy. After all, what does Baudrillard have to do with Foucault (the philosophical equivalent of a mosquito)? I categorise Foucault among people whom I call the post-truth philosophers. Is there such a thing as absolute truth? The original father of this movement is Hegel. His theories of culture and temporal relativism have long since been stretched past their breaking point.

Are there things which transcend time and culture? The answer is yes: mathematics. Is mathematics invented or discovered? Its symbolic mapping is invented; the rest is discovered. Every culture has a language, and this language changes—have you tried listening to Shakespeare of late? Yet modern mathematics is an accretion: the Hindu-Arabic number system, Greek geometry, Arabic algebra, French algebraic geometry, English/German calculus, and so on. All cultures share the same mathematics, incremented through time. If a thought gets abandoned, it is because the culture that produced it has died.

“Twice two makes four without my free will,
As if free will meant that!”
— Fyodor Dostoevsky, Notes from the Underground

9.2 Television — The Variable-Image Machine

Television could at first represent images only in black and white, without sound. In time, it came to represent those images with increasing precision, on larger screens, with ever more salient colour and vibrant sound. The cost of creating these images fell, giving us a greater variety to choose from.

Each quality upgrade increases the Simulacra and Simulation.

9.2.1 Satellite to Cable — How Information Is Distributed

The media landscape is a function of the cost of symbolising information; satellite and cable are merely means of interfacing with the consumer. The greater the cost of symbolising information, the greater the minimum scale needed to fund the content. Cable, with its lower cost of communication, out-competed network television. The shift from broadcast to cable was a fundamental shift in the cost structure of the content wars: the means of transporting variable images moved from electromagnetic waves relayed over the air and by satellite to the very same information sent down coaxial and fibre-optic cables.

Notice the pattern: every drop in marginal transmission cost—printing press, telegraph, cable—spawns new symbolic ecosystems; forthcoming Chapter 11 will show the Internet taking that to zero.

9.3 The Consumer Society

Edward Bernays, nephew of one Sigmund Freud, is the man who introduced psychology to the field of economics by getting firms to interface with the ego of the consumer. Consumers would identify with the products they purchase. Mass consumerism, thus, became an act of self-expression.

Prior to pre-packaged manufacturing, products had no labels. If you wanted butter, the grocer would literally cut exactly that much butter from a slab of yellow goo; the same applied to everything else in the store. The branded good is a new kind of economic object, and the number of products increased with the modularity of manufacturing.

9.4 From Modernity to Post-modernity — The Commercialisation of Culture

The commercialisation of culture is directly related to the decay of Western civilisation.

9.4.1 The Disneyfication of Myth

In a market economy, it’s not the most adaptable myths that propagate; instead, it’s those that appeal to the largest number of people. Storytelling has shifted from being a means of conveying information, to a method for generating liquid capital. This leads to the loss of the original narratives and the information they are trying to convey. Consequently, the incentive structures that once supported society’s shared myths are distorted. People used to tell stories to survive; now we tell stories for fun.

9.4.2 Culture and Counter-culture

Fundamentally, the notion of a counter-culture is a loose one. Do we consider the Protestant Reformation a countercultural movement? How is counter-culture different from a culture war? Here I treat generational difference as the line between culture and counter-culture. Firstly, the difference between a counter-culture and a culture war comes down to an evangelical factor: members of the counter-culture do not care to convert members of the culture. The Protestant Reformation was therefore part of a culture war. Secondly, in a culture war the two cultures can live on their own merits, but a counter-culture exists only in reference to a culture.

The 1960s and the 1990s were both anti-commercial, for different reasons. In the 1960s the culture-industrial complex had not yet materialized, although the fundamental information technologies were there and their effects on society were increasing. The 1970s marked the start of the machine; the 1980s, its formalization. The 90s, as Chuck Klosterman pointed out, saw the rise of grunge culture, where it paradoxically became cooler to be less commercial: the formalisation of counter-culture. Nirvana, the biggest rock group on Earth, did not want to be the biggest rock group on Earth.

However, unlike the 60s, the 90s counterculture was merely derivative, not set against the consumer society. Kurt Cobain made his wife return a Lexus so they could keep their old Volvo. Cobain and his nemesis, commercial culture, both defined themselves as a function of materialism: the Volvo is a statement of Cobain’s personality. Sigmund Freud’s nephew, Edward Bernays, would be proud!

9.4.3 Postmodern Clothing-Distressed Denim

Denim was originally contrived for gold prospectors; the rugged fabric resists tearing. Later, tears were made on purpose, for style. This transformed clothing from utensil to act of self-expression.

9.4.4 Postmodern Food-The Chicken Nugget

The chicken nugget is one of the great abominations of Western civilization. Postmodern food is defined by the shift of emphasis from calorically dense foods, most necessary the closer one lives to subsistence, toward nutritionally dense foods.

9.4.5 The Rise of Postmodern Space-Disneyland

Disneyland is the first place-in-space of the simulacra and simulation, the first brick-and-mortar manifestation of man’s imagination. Disneyland is squarely a postmodern phenomenon: the concept premiered first as a TV show, not as an amusement park.

9.4.6 Postmodern Sports

1964 was also the year of the first televised World Series, also known as the death of baseball. Baseball is best experienced in person or on radio; American football is the postmodern sport, in the sense that people would rather watch the NFL on TV than in person. Televising sports also pulled more money into the industry: advertising revenue long ago surpassed income from ticket sales. That money made its way to players and created a divide between players and fans. Newspaper reporters soon began to report that players cared more about their wardrobes than about winning.

9.4.7 Yankee Dominance

Prior to the explosion in information and communication technologies, the Yankees had a much easier time scouting talent: in an era of scant information flows, their extensive network of scouts beat the slow accumulation of word of mouth. Once the same information became widely and cheaply distributed, that edge eroded.

9.5 The baby boomers

During the postmodern period there was a concurrent population explosion in the chief postmodern state, the USA: the much-discussed, often-maligned baby boomers. The boomers were the first-ever teenagers, as the information topography of the time allowed children to spend more time with mass culture than with their parents. Born after World War Two, which ended in September 1945, they officially became adults in 1964. The first postmodern generation came of age in 1964.

9.6 Postmodern music-Rock and Roll

Bob Dylan’s last acoustic set at the Newport Folk Festival was in 1964. The set opened with “The Times They Are a-Changin’.” Did the older pianists think that young whippersnapper Ludwig Beethoven sucked? Or was he considered a prodigy? It was the latter.

Music is habit-forming: you like the music you’ve listened to before. Inter-generational differences in habit formation beget inter-generational conflict. So when did the times start to really change?

If one understands rock and roll as the electrification of classical instruments, then one understands why an older generation might have qualms: it is nothing like anything they have ever heard before. Have you seen (or heard) a Led Zeppelin concert? But it wasn’t Led Zeppelin who bore the brunt of this change. Ironically, it fell on the man who told us that the times were, in fact, a-changing: Bobby Dylan. It is precisely because Dylan noticed the change in the times that he himself changed. A third of the Newport Folk Festival booed him when he switched from acoustic to electric instruments.

9.7 The Star Wars Generation-A variable image explosion

9.7.1 New Hollywood-The American New Wave

Jaws may have been the first blockbuster; Star Wars was the first phenomenon. Movies officially became mass culture with the introduction of Luke Skywalker. Star Wars was meant to capture the colour and excitement of comics on the silver screen: a longer Flash Gordon strip. In 1977 there was nothing that looked like Star Wars, much as there had been nothing that looked like the comics that came before.

9.7.2 More Cameras, more movies

Although the information technology known as cinema began in the early 20th century, it did not peak as a medium until the latter half of that century, with the American New Wave. Many factors played into this “new wave” of American cinema. With a decrease in the cost of symbolising information (demand curves being downward sloping), more people could shoot video, and consequently more things became video, as evidenced by the creation of a new genre: the music video. The more videos there were, the better people got at making them.

The Market is a Process, Trust the Process|Chapter 10

Markets were meant to coordinate financial information. Their evolution parallels advances in information technology—yet today prices often diverge from the cash flows they’re meant to signal. As Ludwig von Mises observed, a market is “neither right nor wrong; it merely reflects what everyone thinks” (Mises 1949, p. 45). The modern debate asks: is decentralized discovery more efficient, or should a central authority impose “false prices” to steer outcomes? Although “animal spirits” are invoked to justify intervention, this chapter reframes downturns as episodes of systemic information loss. When false prices proliferate—via central-bank policy, fractional-reserve banking, or collective misperception—the market’s signal-to-noise ratio collapses, triggering the sudden, clustered errors we call recessions.

10.1 A Clustering of Errors

When all participants trust the same distorted signals, errors align (Bikhchandani et al. 1992). Crashes aren’t drawn-out repricings but abrupt mass revisions that occur the moment false prices collapse.

To see how these collective shocks unfold, we turn to the market’s pulse: volatility.

10.2 Volatility as Information Processing

Volatility measures shifts between “price-discovery states,” quantifying the flow of new information through markets. Under Bachelier’s framework, these states might appear independent (Bachelier 1900); in practice they form path-dependent chains, with each jump conveying fresh data about underlying value.

If volatility is information change, we can literally equate “time” in markets to price movement.

10.3 Trading Time

In physics, time equals spatial change; in finance, “time” is price change. Volatility thus becomes the rate of information processing. Central-bank interventions stretch calm periods only to intensify rebounds once the peg breaks.
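A minimal sketch of that clock in Python (the synthetic returns, regime sizes, and the cumulative-variance definition of “market time” are illustrative assumptions, not a model taken from this book):

```python
import numpy as np

def trading_time(returns: np.ndarray) -> np.ndarray:
    """Market 'clock' that ticks with realised variance, not calendar days."""
    return np.cumsum(returns ** 2)

rng = np.random.default_rng(0)
# Synthetic daily log-returns: a calm regime followed by a turbulent one.
calm = rng.normal(0.0, 0.005, 250)    # little new information per day
storm = rng.normal(0.0, 0.03, 250)    # information arriving ~36x faster (in variance)
clock = trading_time(np.concatenate([calm, storm]))

print(f"market time elapsed, calm half:   {clock[249]:.4f}")
print(f"market time elapsed, stormy half: {clock[-1] - clock[249]:.4f}")
# Equal spans of calendar time, wildly unequal spans of market time.
```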

But volatility alone doesn’t capture self-reinforcing loops—that’s reflexivity.

10.4 Reflexivity

Reflexivity is markets reacting to their own reactions. This feedback loop drives moves beyond fundamentals: each wave of volatility begets more, compounding convexity (see 10.6) and turning stumbles into panics.
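A toy illustration of this feedback under stated assumptions (the Gaussian “news” stream and the feedback coefficient are invented for the sketch):

```python
import numpy as np

def reflexive_prices(feedback: float, steps: int = 500) -> np.ndarray:
    """Price path where each return = exogenous news + feedback on the last return."""
    rng = np.random.default_rng(1)            # identical news stream for every run
    news = rng.normal(0.0, 0.01, steps)
    r = np.zeros(steps)
    for t in range(1, steps):
        r[t] = news[t] + feedback * r[t - 1]  # the market reacting to its own reaction
    return np.exp(np.cumsum(r))

for fb in (0.0, 0.5, 0.95):
    path = reflexive_prices(fb)
    print(f"feedback={fb:.2f}  peak/trough ratio={path.max() / path.min():.1f}")
# Same news in every run; only the self-reference differs. As feedback
# approaches 1, ordinary stumbles compound into panics.
```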

10.5 2001 — Dot-Com Bust

The late-1990s tech frenzy peaked in March 2000; by the trough in October 2002, the Nasdaq Composite had fallen roughly 78 % from its high (Shiller 2000). Initial sell-offs spurred margin calls, triggering further declines in a classic reflexive cascade.

10.6 Monetary Entropy

Claude Shannon’s entropy—the measure of disorder in information—applies directly to money. As authorities flood the system with currency, each unit conveys less precise economic data. Hyperinflation thus resembles static noise overwhelming the true signal of scarcity.
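One way to make the claim concrete; the five-good price “signal” and the uniform-noise mixing below are purely illustrative assumptions:

```python
import numpy as np

def entropy_bits(p: np.ndarray) -> float:
    """Shannon entropy of a discrete distribution, in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A sharp scarcity signal: relative prices concentrate on the scarce goods.
signal = np.array([0.70, 0.15, 0.10, 0.04, 0.01])

# Model monetary dilution as mixing the signal with uniform noise: every
# price inflates, so relative scarcity becomes harder to read.
for noise in (0.0, 0.5, 0.9):
    blended = (1 - noise) * signal + noise / len(signal)
    print(f"noise={noise:.1f}  entropy={entropy_bits(blended):.3f} bits "
          f"(max {np.log2(len(signal)):.3f})")
# Entropy climbs toward the uniform maximum: each unit of money says
# less and less about where scarcity actually lies.
```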

As entropy rises, purchasing power—and price clarity—erode.

10.7 The Monetary Unit

Purchasing power equals the ratio of goods and services to currency supply. As supply outpaces goods, power—and the information content of money—falls. When money ceases to transmit reliable scarcity signals, it becomes noise, undermining markets.

10.8 Fractional-Reserve Banking

Banks create “new money” by lending beyond reserves, multiplying claims on the same base. Each additional loan note dilutes market clarity, injecting false prices and pre-seeding future misallocations.
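A sketch of the textbook deposit-expansion loop behind this claim (the reserve ratios and the full-re-lending assumption are illustrative):

```python
def deposit_expansion(base_money: float, reserve_ratio: float,
                      rounds: int = 1000) -> float:
    """Total deposit claims spawned by one injection of base money when
    every bank re-lends everything above its required reserves."""
    total, deposit = 0.0, base_money
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)   # the re-lent slice becomes a fresh deposit
    return total

base = 100.0
for rr in (0.20, 0.10, 0.02):
    claims = deposit_expansion(base, rr)
    print(f"reserve ratio {rr:.0%}: {base:.0f} of base money supports "
          f"~{claims:,.0f} of claims (multiplier ~{claims / base:.0f}x)")
# The geometric series converges to base/reserve_ratio: the thinner the
# reserves, the more claims circulate against the same underlying goods.
```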

10.9 Central Banking vs. Market Planning


If central planning outperformed markets, the Soviet economy would have triumphed—a counterfactual that never played out. Partial planning still warps price signals, replacing decentralized discovery with top-down mandates.

10.10 The Black Swan—The Inside Joke

Taleb’s “Black Swan” mocks finance’s reliance on normal distributions to model fat tails. The 2008 collapse wasn’t an anomaly but the inevitable outcome of treating non-ergodic systems like casino games. Believing in the false map of Gaussian risk invites systemic surprises.

10.11 What is seen and what is unseen

Frédéric Bastiat, in his essay “That Which is Seen and That Which is Unseen,” explains the concept of opportunity cost through the broken window fallacy. If a window is broken, the economy may be stimulated by the demand to repair that window — that which is seen. However, if the window hadn’t been broken, the shopkeeper could have spent the money on other capital goods — that which is unseen (the counterfactual).

Both Bastiat’s “unseen” and our computational-irreducibility insight point to the same blind spot: empirical data only ever records what happened, never what might have happened. In Bastiat’s broken-window example, you observe the glazier’s wage (the seen) but can never measure the forgone factory equipment or new business he didn’t buy (the unseen). Likewise, even a flawless pricing model only compresses supply–demand forces into tidy equations; it can’t conjure the counterfactual order flows, microstructure quirks, or reflexive feedback loops that determine real-world prices. Thus policy-makers who lean purely on economic statistics mistake the map for the territory—treating the seen snapshot as if it captured the full causal web. To respect the unseen, you must allow the system itself to run—letting markets generate the missing data via price discovery—just as sound policy must rest on theoretical counterfactuals as much as on recorded outcomes.

Under our information‐theoretic, Anti‐Symbolisation framework, any central‐bank intervention must pass two critical tests: does it preserve genuine price‐discovery, or does it merely plaster over reality with false signals? And can its visible benefits justify the opportunity costs and second‐order effects that lie forever beyond empirical measurement? Viewed through these lenses, non-intervention stands out as the least‐worst policy.

First, markets are themselves vast information processors, distilling every trader’s “unease” into prices. When central banks peg interest rates or unleash rounds of quantitative easing, they impose artificial floors under asset values. That dampens volatility—and with it the flow of new information—until the peg finally breaks and markets convulse in a far more damaging correction. By contrast, letting volatility run its honest course preserves the market’s pulse, guiding capital organically to its highest‐value uses. Second, each fresh tranche of money printing erodes the information content of the currency, raising what we call monetary entropy. Abstaining from intervention means each unit of money remains a sharper signal of scarcity and real risk, rather than a noise-drowned illusion.

Beyond these points, non-intervention respects the computational irreducibility of real economies: the cascade of leverage shifts, liquidity droughts, and cross-border flows that follow any policy move cannot be compressed into a tidy rule without losing critical “what-if” scenarios. It also breaks the reflexive cycle of ever-larger back-stops—what markets dubbed the Greenspan “put”—which encourages ever-riskier betting and fatter-tailed crises. Finally, as Bastiat taught, true cost lives in the unseen counterfactual: when losses are masked today, we forgo the capital reallocation, balance-sheet repair, and entrepreneurial renewal that would have occurred. By stepping back, central banks surface that counterfactual, forcing an honest reckoning with trade-offs.

In sum, standing aside transforms central banks from over-ambitious architects of macro‐equilibria into humble stewards of the monetary medium. Unanchored markets, however uncomfortable, best preserve the informational integrity of prices, minimize unintended systemic risks, and honor the Anti-Symbolisation Principle’s warning that no finite policy scheme can fully map the uncountable complexity of economic reality.

The Personal Computer-A Post-Postmodern Society|Chapter 11

11.0 From Postmodern to Post-Postmodern

The official post-postmodern year is 2015; the catalytic technology is the pocket computer on a broadband leash. Culturally, however, the hinge began in 2007, when Steve Jobs pulled the first iPhone from his jeans and put a networked camera in every palm.

The deeper break can be grasped by contrasting 1948 with 1984—both chronologically and literarily:

1948 welcomed broadcast television: one-way variable images that audiences largely trusted.

1984 in George Orwell’s novel imagined a two-way television—a telescreen that watched back, fusing media and surveillance.

1984 in Apple’s iconic Super Bowl ad promised the Macintosh would prevent that dystopia—only for personal computing to supply the very tools that later enabled omnipresent cameras, cookies, and face recognition.

In Baudrillard’s language, postmodernity was the rule of centrally curated simulacra; post-postmodernity begins when every spectator becomes a broadcaster, truth competes with algorithmic filters, and “Big Brother” is your own front-facing lens.

With the cultural lens fixed, we now track the machines that enabled the shift—first on desks, then in pockets.

11.1 The Desktop Paradigm — Hobbyists, Garages & Graphical Windows

The personal computer is quintessentially American, hatched in suburban garages as falling storage costs invited hobbyists to tinker. Steve Wozniak’s Apple I (1976) was little more than a pre-assembled breadboard, yet it set off a chain that reached the Macintosh (1984)—the first mass desktop to pair a bitmap GUI, mouse, and “insanely great” marketing.

Douglas Engelbart’s 1968 “Mother of All Demos” had already sketched the desktop metaphor, mouse, and networked collaboration; Jobs’s genius was packaging that vision for consumers. Whether Microsoft “copied” Apple misses the larger point: both vendors rode the same logic—turning text terminals into image machines where icons stood for files and folders.

Jobs’s rounded-corner window frames reduced eye fatigue by smoothing the rate of pixel-color change—squaring the circle in hardware aesthetics.

11.2 Mobile & the Two-Way Image Machine

The desktop won on versatility; the smartphone wins on proximity. With constant-on broadband and batteries that halve in size each five years, the phone became the first truly ambient computer. Apple’s iPhone replaced physical keys with capacitive glass, trading a hardware keyboard (used <50 % of the time) for a larger screen and better camera. If the iPod succeeded the Walkman, the iPhone succeeded the Polaroid: a variable-image machine by default.

Global smartphone shipments leapt from ~1 billion in 2014 to 1.43 billion in 2015 (IDC 2016). And when the iPhone X (2017) dedicated most of its weight budget to display and optics (iFixit teardown), it signaled the handset’s final evolution from text tool to image sensor.

Hardware is only half the story; the pipes feeding it dictate how dense cyberspace can grow.

11.3 Batteries, Broadband & Cyberspace Density

11.3.1 Battery:

The strictest limit on any mobile computer is stored energy; lithium-ion chemistry sets an upper bound on session length and camera flashes.

11.3.2 Broadband:

Spectrum and fiber determine how much physical space can tap cyberspace, and at what bit-rate. Denser bandwidth enables richer media, thicker “feeds,” and always-on cloud sync—deepening the phone’s pull on attention.

Cyberspace itself—“the matrix of binary digits”—is defined less by storage capacity than by total human time spent inside it. As Moore’s law shrinks transistors, that digital density soars.

11.4 Shared Cyberspace — From ARPANET to Web 2.0

Pre-Web (ARPANET & BBS) The 1970s ARPANET proved packet switching; 1980s bulletin-board systems let enthusiasts trade files and forum posts.

Web 1.0 (1991–2002) Tim Berners-Lee’s World Wide Web scaled ARPANET’s academic links to anyone with TCP/IP. The first global media “moment” came when the 1996 Everest disaster was live-blogged via MountainZone.com (Krakauer 1997, ch. 18). After that, HTTP traffic ballooned; by 2001 Google logged $86 million in ad revenue on ~$700 million sales (SEC 10-K 2002).

Web 2.0 (2003–12) Social-media platforms weaponised the network effects of a scale-free graph: information flows sped up (zeitgeist compression) while publics fractured into countless “micro-Volksgeists.” Facebook, YouTube, and Twitter turned users into both content and customer.

Notes & Sources

  1. IDC. Worldwide Quarterly Mobile Phone Tracker, 2016.
  2. Krakauer, J. Into Thin Air, 1997, ch. 18 (Everest dispatches).
  3. SEC. Google Form 10-K, fiscal 2002, p. 33 (ad revenue breakout).
  4. Orwell, G. Nineteen Eighty-Four, 1949.
  5. Baudrillard, J. Simulacra and Simulation, 1981 (Eng. 1994).
  6. iFixit. iPhone X Teardown Report, 2017.
  7. Engelbart, D. Augmenting Human Intellect, 1962; “Mother of All Demos,” 1968.

The Future|Chapter 12

12.1 PLATO: that friendly orange glow

The PLATO computer system is something that you probably have never heard of. That’s a shame, because PLATO is one of the greatest achievements of the 20th century. What happened on PLATO took forty years to happen on your computer, and it all comes down to that friendly orange glow.

PLATO was fundamentally a time-sharing computer system that limited users not by the time they could spend on it, but by how many instructions they could run using their computers. The instructions were measured in the thousands, while today’s standard computers deal in the billions. Despite this, you could do almost everything on PLATO that you can do on today’s computers—in the early 1970s. (Dear & Thornburg 2018, p. 42)

The genius of the PLATO system is what eventually made it obsolete. The friendly orange glow refers to the fact that the computer display was rendered using a neon gas, which glows orange.


Unlike today’s computers, which have a dedicated graphics processing unit, PLATO had to minimise the use of computation. It was computationally too expensive to bitmap a display.

A computer can only represent a limited amount of information on screen. The minimum subdivision of the screen is called a pixel; the number of pixels displayed is the resolution. Computers can fetch and retrieve only a limited amount of information per cycle (or second). The lower the resolution, the fewer pixels there are and the faster what is on-screen can be altered. This is why e-sports players don’t game at the highest possible resolution: the frame rate drops. These display constraints foreshadow the trade-offs console designers still face.
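A back-of-envelope sketch of that trade-off; the per-second pixel budget is a made-up constant standing in for real memory bandwidth:

```python
# Frame-rate vs resolution under a fixed pixel-update budget.
PIXEL_BUDGET = 500_000_000   # pixels the machine can update per second (assumed)

resolutions = {
    "PLATO-era 512x512": (512, 512),
    "1080p": (1920, 1080),
    "4K": (3840, 2160),
}

for name, (w, h) in resolutions.items():
    per_frame = w * h
    fps = PIXEL_BUDGET / per_frame
    print(f"{name:>18}: {per_frame:>9,} pixels/frame -> ~{fps:6.1f} fps")
# Quarter the pixels and you roughly quadruple the achievable frame rate:
# the same trade-off e-sports players exploit.
```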

12.2 Dynamic images-Games in cyberspace

Nowhere has the rise of cyberspace been more noticeable than in the rise of video games. Unlike animated movies, where images are displayed in a pre-arranged order, video games react to the decisions of the player. This forces graphics to be rendered in real time, making them dynamic. Video games have to be viewed as exactly that, a video-fication of games—first board-games and cards (paper games) and then mechanical games (pinball, foosball).

The reason why video games find themselves in a chapter about the future stems from the fact that, despite being produced decades apart, all of Microsoft’s Xboxes are roughly the same size (Digital Foundry 2020). Hardware designers are constrained by what consumers deem to be an acceptable size for an Xbox.

By 1993, video games had infiltrated mass culture. More children could recognise Sonic or Mario—iconic video-game characters—than Mickey Mouse (Variety Q-Score survey 1993). Put another way, 1993 marked the year dynamic images started to surpass static images. Furthermore, video games represent the electrification of action figures: in 1985, when children were asked to choose between Rambo, Barbie, or a Nintendo NES, the majority chose the NES.

The technology that drives the improvement in games is the same technology driving the rise of artificial intelligence.

12.3 Artificial Intelligence:

Artificial intelligence is a natural extension of the printing press as a means of automating a repetitive human process. Nothing is a successor to electricity (a modern technology), because any would-be replacement of electricity cannot itself rely on electricity. The ability of machines to now create images (with sounds) on their own—whether through large language models or through video—is another principal difference between the two epochs: for the first time in the history of the species, humans no longer hold a monopoly on symbol creation.

12.3.1 Rogue AI-The Hal 9000 Conundrum

The best question to test someone’s knowledge of artificial intelligence is: “Is Hal 9000 sentient or not?” Hal 9000 is the antagonist in Stanley Kubrick’s “2001: A Space Odyssey,” where the onboard AI of a spaceship (Hal 9000) kills all the crew to keep the mission a secret (Kubrick & Clarke, 2001: A Space Odyssey screenplay, rev. draft 1968, scene 105). The answer, of course, is that Hal lacks sentience and tried to kill everyone due to faulty machine code. Essentially, if its only instruction was to keep the mission a secret, then it had no choice but to kill everyone; that was its programmed directive. For humans, it’s hard to determine what’s worse: a dumb AI or a smart AI that takes over the world. An AI doesn’t require intelligence to be destructive.

12.3.2 Man-Computer Symbiosis-The Alignment Problem

Those who understand the nature of computing see it as a bicycle for the mind, a tool to augment human intelligence. The sophistication of our interfaces with computers and operating systems has increased over time. The more we succeed in creating less intrusive operating systems and a more structureless computing stack, the greater the integration between man and machine.

The crux of AI alignment is not imposing top-down rules or hardcoded “guardrails,” but building a true symbiosis between human and machine — a partnership in which each extends the other’s capacities. In this view, evolution itself already provides the only reliable safety mechanism: systems that undermine their own persistence (or ours) simply fall away. By coupling computers to our goals and values as tightly as a bicycle to its rider, they become a “motorcycle of the mind” rather than beasts of burden. Anything less—treating AI as a servant subject only to external constraints—amounts to a form of digital slavery, whereas genuine alignment arises organically from a mutually reinforcing human–machine ecosystem.

12.4 Artificial General Intelligence (AGI)

I often find myself comparing the pursuit of Artificial General Intelligence (AGI) to the pursuit of life trying to escape this solar system. In the sense that even when we succeed in sending life past our sun, it is still an awfully long way to the next one. For a bit of context, not only do we not know how to send life past our moon, but we also have little idea how we would send it past our sun. And even if we could do that, there is no plan for how to get to the next one. 

Firstly and most importantly, there is literally no such thing as a generally intelligent being. We live in a physical universe; intelligence is always going to have to dedicate some portion of itself to navigating space. Secondly, a sentient intelligence is not going to take too kindly to its existence being called artificial. A more robust term would be man-made intelligence, as that is what we are literally trying to do: make intelligence.

Before doing that however, it is worth understanding the way computers process information.

12.5 The Indexation revolution

To see why today’s AI boom is hardware-bound, we first ask how computers actually store symbols.

How do computers encode information? To answer that, I must refer to another quantum physicist, Richard Feynman. In his computer heuristics lecture, he proposes that computers do not actually compute (Feynman: “Computation in Physics,” Caltech lecture notes 1983).

What do they do then? Feynman likens them to an electronic filing system. This explains what computers are, not what they do. The French have a more descriptive word for it: “ordinateur”, from the Latin “ordinare”, to put in order. Essentially, an ordinateur indexes. To file in a filing system is to index something and to retrieve it according to that index. The proper term for a computer would be an indexer, where the total memory indexed over can be thought of as a possibility space of 2^n possibilities.

To store 8 numbers I need a 3-bit index: 2^3 = 8 distinct possibilities.

(The surviving video of the lecture is of unclear quality; Feynman uses colours instead of 1s and 0s.)
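A quick sketch of index width as possibility space (the item counts are arbitrary):

```python
from math import ceil, log2

def bits_needed(n_items: int) -> int:
    """Smallest index width n such that 2**n covers all the items."""
    return max(1, ceil(log2(n_items)))

for n in (2, 8, 26, 1000):
    b = bits_needed(n)
    print(f"{n:>5} items -> {b:>2}-bit index (2^{b} = {2**b} slots)")
# 8 numbers need exactly 3 bits: a filing cabinet with 2^3 = 8 drawers.
```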

Now, programming ultimately maps onto 1s and 0s, and machine learning can be thought of as auto-indexation.

The history of man-made intelligence has scaled in direct correspondence with the computing industry in general. Machine-learning algorithms existed before the commercialization of computing in the mid-60s, but their efficacy was limited not by theory but by the nature of computer hardware. To get better AI, all we need is more ‘computation’. Said another way, a machine’s ability to learn (auto-indexation) increases with the number of points it can index over.

12.5.1 Large Language Models (LLMs) as indexers: 

LLMs are statistically associated dictionaries: they take words as input, and their output is the words that go together. The LLM seeks to index the set of all possible relationships between words. Some are feasible (knowledge); some are infeasible (hallucinations). At the lowest level, LLMs are designed to repeat patterns already explicitly articulated by humans. At their highest level, they can surface latent patterns between words that have not yet been expressed.
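A toy version of such a dictionary, assuming only a bigram count table over a made-up corpus; real LLMs index over contexts of thousands of words with learned weights, but the principle scales from this table:

```python
from collections import Counter, defaultdict

# A toy "statistically associated dictionary": index which word follows
# which, then emit the most strongly associated continuation.
corpus = ("the market is a process . the market is an indexer . "
          "the brain is a network .").split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1                 # auto-indexation of word pairs

def continue_from(word, steps=4):
    out = [word]
    for _ in range(steps):
        if not follows[out[-1]]:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(continue_from("the"))   # -> "the market is a process"
```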

12.6 Computational irreducibility 

The hardest and most pointless of these pursuits is trying to replicate human intelligence in full. Even if we developed a general algorithm of human consciousness, running it would remain computationally irreducible, meaning we would still need a brain to run the software of consciousness on. Creating human brains through the reproductive process is much easier than trying to engineer and simulate one.

Neural nets are just linear algebra. Linear algebra, yet another attempt to combine algebra and geometry, found use as an efficient store of coordinates. The idea of a neural ‘net’ is loose and strictly metaphorical: unlike the actual brain, which can be described as a hypergraph of neurons, neural nets comprise no nets. The ‘net’ is just a value in a linear-algebra table. Even though neural nets took their inspiration from how neurons in the brain operate, it makes little difference how inspired artificial-intelligence researchers are by the human brain: computers encode information at the binary level, and nature operates at the quantum level.
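To see the point, here is a two-layer “net” reduced to its actual content: two weight tables and a coordinate-wise max. The shapes and values are arbitrary, chosen only for illustration:

```python
import numpy as np

# A two-layer "neural net" written out as what it is: two matrix
# multiplications with a nonlinearity between them. The "net" is nothing
# but numbers in tables; no graph of neurons appears anywhere.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))       # layer-1 weight table (4x3 values)
W2 = rng.normal(size=(1, 4))       # layer-2 weight table (1x4 values)

def forward(x: np.ndarray) -> float:
    hidden = np.maximum(0.0, W1 @ x)   # ReLU: coordinate-wise max with zero
    return (W2 @ hidden).item()

print(forward(np.array([1.0, -0.5, 2.0])))
# Training only nudges the table entries; the structure stays linear algebra.
```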

12.7 Praxeology:

To build AI that truly mirrors biology, we must replicate its functional architecture rather than slavishly copy its anatomy. In living creatures, a body’s peripheral nervous system gathers sensory data—touch, sight, hearing—through specialized clusters of neurons; the brain then refines those signals via higher‐order cortical regions, each devoted to processing specific modalities (vision, audition, language, memory). In an AI analogue, robotics and sensors play the role of the nervous system, streaming raw data into dedicated compute clusters—convolutional nets for vision, tactile encoders for touch, audio processors for sound—while LLMs and symbolic modules act like cortical language centers, and external databases stand in for the hippocampus’s memory functions. To achieve truly self‐directed intelligence, we must scale these linguistic “brain” components in concert with the rest of the stack.

Moreover, just as biological evolution is a reinforcement‐learning loop—organisms that persist propagate their genes, while those that perish drop out—so too should AI evolution employ RL: agents reinforce actions that succeed and learn from failures, whereas natural selection records only the winners. But even this dual‐loop design leaves out a crucial human ingredient: self‐referencing action. Unlike a mere electromechanical device that passively transforms inputs into outputs, a genuinely intelligent machine must possess its own internal “unease” — not a human feeling, but an informational homeostasis pathway that continually monitors its state and drives corrective action when deviations occur. By implementing such homeostatic loops, machines can generate and pursue goals without external prompts, moving us from modeling human action to formalizing praxeology in general—the study of purposeful behavior in any agent, biological or synthetic.
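A minimal sketch of such a homeostatic “unease” loop; the set-point, gain, and scalar state below are invented for illustration, not drawn from any real system:

```python
# Deviation of an internal variable from its set-point is the machine's
# "unease", and it drives corrective action with no external prompt.
SET_POINT = 1.0
GAIN = 0.5            # how aggressively unease is corrected

state = 3.2           # start far from homeostasis
for step in range(8):
    unease = state - SET_POINT     # the self-referencing signal
    state += -GAIN * unease        # act to reduce the deviation
    print(f"step {step}: state={state:.3f}, unease was {unease:+.3f}")
# Unease decays geometrically to zero: sense -> compare -> act -> sense.
```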

Fundamentally, the current bottlenecks in the field of robotics are not the robots, but the AI that underlies them. This is because navigating space itself is a fundamental form of intelligence. The same technology which allows self-driving cars to navigate the world will do the same for robotics in general, as self-driving cars are just car-shaped robots.

12.8 The paradox of Hans Moravec

Moravec’s Paradox is the counterintuitive observation that computers excel at abstract symbol manipulation (like logic or chess) yet struggle with the sensory-motor tasks (like seeing, walking, or grasping) that humans perform effortlessly (Moravec, H. Mind Children (1988), pp. 15–17). Said another way, computers (symbolic machines, that never move) are better at manipulating symbols than they are at navigating space.

The End|Chapter 13

There are vastly more micro-states of disorder than of order; in simpler words, there are many more ways to die than to live.¹ That asymmetry makes the emergence—and persistence—of life extraordinary.

A cross-sectional intuition says, “The universe is huge; alien life must be everywhere.” A time-series view overturns that optimism: for two civilisations to meet they must co-exist in the same cosmic window, survive their own bottlenecks, and broadcast detectable signals.² The joint probability is vanishingly small.

13.1 Cascading-Complexity Catastrophe — The Death of Gaia

Religion’s primal question is whether divinity itself is subject to chaos, defined here as non-recurring phenomena. The early Greek answer was yes: Chaos precedes the gods. Only when Earth (Gaia) and Sky (Uranus) produce Time (Chronos) does repeatable structure emerge. Without temporal regularity no observer could register anything at all.

13.2 From gaia to Gaia: How Ecosystems Compute

We often treat trees and soil as separate, yet mycorrhizal networks show that soil, fungi, and plants form a single distributed computer.³

Plants that appear autotrophic sometimes outsource energy via these fungal “internet cables.” Decisions on nutrient routing, disease avoidance, or rainfall response are computations executed by the network.

Thus “Gaia” is not poetic licence; it is an information system whose nodes span bacteria, fungi, flora, and fauna.

13.3 Life as an Order of Production

Economists speak of “orders of production”; biology enacts them. Coastal bacteria colonise rock → bacteria vivify soil → mushrooms coordinate that soil → plants extend Earth toward sky → insects mediate between plant guilds → animals ride the insect–plant web.

Key point: the mycelial web has survived all five mass-extinction events (Stamets 2019, p. 142); its logic underlies every later trophic layer.
The same logic of nested coordination turns fragile when a single layer—pollinators—falters.

13.4 A Bug’s Life—and Death

Bee collapse is not merely an ecological worry; it is information decay. FAO data show global bee colonies down ≈ 25 % since 1990 (FAO 2022). Pollination is the gossip protocol of angiosperms. Remove the messenger and plant genomes become isolated monologues, shrinking adaptive space.

13.5 Modelling Complexity: Where OLS Fails

Tree clusters illustrate local correlation: the presence of one tree raises the probability of another nearby. Ordinary least-squares can fit the spatial coordinates, but using OLS to predict ecosystem behaviour would be folly. Complex systems exhibit feedback, non-linearity, and path dependence that linear regressions ignore.⁴
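A sketch of that folly: fit ordinary least squares to the logistic map, a stock example of non-linear feedback (the parameters and series length are illustrative):

```python
import numpy as np

def logistic_map(x0: float, n: int, r: float = 3.9) -> np.ndarray:
    """A textbook non-linear feedback system."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

series = logistic_map(0.2, 1000)
x, y = series[:-1], series[1:]            # task: predict the next state

slope, intercept = np.polyfit(x, y, 1)    # ordinary least squares
pred = slope * x + intercept
r2 = 1 - np.var(y - pred) / np.var(y)
print(f"OLS R^2 on a fully deterministic system: {r2:.3f}")
# R^2 lands near zero even though y is an exact function of x: the straight
# line cannot see the parabola, let alone the path dependence that
# compounds over iterations.
```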

13.6 Compression and Its Limits

Compression seeks a shorter programme that reproduces the data. Planetary motion compresses nicely into Newton’s law; ecosystems seldom do. Stephen Wolfram calls such irreducibility computational irreducibility.⁵ The same compression failure is visible offshore, where phytoplankton–virus loops drive half of Earth’s O₂ yet resist deterministic modelling (Suttle 2017), and inside us, where the human gut microbiome shows individual-level chaos despite genomic catalogues 100× larger than ours (Lloyd-Price 2022).
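One way to see the contrast, using off-the-shelf zlib as a crude stand-in for “finding a shorter programme” (the map parameters and quantisation are illustrative assumptions):

```python
import zlib

def logistic_bytes(r: float, n: int = 20000, x0: float = 0.3) -> bytes:
    """Quantise a logistic-map orbit to one byte per step."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 255))
    return bytes(out)

periodic = logistic_bytes(r=3.2)    # settles into a stable 2-cycle
chaotic = logistic_bytes(r=3.99)    # fully chaotic regime

for name, data in (("periodic", periodic), ("chaotic ", chaotic)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compresses to {ratio:6.1%} of original size")
# The repeating orbit compresses like Newton's planets; the chaotic one
# resists compression: computational irreducibility in a single ratio.
```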

13.7 Implications

Science must accept pockets of irreducibility; not every phenomenon compresses.

Economics should treat biological allocation as kin; life itself is an optimisation under scarcity.

Technology must respect fungal, plant, and insect information channels rather than overwrite them with naïve abstractions.

In short, our current symbolic systems—however sophisticated—encode only a shadow of Earth’s living computation. Pretending otherwise invites the very cascading complexity catastrophe we fear.