Simulacra, Simulation and Symbolic Systems | Chapter 3 | Image Symbol Code

3.1 The variable image revolution — Externalised imagination

Away from Eden, we enter the simulacra and simulation. It is important to understand that humans’ primary mode of reasoning is through images. As previously discussed, the brain creates a representation of the outside world in the inside world. We then evolved language as a meta-commentary on these images and the relationships between them. Language therefore allows narratives not only to serve as a supplementary information medium but also to extract further information from the images they describe.


In Chapter 1 we framed that inside-brain “movie” as a compressed model; here we watch it spill outward into shared media, turning private sketches into exchangeable files.

3.2 The Matrix

Jean Baudrillard’s Simulacra and Simulation is the intellectual inspiration for the Matrix franchise. The Matrix is a distinctive piece of philosophy in its own right, and it departs from the book in many ways. The first Matrix film reflects the versatility of filmmaking as an information medium with its Hong Kong-style kung fu. The later films were poor imitations of the first.

Baudrillard postulates that people get their information not from actual reality but from a shared imagination. This shared imagination is externalized through increasingly sophisticated information technologies, complete with increasingly salient variable images. The book opens with a map of reality so accurate that it is mistaken for the territory itself. Anyone who has followed Google Maps down a bad road knows a modern equivalent. It is this confusion between externalized imagination and fundamental reality that constitutes The Matrix, or Simulacra and Simulation.

3.3 The Evolution of variable-image technologies

Both our ability to produce these variable images and their complexity have increased over time. Eventually we learned to automate variable-image creation with the printing press, and, with the telegraph, to send these symbols without having to carry them on foot. The inflection point in variable-image technology occurred in the post-modern era with the introduction of the television. The television can be thought of as a one-way variable-image machine.

Next we widen from broadcast to interaction:

The shift from satellite to cable represented a fundamental change in the cost structure of the content wars: the means of transporting these variable images moved from broadcasting electromagnetic waves via satellite to sending the same information over fiber-optic cables. The personal computer is a two-way variable-image machine. Instead of only receiving variable images, one can also create them. Two-way variable-image technology was mass-commercialized in 1984 with the Apple Macintosh. It gave people the greatest imagination-externalizing machine in human history, overtaking pen and paper. The Macintosh let users manipulate visual representations of the objects inside the computer.

The iPhone represents an explosion in the ubiquity of variable images. It allows users to view and create variable images at an unprecedented rate. Video games can be considered dynamic images in the sense that the images change based on user action.

3.4 The nature of symbolic systems

It is important to understand that symbols are just a special type of image. What sets them apart from other images is that they are mutually connected: they refer to one another, and when combined they form greater representations of the world around them. This combined greater representation, together with the rules that define it, is known as a symbolic system.

Stanford, the only university with a symbolic-systems degree, defines a symbolic system as “the meaningful symbols that represent the world around us”. Unfortunately for them, they have defined a symbol, not a symbolic system. A symbolic system is the set of rules which define the interaction between symbols, and symbols are a subset of images. A better definition of a symbolic system is therefore the set of rules which dictate the behavior of variable images, where variable images are the codification of relevant phenomena. It is the externalization of imagination, given the biological motivation for the fantastical already discussed.
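
To make the distinction concrete, the sketch below models a symbolic system in the minimal sense used here: a set of symbols plus rules governing how they combine into larger representations. The symbol names and rules are hypothetical examples, not drawn from the text.

```python
# A minimal sketch (illustrative; the symbol names and rules are hypothetical):
# a symbolic system modeled as a set of symbols plus rules that govern how
# symbols combine into larger representations.
from typing import Dict, Tuple

# Symbols: labels standing in for images of phenomena.
SYMBOLS = {"cloud", "rain", "river", "flood", "storm"}

# Rules: how ordered pairs of symbols combine into new representations.
RULES: Dict[Tuple[str, str], str] = {
    ("cloud", "rain"): "storm",
    ("rain", "river"): "flood",
}

def combine(a: str, b: str) -> str:
    """Apply the system's rules to two symbols, if a rule exists."""
    if a not in SYMBOLS or b not in SYMBOLS:
        raise ValueError(f"unknown symbol: {a!r}, {b!r}")
    # Fall back to a neutral compound label when no rule applies.
    return RULES.get((a, b), f"({a}+{b})")

print(combine("rain", "river"))  # -> flood
```

The point of the toy is simply that the system lives in the rules, not in the individual symbols.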

3.5 Alphabetical symbols — Language as a symbolic system

The mapping of variable images onto phonetics allows common information to propagate across a geographic area. The rules governing the symbolic system of language are known as grammar. Fundamentally, alphabetical symbols are the most high-dimensional way we have to talk about the universe, in the sense that they take the least amount of information to describe the greatest amount.


Language therefore functions as a lossless compressor—an idea we revisit when Gödel shows that even the best compressor leaks truth.
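
As a rough illustration of that compression claim (my example; zlib stands in for the idea of a compressor rather than anything proposed in the text):

```python
# A rough illustration (my example; zlib is a stand-in compressor).
# A redundant description shrinks once its regularities are captured by a
# shorter code, and the original can be recovered exactly.
import zlib

description = (
    "the river floods, the river recedes, "
    "the river floods, the river recedes, "
    "the river floods, the river recedes"
).encode()

compressed = zlib.compress(description, level=9)
restored = zlib.decompress(compressed)

assert restored == description            # lossless: the original is recoverable
print(len(description), len(compressed))  # original vs compressed byte counts
```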

3.6 Religion as a symbolic system — Jungian archetypes

The gods are the symbolic mapping of different natural phenomena. Religion, fundamentally, is a reflection of the environment. The Mesopotamians, flanked by the violently and unpredictably flooding Tigris and Euphrates, viewed their gods as capricious and vengeful. The Egyptians, with the gently flooding Nile, saw themselves as a favored civilization.

Another term for the word “God” is “the nature of reality,” or what Buddha calls “the law.” This is why, in Buddhism, there is no prayer. One cannot pray to the nature of reality; one obeys the nature of reality. The Greek epic the Odyssey shows what happens when one does not obey the nature of reality. Odysseus, its principal protagonist, is lost at sea after disrespecting Poseidon. The moral of the story is that if one goes against the nature of reality, one must pay the cost.

These gods and their stories represent ergodic habit formation: the habits we use to survive across time, where the best habits are propagated and those who hold improper habits perish. Religion, by this interpretation, is the codification of ergodic habit formation. Said another way, formalized religion is the process of symbolizing the acts that lead to the continuation of the species. When people lament the death of God, they are lamenting the death of the habits that got us this far. It is unclear whether our new habits will take us anywhere.
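
The word “ergodic” is doing real work here, so a toy simulation may help (my illustration; the numbers are arbitrary). A habit that looks attractive on a per-round average can still have a time-average growth rate below one, which is why only some habits persist across generations:

```python
# A toy simulation (my illustration; the payoffs are arbitrary). Habit A has the
# higher per-round average payoff, but its time-average growth rate is below 1,
# so a lineage following it is eventually ruined; habit B persists.
import numpy as np

rng = np.random.default_rng(1)
ROUNDS = 10_000

def time_average_growth(up: float, down: float) -> float:
    """Per-round growth factor along one lifetime of 50/50 up/down outcomes."""
    outcomes = rng.choice([up, down], size=ROUNDS)
    return float(np.exp(np.mean(np.log(outcomes))))

# Habit A: big swings (x1.6 or x0.5); per-round expectation 1.05.
# Habit B: small swings (x1.1 or x0.95); per-round expectation 1.025.
print("habit A time-average growth:", time_average_growth(1.6, 0.5))   # ~0.89 < 1
print("habit B time-average growth:", time_average_growth(1.1, 0.95))  # ~1.02 > 1
```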


The argument dovetails with Chapter 13’s warning: sever enough of those time-tested habits and Gaia herself teeters.

3.7 Depth Psychology as a symbolic system

Before the emergence of the field of Artificial Intelligence, the study of intelligence was primarily conducted by psychology, specifically depth psychology. Although Karl Popper criticized Freud for being unfalsifiable, we use Freud’s terms casually, as if they had always been part of our language. Freud’s theories are falsifiable inasmuch as they are computable: if his theories can be used in the construction of an AI that is intelligent, then Freud is correct.


This offers a bridge to Chapter 12, where neural-net architectures revive Freudian tropes (ego, id) as computational sub-modules.

3.8 Economics as a symbolic system

The fundamental point of the Misesian portion of the Austrian school of economic thought is that the symbolic systems used in economics are inappropriate, because they are inherited from natural science and mathematics. These symbolic systems, when applied to economics, lead to faulty conclusions. Economics, the youngest of all the sciences, requires a different set of symbolic systems.

Mises made this point before the advent of the computer, when information theory had not yet been fully developed, so the language available to him was limited. There were also attempts by Hayek to use the language of cybernetics to describe economic systems, though not human action. Despite this, Mises was making a point about the nature of computation, taking a computationalist view of the universe. This view characterizes human intelligence as a specific instance of intelligence as such. With this perspective, the language of artificial intelligence becomes applicable to economics. Specifically, economics becomes the study of the estimation of a cost function.
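
To make that last sentence concrete, here is a minimal sketch (my illustration, not a formalism from the Austrian literature) in which an actor estimates an unknown cost function from noisy observations of its own output and then acts on the estimate:

```python
# A minimal sketch (my illustration): an actor estimates an unknown cost
# function from noisy observations, then chooses the output that minimizes
# the estimated cost.
import numpy as np

rng = np.random.default_rng(0)

def true_cost(q: np.ndarray) -> np.ndarray:
    """The 'real' cost of producing quantity q, never observed directly."""
    return 2.0 * q**2 - 3.0 * q + 5.0

# Noisy experience: observed costs at various past output levels.
q_obs = rng.uniform(0.0, 4.0, size=200)
c_obs = true_cost(q_obs) + rng.normal(scale=0.5, size=q_obs.shape)

# Estimate cost ~ a*q^2 + b*q + c by least squares: estimating the cost function.
design = np.column_stack([q_obs**2, q_obs, np.ones_like(q_obs)])
a, b, c = np.linalg.lstsq(design, c_obs, rcond=None)[0]

# Act on the estimate: the cost-minimizing output solves 2*a*q + b = 0.
q_star = -b / (2.0 * a)
print(f"estimated cost: {a:.2f} q^2 + {b:.2f} q + {c:.2f}; chosen output q = {q_star:.2f}")
```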

This contrasts starkly with neoclassical economics, which is devoid of any notion of computation. The closest concept to it comes from the notion of creative destruction, theorized by Joseph Schumpeter, who is often mischaracterized as Austrian but is squarely Walrasian. Schumpeter acknowledged that it’s the entrepreneur who takes the economy from one state of equilibrium to another.


Chapter 10 will show how price signals act as training data, and how central-bank “label noise” can crash the learner.

3.9 The Limits of the symbolic system of mathematics — Gödel’s incompleteness theorem

A mathematical object is an object which can be formally defined, and about which statements can be proved, within the axiomatic framework of mathematics. The fundamental rules which underpin the symbolic system of mathematics, that is, the interaction between mathematical symbols, are the laws of arithmetic and geometry.

3.10 Gödel encoding

The reason Gödel chose prime numbers as the basis of his encoding scheme comes down to their uniqueness on the number line. This uniqueness allows each axiom to have its own numeric mapping, as prime numbers cannot be divided by any number other than themselves and one. After encoding a set of axioms in prime numbers, more complex axioms can be formed by using the laws of arithmetic to combine them. These combined axioms need not be unique (a limitation of the encoding). The validity of these combined axioms can be evaluated using the laws of arithmetic. Gödel’s encoding should be evaluated alongside the algebra of Boole as an attempt at arithmetising logic.
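
A minimal sketch of the idea (my illustration, with a hypothetical symbol table rather than Gödel’s original one): each symbol gets a code number, and a string of symbols is packed into a single integer as a product of prime powers, from which the string can be recovered by factoring.

```python
# A minimal sketch (my illustration) of prime-based Gödel-style encoding.
# The symbol table below is hypothetical, not Gödel's original assignment.

SYMBOL_CODES = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}

def primes(n: int) -> list[int]:
    """Return the first n prime numbers by trial division."""
    found: list[int] = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula: str) -> int:
    """Encode a formula: the i-th symbol with code c contributes p_i ** c."""
    number = 1
    for p, symbol in zip(primes(len(formula)), formula):
        number *= p ** SYMBOL_CODES[symbol]
    return number

def decode(number: int, length: int) -> str:
    """Recover the formula by reading off the exponent of each prime in turn."""
    inverse = {v: k for k, v in SYMBOL_CODES.items()}
    symbols = []
    for p in primes(length):
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        symbols.append(inverse[exponent])
    return "".join(symbols)

code = godel_number("S0=S0")   # the formula "S0 = S0", i.e. 1 = 1
print(code, decode(code, 5))   # decoding by factoring reproduces the string
```

Because every integer has a unique prime factorisation, the single number carries the whole string, which is what lets statements about formulas be rewritten as statements about numbers.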

3.11 A mechanized mathematics

Roger Penrose made the point that Gödel defines the limits of a formal system. Computers, being based on formal systems, therefore inherit the limits of mathematics. However, this is not quite the full picture. Most people view computers as an evolution in information technologies, rather than a mechanization of our existing information technologies. Computable mathematics, in other words, is really a mechanized mathematics.

Therefore, one implication of Gödel’s theorem is that there are statements we can see are true but cannot prove, placing an inherent limit on what a computer can know. That ceiling on formal symbol power loops back to Baudrillard’s worry: when the map diverges from the territory, no algorithm we possess may be able to fix it.

3.12 The Anti-Symbolisation Principle

The information underpinning reality cannot be fully captured—let alone executed—by any finite symbolic system.

Formally: no countable set of symbols, axioms, or algorithms can map bijectively onto an uncountable continuum of micro-physical states. Trying to label every real number in (0, 1) with integers always leaves most numbers untagged.

A formal theory—however ingenious—can be written down as a finite alphabet plus a (countable) list of well-formed formulas and inference rules. Because that catalogue is countable, it can label at most countably many distinct “addresses” in the world. But the phase-space of a physical continuum (every point on a line segment, every micro-state of a field) is uncountable, matching the cardinality of the real numbers. You cannot pair a countable set with an uncountable one bijectively; there will always be vastly more real states than there are symbolic names to pin on them. Hence any formal language, by sheer arithmetic, must leave most of reality unmapped—precisely the content of the Anti-Symbolisation Principle.
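
The counting argument can be stated compactly (standard cardinality notation; the labels Σ and L below are mine, not the text’s):

```latex
\begin{align*}
  &\text{Let } \Sigma \text{ be a finite alphabet; the set of finite strings } \Sigma^{*}
    \text{ is countable: } |\Sigma^{*}| = \aleph_{0}. \\
  &\text{Any formal language } L \subseteq \Sigma^{*} \text{ therefore satisfies }
    |L| \le \aleph_{0}. \\
  &\text{But the continuum satisfies } |(0,1)| = 2^{\aleph_{0}} > \aleph_{0}
    \quad \text{(Cantor's diagonal argument).} \\
  &\text{Hence no map } f : L \to (0,1) \text{ is surjective: most states carry no symbolic name.}
\end{align*}
```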

Geometry itself testifies to the Anti-Symbolisation Principle. Hilbert’s 1899 axioms tried to quarantine Euclid inside first-order logic, only for Gödel to reveal (1931) that such a system cannot prove its own consistency from within. Hermann Weyl dubbed the entire reduction of spatial intuition to number syntax an act of violence. A century on we juggle rival formalisms—ZFC sets, synthetic differential geometry, Homotopy Type Theory—none mutually inter-translatable without loss. Space thus slips through every countable mesh we weave; the continuum’s “uncountable body” forever exceeds its symbolic clothing. 

Mathematicians tried to bridge the gap by importing actual infinities into their formal language: Cantor’s set theory simply names the continuum as a single set R whose cardinality exceeds that of any listable collection.

This manoeuvre lets proofs talk about uncountably many points, but it does not let the formal system enumerate or decide every fact about them, because every axiom-scheme, proof, and algorithm we can write is still drawn from a countable alphabet.

Gödel shows that provability is a subset of truth; geometry shows that even the oldest, most concrete branch of mathematics lies partly outside that subset. Together they certify the Anti-Symbolisation Principle: the world’s full informational content spills beyond any formal net we cast. Gödel’s incompleteness theorems make the limitation explicit: once a theory is strong enough to encode arithmetic—and hence the real line—it must leave some true continuum statements undecidable. So introducing infinite sets recognises the size of reality, yet the Anti-Symbolisation gap remains: a countable syntax can reference the continuum, never exhaust it.
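
For reference, the first incompleteness theorem can be written compactly (standard notation; T and G_T are my labels, not the text’s):

```latex
\[
  \text{If } T \supseteq \mathsf{PA} \text{ is consistent and recursively axiomatised, then }
  \exists\, G_T :\; T \nvdash G_T,\quad T \nvdash \lnot G_T,\quad \mathbb{N} \models G_T .
\]
\[
  \text{Hence } \{\varphi : T \vdash \varphi\} \subsetneq \{\varphi : \mathbb{N} \models \varphi\}
  \quad\text{(provability is a proper subset of truth).}
\]
```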

Thus geometry, though vastly richer than first-order syntax, is still an intermediate abstraction. It inherits symbolic gaps from formal logic and adds its own by idealising matter, energy, and measurement. The Anti-Symbolisation Principle expands: even the geometries we devise are partial metaphors—useful lenses, not perfect mirrors—of a world that forever outruns both our equations and our instruments.