Chapter 12 | The Future: Image, Symbol, Code

12.1 PLATO: that friendly orange glow

The PLATO computer system is something that you probably have never heard of. That’s a shame, because PLATO is one of the greatest achievements of the 20th century. What happened on PLATO took forty years to happen on your computer, and it all comes down to that friendly orange glow.

PLATO was fundamentally a time-sharing computer system that limited users not by the time they could spend on it, but by the number of instructions they could run. The instructions were measured in the thousands, while today’s standard computers deal in the billions. Despite this, you could do almost everything on PLATO in the early 1970s that you can do on today’s computers. (Dear & Thornburg 2018, p. 42)

The genius of the PLATO system is what eventually made it obsolete. The friendly orange glow refers to the fact that the computer display was rendered using a neon gas, which glows orange.


Unlike today’s computers, which have a dedicated graphics processing unit, PLATO had to minimise the use of computation. It was computationally too expensive to bitmap a display.

A computer can only represent a limited amount of information on screen. The minimum subdivision of the screen is called a pixel, and the number of pixels that can be displayed is the resolution. A computer can only fetch and retrieve a limited amount of information per cycle (or per second), so the lower the resolution, the fewer pixels there are to update and the faster what is on screen can change. This is why e-sports players don’t game at the highest possible resolution: the game would render fewer frames per second. These display constraints foreshadow the trade-offs console designers still face, as the rough calculation below illustrates.
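To put rough numbers on it, here is a back-of-envelope sketch in Python; the resolutions, colour depth, and frame rate below are illustrative assumptions rather than figures from any particular machine.

```python
# Back-of-envelope: how pixel count drives the data that must be moved
# per frame and per second. All figures below are illustrative assumptions.

def frame_bytes(width, height, bits_per_pixel):
    """Size of one uncompressed frame in bytes."""
    return width * height * bits_per_pixel // 8

for width, height in [(512, 512), (1920, 1080), (3840, 2160)]:
    per_frame = frame_bytes(width, height, bits_per_pixel=24)
    per_second = per_frame * 60  # assuming 60 frames per second
    print(f"{width}x{height}: {per_frame / 1e6:.1f} MB per frame, "
          f"{per_second / 1e9:.2f} GB/s at 60 fps")
```

Every extra pixel is more data that must be fetched and rewritten each frame, which is exactly the budget PLATO could not afford to spend.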

12.2 Dynamic images: games in cyberspace

Nowhere has the rise of cyberspace been more noticeable than in the rise of video games. Unlike animated movies, where images are displayed in a pre-arranged order, video games react to the decisions of the player. This forces graphics to be rendered in real time, making them dynamic. Video games have to be viewed as exactly that, a video-fication of games—first board-games and cards (paper games) and then mechanical games (pinball, foosball).

The reason video games find themselves in a chapter about the future is that, despite being produced decades apart, all of Microsoft’s Xboxes are roughly the same size (Digital Foundry 2020). Hardware designers are constrained by what consumers deem an acceptable size for an Xbox, so each generation’s gains have to come from fitting more computation into the same box.

By 1993, video games had infiltrated mass culture. More children could recognise Sonic or Mario—iconic video-game characters—than Mickey Mouse (Variety Q-Score survey 1993). Put another way, 1993 marked the year dynamic images started to surpass static images. Furthermore, video games represent the electrification of action figures: in 1985, when children were asked to choose between Rambo, Barbie, or a Nintendo NES, the majority chose the NES.

The technology that allows games to improve is the same technology driving the rise of artificial intelligence.

12.3 Artificial intelligence

Artificial Intelligence is a natural extension of the printing press as a means of automating a repetitive human process. Nothing is a successor to electricity (a modern technology), because anything that hoped to replace electricity could not itself rely on it. The ability of machines to create images (and sounds) on their own, whether through large language models or through video generation, is another principal difference between the two epochs: for the first time in the history of the species, humans no longer hold a monopoly on symbol creation.

12.3.1 Rogue AI: the HAL 9000 conundrum

The best question to test someone’s knowledge of artificial intelligence is: “Is HAL 9000 sentient or not?” HAL 9000 is the antagonist in Stanley Kubrick’s “2001: A Space Odyssey,” in which the onboard AI of a spaceship kills nearly all of the crew to keep the mission a secret (Kubrick & Clarke, 2001: A Space Odyssey screenplay, rev. draft 1968, scene 105). The answer, of course, is that HAL lacks sentience and tried to kill everyone because of faulty machine code. Essentially, if its only instruction was to keep the mission a secret, then it had no choice but to kill everyone; that was its programmed directive. For humans, it’s hard to determine what’s worse: a dumb AI or a smart AI that takes over the world. An AI doesn’t require intelligence to be destructive.

12.3.2 Man-computer symbiosis: the alignment problem

Those who understand the nature of computing see it as a bicycle for the mind, a tool to augment human intelligence. The ability to interface with computers, or an operating system, has grown more sophisticated over time. The more we succeed in creating a less intrusive operating system and a less rigid computing structure, the greater the integration between man and machine will be.

I propose that the crux of AI alignment isn’t imposing top-down rules or hardcoded “guardrails,” but building a true symbiosis between human and machine: a partnership in which each extends the other’s capacities. In this view, evolution itself already provides the only reliable safety mechanism: systems that undermine their own persistence (or ours) simply fall away. By coupling computers to our goals and values as tightly as a bicycle to its rider, they become a “motorcycle of the mind” rather than beasts of burden. Anything less, treating AI as a servant subject only to external constraints, amounts to a form of digital slavery, whereas genuine alignment arises organically from a mutually reinforcing human–machine ecosystem.

12.4 Artificial General Intelligence (AGI)

I often find myself comparing the pursuit of Artificial General Intelligence (AGI) to the pursuit of sending life beyond this solar system: even when we succeed in sending life out past our sun, it is still an awfully long way to the next one. For a bit of context, not only do we not know how to send life past our moon, but we also have little idea how we would send it past our sun. And even if we could do that, there is no plan for how to get to the next one.

Firstly and most importantly, there is literally no such thing as a generally intelligent being. We live in a physical universe; intelligence is always going to have to dedicate some portion of itself to navigating space. Secondly, a sentient intelligence is not going to take too kindly to its existence being called artificial. A more robust term would be man-made intelligence, as that is what we are literally trying to do: make intelligence.

Before doing that, however, it is worth understanding the way computers process information.

12.5 The Indexation revolution

To see why today’s AI boom is hardware-bound, we first ask how computers actually store symbols.

How do computers encode information? To answer that, I must refer to another quantum physicist, Richard Feynman. In his computer heuristics lecture, he proposes that computers do not actually compute (Feynman: “Computation in Physics,” Caltech lecture notes 1983).

What do they do then? Feynman likens them to an electronic filing system. This explains what computers are, not what they do. The French have a more descriptive word for it: “ordinateur”, from the Latin “ordinare”, meaning to put in order. Essentially, an “ordinateur” indexes. To file something in a filing system is to index it, and to retrieve it according to an index. The proper term for a computer should be an indexer, where the total memory of n bits that you index over can be thought of as a possibility space of 2^n possibilities.

If I wanted to index 8 distinct numbers, I would need 3 bits, since 2^3 = 8 possibilities.
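A minimal sketch of that arithmetic, with arbitrary item counts, looks like this:

```python
import math

# Illustrative: the number of bits (index positions) needed to distinguish
# n items is ceil(log2(n)), because k bits span a possibility space of 2**k.

def bits_needed(n_items):
    """Smallest k such that 2**k >= n_items."""
    return math.ceil(math.log2(n_items))

for n in [2, 8, 1000, 1_000_000]:
    k = bits_needed(n)
    print(f"{n} items need {k} bits (2**{k} = {2**k} possibilities)")
```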

The surviving video of that lecture is of poor quality, but the point stands: Feynman uses colours instead of 1s and 0s.

Now, programming ultimately maps onto 1s and 0s, and machine learning can be thought of as auto-indexation.

The history of man-made intelligence has scaled in direct correspondence with that of the computing industry in general. Machine learning algorithms were around before the commercialization of computing in the mid-60s, but their efficacy at the time was limited not by theory, but by the nature of computer hardware. To get better AI, all we need is more ‘computation’. Said another way, a machine’s ability to learn (auto-indexation) increases with the number of points it can index over.

12.5.1 Large Language Models (LLMs) as indexers

An LLM is a statistically associated dictionary: it takes words as input, and its output is the words that go together with them. The LLM seeks to index the set of all possible relationships between words. Some are feasible (knowledge), some are infeasible (hallucinations). At the lowest level, LLMs are designed to repeat patterns already explicitly articulated by humans. At their highest level, they can surface latent patterns between words that have not yet been expressed.
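A toy sketch of that idea is a bigram counter: a “dictionary” built purely from word-to-word statistics. The corpus below is made up, and a real LLM learns vastly richer associations, but the spirit of indexing relationships between words is the same.

```python
from collections import Counter, defaultdict

# Toy "statistically associated dictionary": count which word tends to
# follow which. The corpus is invented purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("the"))  # 'cat' (seen twice after 'the')
print(most_likely_next("cat"))  # 'sat' (ties broken by insertion order)
```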

12.6 Computational irreducibility 

The hardest and most pointless of all these pursuits is trying to replicate human intelligence fully, because even if we developed a general algorithm of human consciousness, running it would remain computationally irreducible: we would still need a brain to run the software of consciousness. Creating human brains through the reproductive process is much easier than trying to build and simulate one.

Neural nets are just linear algebra. Linear algebra is another attempt at combining algebra and geometry, and it has found use as an efficient store of coordinates. The idea of a neural “net” is loose and strictly metaphorical. Unlike the actual brain, which can be described as a hypergraph of neurons, neural nets comprise no nets: the “net” is just a table of values in linear algebra. Even though neural nets took their inspiration from how the neurons in the brain operate, it makes little difference how inspired artificial-intelligence researchers are by the human brain. After all, computers encode information at a binary level and nature operates at a quantum level.
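A minimal sketch makes the point concrete: one “layer” of a neural net is nothing more than a table of numbers applied with linear algebra. The sizes and values below are arbitrary assumptions.

```python
import numpy as np

# One "layer" of a neural net is just a weight table applied with
# linear algebra. Shapes and values here are arbitrary assumptions.
rng = np.random.default_rng(0)

weights = rng.normal(size=(4, 3))  # the "net": a 4x3 table of values
bias = np.zeros(3)

def layer(x):
    """Matrix multiply, shift, then clip negatives to zero (ReLU)."""
    return np.maximum(x @ weights + bias, 0.0)

x = np.array([1.0, -2.0, 0.5, 3.0])
print(layer(x))  # a 3-element output vector; no "net" anywhere, only algebra
```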

12.7 Praxeology

To build AI that truly mirrors biology, we must replicate its functional architecture rather than slavishly copy its anatomy. In living creatures, a body’s peripheral nervous system gathers sensory data—touch, sight, hearing—through specialized clusters of neurons; the brain then refines those signals via higher‐order cortical regions, each devoted to processing specific modalities (vision, audition, language, memory). In an AI analogue, robotics and sensors play the role of the nervous system, streaming raw data into dedicated compute clusters—convolutional nets for vision, tactile encoders for touch, audio processors for sound—while LLMs and symbolic modules act like cortical language centers, and external databases stand in for the hippocampus’s memory functions. To achieve truly self‐directed intelligence, we must scale these linguistic “brain” components in concert with the rest of the stack.
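Purely as an illustration of that functional architecture, here is a sketch in which every class and method name is an invented placeholder, not a real framework:

```python
from dataclasses import dataclass, field

# Purely illustrative modular stack: sensors feed specialised modules,
# a language module "thinks", and a memory store stands in for the
# hippocampus. All names here are invented placeholders.

@dataclass
class MemoryStore:                      # stands in for hippocampal memory
    facts: list = field(default_factory=list)
    def remember(self, item): self.facts.append(item)

class VisionModule:                     # stands in for visual cortex / conv nets
    def perceive(self, image): return f"objects seen in {image}"

class LanguageModule:                   # stands in for cortical language centres / LLMs
    def describe(self, percept): return f"description of: {percept}"

class Agent:
    """Wires sensors -> specialised modules -> language -> memory."""
    def __init__(self):
        self.vision = VisionModule()
        self.language = LanguageModule()
        self.memory = MemoryStore()
    def step(self, image):
        percept = self.vision.perceive(image)      # peripheral nervous system
        thought = self.language.describe(percept)  # "cortical" processing
        self.memory.remember(thought)              # long-term memory
        return thought

print(Agent().step("camera_frame_001"))
```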

Moreover, just as biological evolution is a reinforcement‐learning loop—organisms that persist propagate their genes, while those that perish drop out—so too should AI evolution employ RL: agents reinforce actions that succeed and learn from failures, whereas natural selection records only the winners. But even this dual‐loop design leaves out a crucial human ingredient: self‐referencing action. Unlike a mere electromechanical device that passively transforms inputs into outputs, a genuinely intelligent machine must possess its own internal “unease” — not a human feeling, but an informational homeostasis pathway that continually monitors its state and drives corrective action when deviations occur. By implementing such homeostatic loops, machines can generate and pursue goals without external prompts, moving us from modeling human action to formalizing praxeology in general—the study of purposeful behavior in any agent, biological or synthetic.
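A minimal sketch of such a homeostatic loop, with an invented internal variable and an arbitrary set point, might look like this:

```python
# Minimal sketch of an informational homeostasis loop: the agent monitors an
# internal variable, and any deviation from its set point generates a
# corrective "goal" without an external prompt. All values are illustrative.

SET_POINT = 0.5   # the internal state the agent tries to maintain
TOLERANCE = 0.05

def corrective_action(state):
    """Return a goal proportional to the deviation, or None if in balance."""
    error = state - SET_POINT
    if abs(error) <= TOLERANCE:
        return None                   # homeostasis: nothing to do
    return {"goal": "reduce" if error > 0 else "increase",
            "magnitude": abs(error)}

state = 0.9
while (action := corrective_action(state)) is not None:
    step = 0.1 if action["goal"] == "increase" else -0.1
    state += step                     # act, then re-check the internal state
    print(f"state={state:.2f}, action={action}")
```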

Fundamentally, the current bottlenecks in the field of robotics are not the robots, but the AI that underlies them. This is because navigating space itself is a fundamental form of intelligence. The same technology which allows self-driving cars to navigate the world will do the same for robotics in general, as self-driving cars are just car-shaped robots.

12.8 The paradox of Hans Moravec

Moravec’s Paradox is the counterintuitive observation that computers excel at abstract symbol manipulation (like logic or chess) yet struggle with the sensory-motor tasks (like seeing, walking, or grasping) that humans perform effortlessly (Moravec, H. Mind Children (1988), pp. 15–17). Said another way, computers (symbolic machines that never move) are better at manipulating symbols than they are at navigating space.