On the Self Organisation of Information Systems | Chapter 1 | The Death of the Gods and the Revolt Against Reason

I often find myself comparing the pursuit of Artificial General Intelligence (AGI) to the pursuit of sending life out of this solar system: even when we succeed in sending life past our sun, it is still an awfully long way to the next one. For a bit of context, not only do we not know how to send life past our moon, we have little idea how we would send it past our sun, and if we could do that, we have no idea how to get to the next star.

Firstly, and most importantly, there is literally no such thing as a generally intelligent being. We live in a physical universe, so intelligence is always going to have to dedicate some portion of itself to navigating space. Secondly, a sentient intelligence is not going to take too kindly to its existence being called artificial. A more robust term would be man-made intelligence, as that is what we are literally trying to do: make intelligence.

It is also worth defining the goals of artificial intelligence. What are we trying to do here? Are we trying to make something smarter than humans? Are we trying to replicate human-level intelligence? Or is AI just an extension of the bicycle of the mind that is the computer?

Firstly, the hardest and most pointless of all these pursuits is trying to replicate human intelligence fully. Because computers would then try to signal value by decorating themselves with shiny rocks, sacrifice their kin if the harvest has been lackluster, and make mistakes with basic arithmetic. A much better idea would be the development of an AI which complements human intelligence. Humans have terrible memories, but when was the last time a computer forgot something?

Neural nets took their inspiration from how the neurons in the brain operate, but it makes little difference how inspired artificial intelligence researchers are by the human brain, because computers encode information on a binary level and nature on a quantum one. Before you start talking to me about quantum computers, it’s worth understanding Roger Penrose’s point on the matter. Schrödinger was trying to show us a problem with his equation by showing us that it leads to a paradox: how can the cat be both dead and alive? There is a problem here. My interpretation (I am not a physicist, I am an information theorist) is that Schrödinger’s equation returns the possibility space.

Now how is it that computers encode information?

To that, I must make reference to another quantum physicist, (dork Jesus) Richard Feynman, who in his computer heuristics lecture tries to tell us perhaps the most profound point of the computer revolution: computers do not actually compute.

What do they do? Feynman likens them to an electronic filing system.

Which explains what computers are, not what they do. For that we must turn to the French, who have a much better word for it: ordinateur, from the Latin ordinat, which means order. Essentially, what an ordinateur does is index. To file in a filing system is to index something and to retrieve something according to an index. The proper term for a computer should be an indexer, where the total memory you index over can be thought of as a possibility space of 2^n possibilities.

If I wanted to store 8 numbers, I would need 3 bits, giving 2^3 = 8 different possibilities.
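This indexing picture is easy to demonstrate (the numbers below are my own toy example):

```python
# With n bits, an indexer can address 2**n distinct slots:
# 3 bits are enough to index 8 numbers.
numbers = [5, 12, 7, 3, 9, 1, 14, 8]

n_bits = 3
assert 2 ** n_bits == len(numbers)

# Filing: each value lives at a binary index.
for i, value in enumerate(numbers):
    print(f"index {i:03b} -> {value}")

# Retrieval is just a lookup by index.
assert numbers[0b101] == 1
```

Storing more numbers only ever costs one extra bit per doubling of the possibility space.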


Now, the quality of the video is poor, but Feynman is using colours instead of 1s and 0s.

This may seem trivial to those with a little knowledge of machine learning. Programming can be thought of as what it ultimately is: a mapping onto 1s and 0s. Machine learning can then be thought of as auto-indexation.

The history of man-made intelligence has scaled in direct correspondence with that of the computing industry in general. Machine learning algorithms were around before the commercialisation of computing in the mid 60s, but their efficacy, at the time, was limited not by theory but by the nature of computer hardware.

However, if you listen to the supposed ‘experts’ in the field, they will tell you that all we need is more compute. This is a statement which conveys no information: what they are essentially saying is that a computer’s ability to auto-index increases with the number of points to index over. The current state of machine learning can be summarised as follows: machines are better at auto-indexing when they have a human-made index to map onto (supervised learning) than when they are simply given data to auto-index over (unsupervised learning).
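The distinction can be sketched in a few lines (the data, labels, and split rules below are a toy example of mine):

```python
# Toy 1-D data: brightness readings of the sky (invented numbers).
data = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]

# Supervised: a human supplies the index (labels) to map onto.
labels = ["night", "night", "night", "day", "day", "day"]
night_mean = sum(x for x, l in zip(data, labels) if l == "night") / 3
day_mean = sum(x for x, l in zip(data, labels) if l == "day") / 3
threshold = (night_mean + day_mean) / 2  # learned from the human-made index

def predict(x):
    return "day" if x > threshold else "night"

assert predict(0.95) == "day"

# Unsupervised: no labels; the machine must invent its own index.
# Here, a crude two-cluster split by distance to the extremes.
lo, hi = min(data), max(data)
clusters = [0 if abs(x - lo) < abs(x - hi) else 1 for x in data]
assert clusters == [0, 0, 0, 1, 1, 1]
```

The supervised half maps onto an index a human already made; the unsupervised half has to conjure the index (cluster 0 vs cluster 1) out of the raw points.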

Additionally, what is seldom understood is that the current field of AI has yet to progress past the point that mathematics reached in medieval Europe, with most of probability theory being based on mastering the games of chance. Which is exactly my point: these are but mere games. In reinforcement learning, what the computer essentially does is try a series of possible strategies. Depending on whether these strategies prove successful or not, the computer classifies them as having worked (1), not worked (0), or ended in a draw (1/2). The power of reinforcement learning is dependent on the ability to accurately reward the agent in question. In the case of actual warfare (not chess), one can lose every single battle and still win the war. Therefore, when trying to accurately determine weights in the real world, one encounters a dimensionality problem.
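That scoring scheme can be sketched directly (the game and the strategy names here are hypothetical):

```python
from collections import defaultdict

# Outcomes of trying each strategy, scored as in the text:
# 1 = worked, 0 = did not work, 1/2 = ended in a draw.
trials = [
    ("open_center", 1), ("open_center", 0.5), ("open_center", 1),
    ("open_edge", 0), ("open_edge", 0.5), ("open_edge", 0),
]

scores = defaultdict(list)
for strategy, score in trials:
    scores[strategy].append(score)

# Average reward per strategy; the agent then prefers the higher one.
weights = {s: sum(v) / len(v) for s, v in scores.items()}
best = max(weights, key=weights.get)
print(best)  # prefers open_center (avg 0.83 vs 0.17)
```

This only works because the game hands back a clean 1, 0, or 1/2; real warfare offers no such tidy scoreboard, which is the dimensionality problem in miniature.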

However, it is worth understanding that humans do use reinforcement learning. But machine learning algorithms (save for long short-term memory, LSTM) do not take advantage of the way nature creates intelligence. You are an information system on top of a nervous system. The evolution of life is also the evolution of intelligence as such: at some point, life was not able to see, smell, hear, or access any senses at all. The first information system was therefore not the neo-cortex but the nervous system. Feelings evolved in order to give us a representation of the outside world. I see a tiger, I am then scared, so I then run.

Save for the recent work of the neuroscientist Antonio Damasio, the role of feelings in information processing is completely absent from the modern literature. However, it does have a corollary, surprisingly, in the economics literature: specifically, the work of the Austrian economist Ludwig von Mises, whose theory of human action is predicated on the notion that humans act because they want to satiate felt unease. This unease is generated by the nervous system, and it is up to the information processing system to try to determine it.

I felt hungry, so I tried eating some ice cream. That didn’t quite work because I felt sick. I then went to Chipotle for some steak guac, and now I feel a lot better. Maybe that Ferrari will fill the void in my life. You then get the Ferrari but are just as moribund as ever. But at least you learnt. This process of consciously trying to satiate unease is how society scales through time. This is the point of Damasio’s The Strange Order of Things. However, Mises wrote Human Action in 1949.

In the language of computer science, your nervous system responds in binary. Did this human action satiate my felt unease? If yes, repeat; if not, avoid. The nervous system acts as a compressor of information in order to map onto the high-dimensional nature of reality. Computers lack an interface for reality.

I used the term compression in passing, a term that originated in the computer science literature but belongs squarely in the discipline of evolutionary biology. The goal of compression is to reduce a set of data points by understanding the general behavior of the system. For example, if I have the coordinates of all the planets in orbit around the sun, I can compress this information by producing the equation which predicts the motion of those objects. In terms of artificial intelligence, the goal of compression is to produce the most information processing with the least associated energy: how many thoughts I can have per calorie, essentially. It is important to remember that the energy a gorilla uses to lift a car is equivalent to the energy you use to watch that Netflix show.
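The planetary example can be made concrete (a toy circular orbit with made-up parameters, not real ephemeris data):

```python
import math

# "Raw data": 1,000 positions of a planet on a circular orbit.
radius, omega = 1.0, 0.01  # invented orbital parameters
points = [(radius * math.cos(omega * t), radius * math.sin(omega * t))
          for t in range(1000)]

# Compression: recover the two parameters that generate all 1,000 points.
r_est = math.hypot(*points[0])
omega_est = (math.atan2(points[1][1], points[1][0])
             - math.atan2(points[0][1], points[0][0]))

# The "equation" regenerates any point on demand: 2 numbers instead of 2,000.
def position(t):
    return (r_est * math.cos(omega_est * t), r_est * math.sin(omega_est * t))

assert all(math.isclose(a, b, abs_tol=1e-9)
           for p, q in zip(points, (position(t) for t in range(1000)))
           for a, b in zip(p, q))
```

Two thousand coordinates collapse into two parameters plus a rule, which is all compression is: trading stored points for understood behaviour.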

Human beings navigate this high-dimensional nature not by mapping their reality to a set of 1s and 0s but with words and images. It is less computationally expensive to tell a human being “the sky is blue” than it is to run a machine learning algorithm that determines the sky is blue. Specifically, at one byte per ASCII character, I only need about 15 bytes to tell a human almost everything he needs to know about that part of his universe. However, to an indexer, the sentence “the sky is blue” means nothing other than the 1s and 0s that it needs to retrieve to generate that sentence on the screen. Needless to say, it takes many orders of magnitude more energy to run a machine-learning algorithm that understands there is such a thing as a sky and that it is blue.
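The byte arithmetic is easy to check (ASCII stores one character per byte):

```python
sentence = "the sky is blue"

# ASCII encodes one character per byte (7 bits of information, padded to 8).
encoded = sentence.encode("ascii")
print(len(encoded))  # 15 bytes for the whole sentence
```

Fifteen bytes for a human; terabytes of labelled images and megawatt-hours of training for the indexer to reach the same conclusion.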

To bring this back to compression: humans think the way they do because it is the least energy-consuming way to move through a physical universe, and the human brain needs far less energy than a supercomputer to navigate the same domain.

Freud

The probabilist Nassim Taleb shows us in Skin in the Game that most of what humans call intelligence stems from their ability to interface accurately with reality. People with skin in the game do better than those without, not because of incentives, but because of their feedback from reality. Feedback is then compressed by your nervous system into a binary, which is then up to the information system to process.

This is in stark contrast to neoclassical economics, which is completely devoid of any notion of computation. The closest thing to it comes from the notion of creative destruction from Joseph Schumpeter (oft mischaracterised as Austrian but squarely Walrasian), in which he acknowledges that it is the entrepreneur who takes the economy from one state of equilibrium to another.

It is the self-referencing nature of human action which modern AI is currently lacking, as the typical definition of a computer is an electromechanical device which takes an input and produces an output.

If we are to invent a true man-made intelligence, it would not need to take an input to produce an output. It would just reference itself: because it has felt unease that has to be met, it will act independently of any input to try to satiate it.

The Market is a Process, Trust the Process

On the computational nature of capital, the pure time theory of interest

Anarchy State Mafia

On the Centralisation of disputation/ an Information Theoretic view of State Formation/ the Death of Rock and Roll/ Why there are no Hippies in the Wild/Culture Wars/ and the Death of the American Dream

This is the first day in the history of political science. 

As yesteryear’s poli-sci is nothing more than an amalgamation of terms. Anyone who has had the displeasure of taking a politics 101 course is immediately confronted with the terms liberalism and realism, presented as a dichotomy to be contrasted.

This is a problem as liberalism and realism have nothing to do with each other.

Realism is a theory regarding state formation. Defensive realists define a country as an area that can defend itself. Offensive realists define a country as an area over which it can project force. Liberalism, meanwhile, is a theory which dictates inter-state relations.

Now, when it comes to intra-state relations, the terms conservative and liberal literally convey no information from a political science perspective. Conservatism and liberalism are positions with respect to time. A conservative in 1991’s Russia, for example, would be in favour of communism, while at the same time one in America would be pro-markets.

The notion of scale as such is completely absent from any political science course (inter-state vs intra-state). The closest corollary comes from multi-fractal localism, a notion which can be given a methodological foundation when combined with the notion of a renormalization group.

A renormalization occurs when there is greater internal than external communication.

One can define a country as the scaling of these various renormalization groups:

Family 

Tribe

Neighbourhood 

City

State 

Country 

Empire

The study of the relationship between these renormalization groups is the study of political science.
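Under the definition above, membership in a renormalization group is checkable from a communication matrix (the nodes and numbers below are invented for illustration):

```python
# comm[i][j]: how much node i communicates with node j (invented numbers).
comm = [
    [0, 9, 8, 1],
    [9, 0, 7, 1],
    [8, 7, 0, 2],
    [1, 1, 2, 0],
]

def is_renormalization_group(members, comm):
    """A group forms when internal communication exceeds external."""
    inside = sum(comm[i][j] for i in members for j in members if i != j)
    outside = sum(comm[i][j] for i in members
                  for j in range(len(comm)) if j not in members)
    return inside > outside

assert is_renormalization_group({0, 1, 2}, comm)   # a tight cluster
assert not is_renormalization_group({0, 3}, comm)  # a weakly linked pair
```

Family, tribe, city, and country are then just this same test passed at successively larger scales.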

The best attempt to incorporate this idea of scale comes from the United States of America. The states are united, in the sense that each state comprises individual governing bodies which then delegate authority to the federal state through the interstate commerce clause, with non-delegated powers reserved to the states by the 10th Amendment.

Hegel’s notions can be reinterpreted in an information theoretic/complex system sense: 

Volksgeist-shared information communication over space.

Zeitgeist-the change in shared information communication at a point in time.

A volksgeist forms when there is greater internal than external communication (renormalization). Here the concept of a power law can be invoked, as a power law occurs in nature when all local bonds are overridden in favour of a global bond.

The alpha of the power law, its exponent, is inversely proportional to the disparateness of the nodes over an area, disparateness as measured by shared information communication (shared attention). The level of disparateness can then be reinterpreted as a roughness quotient, invoking the fractal geometry of nature.

In layman’s terms: societies which are less disparate are more homogenised.

This model of state formation can be extended with the language of calculus.

The zeitgeist can be modelled as the first derivative of the volksgeist: how much the information over an area is changing at a specific point in time. A higher zeitgeist implies greater generational change.

The second derivative of the volksgeist is equal to the first derivative of the zeitgeist.

How much does the zeitgeist change over time? 

Which is equal to the intergenerational information differential.
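Writing V for the volksgeist and Z for the zeitgeist (symbols introduced here for convenience), the chain above is just:

```latex
Z(t) = \frac{dV}{dt}, \qquad \frac{dZ}{dt} = \frac{d^{2}V}{dt^{2}}
```

A large dZ/dt is the intergenerational information differential: the rate of change of shared information is itself changing between generations.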

In my post When the Music Died: On the Birth and Death of Rock and Roll, I noted that since music is habit-forming, we can use a model of self-similarity to measure the change in music (as music is just information). This is the reason why Rock and Roll and later electronic music are held in different regards by different generations. Rock and Roll needs electricity; electronic music needs electricity and an electronic circuit. So the generations which grew up without electricity are not going to like Rock, because they’ve never heard anything like it.

This is what happens when there is a large change in the zeitgeist.

Generations fight.

Further Hegelian terms can be given a methodological foundation when incorporated with computationalism, where one considers human intelligence to be a specific instance of artificial intelligence. These nodes (information agents) can either cooperate (trade) or destroy (war).

When one node suggests a proposal to another-thesis 

The other proposes one back-anti-thesis

And if they agree- a synthesis

If the nodes do not agree, the cycle continues as long as there is a society to decide over.  Each node can either engage in this process with their neighbour or centralise this process in a political body.

This is why there are no hippies in the wild.

Hippies do not fight when they disagree with each other; they voluntarily dissociate.

One group of dorks goes one way the other another.

You can only do this within a state, as disputation is centralized.

Because in the wild you cannot choose peace and love.

Evil finds you.

Nodes can either go to war physically or culturally. The technical definition of a culture war is when two separate volksgeists amalgamate into one.

It is important to understand that the process of civilisation is the process of expanding a volksgeist: greater areas over which there is greater internal than external communication.

The Nation State is a recent invention of modern history, its birth occurring during the French Revolution (one of the great tragedies in human history, but that is for another chapter of The Death of the Gods and the Revolt Against Reason).

However, it is entirely worth understanding that ‘Italians’ or the ‘French’ did not share a common language until recently. Germany provides us with the most recent example of smaller states forming a larger one, resulting in something you call World War 1 (I call it the First War of the Stupids, or Europe’s civil war; another chapter of The Death of the Gods and the Revolt Against Reason).

On the other hand, the United States of America provides us with the most recent example of a culture war. 

It is important to understand the American Dream in information theoretic terms. The American Dream is about the ability to voluntarily disassociate. Prior to the early eighteen hundreds, if you didn’t like your neighbour.

You could always just move. 

Because why else would people live in Ohio?

It is also not a coincidence that when people ran out of space, the American Civil War started.

In information theoretical terms we would say that the American dream is defined as the ability for nodes to decentralise their computation to an independent geographical area.

Only after the American Dream died can we recognise modern-day America. The national identity that most Americans have is a very recent phenomenon, aided by the two wars of the stupids (the two world wars).

This new national ‘American’ identity, measured in information theoretic terms, is simply that there are now more communications between states than within states.

The railway became a thing by this point. 

The telegraph was invented. 

Which allows greater internal communication over greater distances.

The telegraph allows for national information communication. The birth of the modern day press is directly associated with it, as before, news was local in nature. How could anyone else find out, if you had to ride a horse to tell someone something?

Religion provides the most transparent means of measuring self-similar information communication. There are more Muslims in Jordan than Japan, because Jordan is closer to Saudi Arabia than Japan is.

Pre-colonial, colonial, and post-colonial America are the same, as Massachusetts was killing Quakers way before the Boston Tea Party.

Most early Americans were Puritans, fleeing religious persecution in Europe. So it is of no surprise that once Catholics made their way to the new country, they would be treated in kind.

Much of the political tumult prior to the great American homogenisation of the First World War occurred as different religious groups were vying over political control. America’s public school system was founded so that WASPs could brainwash Catholic kids.

Comic books provide the best example of America’s shifting zeitgeist and volksgeist, as each of the eras of comic book history (golden, silver, bronze, dark, modern) can itself be considered a shift in the zeitgeist.

Within America’s golden age of comics, the age prior to Fredric Wertham’s Seduction of the Innocent (which destroyed comic books as a medium), there exists a character who quite literally, quite honestly represents this new national identity.

His name is Steve Rogers.

But you may know him as Captain America. 

Captain America sprang out of the greatest period of homogenisation in human history.

The art of the time reflects this.

The dystopian future of Aldous Huxley’s Brave New World is a homogenised one.

Captain America, according to his creators, is a New Dealer, who makes his comic debut by punching Hitler in the face. Never mind the fact that Hitler was a fan of the New Deal (people are stupid, idk what to tell you).