## The Dark Ages

## The High Middle Ages

### Medieval Localism

### Medieval Trade

## The Printing Press: The Automated Symbol Machine

### The Protestant Reformation

### The Enlightenment

## Medieval Mathematics (technical)

### The Symbolic System of Algebra

Algebra allows you to execute the inverse of an arithmetic operation. It introduces the notion of abstraction into mathematics, which allows for the representation of non-numerical symbols and thereby increases the set of mathematical objects which can be represented beyond the arithmetic.

Although the fields that follow are not treated as subfields of algebra, they all build on its symbolic machinery.

As a symbolic system, algebra can be thought of as a set of rules for manipulating symbols independently of what those symbols stand for.
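A minimal sketch of what "executing the inverse operation" means, using an equation of my own choosing as illustration: treat the unknown as a symbol and undo each arithmetic step in turn.

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x by applying inverse operations:
    subtract b (the inverse of addition), then divide by a
    (the inverse of multiplication)."""
    return (c - b) / a

# 3x + 4 = 19  ->  x = 5
x = solve_linear(3, 4, 19)
```

The point is that the same two inverse steps solve *every* equation of this shape, regardless of the particular numbers: that is the abstraction algebra buys you.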

### Algebraic Geometry

An application of a symbolic system to geometry: a codification of the points in space.

It uses the rules of algebra to make calculations about geometric forms.

A point is defined by its coordinates on the xy-plane. A line then becomes a pattern of points.
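The idea of a line as "a pattern of points" can be sketched directly (the function name and sample values here are illustrative):

```python
# A point is just a pair of coordinates; a line is the pattern of
# points satisfying y = m*x + b.
def line_points(m, b, xs):
    return [(x, m * x + b) for x in xs]

points = line_points(2, 1, range(4))
# every pair lies on the line y = 2x + 1
```

Once points are codified as number pairs, geometric questions (does this point lie on that line?) become arithmetic checks.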

#### Greek Geometry

This way of thinking would have been considered an abomination by the Greeks, who specialised in non-symbolic geometry. To the Greeks, differences in geometric forms could be quantified, but only in comparison to one another, by the scaling of common proportions. The Greek aversion to irrational numbers stems from the fact that irrational numbers do not scale by any common ratio; that is what makes them irrational. Comparison then becomes impossible, as each line differs in its growth pattern.

This is because, to the Greeks, numbers correspond to geometrical patterns: the exactness of a cult of mathematical mumbo jumbo.

To the Greeks there is no such thing as an infinite set.
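The incommensurability behind the Greek aversion can be sketched numerically: no ratio of whole numbers ever lands exactly on the square root of 2. This is a brute-force illustration under an arbitrary search bound, not a proof.

```python
from fractions import Fraction

def best_rational_sqrt2(max_q):
    """Search fractions p/q with q <= max_q; none squares to exactly 2."""
    best, err = None, None
    for q in range(1, max_q + 1):
        p = round(2 ** 0.5 * q)          # nearest numerator for this q
        f = Fraction(p, q)
        e = abs(f * f - 2)               # exact rational error, never zero
        if err is None or e < err:
            best, err = f, e
    return best, err

approx, error = best_rational_sqrt2(100)
# the error shrinks but never reaches 0: sqrt(2) is no ratio of integers
```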

#### Calculus

A study of the relations between points in space, the space having been defined by algebraic geometry.
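That "study between points" can be sketched as taking differences between nearby coordinate points: a finite-difference approximation of the derivative, not the formal limit.

```python
def derivative(f, x, h=1e-6):
    """Approximate the slope at x from two nearby points on the curve."""
    return (f(x + h) - f(x)) / h

slope = derivative(lambda x: x * x, 3.0)  # close to 6, the exact slope
```

The smaller the gap between the two points, the closer the ratio of differences gets to the true rate of change.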

### Combinatorics

Most of what people call a computer comes down to applied combinatorics.
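A hint of why: n binary switches generate 2^n distinct states, and enumerating those states is a purely combinatorial exercise (a minimal sketch, with n = 3 chosen arbitrarily).

```python
from itertools import product

def all_states(n):
    """Every possible setting of n binary switches."""
    return list(product([0, 1], repeat=n))

states = all_states(3)  # 2**3 = 8 combinations, from (0,0,0) to (1,1,1)
```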

### The Algebra of Boole

The arithmetisation of logic.
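That arithmetisation can be made concrete: with truth as 1 and falsity as 0, the logical connectives become ordinary arithmetic. This is the standard modern encoding, not Boole's original notation.

```python
def AND(a, b): return a * b          # 1 only when both inputs are 1
def OR(a, b):  return a + b - a * b  # 1 when at least one input is 1
def NOT(a):    return 1 - a          # flips 0 and 1

# full truth table for AND
table = [(a, b, AND(a, b)) for a in (0, 1) for b in (0, 1)]
```

Once logic is arithmetic, a machine that can add and multiply can also reason over true and false.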

#### Logarithms

Historically, a data table: a precomputed list of logarithms that lets you replace multiplication with addition and a lookup.
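How the table trick works, as a toy sketch (historical tables were precomputed to many digits; the table size here is arbitrary):

```python
import math

# a small precomputed "data table" of base-10 logarithms
table = {n: math.log10(n) for n in range(1, 1001)}

def multiply_by_table(a, b):
    """Look up the two logs, add them, then invert the logarithm."""
    return round(10 ** (table[a] + table[b]))

result = multiply_by_table(37, 21)  # 777, computed by addition and lookup
```

Centuries before electronic computers, this was how heavy multiplication was mechanised: reduce it to addition against a stored table.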

## The Computer/Indexer Revolution: The Electromechanical Symbol Machine (technical)

Now how is it that computers encode information?

For that, I must make reference to another quantum physicist (dork Jesus himself), Richard Feynman. In his computer heuristics lecture he makes perhaps the most profound point of the computer revolution: computers do not actually compute.

What do they do? Feynman likens them to an electronic filing system.

But that explains what computers are, not what they do. For that we must turn to the French, who have a much better word for it: **ordinateur**, from the Latin *ordinare*, which means to put in order. Essentially, what an **ordinateur** does is index. To file in a filing system is to index something, and to retrieve something according to an index. The proper term for a computer should be an indexer, where the total memory you index over can be thought of as a possibility space of *2^n* possibilities.

If I wanted to store 8 numbers I would need *2^3* different possibilities, which is to say 3 bits.
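The arithmetic of indexing in general: n bits address 2^n cells, so indexing k items needs the smallest n with 2^n ≥ k (a quick sketch; the function name is mine).

```python
import math

def bits_needed(k):
    """Bits required to give each of k items a distinct index."""
    return math.ceil(math.log2(k))

bits_needed(8)    # 3 bits give 2**3 = 8 possibilities
bits_needed(100)  # 7 bits give 2**7 = 128 possibilities
```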

Now, the quality of the video is unclear, but Feynman is using colours instead of 1s and 0s.

This may seem trivial to those with little knowledge of machine learning. Programming can be thought of as what it ultimately is: a mapping onto 1s and 0s. Machine learning can then be thought of as auto-indexation.

The history of **man-made intelligence** has scaled in direct correspondence with that of the computing industry in general. Machine learning algorithms were around before the commercialisation of computing in the mid-60s, but their efficacy at the time was limited not by theory but by the nature of computer hardware.

However, if you listen to the supposed 'experts' in the field, they will tell you that all we need is more compute. This is a statement which conveys no information: what they are essentially saying is that a computer's ability to auto-index increases with the number of points to index over. The current state of machine learning can be summarised as follows: machines are better at auto-indexing when they have a human-made index to map onto (**supervised learning**) than when they are given raw data over which to auto-index (**unsupervised learning**).
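The supervised/unsupervised distinction, restated in indexing terms as a toy sketch (the data, labels, and midpoint rule here are all illustrative inventions): supervised learning maps new points onto a human-made index of labels; unsupervised learning must invent an index from the data alone.

```python
# Supervised: a human-made index (the labels) is given in advance.
labeled = [(1.0, "low"), (1.2, "low"), (9.8, "high"), (10.1, "high")]

def classify(x):
    """1-nearest-neighbour: map x onto the closest human-provided label."""
    return min(labeled, key=lambda p: abs(p[0] - x))[1]

# Unsupervised: no labels; split the data around its midpoint and
# let the machine invent indices 0 and 1 on its own.
data = [v for v, _ in labeled]
mid = (min(data) + max(data)) / 2
auto_index = {v: int(v > mid) for v in data}
```

In the first case the machine only has to route points into an existing filing scheme; in the second it has to build the filing scheme itself, which is the harder problem.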

Additionally, what is often not understood is that the current field of AI has yet to progress past the point mathematics reached in medieval Europe, when most of probability theory was built on mastering the games of chance. Which is exactly my point: these are but mere games. In reinforcement learning, what the computer essentially does is try a series of possible strategies. Based on whether these strategies prove successful, it classifies them as having worked (1), not worked (0), or ended in a draw (1/2). The power of reinforcement learning is dependent on the ability to accurately reward the agent in question. In the case of actual warfare (not chess), one can lose every single battle and still win the war. Therefore, when trying to accurately determine weights in the real world, one encounters a dimensionality problem.
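The win/loss/draw reward scheme above, in a minimal sketch: an agent tries each strategy many times and keeps whichever scores the best average reward. This is a toy two-armed bandit with made-up win probabilities, not a production reinforcement-learning algorithm.

```python
import random

random.seed(0)

def play(strategy):
    """One game under a hypothetical strategy; reward is 1, 1/2, or 0."""
    r = random.random()
    if strategy == "aggressive":
        return 1.0 if r < 0.5 else 0.0                      # win or lose
    else:  # "cautious": fewer wins, many draws
        return 1.0 if r < 0.2 else (0.5 if r < 0.9 else 0.0)

def evaluate(strategy, trials=1000):
    """Average reward over many games: the agent's estimate of a strategy."""
    return sum(play(strategy) for _ in range(trials)) / trials

# pick whichever strategy scored higher on average
best = max(["aggressive", "cautious"], key=evaluate)
```

This works precisely because a game hands the agent a clean reward signal after every play; the point of the warfare example is that the real world rarely does.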