Your Next OS Revolution

The growth of the mobile OS has been lethargic. Despite being the new computing default, its design is still based on the way computers were originally designed.

The evolution of computing:


At first there was no difference between software and hardware because computers were special purpose, meaning they were designed for a specific task like breaking codes or solving differential equations. Then computers became what Alan Turing called “general purpose”: machines that could be programmed for different computing functions (which we call applications).

After computers were able to process different types of information, the next step was to get them to share that information with one another. This is the process that started with the ARPANET and eventually culminated in the internet. After that, computers got smaller, and as a result we got the personal computer. The phone is just a natural extension of this process of making computers smaller. So small, in fact, that society will force you to take it everywhere.

*Note: computing refers to the theory of computation, while computers refer to your, well, computer.

Said another way, computing as a theory started out as a bundle: you did not have access to an application of computing so much as to computing in general. However, as computers started capturing more of the information world around them, we created different applications of the theory of computing. These condensed, easily repeatable computer functions called applications (heuristically I think of them as subroutine boxes) were stored in a file directory, which is one of our best memes for information storage, whether physical or electronic.

Man-computer symbiosis:

The job of computing companies is to maximise the amount of information captured by computers in general and to maximise the ability of that information to interface with you.

The level of interfaceability between man and computer depends on how well computers appeal to our senses. The main ones that get exploited are sight, hearing, and touch.

How man started to interface with machine:

The first step of the assimilation occurred when we started being able to interface with computers in real time. This happened when computers switched from batch processing to time-shared terminals, thanks to the work of Licklider and McCarthy at MIT in the 1950s. Then we started to interface with them visually, thanks to the work of Douglas Engelbart and others.

But since Doug, the visual space hasn’t moved much (outside of VR) since the 1960s. Additionally, most people don’t know what to do with touch beyond touchscreens and buttons. Nowadays people are more sophisticated when it comes to computing, so there is less need to express information visually, yet these other avenues are not being pursued enough. However, we are at the point in history where the audio dimension is being captured by computers.

Physical and software interfaces:

The ability to interface with computers (man-computer symbiosis) happens in two places: the physical interface and the software interface.

The physical interface comprises product dimensions, buttons, and attachability/portability. The software interface comprises, well, everything else. Said another way, hardware is the physical network of your phone; software is the network of useful parts, or computing functions. More succinctly, the role of software is to facilitate the connection between the applications that you use.

The stagnation of the physical interface:

One aspect of the physical interface is increasing the level of dimensionality and interfaceability between humans and machines, because the greater the synergy, the more “intuitive the experience feels”, or whatever Steve Jobs said. The tablet market is still confused by this: the iPad (being more portable and quicker to start up) is an increase in the level of physical attachability/portability between user and product, but it lacks the generality of a computer because of its (pretty bad) software.

The shift of focus to software updates rather than hardware is nothing new. One of the earliest signs was the shift in power when Microsoft started licensing software to IBM. However, this trend is accelerating, as the bottleneck in computing (and civilisation generally) has shifted from processing power to human time (more on this in subsequent posts).

The new shift in software:

As a result of this new emphasis on time, and the shift from an application-poor universe to an application-rich one, it is worth reconsidering the connection between applications (i.e. software), because the level of redundancy is high, especially on phones. People use the same five applications on their phone, but more importantly, almost all the information you use on your phone is text based.

Why do you have to open the notes app to write? Why do you need to open Twitter to write? Instead of having to access applications in order to input information, why don’t we input information that we then send to applications?
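As a rough illustration, here is a minimal sketch (in Swift) of what that inverted flow could look like: a single capture surface takes the text first, and a routing step decides which application receives it. The Destination cases and the route function are hypothetical placeholders for illustration, not existing system APIs.

```swift
// Hypothetical destinations a piece of typed text could be sent to.
// These cases are illustrative placeholders, not real system APIs.
enum Destination {
    case notes
    case tweet
    case message(to: String)
    case reminder
}

// One input surface: the user writes first, then chooses (or the system
// infers) where the text should go.
func route(_ text: String, to destination: Destination) {
    switch destination {
    case .notes:
        print("Appending to notes: \(text)")
    case .tweet:
        print("Drafting tweet: \(text)")
    case .message(let recipient):
        print("Sending to \(recipient): \(text)")
    case .reminder:
        print("Creating reminder: \(text)")
    }
}

// Usage: write once, decide the destination afterwards.
let input = "Reconsider the connection between applications"
route(input, to: .notes)
route(input, to: .message(to: "Doug"))
```

The point of the sketch is the ordering: the text exists before any application is opened, and the application is just the last step in the pipeline.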

It’s important to remember, when designing a system to interface with humans, that despite the human brain being the most clever thing in the known universe, it is still not that clever. So it’s important to make things as minimalist as possible. This is something Steve Jobs understood but Douglas Engelbart, the inventor of the mouse (along with Bill English), didn’t: he wanted to fit as many as ten functions on a mouse, which is cool if you’re a niche consumer, but not for interfacing with the rest of society.

Computers without buttons:

Steve Jobs dreamed of a computing device without buttons. Unfortunately he did not live long enough to see his vision through, as we still have three (not that useful) buttons on our iPhones. Buttons are only necessary if they provide functionality superior to something you could do with software. They should be regarded as the most necessary, maximally relevant computing links.

Unfortunately, outside of powering down my screen, I don’t know when I last used the off button or changed music with the physical buttons. Most of their relevance comes from secondary uses (screenshots, powering down your phone).

So if we are to make Steve’s dream come true, we need to find a way to encode all relevant information within the screen (I have some suggestions). Our inability to do this is behind much of the stagnation of the iPad and other tablets. The way we interface with computers has shifted from keyboards (buttons) to touch, but we still have not designed appropriate software to assimilate this change. We previously mapped keyboards to human language, and human language to computer language, but we have not mapped fingers to computing language.

Swiping to unlock is a waste of time:

Audio needs to be integrated, either by reconfiguring one physical button for audio narration and another for Siri (holding your lock button is not good design), or by using a force touch command, or by dedicating a specific part of the phone screen to activating Siri.

The best place to flip this redundant order of information inputs is your lock screen, which, per square inch, is the most valuable untapped real estate in the world. So it’s important to integrate more functions into it, though in this system the home screen still looks the same.

In this system you could do 90% of what you otherwise do on your phone with the same number of movements it takes to unlock it.
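To make that concrete, here is a small sketch (in Swift) of a lock screen that reads the unlock movement itself as a command: where the swipe starts and which way it goes selects an action. The regions, directions, and actions below are illustrative assumptions, not a real iOS API.

```swift
// A sketch of a lock screen that treats the unlock movement itself as a
// command. Regions, directions, and actions are illustrative assumptions.
enum ScreenRegion { case left, centre, right }
enum SwipeDirection { case up, down, left, right }

enum LockScreenAction {
    case unlock          // the plain unlock we have today
    case newNote         // jump straight into text capture
    case camera
    case audioPlayback
}

// The same single movement that unlocks the phone is reinterpreted by
// where it starts and which way it goes.
func action(start: ScreenRegion, direction: SwipeDirection) -> LockScreenAction {
    switch (start, direction) {
    case (.centre, .up): return .unlock
    case (.left, .up):   return .newNote
    case (.right, .up):  return .camera
    case (_, .right):    return .audioPlayback
    default:             return .unlock
    }
}

// One movement, no extra taps: the unlock gesture doubles as the command.
print(action(start: .left, direction: .up))   // newNote
print(action(start: .right, direction: .up))  // camera
```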


When texting:

The last way to add dimensionality is to map different finger swipes to different computing functions instead of relying on clunky software buttons.
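For example, while texting, a swipe-to-function mapping might look like the following Swift sketch; the particular table and editing behaviours are assumptions for illustration, not an existing keyboard API.

```swift
// Sketch: while texting, swipe directions map to editing functions instead
// of on-screen buttons. The table and functions are illustrative assumptions.
enum TextSwipe { case left, right, up, down }

enum EditFunction {
    case deleteLastWord
    case insertSpace
    case capitaliseLastWord
    case send
}

let swipeMap: [TextSwipe: EditFunction] = [
    .left:  .deleteLastWord,
    .right: .insertSpace,
    .up:    .capitaliseLastWord,
    .down:  .send,
]

func apply(_ function: EditFunction, to draft: String) -> String {
    switch function {
    case .deleteLastWord:
        var words = draft.split(separator: " ").map { String($0) }
        if !words.isEmpty { words.removeLast() }
        return words.joined(separator: " ")
    case .insertSpace:
        return draft + " "
    case .capitaliseLastWord:
        var words = draft.split(separator: " ").map { String($0) }
        if let last = words.popLast() { words.append(last.uppercased()) }
        return words.joined(separator: " ")
    case .send:
        print("Sending: \(draft)")
        return ""
    }
}

// Usage: two swipes replace two rounds of hunting for software buttons.
var draft = "see you at nine"
draft = apply(swipeMap[.up]!, to: draft)      // "see you at NINE"
draft = apply(swipeMap[.left]!, to: draft)    // "see you at"
print(draft)
```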