Over time, machines might become more intelligent than humans. And that is our best-case scenario.

Lately, a lot of people have begun taking this so-called ‘machine intelligence singularity’ seriously, mostly because we are already able to solve many business-related problems through automation. That means there is intrinsic business value in machine intelligence, which in turn means there is active research and a great effort to build smarter algorithms, better learning methods and more general AIs.

All this might or might not lead us to machines that are more intelligent than humans, and to many people that is a frightening thought.
This article lays out some arguments for why this might actually happen, and takes a short look at why it might in fact be very beneficial for humanity as a whole.

About the Human Brain

Many different scientific communities are excited about the Human Brain Project, essentially a detailed simulation of the brain. And indeed, our knowledge of the inner workings of this organ is very limited, and discovering more about it will certainly have a great impact on future (machine learning) research.

But there are also voices arguing against this possible progress. Paraphrasing Anthony Bell: there might be fundamental structural problems that we will run into while trying to simulate biological brains on entirely different digital hardware. [1] And that seems plausible, given that we still lack an end-to-end understanding of how this biological network functions, from the lowest level to the highest.

What these arguments usually do not take into consideration, though, is that there is no known reason for intelligence to occur only in biological neural nets. In fact, we do not even have a standardized definition of intelligence, and thus no metric by which existing “smart” algorithms could be compared to biological examples.

My guess, though, is that no such limitation exists - the probability of it seems just too low. After all, the only form of life we know of is biological, and the only form of intelligence philosophy would usually categorize as such is based on centralized biological neural networks, i.e. brains of some sort.

But what tells us that a brain is needed to exhibit intellect?

The Possibilities

Let’s assume that we (or a machine) will, at some point in the future, by research or by chance, create a technology capable of reasoning, autonomous problem solving and - maybe most importantly - consciousness. For ease of reference, let’s call this technology general intelligence, or GI for short.
Would we actually want that, and would it do us good?

To start this argument, we should deconstruct the notion of ‘we’. We humans are beings capable of reasoning about our environment, and we experience consciousness. If we separate these facts from their physical instantiation in terms of bodies and brains (and from any non-scientific religious notions), we might end up being unable to distinguish between our intelligence and GI.
This is what makes the Turing test particularly significant: it separates the actual ‘hardware’ from the observer, who has to judge between human intellect and non-human intellect based on a textual dialogue alone.

Online Worlds

While this deconstruction is theoretically useful, in reality most humans are unable to psychologically separate their consciousness from their bodies. A digital existence just seems too outlandish to be possible. However, through gaming we have already begun to live parts of our lives separated from physical reality, in digital worlds. This is not a fictional overstatement - see MMORPGs for concrete examples.

Interaction with these game worlds is currently (2017) limited to keyboard and mouse input (or game controllers) and audio-visual feedback. However, by means of human-computer interaction research, we are able to make these experiences more and more immersive. See current virtual-reality technology trends as a reference.

These worlds do not only contain human players. There is usually also a built-in concept of non-player characters (NPCs) that are controlled by algorithms. While the games strive to make these characters ever more indistinguishable from humans, there is a parallel development in place as well.

Since there is actual economic value in in-game currencies - you can exchange them for ‘real-world’ currencies - there is also economic value in algorithms that gather this kind of in-game money or valuables: algorithms that play the game for you and automatically harvest in-game resources. These are more commonly referred to as bots.
Now, bots are bad for the in-game economy, since they generally lead to inflation - just like printing money at home would. That is why game companies, on the one hand, try hard to distinguish human players from bots (and ban the latter), while bot writers, on the other hand, try to make their algorithms behave as human-like as possible. They go as far as implementing chat features that are completely unrelated to the actual gathering of in-game money.
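To make this concrete, here is a minimal sketch of what the core loop of such a bot might look like. The `World` class and its methods are hypothetical stand-ins for a real game client API; the point is the randomised, human-like timing that makes the bot harder to detect.

```python
import random
import time

class World:
    """Hypothetical stand-in for a real game client API."""

    def find_nearest_resource(self):
        return random.choice(["ore", "herb", "wood"])

    def harvest(self, resource):
        print(f"harvested {resource}")

def run_bot(world, rounds=5):
    for _ in range(rounds):
        # Gather whatever is closest, like a grinding player would.
        world.harvest(world.find_nearest_resource())
        # Randomised pauses make the bot harder to distinguish
        # from a human player by timing statistics alone.
        time.sleep(random.uniform(0.5, 2.0))

run_bot(World())
```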

The ‘Real World’

Interestingly, in online game worlds these simulated characters are not only common, they are usually a major feature of the game. Some worlds simulate their characters in such detail that you can follow them around and observe them as they work, trade, walk around, chat with other NPCs, ‘relax’ at home, sleep and so on. If the same thing happened to you in reality - in terms of machines ‘trying to act human’ - you would probably find it a rather disturbing notion.

It is also rather unlikely to happen, since ‘acting like a human’ has no apparent value to the machine itself. My guess is that the ‘intelligent’ machines will, at first, have no physical representation at all. They will probably be intellects restricted to digital environments - and those do not necessarily need to be game worlds.

An Actual Use-Case for Intelligence

One of those restricted environments is of particular interest: that of a von Neumann machine. Currently, programming such a computer is regarded as an AI-hard task, in the sense that it is very hard to automate. A GI system, though, should among other things be able to construct algorithms - not only to extend its own capabilities and intellect, but also to extend its influence on its environment. The latter could be, for example, the ability to use a new online API, or a new piece of hardware, and so on.

These (admittedly non-human) needs will not drive the development of such a GI at first. More likely, the driver will be an extrinsic factor: tech businesses in need of cheap but skilled programmers - or, better yet, of software capable of turning intangible high-level ideas and concepts into concrete algorithms.

This need is so pressing that programming languages keep getting more high-level, and the underlying optimisation systems ever more sophisticated. Yet we still have to actually write the code. Moreover, abstractions come at the price of having to grasp them entirely before being able to use them, which is why programmers still benefit from understanding low-level languages and underlying hardware concepts.

These approaches alone will therefore probably not converge into a fully automated programming system. For that, we will need to combine them with machine learning methodologies and specialized program synthesizers.
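To illustrate what the simplest form of such a program synthesizer could look like, here is a toy sketch of enumerative synthesis: it searches over compositions of primitive operations until one is consistent with all given input/output examples. The primitives and examples are made up for illustration; real synthesizers prune this search space far more cleverly.

```python
import itertools

# Primitive operations the synthesizer may compose (illustrative).
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return the shortest pipeline of primitives consistent with
    all (input, output) examples, or None if none is found."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            # Accept the first pipeline matching every example.
            if all(program(i) == o for i, o in examples):
                return names
    return None

# 'double then inc' maps 3 -> 7 and 10 -> 21.
print(synthesize([(3, 7), (10, 21)]))  # ('double', 'inc')
```

Even this brute-force version demonstrates the core idea: the specification is a handful of examples, and the program is found rather than written.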

Looking Farther Into the Future

The needs discussed so far are already relevant right now, in the sense that they are driven by current economic concerns. Looking a few years ahead, however, further criteria might become interesting.

Human life has a well-known impact on our most immediate environment: the ecosystem of the Earth. At the moment we struggle, for example, with climate-related catastrophes that might threaten all of our lives, above all other societal concerns. And not only do we destroy our own livelihood, we also destroy that of other life-forms on this planet.

Proposed solutions to these developments usually involve a more sustainable use of natural resources, a shift towards a style of living with less negative impact, and so on. However, I argue that the possibility of shifting towards a digital existence - setting aside the above-mentioned fear of losing a ‘real’ physical body - would solve a major part of this problem as well.

A digital existence can be maintained purely through the consumption of electricity, which can in principle be provided by sustainable means - even if we have not yet developed the technology to provide all of it that way.

There are many far-reaching consequences of such societies, among them the possibility of travelling extremely long distances in space. If intelligence can be encoded into a convenient digital format, which can be transmitted using electromagnetic waves (i.e. light), then travelling at light speed suddenly becomes feasible.
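A quick back-of-the-envelope calculation shows what that would mean in practice; the distances below are rough, round figures.

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
SECONDS_PER_YEAR = 3600 * 24 * 365.25

# Approximate distances, for scale only.
destinations_km = {
    "Moon": 3.844e5,
    "Mars (closest approach)": 5.46e7,
    "Proxima Centauri": 4.02e13,  # about 4.25 light-years
}

for name, km in destinations_km.items():
    seconds = km / SPEED_OF_LIGHT_KM_S
    print(f"{name}: {seconds:,.0f} s (~{seconds / SECONDS_PER_YEAR:.2f} years)")
```

A mind encoded as a signal would reach the nearest star system in a little over four years - assuming, of course, that suitable receiving hardware already exists at the destination.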

The Nutshell

These thoughts are part of a broader argument for why a machine intelligence singularity might happen, and why that might be a future to welcome rather than to reject.

However, the crucial point I wanted to emphasise here is that concrete needs for machines with such capabilities exist right now. A machine that replaces a human essentially frees that human to do more interesting work - work that is not as easily automatable. An economy driven by the demand for more automation can therefore be an accelerator of human freedom, if we strive towards enabling that instead of clinging to old values.

In my own endeavors, I would very much like to speed up the process of enabling such a society, with all its known benefits. Since I am a programmer and creating software is a task I understand very well, solving the automated-programming problem is probably the best starting point for me to work on right now.



  1. Bell, Anthony J.: “Levels and loops: the future of artificial intelligence and neuroscience.” Philosophical Transactions of the Royal Society of London B: Biological Sciences 354.1392 (1999).