Riding the Exponential Curve in AI's Tail
The majority of us today have AI in our pockets, in the form of Siri, Cortana or Bixby. While these technologies might seem extremely sophisticated, they are still, in fact, considered to be “narrow” or “weak” AI. But imagine a world in which machines could interact and think, not just in terms of data collection, but in a reasoned and (almost) sentient manner. AI in this context would radically change our daily lives (think Ex Machina or I, Robot). But is there any fact to the fiction portrayed in these movies?
For AI to operate in this manner, the technology would have to evolve to the next phase of AI, namely artificial general intelligence (AGI). At this stage, AI reaches the same general intelligence as a human. But how close are we to this milestone? The consensus among futurists and analysts is that AGI will begin to emerge only in roughly 20 years’ time.
That 20-year estimate, however, is starting to look a little shaky, especially as advancements in narrow AI are coming thick and fast and have changed the way AI operates. Our job at the CFOW is to monitor these technologies and their impact on work. Let’s examine some platforms at the leading edge of this development.
- Google’s AlphaGo. Yes, we’ve all heard of AlphaGo beating professional Go player Lee Sedol. But do we understand the implications of this victory? The AI behind it is a major leap beyond the systems that previously beat humans at strategy games. When IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997, it won largely through brute force, searching enormous numbers of possible move sequences and picking the line its evaluation function rated best.
The game Go has vastly more possible configurations than chess – more legal board positions than there are atoms in the observable universe (yes, really) – so exhaustively calculating every move is impossible with today’s computing power. Instead, AlphaGo used deep reinforcement learning and neural networks to mimic the learning and reasoning capabilities of the human brain. What makes this so interesting, and potentially revolutionary, is that AlphaGo’s techniques are general-purpose: rather than being preprogrammed with handcrafted Go strategy, it learned much of its skill from experience, and DeepMind built it on the same family of methods that learned to play Atari games from nothing but raw pixel input. This type of AI could therefore be a predecessor to AGI.
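To get a rough sense of that scale, here is a back-of-the-envelope calculation using the textbook Shannon-style estimate (average branching factor raised to typical game length). The figures are standard approximations, not AlphaGo’s own numbers:

```python
from math import log10

# Rough game-tree sizes, expressed in orders of magnitude (log10 of
# branching_factor ** typical_game_length).
chess_moves, chess_length = 35, 80   # ~35 legal moves per turn, ~80 plies
go_moves, go_length = 250, 150       # ~250 legal moves per turn, ~150 moves

chess = chess_length * log10(chess_moves)   # ~10^124 possible chess games
go = go_length * log10(go_moves)            # ~10^360 possible Go games
atoms = 80                                  # ~10^80 atoms in observable universe

print(round(chess), round(go), atoms)
```

Even with aggressive pruning, no brute-force search can cover a space hundreds of orders of magnitude larger than chess’s – which is why AlphaGo had to learn judgment instead of enumerating moves.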
- StackGAN. This is one you might not have heard of (but you will soon). StackGAN is a text-to-photo synthesizer. You enter text such as “This flower has overlapping pink pointed petals surrounding a ring of short yellow filaments,” and the AI synthesizes a photo-quality image. On the face of it, this sounds exactly like a Google search, but here the AI completely synthesizes that image from scratch.
But what really makes this interesting is its use of two competing neural networks. One network (the generator) produces an image, while a second (the discriminator) decides whether that image is real or fake; the generator then uses this feedback to fix its defects. This arrangement is referred to as a generative adversarial network (GAN), a form of unsupervised machine learning.
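To make the generator-versus-discriminator game concrete, here is a deliberately tiny sketch of my own (not StackGAN’s architecture): a one-dimensional generator learns to match Gaussian “real” data by playing against a logistic-regression discriminator, with the adversarial gradients written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Generator G(z) = a*z + b turns noise into samples meant to look like the
# real data (Gaussian with mean 3). Discriminator D(x) = sigmoid(w*x + c)
# tries to tell real samples from generated ones.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr_d, lr_g = 0.1, 0.02

for _ in range(5000):
    real = rng.normal(3.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = a * z + b

    # Discriminator step: descend -log D(real) - log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr_d * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    c -= lr_d * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: descend -log D(fake), the "non-saturating" GAN loss
    d_fake = sigmoid(w * fake + c)
    grad_fake = (d_fake - 1.0) * w   # d(-log D)/d(fake sample)
    a -= lr_g * np.mean(grad_fake * z)
    b -= lr_g * np.mean(grad_fake)

print(f"generator output mean: {b:.2f} (real data mean: 3.0)")
```

After training, the generator’s output drifts toward the real data’s mean – the same adversarial dynamic, scaled up to deep networks, is what lets StackGAN synthesize photo-quality images from text.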
- Gamalon. This company is creating AI that can write its own code based on experience and probability rather than hard-coded rules, essentially automating much of the human work of programming and training AI. As a result, Gamalon’s systems can learn significantly faster than other AI platforms. Gamalon uses “Bayesian Program Synthesis” – writing the code that best explains the data in front of it. The system can be taught to recognize a cat with only a few examples – whiskers, tail, eyes – and from there it continually updates its understanding of what a cat looks like with each new example. Contrast this with deep learning, where you have to train the system by showing it as many examples of a cat as you can, and where the model struggles to express how uncertain it is.
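The flavor of that Bayesian updating can be shown with a toy example (mine, not Gamalon’s actual system): maintain a Beta-distributed belief about how often cats have whiskers and update it one example at a time. Three observations already yield a usable estimate plus an honest measure of uncertainty – something a point-estimate classifier doesn’t give you.

```python
from math import sqrt

# Belief about P(cat has whiskers), modeled as a Beta(alpha, beta) distribution.
alpha, beta = 1.0, 1.0                    # uniform prior: total ignorance
for has_whiskers in (True, True, True):   # just three labeled examples
    if has_whiskers:
        alpha += 1.0
    else:
        beta += 1.0

mean = alpha / (alpha + beta)             # best current estimate
std = sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
print(f"P(whiskers) = {mean:.2f} +/- {std:.2f}")
```

Each new example tightens the distribution, so the system knows not only what it believes but how strongly – the uncertainty handling the paragraph above describes.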
So why are these three technologies important? Well, we are continuously told that AI is still firmly in the “narrow” camp – think Amazon’s Alexa, which is essentially a slick speech interface wrapped around an Internet search capability. But the reality is that AI is progressing significantly faster than anticipated. AlphaGo’s Go victory, for example, arrived roughly 10 years ahead of expert predictions.
Also, these advances, as demonstrated by StackGAN and Gamalon, open the door to significantly more intelligent machines that can teach and program themselves. It also helps that cheap but incredibly fast CPUs are now widely available, along with access to GPUs (graphics processing units) and open-platform frameworks such as Caffe, CNTK and Mahout. Together, these are opening up AI development to the general public, meaning we now have an exponentially greater number of developers worldwide, further fueling progress.
Incremental Advancements Add Up
Making predictions about the pace of technology is a risky business; what’s clear, however, is that we are fast approaching a point where the outputs from machine learning are set to improve exponentially as generative adversarial networks and self-programming systems advance. The three platforms listed here aren’t going to single-handedly bring about AGI. But the convergence of these technologies could usher in a wave of self-coding and self-creating artificial general intelligence.
So, to answer my question of whether there is any truth to those fictional AI films: the answer is a resounding maybe! My apologies that it’s not more clear-cut, but the simple fact is that even though we are close to technology potentially having this capability, understanding its real-world application is incredibly difficult. Imagine Adam Smith foreseeing the Internet: the gap between his death and the birth of the Web was 201 years, not 20, but the accelerating pace of technological change compresses timeframes immensely. It’s not far-fetched to believe that, in human terms at least, the gap between now and the age of AGI is just as wide.