Once just a figment of the imagination of some of our most famous science fiction writers, artificial intelligence (AI) is taking root in our everyday lives. We’re still a few years away from having robots at our beck and call, but AI has already had a profound impact in more subtle ways. Weather forecasts, email spam filtering, Google’s search predictions, and voice recognition, such as Apple’s Siri, are all examples. What these technologies have in common are machine-learning algorithms that enable them to react and respond in real time. There will be growing pains as AI technology evolves, but the positive effect it will have on society in terms of efficiency is immeasurable.

The quest for artificial intelligence (AI) begins with dreams – as all quests do. People have long imagined machines with human abilities – automata that move and devices that reason. Human-like machines are described in many stories and are pictured in sculptures, paintings, and drawings.

Ahead of his time with inventions (as usual), Leonardo da Vinci sketched designs for a humanoid robot in the form of a medieval knight around the year 1495. No one knows whether Leonardo or his contemporaries tried to build the design. Leonardo’s knight was supposed to be able to sit up, move its arms and head, and open its jaw.


  Figure 1:  Model of a robot knight based on the drawings of Leonardo da Vinci.

The science fiction writer Isaac Asimov wrote many stories about robots. His first collection, I, Robot, consists of nine stories about “positronic” robots. Because he was tired of science fiction stories in which robots (such as Frankenstein’s creation) were destructive, Asimov’s robots had “Three Laws of Robotics” hard-wired into their positronic brains.

The three laws were the following:

  1. First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a “Zeroth” law, designed to protect humanity’s interest:

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
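Read as a system, the laws form a strict priority ordering: each law yields to every law above it, with the later Zeroth Law outranking all three originals. Purely as a playful illustration (the boolean fields below are hypothetical stand-ins, not anything from Asimov), the precedence can be sketched as a short-circuiting check:

```python
# Asimov's laws as a priority-ordered rule check. The point is only the
# strict precedence: Zeroth > First > Second > Third.
def permitted(action):
    """Return True if `action` is allowed, checking laws in priority order."""
    if action["harms_humanity"]:
        return False          # Zeroth Law: overrides everything below
    if action["harms_human"]:
        return False          # First Law: yields only to the Zeroth
    if action["disobeys_order"]:
        return False          # Second Law: yields to Zeroth and First
    if action["endangers_self"]:
        return False          # Third Law: lowest priority of all
    return True

# An order to harm a human is refused: the First Law outranks the Second.
order = {"harms_humanity": False, "harms_human": True,
         "disobeys_order": False, "endangers_self": False}
print(permitted(order))  # False
```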

The quest for artificial intelligence, quixotic or not, begins with dreams like these. But to turn dreams into reality requires usable clues about how to proceed.

The Far Future—Coming Soon

Human progress moves quicker and quicker as time goes on, a pattern futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, precisely because they’re more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far greater advances in the 19th century than in the 15th.
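The mechanism behind accelerating returns is just compounding: each period’s progress raises the base from which the next period advances. A minimal sketch, using an arbitrary illustrative growth rate (nothing here is a measured figure):

```python
# Compounding progress: capability after n periods, assuming (purely for
# illustration) that each period improves on the last by a fixed 10%.
def capability(periods, rate=0.10):
    level = 1.0
    for _ in range(periods):
        level *= 1 + rate  # each advance builds on all previous advances
    return level

# Equal-length stretches of time yield ever-larger absolute gains:
gain_early = capability(10) - capability(0)   # an early ten-period stretch
gain_late = capability(50) - capability(40)   # a later ten-period stretch
print(gain_late > gain_early)  # True: later eras advance far more
```

The same ten periods add roughly 1.6 units of capability early on but over 70 units later, which is the 15th-versus-19th-century asymmetry in miniature.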

What is AI?  

There are three reasons a lot of people are confused about the term AI:

  1. We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.
  2. AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.
  3. We use AI all the time in our daily lives, but we often don’t realize it’s AI. John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.” Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition.

So let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri are AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.

Types of AI

While there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:

  1. AI Caliber Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.
  2. AI Caliber Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. An AGI would be able to reason, plan, learn, and comprehend complex ideas as easily as you can.
  3. AI Caliber Artificial Superintelligence (ASI): Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction will both appear every time we talk of AI.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.


Where We Are Currently—A World Running on ANI

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

  • Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in, to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.
  • Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or do any of dozens of other everyday activities, you’re using ANI.
  • Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.
  • And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of the equity shares traded on US markets), and in expert systems like those that help doctors make diagnoses.

But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.

The Road from ANI to AGI: Why It’s So Hard

“You realize that there is no free will in what we create with AI. Everything functions within rules and parameters.”

– Clyde DeSouza, Maya

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second: incredibly easy. Build one that can look at a photo and tell whether it shows a dog or a cat: spectacularly difficult. Or, as computer scientist Donald Knuth puts it, “AI has now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’”
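The “easy” half of that contrast is literal: for a computer, multiplying two ten-digit numbers is a single built-in operation, while the “without thinking” tasks Knuth alludes to admit no comparable one-liner. A trivial sketch:

```python
# The "thinking" task: multiplying two ten-digit numbers is one primitive
# operation for a computer, done effectively instantly.
a = 1_234_567_890
b = 9_876_543_210
print(a * b)  # 12193263111263526900

# The "without thinking" tasks (seeing, walking, reading a face) have no
# equivalent one-line solution; that asymmetry is Knuth's point.
```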

“When people are told that a computer is intelligent, they become prone to changing themselves in order to make the computer appear to work better, instead of demanding that the computer be changed to become more useful.”

-Jaron Lanier, You Are Not a Gadget

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. It seems effortless to you because you have perfected software in your brain for doing it. The same idea goes for CAPTCHAs: it’s not that bots are dumb for being unable to pass the slanted-word recognition test when you sign up for a new account on a site; it’s that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us. And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.


So how do we get there?

First Key to Creating AGI: Increasing Computational Power

Second Key to Creating AGI: Making It Smart

So as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Yet the attribution of intelligence to machines, crowds of fragments, or other nerd deities obscures more than it illuminates.

Warfighting robots could reduce civilian casualties, so calling for a ban now is premature. We’re not going to be able to prevent autonomous armed robots from existing. The real question that we should be asking is this: Could autonomous armed robots perform better than armed humans in combat, resulting in fewer casualties on both sides?

  Figure 2:  Film still from ‘Terminator 3: Rise of the Machines’ (2003). (Photo: Warner Bros./Everett/REX)

But don’t expect a machine takeover any time soon. As easy as it is for machine-learning technology to self-improve, what it lacks is intuition. There’s a gut instinct that can’t be replicated via algorithms, making humans an important piece of the puzzle. The best way forward is for humans and machines to live harmoniously, leaning on one another’s strengths. Advertising is a perfect example, where machines are now doing much of the purchasing through programmatic exchanges to maximize returns on investment, allowing advertisers to focus on creating more engaging content.

The one thing we have to remember is duly highlighted by Ian McDonald in River of Gods:

“Any AI smart enough to pass a Turing test is smart enough to know to fail it.”