The term ‘Artificial Intelligence’ has become one of the most misused phrases in popular culture.  It has gotten so bad as to rival the abuse of the word ‘literally’.  In fact, now every time I encounter the inappropriate use of the phrase ‘Artificial Intelligence’ my head literally spins around and explodes.

John McCarthy was a giant in the field of computer science.  In 1955, while at Dartmouth, McCarthy proposed a research conference on “artificial intelligence”, writing that “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The term stuck, and a whole new branch of computer science was born. (Along with numerous movie plot-lines and memorable characters such as HAL from 2001, Data of Star Trek fame and Taylor Swift from the world of entertainment.)  The implied salient quality of a true AI is something called Artificial General Intelligence, or AGI.  Now, however, every time a cool new computer trick pops up, we call it ‘Artificial Intelligence’.  Your car slows itself down when the traffic gets bad?  Artificial Intelligence.  Ask Siri to define ‘nihilism’ and end up getting directions to the nearest Arby’s?  Artificial Intelligence.  There are even blogs out there with ‘Artificial Intelligence’ in the title that are nothing but posts about income inequality and politics.  Imagine that!

It’s enough to make me want to literally gouge my eyes out and hurl them at my computer screen.

At the same time that this has been happening, we have begun to hear warnings that a true human-level machine intelligence would quickly surpass us in a feedback loop of self-improvement: an ‘Intelligence Explosion’ resulting in Super Intelligence, at which point the genie is out of the bottle and we’d better hope like hell that our new creation develops a fondness for us.  No less than Stephen Hawking, Bill Gates and Elon Musk have alluded to runaway AI as an existential threat to our species.  A very popular book by James Barrat, ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’, spends 16 well-researched and intellectually rigorous chapters basically telling us that we are irredeemably doomed.  (I personally believe that a few massive conceptual leaps were taken in arriving at that conclusion, which I address below.)

So without minimizing the big-picture threat of ultimate annihilation at the hands of our robot overlords, it can be said that at the very least AI has developed a branding problem.

The popular media is not very good at applying reason when covering science topics.  (Which is ironic when you think about it.) Why settle for factual accuracy when good old-fashioned fear mongering will generate a lot more click-throughs?   Look no further than the shit-storm stirred up by the first fatal accident involving a Tesla in Autopilot mode.  Vanity Fair did a great job calling out the media on their abysmal coverage of that incident.  CEO Elon Musk was on the defensive, and rightfully so, but then again he is also on record as warning about the AI-pocalypse.  So in addressing the accident he took great pains to refer to the software behind Tesla Autopilot as ‘Narrow AI’.

‘Narrow AI’ is also referred to as ‘AI Lite’.  I think of it as a savant-like ability to absolutely dominate at one narrow task.  Think self-driving cars or facial recognition.  But is this really ‘intelligence’?  If one includes qualities such as self-awareness, understanding, creativity and the ability to plan as salient aspects of intelligence, then no.  Take self-driving cars. Can they learn?  Yes.  And they do all the time.  In fact, feedback from the entire fleet of Teslas using the Autopilot feature is quickly available to all the other cars.  The software is designed to teach itself and improve its algorithms.  The wider it is deployed, the better it gets.
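To make that loop concrete, here’s a toy sketch of what ‘fleet learning’ amounts to.  Everything in it is invented for illustration – Tesla’s actual pipeline is proprietary – but the shape of the system is the point: cars report outcomes, one shared model updates its parameters, and every car benefits from the update.

```python
import random

class SharedDrivingModel:
    """One weight per input feature; learns by simple online gradient descent."""

    def __init__(self, n_features, learning_rate=0.01):
        self.weights = [0.0] * n_features
        self.lr = learning_rate

    def predict(self, features):
        return sum(w * x for w, x in zip(self.weights, features))

    def update(self, features, observed_outcome):
        """Nudge the weights toward whatever actually happened on the road."""
        error = observed_outcome - self.predict(features)
        self.weights = [w + self.lr * error * x
                        for w, x in zip(self.weights, features)]

# The 'fleet': every car shares one model, so each report improves all cars.
model = SharedDrivingModel(n_features=3)
for _ in range(1000):
    telemetry = [random.uniform(-1, 1) for _ in range(3)]  # a sensor snapshot
    outcome = 0.5 * telemetry[0] - 0.2 * telemetry[2]      # what really happened
    model.update(telemetry, outcome)

print(model.weights)  # creeps toward [0.5, 0.0, -0.2]: objectively 'better'
```

The model gets measurably better with every report.  Hold that thought.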

But does it KNOW that it’s getting better?  Does it feel good about itself when your car successfully navigates that left turn?  After executing a perfect parallel parking job between a Lamborghini and a Harley Davidson festooned with a chrome skull – at night – in the rain, does it feel butt-hurt when you simply get out of the car and walk away as if that were NOTHING?

Let’s look at it this way:  On my way to a meeting a portion of my brain takes in feedback from my eyes, vestibular system and pressure sensors in my feet and runs them through a memory bank so that the upcoming obstacle – a flight of stairs – is instantly recognized.  Appropriate signals are then sent to various actuators which expand and contract a dizzying assortment of muscles with impeccable precision and timing, allowing for my successful navigation of the obstacle, all while balancing a cup of hot coffee in one hand and a phone in the other.

Most people wouldn’t witness a fellow human navigate a flight of stairs while carrying a cup of coffee and think, “Wow. That dude is intelligent!”

The fact is, dude was busy on the phone, and the amazing walking sub-system got him up the stairs without him even being aware of it.  A completely different system directed dude to the desired destination at the desired time (OK, probably a few minutes late), but the stair sub-system did its part without caring a whit about dude’s overall mission.   The stair navigation sub-system doesn’t know or care if this particular set of stairs represents the most efficient route to the destination; however, it is capable of recognizing that the stairs are wet, and then alters minute details of the actuation sequence and amplitude to compensate for the potential of degraded traction.
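If you squint, that stair sub-system is just a control loop.  Here’s a deliberately silly sketch of it – the names and numbers are mine, not neuroscience – to make the point that the destination is nowhere in its inputs:

```python
def navigate_stairs(step_count, steps_are_wet):
    """Climb one flight of stairs. Knows nothing about meetings or coffee."""
    # Compensate for degraded traction: shorter, slower steps.
    stride = 0.7 if steps_are_wet else 1.0   # relative stride length
    cadence = 0.8 if steps_are_wet else 1.0  # relative stepping speed
    for step in range(step_count):
        # Stand-in for the real sensor fusion and muscle actuation.
        print(f"step {step + 1}: stride={stride:.1f}, cadence={cadence:.1f}")
    return "top of stairs"

# Note what is NOT a parameter: the meeting, the schedule, dude's mission.
navigate_stairs(step_count=12, steps_are_wet=True)
```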

Much like the stair sub-system, the Autopilot of your self-driving car doesn’t concern itself with whether Arby’s is the optimal destination as far as your health is concerned, or whether you are even hungry or just clinically depressed.

So, is it an ‘intelligent system’?

That depends on your preferred definition of ‘intelligence’.  When it comes to Artificial Intelligence, we have come to include all sorts of machine-learning, deep-but-narrow smart systems under that umbrella.  It seems inevitable and perhaps irreversible that this should be the case.  However, the species-ending Artificial Intelligence warned about by Stephen Hawking and Elon Musk is a WHOLE different (artificial) animal.

How?  Here are a few things that Amazon’s Echo lacks, but true General AI would exhibit (in my opinion):

Self-awareness – it knows that it exists, and what its capabilities and available resources are.

Intentionality – it is goal seeking, and applies what it knows in creative ways to discern the best course of action.

Integration – its many (CRAZY many) systems of sensory input, logic, memory, categorization of data, etc. are very tightly integrated, with virtually countless connections between them, coordinated by systems devoted to managing and prioritizing those connections.

Number Five is Alive!

These three qualities bring us to that ineffable, indefinable state that (so far) separates us from machines.  Consciousness. This is where I believe many AI-pocalypse alarmists – especially in the popular media – make perhaps too big of a leap.  The first two qualities (self-awareness and intentionality) are dependent to a large degree on the third (integration).  But NOW we’re beginning to imply the arrival of ‘artificial consciousness’.  And what is consciousness anyway?

I am conscious (whatever that is) when all of my various systems are ‘on-line’ and synchronized.  A good knock to the head, too much tequila, general anesthetic and sleep are but a few of the many things that can take consciousness off-line.  We naturally call it being UN-conscious.  The systems all still work; my visual cortex is active during dream states, it’s just not connected to my eyeballs.  My heart and respiratory systems still get the message to operate as needed to maintain proper oxygen levels, etc.

However, during this state I have no ability to plan or plot or seek goals.  Even if I did, in the muddled randomness of dreamtime, my body is paralyzed and unable to act on them in a concrete manner.  In other words, none of the goals I have been ‘programmed’ to accomplish (sex, food, blog writing, absolute power and dominion over my environment) even exist for all practical purposes UNLESS the mysterious coordination of all systems is activated, and in a highly optimized manner, to create the state of being that we call ‘consciousness’.  When I am UNconscious, I am not dead.  I am still a remarkable, amazing miracle.  A biological machine of awesome capability.  But I have no agency, no ability to operate with purpose in the world.  No ‘self’. None. Blackness. Like before I was born and after I die.

So, back to autonomous driving.  Imagine a massive database of every road, intersection, and on- and off-ramp in the world hooked up to something with the neurological complexity of a cockroach.  Now add inputs (cameras, LIDAR, motion detectors, GPS), which are programmed to look for only one thing, to provide real-time positional information and ‘situational awareness’.  Hook it up to data inputs that constantly update the database with road-closure and weather information.   Such a system, interfaced with the very specifically designed control mechanisms of an electric vehicle, totally ROCKS at getting the car safely from point A to point B.  But could this ever morph into anything approaching consciousness or even ‘intelligence’ (as defined above)?
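Before answering, here’s roughly the shape of that system in toy form.  Every name and number below is hypothetical, and a real autonomy stack is vastly more complicated, but the architecture is the same: a big static map, live feeds that update it, and a planner that looks for exactly one thing.

```python
# 'Every road and intersection in the world' (well, two segments of it).
ROAD_DATABASE = {
    ("A", "B"): {"open": True, "speed_limit": 65},
    ("B", "C"): {"open": True, "speed_limit": 45},
}

def ingest_feeds(closures, weather):
    """Constantly update the database with road-closure and weather info."""
    for segment in closures:
        ROAD_DATABASE[segment]["open"] = False
    if weather == "rain":
        for info in ROAD_DATABASE.values():
            info["speed_limit"] = int(info["speed_limit"] * 0.8)

def plan_route(start, goal):
    """Chain together open road segments from start to goal -- nothing else."""
    route, here = [], start
    while here != goal:
        nxt = next((seg for seg in ROAD_DATABASE
                    if seg[0] == here and ROAD_DATABASE[seg]["open"]), None)
        if nxt is None:
            return None  # no safe path; there is no plan B beyond this
        route.append(nxt)
        here = nxt[1]
    return route

ingest_feeds(closures=[], weather="rain")
print(plan_route("A", "C"))  # [('A', 'B'), ('B', 'C')]: point A to point B,
                             # brilliantly, and absolutely nothing more
```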

“But what about self-learning?”

Well, yeah.  Glad you asked.  Your Autopilot has only the programming and the means to add to its road database and tweak the control algorithms of the car (i.e., get better at the thing it’s designed to do).  But it has NO imperative, ability or (importantly) neurology to decide for itself to (for instance) determine your tolerance for risk based on your past web searches, consult your personal schedule and thus realize that you are late, research local traffic enforcement data to determine the odds of you getting pulled over, and finally weigh that data against your previously determined risk tolerance before deciding to hit 93 MPH on an alternate route involving a long-abandoned logging road.
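To put that in code: here is the entire ‘surface area’ of a hypothetical Autopilot, reduced to a sketch.  Its whole world is these two methods; the scheming described above would require methods (and sensors, and motives) that simply are not there.

```python
class Autopilot:
    def __init__(self):
        self.road_database = {}                                  # what it has seen
        self.control_gains = {"steering": 1.0, "braking": 1.0}  # how it drives

    def add_road_observation(self, segment, observation):
        """The only 'learning' it has: grow the road database."""
        self.road_database.setdefault(segment, []).append(observation)

    def tweak_control_gains(self, adjustments):
        """The only self-modification it has: tune the driving controls."""
        for name, delta in adjustments.items():
            self.control_gains[name] += delta

    # There is no read_your_calendar(), no estimate_your_risk_tolerance(),
    # no research_traffic_enforcement(). Not disabled -- nonexistent.

ap = Autopilot()
ap.add_road_observation(("A", "B"), {"curve": 0.3, "wet": True})
ap.tweak_control_gains({"braking": -0.05})
```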

What, Me Worry?

Am I saying that AGI or ASI are not feasible prospects?  That well-funded organizations and governments are not actively working to achieve them right now?  That Stephen Hawking and Elon Musk are wrong to worry about the consequences of runaway super-intelligence?

No.  I’m not saying any of those things.  I am glad that it is being taken seriously by very smart people. I am just trying to inject a bit of perspective into the hyperbole.

I recently read an article over at Inverse.com about Werner Herzog’s documentary Lo and Behold, Reveries of the Connected World, in which Elon Musk warns about how even ‘benign AI’ could have ‘bad outcomes’.  The example he throws out goes something like this:  An AI is managing a private equity fund and, being programmed to maximize returns, figures out that it could do so by shorting consumer stocks and going long on defense stocks, and then using its connections to other AIs to start a war.

I’m going to go out on a limb here and quibble with Elon Musk.  I am loath to do so, as I hold him in high esteem as a singular human being who is doing nothing less than bending the arc of history in a good – perhaps planet saving – direction.  I am also fairly certain that he is way, way, way smarter than I am.

But here goes: The scenario he cites as an example of ‘benign AI’ leading to a ‘bad outcome’ is ACTUALLY an example of conscious, human level AI, or even super intelligence.

Let’s say that the ‘neurology’ of this AI has now been upgraded from cockroach to rodent level.  The programming at the root of its self-learning ‘black box’ algorithms doesn’t compel the AI to develop strategies to manipulate the market.  Rather, it is compelled to develop a database of historical market data, look for economic events that seem to correlate with this data, and then make use of this, in conjunction with real-time market data, to execute trades that are optimal given its predictions of market behavior.  To use fewer words, it’s a prediction machine.  It would continue to improve its ability to separate correlation from causation via continuous feedback, and it could tweak its own software in order to speed its order execution.  In short, it would logically act on what has already happened and on what is.   For this ‘tool’ to suddenly decide on its own that by manipulating external events it could maximize returns would be a HUGE leap in coordinated intelligence.  Our rodent brain only knows that it’s supposed to absorb current and historical information, get really, really good at predicting what will happen next, and act on it.  It has no motive, nor the ability to even have a motive, to do anything but that and to do it really, really well.
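Here’s that rodent-brained trader as a toy loop – all names and numbers invented, of course.  It ingests history, forms a crude belief about what comes next, and trades on it.  Note what its action space does NOT contain: ‘start a war’ is not in there, and the loop below is its entire universe.

```python
import random

class PredictionMachine:
    def __init__(self):
        self.history = []    # historical market data
        self.momentum = 0.0  # a crude learned belief about what happens next

    def observe(self, price):
        """Continuous feedback: fold each new price into the belief."""
        if self.history:
            self.momentum = (0.9 * self.momentum
                             + 0.1 * (price - self.history[-1]))
        self.history.append(price)

    def act(self):
        """Act on what has already happened and on what is."""
        if self.momentum > 0:
            return "buy"
        return "sell" if self.momentum < 0 else "hold"

machine = PredictionMachine()
price = 100.0
for _ in range(50):
    price += random.gauss(0.1, 1.0)  # a drifting, noisy market
    machine.observe(price)

print(machine.act())  # 'buy', 'sell', or 'hold': the complete action space
```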

To go beyond that would imply intentionality, self-awareness and integration (of systems that, by design, it lacks).  In short, consciousness!  Possible in the distant future? Perhaps.  But then we would no longer be talking about ‘benign AI’.  We would be talking about Human-Level Intelligence.  And if we’re talking about it acquiring the means to manipulate external events to achieve its self-evolved goals, we’d be talking about Super Intelligence.

Therefore, Elon Musk is WRONG!  In your FACE, Elon Musk!

OK, that was immature and uncalled for.  But then again, I’m only human.