Humans have been fascinated with the idea of a robot revolution since we first conceived of robots. Long before Skynet and the Terminator hit the screen in 1984, Asimov wrote of the Three Laws of Robotics, warning us of the potential for intelligent robots to do us harm. More recently, with the rapid expansion of deep learning in the last few years, we hear more and more about how an artificial "superintelligence" (i.e., intelligence beyond the human level) could wipe us out as effortlessly as a human stepping on an ant. While I do think that superintelligence could be an apocalyptic threat to humanity if it were ever created without proper safety built in, I am actually far more concerned about what I like to call "dumb AI", which honestly could kill us right now if we're not careful.
What's "dumb AI"? Well, artificial intelligence, or AI, is a buzzword that spans many different technologies. The media tends to call any technology that does anything sufficiently complicated "artificial intelligence". In computer science circles, it means something different, but for our purposes, let's go with the popular view of "technology that does something complicated that probably used to be done by a human" (I am aware that this is incredibly broad). General human level AI would be a system that is as intelligent as an average human being. Superintelligent AI would be a system that is more intelligent than a human, perhaps by orders of magnitude. So "dumb AI" is what we have now - systems that are good at isolated tasks, potentially even better than humans, but that have no general intelligence.
Autonomous cars are an example of "dumb AI". They're good at the single task of driving, which they handle by fusing data from different sensors, such as lidar and cameras, building a model of their environment, and using that model to follow the road and avoid obstacles. A difficult task, for sure, but your Model S is never going to become "self-aware", because the only thing it knows how to do is drive. It has no opinions on current events, it can't make you a coffee (though one day it will probably drive itself to Starbucks to pick one up for you), and it wouldn't hold up very well in a Turing test. It seems really "smart" because it can drive, but it is designed specifically for that one task and has no general intelligence.
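To make "building a model of the environment" concrete, here's a toy sketch in Python of the simplest possible version: folding simulated lidar readings into a 2D occupancy grid and checking whether the path ahead is clear. Everything here, from the grid resolution to the function names, is invented for illustration; real driving stacks are vastly more sophisticated.

```python
# Toy world model: fold simulated lidar returns into a 2D occupancy grid.
import math
import numpy as np

GRID_SIZE = 100      # grid cells per side
CELL_METERS = 0.5    # each cell covers 0.5m x 0.5m
# 0 = free space; higher values = more evidence of an obstacle
grid = np.zeros((GRID_SIZE, GRID_SIZE))

def integrate_lidar(readings, car_x=25.0, car_y=0.0):
    """Mark cells hit by lidar returns. `readings` is a list of
    (angle_radians, distance_meters) pairs, a stand-in for a real scan."""
    for angle, dist in readings:
        hit_x = car_x + dist * math.cos(angle)
        hit_y = car_y + dist * math.sin(angle)
        row, col = int(hit_y / CELL_METERS), int(hit_x / CELL_METERS)
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row, col] += 1.0  # accumulate obstacle evidence

def path_ahead_clear(car_x=25.0, car_y=0.0, lookahead=20.0):
    """Return True if no cell directly ahead of the car shows an obstacle."""
    col = int(car_x / CELL_METERS)
    start = int(car_y / CELL_METERS)
    end = min(GRID_SIZE, start + int(lookahead / CELL_METERS))
    return not grid[start:end, col].any()

# One simulated scan: a single return from an obstacle 10m dead ahead.
integrate_lidar([(math.pi / 2, 10.0)])
print(path_ahead_clear())  # False -> time to brake
```

A real system fuses many sensors probabilistically and updates its world model many times per second, but the basic loop (sense, update the model, act on the model) is the same, and nothing in it resembles general intelligence.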
Teslas are amazing machines, but let's return for a moment to the fictional world of Terminator. In the first Terminator, a super-strong, intelligent cyborg hops in a time machine back to 1984 and starts kicking ass and taking names. This scenario is unlikely in the near future, obviously because of the time machine, but also because we're pretty far away in both robotics* and artificial intelligence. It would be cool if text-to-speech systems could do an Ahnold accent, though.
Terminator 2 was even more unbelievable: liquid-metal, time-traveling robots, somehow with enough distributed intelligence in their individual molecules that you could blow the thing to pieces and it would magically reassemble itself. We're nowhere close with nanotech, sorry.
Which brings us to Terminator 3. A quaint little movie from 2003. Check out the start of the war:
An engineer reports Skynet "processing at 60 teraflops a second" (never mind that "flops" already means operations per second). That figure was surpassed by a single supercomputer, IBM's Blue Gene/L, by late 2004 (http://www.top500.org/), and distributed systems have far more combined processing power, although processing power by itself isn't enough to produce intelligence. More interesting than the depiction of Skynet's self-awareness is a scene from a few minutes later showing a killer drone called a "hunter killer":
In 2003, this was pretty fantastic to see. A drone with heavy firepower and automatic targeting! Holy shit!
Well, guess what: all the technology needed to build that exists right now. It might have been Hollywood effects then, but we have the real thing now. Quadcopter drones with precise maneuvering capabilities? Check. Automatic targeting systems? Check. Slap some guns on a beefed-up quadcopter and give it a relatively dumb computer vision system, and you basically have the hunter killer from Terminator 3. The main difference is that there's no superintelligent Skynet behind it.
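To underline just how off-the-shelf the vision piece is: classical person detection ships with OpenCV, and the whole pipeline fits in a few lines of Python. This is the standard HOG pedestrian detector, nothing exotic, and the image filename is just a placeholder:

```python
# Off-the-shelf person detection using OpenCV's built-in HOG+SVM
# pedestrian detector. "frame.jpg" is a placeholder for any image.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")
# Returns one (x, y, w, h) bounding box per detected person, plus
# a confidence weight for each.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
print("people detected:", len(boxes))
```

The point isn't that this snippet is dangerous by itself; it's that perception stopped being the hard part of such a system years ago.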
But actually, that's precisely my point. The technologies needed to accidentally start a world-ending conflict are here: not tomorrow, not today, but yesterday. Check out this "automated turret" system from 2010, which South Korea wanted to put on the DMZ (I'm not sure if it was ever installed): http://www.gizmag.com/korea-dodamm-super-aegis-autonomos-robot-gun-turret/17198/ Here's the breathless, videogame-like description from the article:
"The Super aEgis 2 is an automated gun tower that can find and lock on to a human-sized target in pitch darkness at a distance of up to 1.36 miles (2.2 kilometers). It uses a 35x zoom CCD camera with 'enhancement feature' for bad weather, in conjunction with a dual FOV, autofocus Infra-Red sensor, to pick out targets.
Then it brings the pain, either with a standard 12.7mm caliber machine-gun, a 40mm automatic grenade launcher upgrade, or whatever other weapons system you want to bolt on to it, including surface-to-air missiles. A laser range finder helps to calibrate aim, and a gyroscopic stabilizer unit helps correct both the video system's aim and the direction of the guns after recoil pushes them off-target."
The BBC covered this company again in the summer of 2015: http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep. Supposedly, the autonomous firing mode has been turned off, but that doesn't make this tech any less terrifying. First, it can simply be turned back on if a client requests it (apparently, mostly friendly Middle Eastern countries...). Second, the things have a friggin' network connection, and Korean companies are not exactly famous for their internet security (sorry, every Korean bank). Can you imagine an automated turret with a .50 caliber machine gun being hacked? Additionally, the tech behind this is honestly not very advanced: it's essentially an infrared sensor and a camera bolted to a turret. It has no way to distinguish friend from foe; it just shoots at humans. You know, like in Terminator. The "senior research engineer" doesn't inspire much confidence either, stating:
“Within a decade I think we will be able to computationally identify the type of enemy based on their uniform.”
First of all, talk about lowering expectations! I could build a very accurate classifier today, using open-source technology and my four-year-old laptop, that could reliably distinguish between South and North Korean soldiers based on their uniforms. That's fine and dandy, but if slipping past an automated turret is as simple as stealing some South Korean army uniforms, then you can see why this is "dumb AI". And what if this thing fires a volley of grenades over the DMZ because of a false positive in its vision system and starts WWIII?
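To back up that claim, here's roughly what such a classifier looks like: a minimal transfer-learning sketch in PyTorch that retrains only the final layer of an ImageNet-pre-trained network. The data/train directory (one subfolder of labeled example photos per class) is a hypothetical assumption, and none of the hyperparameters are special:

```python
# A minimal "uniform classifier" sketch: transfer learning with PyTorch.
# Assumes a hypothetical data/train/<class_name>/*.jpg folder layout.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Freeze an ImageNet-pre-trained network and retrain only the last layer.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Transfer learning from a pre-trained network is exactly why the "within a decade" estimate is so underwhelming: a weekend and a folder of labeled photos gets you most of the way there. Which, again, is the problem.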
Actually, buggy vision systems nearly killed us decades ago. Here's just one declassified incident from the Cold War that we know about: https://en.wikipedia.org/wiki/Stanislav_Petrov. The tl;dr version: in 1983, the Soviet nuclear early warning system erroneously reported a launch of ICBMs from the US. Petrov judged it to be a false alarm, and thereby arguably prevented a "retaliatory" nuclear volley from the Soviets, which would have turned the early 80's into a Mad Max apocalypse. The problem? A Soviet early warning satellite had mistaken sunlight reflecting off high-altitude clouds for the exhaust plumes of incoming American ICBMs. Whoops.
So, while most people are worrying about their IoT coffee makers becoming self-aware and kicking off a grey goo scenario, I'm worried about the tech we already have. Drones are getting better and better, cheaper and cheaper. Right now, a motivated individual could build a Terminator 3-style hunter-killer from a modified quadcopter, some light guns, and off-the-shelf vision systems. On the other end of the spectrum, "real" drones such as the Predator can run autonomously without too much difficulty, and those pack serious firepower. Auto-targeting turrets on the DMZ and in other high-tension areas. Increasing autonomy built into other parts of the military and its supply chain. Autonomous systems acting on requests from autonomous satellites. Intelligence based on imperfect vision systems. We're putting far too much power in the hands of systems that are fundamentally brittle, and the consequences of a mistake could be devastating.
Artificial intelligence is going to change the world in fundamental ways. We can't even imagine how transformative some of these changes are going to be, and the next few decades are going to bring scientific and technological advances that were considered science fiction not long ago. I just hope dumb AI doesn't accidentally kill us before we have a chance to see it.
--------
* Of course, while I was writing this, Boston Dynamics released a new video of their humanoid Atlas robot.
Slap some guns on it, along with today's best facial recognition system, and you basically have the T-1. Give it a network connection and some goals (e.g., kill humans), and maybe we're not so far away from the Terminator vision after all.