
Dumb AI Will Kill Us Before Superintelligent AI Has A Chance

Humans have been fascinated with the idea of a robot revolution since we first conceived of robots. Long before Skynet and the Terminator were imagined in the early '80s (?!?), Asimov wrote of the Three Laws of Robotics, warning us of the potential for intelligent robots to do us harm. Recently, with the rapid expansion of deep learning neural nets in the last few years, we hear more and more about how an artificial "superintelligence" (i.e., intelligence beyond the human level) could wipe us out with as little effort as a human stepping on an ant. While I do think that superintelligence could be an apocalyptic threat to humanity if it were ever created without proper safety built in, I am actually far more concerned about what I like to call "dumb AI", which honestly could kill us right now if we're not careful.

What's "dumb AI"? Well, artificial intelligence, or AI, is a buzzword that spans many different technologies. The media tends to call any technology that does anything sufficiently complicated "artificial intelligence". In computer science circles it means something different, but for our purposes, let's go with the popular view of "technology that does something complicated that probably used to be done by a human" (I am aware that this is incredibly broad). General, human-level AI would be a system that is as intelligent as an average human being. Superintelligent AI would be a system that is more intelligent than a human, perhaps by orders of magnitude. So "dumb AI" is what we have now - systems that are good at isolated tasks, potentially even better at them than humans, but that have no general intelligence.

Autonomous cars are an example of "dumb AI" - they're good at the single task of driving, which they handle by combining data from different sensors such as lidar and cameras, building a model of their environment, and using that model to follow the road and avoid obstacles. A difficult task, for sure, but your Model S is never going to become "self-aware", because the only thing it knows is how to drive. It has no opinions on current events, it can't make you a coffee (but one day it will probably drive itself to Starbucks to pick up a coffee for you), and it wouldn't hold up very well in a Turing test. It seems really "smart" that it can drive, but it is designed specifically for this one task, and has no general intelligence.

Teslas are amazing machines, but let's return for a moment to the fictional world of Terminator. In the first Terminator, a super strong, intelligent cyborg takes a hop in a time machine back to 1984 and starts kicking ass and taking names. This scenario is unlikely in the near future, obviously because of the time machine, but also because we're pretty far away in both robotics* and artificial intelligence. It would be cool if text-to-speech systems could do an Ahnold accent, though.

Terminator 2 was even more unbelievable - liquid metal time traveling robots, somehow with enough distributed intelligence in the molecules of metal that you could blow the thing to pieces and it would magically come back together. We're nowhere close with nanotech, sorry.

Which brings us to Terminator 3. A quaint little movie from 2003. Check out the start of the war:


An engineer reports Skynet "processing at 60 teraflops a second" (a redundant unit - FLOPS are already per second - but let's roll with it). That figure was actually surpassed by a single supercomputer around 2004 (http://www.top500.org/), and distributed systems have far more combined processing power, although processing power by itself isn't enough to give rise to intelligence. More important than the depiction of Skynet's self-awareness, check out a scene from a few minutes later showing a killer drone called a "hunter killer":


In 2003, this was pretty fantastic to see. A drone with heavy firepower and automatic targeting! Holy shit!

Well, guess what. All the technology needed to build that exists right now. It might have been Hollywood effects then, but we have the real thing now. Quadcopter drones with precise maneuvering capabilities? Check. Automatic targeting systems? Check. Slap some guns on a beefed-up quadcopter and give it a relatively dumb computer vision system, and you basically have the hunter killer from Terminator 3. The main difference is that there's no superintelligent Skynet behind it.

But actually, that's precisely my point. The technologies needed to accidentally start global, world-ending conflicts are here - not tomorrow, not today, but yesterday. Check out this "automated turret" system from 2010, which South Korea wanted to put on the DMZ (I'm not sure if they ended up installing it): http://www.gizmag.com/korea-dodamm-super-aegis-autonomos-robot-gun-turret/17198/ Here's the breathless, videogame-like description from the article:

"The Super aEgis 2 is an automated gun tower that can find and lock on to a human-sized target in pitch darkness at a distance of up to 1.36 miles (2.2 kilometers). It uses a 35x zoom CCD camera with 'enhancement feature' for bad weather, in conjunction with a dual FOV, autofocus Infra-Red sensor, to pick out targets. 
Then it brings the pain, either with a standard 12.7mm caliber machine-gun, a 40mm automatic grenade launcher upgrade, or whatever other weapons system you want to bolt on to it, including surface-to-air missiles. A laser range finder helps to calibrate aim, and a gyroscopic stabilizer unit helps correct both the video system's aim and the direction of the guns after recoil pushes them off-target."

The BBC covered this company again last summer: http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep. Supposedly, the autonomous firing mode has been turned off, but that doesn't make this tech any less terrifying. First, it can simply be turned back on if requested by the client (apparently, mostly friendly Middle Eastern countries...). Second, the things have a friggin' network connection, and Korean companies are not very well-known for their internet security (sorry, every Korean bank). Can you imagine an automated turret with a .50 caliber machine gun being hacked?? Additionally, the tech for this is honestly not very advanced. It's an infrared motion sensor combined with a turret. It has no way to distinguish friend from foe - it just shoots at humans. You know, like in Terminator. The "senior research engineer" doesn't inspire much confidence, either, stating,
“Within a decade I think we will be able to computationally identify the type of enemy based on their uniform.”
First of all, talk about lowering expectations! I could build a very accurate classifier today, using open-source technology and my four-year-old laptop, that could reliably distinguish between South and North Korean soldiers based on uniform. Which is fine and dandy, but if slipping past an automated turret is as simple as stealing some South Korean army uniforms, then you can see why this is "dumb AI". What if this thing accidentally fires a volley of grenades over the DMZ due to a false positive in its vision system and starts WWIII?
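For the curious, here's roughly what I mean - a minimal sketch of a binary classifier built with scikit-learn. The "uniform color" features and class distributions below are completely invented for illustration; a real system would train a convolutional network on actual photos:

```python
# Toy sketch: a binary "uniform" classifier on synthetic color features.
# The feature distributions here are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each uniform photo reduces to a mean-color feature (R, G, B).
# Class 0: lighter olive-drab uniforms; class 1: darker green-brown ones.
n = 500
class0 = rng.normal(loc=[0.42, 0.45, 0.30], scale=0.05, size=(n, 3))
class1 = rng.normal(loc=[0.30, 0.35, 0.25], scale=0.05, size=(n, 3))
X = np.vstack([class0, class1])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point isn't that this toy model is good - it's that even a real version of it only ever sees pixels, so fooling it is exactly as hard as changing the pixels (say, by swapping uniforms).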

Actually, buggy vision systems nearly killed us decades ago. Here's just one declassified incident from the Cold War that we know about: https://en.wikipedia.org/wiki/Stanislav_Petrov. The tl;dr version is that in 1983, the Soviet nuclear early warning system erroneously detected a launch of ICBMs from the US. Petrov decided it was probably a mistake, and thus arguably prevented a "retaliatory" nuclear volley from the Soviets that would have turned the early '80s into a Mad Max apocalypse. The problem? The Soviet early warning satellites mistook sunlight glinting off high-altitude clouds for incoming American ICBMs. Whoops.
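False positives aren't a corner case in systems like these; they're a statistical certainty. A back-of-the-envelope calculation (the rates here are invented for illustration, not taken from any real system) shows how quickly even a "very accurate" detector cries wolf:

```python
# Back-of-the-envelope: why "99.9% accurate" detection isn't reassuring.
# The numbers below are illustrative, not from any real system.
false_positive_rate = 0.001   # 0.1% chance of a spurious "launch" per scan
scans_per_day = 24 * 60       # one scan of the horizon per minute

# Probability of at least one false alarm in a day of continuous scanning:
p_no_alarm = (1 - false_positive_rate) ** scans_per_day
p_at_least_one = 1 - p_no_alarm
print(f"P(false alarm in one day) = {p_at_least_one:.3f}")  # ~0.76
```

A 0.1% per-scan error rate sounds impressive until the system is scanning around the clock - then a false alarm becomes the expected daily outcome, and everything hinges on a human like Petrov being in the loop.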

So, while most people are worrying about their IoT coffee makers becoming self-aware and inciting grey goo scenarios, I'm worried about the tech we already have. Drones are getting better and better, cheaper and cheaper. Right now, a motivated individual could build a Terminator 3-style hunter-killer out of a modified quadcopter, some light guns, and off-the-shelf vision systems. On the other end of the spectrum, "real" drones such as the Predator can already run autonomously without too much difficulty, and those pack serious firepower. Auto-targeting turrets sit on the DMZ and other high-tension borders. Increasing autonomy is being built into other parts of the military and its supply chain. Autonomous systems act on requests from autonomous satellites. All of this "intelligence" rests on imperfect vision systems. We're putting way too much power in the hands of systems that are extremely brittle, and the consequences of a mistake could be devastating.

Artificial intelligence is going to change the world in fundamental ways. We can't even imagine how transformative some of these changes are going to be, and the next few decades are going to bring scientific and technological advances that were considered science fiction not long ago. I just hope dumb AI doesn't accidentally kill us before we have a chance to see it.

--------

* Of course, while I was writing this, Boston Dynamics released a new video of their humanoid robot, Atlas.


Slap some guns on it, along with today's best facial recognition system, and you basically have the T-1. Give it a network connection and some goals (i.e., kill humans), and maybe we're not so far from the Terminator vision after all.
