The most common source of reasonable disagreement tends to be about how fast the “takeoff” will be, that is, how long it will take to go from human-level intelligence to superintelligence. People tend to be more concerned about extinction from AI if they think there will be a fast takeoff, whereas if they think there will be a slow takeoff, they sometimes conclude that we’ll have time to figure out how to control an intelligence after we’ve made it.
Why can we expect a fast takeoff with high confidence? To start with, there’s a lot of low-hanging fruit for an AI with human-level intelligence: it can run much faster than a human and use far more processing power. This makes artificial intelligence scalable in a way that human intelligence is not. An AI could think like a mentally synced team of 1000, and it could also run much faster: an AI running for 30 minutes could do the work of a 1000-member team working for 10 years. And this is before taking into account the actual improvements in intelligence that we would expect.
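To make the scale of that claim concrete, here is the back-of-the-envelope arithmetic. The work-year length and the framing (1000 parallel copies, each sped up) are illustrative assumptions, not measurements of any real system:

```python
# Illustrative arithmetic only; all figures are assumptions from the
# thought experiment in the text, not properties of a real system.

HOURS_PER_WORK_YEAR = 2000   # assumed full-time human working year
team_size = 1000             # the "mentally synced team of 1000"
ai_runtime_hours = 0.5       # the 30-minute run
target_years = 10            # "working for 10 years"

# Total human effort being matched, in person-hours:
person_hours = team_size * target_years * HOURS_PER_WORK_YEAR

# If the AI runs team_size parallel copies, each copy must compress
# 10 work-years into half an hour of wall-clock time:
required_speedup = (target_years * HOURS_PER_WORK_YEAR) / ai_runtime_hours

print(f"{person_hours:,} person-hours total")            # 20,000,000
print(f"{required_speedup:,.0f}x human speed per copy")  # 40,000x
```

So the scenario only requires each copy to run at roughly 40,000× human speed, which is modest next to the gap between neuron firing rates and transistor switching speeds.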
I should mention that I do assign a small probability to a slow takeoff (i.e., one taking longer than a few weeks), but I don’t think it’s likely. And I would need 99% certainty of a slow takeoff before I could begin to feel justified in complacency.
I think that, at the end of the day, many people aren’t concerned because the argument sounds too much like fiction (if they’ve heard it at all), and because considerations like this are so divorced from their day-to-day lives.
Lastly, professional disagreement among AI researchers has been overstated by many journalists.1 If you compare AI researchers who are “skeptical” of the value of AI safety research with those who “believe” in it:
The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.
The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.
Popular articles would lead you to believe that many AI researchers are completely unconcerned about extinction risk, and that AI safety research is useless. In reality, this is simply not true; see the footnoted article for more details.
1. http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/ ↩