So far, I’ve been focusing on how hard it is to create AI safely even if you’re very committed to safe AI. I hate to break it to you, but it’s possible that the people who end up making it won’t even be trying that hard to make it safe.
Also, if it ever became apparent that human-level AI is close, there would probably be some sort of arms race, with multiple entities trying very hard to be the first to build superintelligence. Those are not the sorts of conditions that promote caution, deliberation, and quadruple-checking every update.
Narrow AI will continue to do more and more wonderful, amazing things, and the people who speak up more and more about the dangers will be seen as out of touch with the data. “Every time new and promising software comes out, the alarmists paint their apocalyptic pictures about how AI will destroy the world, and every time, they’re wrong.” Well, duh! Things will keep looking better and better until suddenly they don’t. But in any case, there’s a risk that caution will get a bad reputation.
These are a few of the strategic concerns people have. They are not technical problems; they are social ones.