If our only concern was an evil or selfish person gaining control of a superintelligence, this would be a reasonable safety measure. But the extinction threat that I’m discussing right now is that most uncaged AIs would kill everything, completely irrespective of the beneficence of the maker. Making lots of uncaged AIs that would kill everything does not seem to solve that problem. At least one of them would have to defend human civilization, at which point creating the others hardly seems like a safety measure.
Making lots of AIs might be a good idea, but it shouldn’t be seen as a solution to this particular problem.