Someday, someone will create a human-level machine intelligence. Even if every country in the world called a halt to all AI research, it would come out of someone's basement. We have to do our best to make sure that the first superintelligence protects human life and our diverse values.
To say, “Who am I to create such a thing?” seems humble, cautious, and wise. But it is a reckless kind of caution. If all the humble, cautious, and wise people thought like that, it would guarantee that the maker of the first superintelligence was arrogant, reckless, or foolish. On the other hand, if the people who are seriously concerned about extinction are the ones who build human-level machine intelligence, I think we are much likelier to survive.
Please, please, please don’t try to convince people not to create human-level AI. The only people who might listen are those already concerned about extinction, so if you succeed in convincing anyone to leave the field, the fraction of active AI researchers who are concerned about extinction will only decrease.