What can I do?

There’s a quote from An Inconvenient Truth that stuck with me: “There are a lot of people who go straight from denial to despair.”1 You can have an impact. Maybe it doesn’t feel like you can have a “meaningful” impact; maybe it feels like your impact wouldn’t be enough. Don’t worry. No one is asking you to do anything all on your own.

I see three avenues toward helping prevent human extinction from AI.

1.) Raise Consciousness

2.) Earn to Give

3.) Direct Impact (Research and Engineering)

What follows is a list of ideas that have occurred to me. Obviously, it's not exhaustive.


1.) Raise Consciousness

Low Effort:

  • Share this link. (I know, I know, this is slimy and memetic, and I’m truly sorry about that. But I would be remiss if I didn’t mention it.)
  • Email it to your parents or your children.
  • You know that Facebook friend of yours who always gets hundreds of likes even when his posts aren’t anything special? Get him to read this, and maybe he’ll share it.
  • Talk about it with people. How old-fashioned.

High Effort:

  • Write an article/book/screenplay/TV series. (I did warn you these were high effort.) If you go this route, be sure to look at these blog posts.
  • Found an organization that recruits math, CS, and philosophy Ph.D. students to this field.
  • Found an extracurricular program for K-12 students that introduces them to effective altruism, of which extinction risk might be one of a few focus areas. (I don’t think you could justify the whole program focusing on extinction risk). Or along similar lines, get involved with SHIC Schools.
  • Create an umbrella organization for collegiate student organizations to help recruit AI Safety researchers.
  • Fundraise. (I’ll get into specific organizations in a moment.)
  • Think of people you know who could do one of these things, and talk to them.

2.) Earn to Give

[Image: Vishnu. Caption: “This will become clear in a moment.”]

Earning to give requires a bit of an introduction. Suppose a lawyer wants to feel like he has helped the homeless. One thing he could do is go to a soup kitchen for an hour. Now suppose he wants to actually help the homeless. What if, instead of going to the soup kitchen himself, he worked an extra hour at his job and donated the extra money he made? With that money, the soup kitchen could hire someone else to work for tens of hours: if the lawyer bills, say, $300 an hour and the kitchen pays, say, $15 an hour, his one donated hour funds twenty hours of someone else’s time. If the lawyer does this, then in the same amount of time, he has helped many more people.

Earning to give, sadly, puts our desire to help others at odds with our desire to have the experience of directly helping others. Some psychological tricks might allow for a win-win here. Maybe our putative lawyer could spend that hour working at the office reminding himself that it’s like he’s Vishnu serving soup at the soup kitchen… because it’s like he’s serving many more people… Get it? Vishnu has lots of arms? Probably good at serving soup? *Sigh* Never mind. Let’s just enjoy the thought of Vishnu having found his calling and ladling soup like there’s no tomorrow.

If your goal is to forestall human extinction, then for most people reading this, the most effective thing you can do is probably to earn money to help pay the salaries of people doing the other things on this list. I know that giving away money can feel like losing hit points. I have been tempted by the feeling: “Really, it’s people richer than me who can afford serious philanthropy.”

And yet, if you donate, you can know that at least you’ve done something. I have a friend named Ben who’s living that starving-artist life, and he still donates to MIRI. You don’t have to make a lot to give what you can.

Where to donate? I’d go with MIRI or FHI. You might also consider CFAR: they’re trying to expand the pipeline of AI Safety researchers. Personally, as of 2016, I donate to MIRI.

Depending on the dollar amount, this approach could be low effort or high effort.

3.) Direct Impact

Low Effort:

  • Error 404. Sorry, I couldn’t come up with any of these.

High Effort:

  • Research any of the open problems in AI Safety at MIRI or FHI. For MIRI in particular, if you don’t think you have the qualifications to do research, but you like math, check this out. And this is, I think, the most compelling research agenda put forward so far; it outlines which research questions we should pursue today.
    • FHI Research Areas
      • Macrostrategy
      • AI Safety
      • Technology Forecasting and Risk Assessment
      • Policy and Industry
    • MIRI Research Areas
      • Realistic World-Models
      • Logical Uncertainty
      • Error Tolerance
      • Value Specification
  • Do similar research at a university (probably within a computer science department, or potentially a philosophy department).
  • Work for OpenAI or DeepMind, while keeping up to speed with AI safety research.
  • 80,000 Hours wrote an article that has some more good ideas.

CFAR (the organization I mentioned that’s trying to expand the pipeline of AI Safety researchers) has a link on their website where you can sign up for a 20-minute conversation. They’d be excellent at brainstorming with you about how you could help in a way that’s a good fit for you. They’d certainly be much more helpful than this unpersonalized list I’ve made.

If you are considering doing research, check out 80,000 Hours’ Career Review for AI Safety research as well as their AI Safety Syllabus. Both of these resources have been very helpful for me.

Also, and I mean this, you can contact me. Maybe I’ll have some ideas for you depending on where you are in life. Right now, about one person a day visits this page on average. If everyone contacted me, I think I could handle that.

I recently read a post that quotes AI Safety researcher Andrew Critch’s advice here.2

[Andrew:] “If you have three years of runway saved up, quit your job and use the money to fund yourself. Study the AI landscape full-time. Figure out what to do. Do it.”

This felt a little extreme.

Part of that extremity is softened by various caveats:

  • “Three years of runway” means comfortable runway, not “you can technically live off of ramen noodles” runway.
  • This requires you to already be the sort of person who can do self-directed study with open ended, ambiguous goals.
  • This requires you to, in an important sense, know how to think.
  • This makes most sense if you’re not in the middle of plans that seem comparably important.
  • The core underlying idea is more like “it’s more important to invest in your ability to think, learn and do, than to donate your last spare dollar”, rather than the specific conclusion “quit your job to study full-time.”

But… the other part of it is simply…

If you actually think the world might be ending or forever changing in your lifetime – whether in ten years, or fifty…

…maybe you should be taking actions that feel extreme?

I almost forgot to mention: thanks for your interest in helping out. Or at least being interested enough to click the link to this page. I was hoping you’d end up here. Thanks for reading!

Appendix A:

Some extra ideas for college students:

  • Create a college organization or club whose mission is to expand the pipeline of people working on AI Safety.
  • Try out some computer science, math, and philosophy classes, and see if you like them. If so, take lots.



  1.  Guggenheim, D. (Director). (2006). An inconvenient truth: A global warning. Hollywood: Paramount. 
  2.  https://www.lesserwrong.com/posts/HnC29723hm6kJT7KP/critch-on-taking-ai-risk-seriously 