Frequently Asked Questions

What is existential risk?

An existential risk is a risk that threatens the destruction of humanity’s longterm potential.

The most obvious possibility is the extinction of humanity. But there are others too—where humanity survives for a long time but is locked into a terrible state. For example, a catastrophe might stop short of extinction, but cause an irrevocable collapse of civilisation, reducing humanity to a pre-agricultural state from which there would be no possibility of recovery. Or a totalitarian regime with advanced surveillance might be able to subjugate all of humanity so completely that the regime could maintain itself indefinitely.

What catastrophes like these have in common is that they would destroy not only our present, but our entire future—everything humanity could ever achieve or become. They thus have uniquely high stakes. And they pose unique challenges: because we cannot afford to suffer even a single such catastrophe, we cannot muddle through with trial and error, but must be proactive. Only then could we survive and flourish for countless generations to come without even once succumbing to such a risk.

What is the Precipice?

The Precipice is the era in which we live—where existential risk is unsustainably high. I date the start of the Precipice to the detonation of the first atomic bomb, in 1945, when humanity first gained the power to threaten its own destruction. The Precipice cannot last too long, since we cannot survive too many centuries with risk this high—either we get our act together and reduce existential risk to a sustainable level, or we will succumb to the accumulating risk.
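
To make the arithmetic concrete, here is a rough sketch (assuming, purely for illustration, that a 1-in-6 per-century risk stays constant and independent from century to century, which the estimate itself does not claim):

```python
# Rough sketch: cumulative survival under a constant per-century
# existential risk. Holding the 1-in-6 estimate fixed across
# centuries is a simplifying assumption, for illustration only.
risk_per_century = 1 / 6

for centuries in (1, 5, 10, 20):
    survival = (1 - risk_per_century) ** centuries
    print(f"{centuries:>2} centuries: {survival:.1%} chance of never succumbing")

# Prints roughly 83%, 40%, 16%, 3%: risk this high compounds
# quickly, which is why the Precipice cannot last indefinitely.
```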

What is longtermism?

Longtermism is an ethical view that is especially concerned with the impacts of our actions on the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity’s potential is one avenue for such a lasting impact, and there may be others too. The case for safeguarding humanity’s future draws support from a wide range of ethical views, of which longtermism is just one.

How high is existential risk?

No-one can say for sure. My best guess is that humanity faces a one-in-six chance of existential catastrophe over the next century, but this is not intended to be the last word. It represents my current degree of belief based on everything I know, and should be taken as a ballpark estimate. I do not expect everyone to agree with my assessment—some of my colleagues think it is much higher, and some much lower. I share my own estimate because I believe it is important to put numbers to your beliefs, especially on a subject as important as existential risk.

What are the biggest risks?

While there is real existential risk from natural threats, such as asteroids, comets, and supervolcanic eruptions, we can use the fossil record to put an upper bound on how high this can be. Because humanity has already survived for 200,000 years, typical species survive for about a million years, and mass extinctions occur only about once in 100 million years, natural risk cannot be very high. In my view, our best guess is that all natural risks together amount to about a 1 in 10,000 chance of existential catastrophe per century.
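
A back-of-the-envelope calculation shows how the survival record bounds this figure (again a sketch, assuming a constant and independent risk per century purely for illustration):

```python
# Sketch of the fossil-record bound: if the natural extinction risk
# per century were r, the chance of humanity surviving its ~2,000
# centuries so far would be (1 - r) ** 2000.
centuries_survived = 2_000  # roughly 200,000 years

for r in (1 / 100, 1 / 1_000, 1 / 10_000):
    p = (1 - r) ** centuries_survived
    print(f"risk of 1 in {round(1 / r):>6} per century -> "
          f"{p:.1%} chance of surviving to the present")

# A 1-in-100 risk would make our survival astronomically unlikely,
# while 1 in 10,000 fits comfortably (about 82%), which is why
# natural risk cannot be much above that level.
```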

Anthropogenic (human-created) risks are much harder to estimate (or to bound). However, it would seem optimistic to be more than 99.99% confident that we will make it through the next 100 years, or to expect we could make it through 10,000 centuries like the 20th or 21st. Anthropogenic risk would thus appear to be higher than natural risk. In my estimation, the risks posed by current threats such as nuclear weapons, climate change, and other environmental damage are each higher than all natural risks combined, and the near-future risks from engineered pandemics and unaligned artificial intelligence are higher still.
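
The two framings above (99.99% confidence per century, and 10,000 centuries) are two sides of the same calculation, as a quick sketch shows (same illustrative constant-risk assumption):

```python
# Sketch: 99.99% confidence per century is a 1-in-10,000 risk per
# century. Sustained over 10,000 centuries, it still leaves survival
# far from assured (constant, independent risk assumed for illustration).
confidence_per_century = 0.9999

p_survive = confidence_per_century ** 10_000
print(f"Chance of surviving 10,000 such centuries: {p_survive:.0%}")
# Prints about 37% (roughly 1/e), so even 99.99% per-century
# confidence is optimistic over that many centuries.
```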

Why treat all these risks together?

In some cases, it is entirely reasonable that scholars and activists focus mainly on one particular risk, with only limited awareness of the wider risk landscape. But there are many properties shared by all existential risks that make it useful to view them together. They are intergenerational global public goods, which means we should expect them to be systematically underfunded by the market and by national governments, so we need to develop methods to overcome this. They are challenges where we cannot afford to fail even once, requiring special approaches to policy making. And they are events that are by definition unprecedented, creating special difficulties in estimating their likelihood and in motivating the public to care.

Some individual existential risks have received substantial attention and effort. Forty years after the development of nuclear weapons, there was a massive grassroots movement to abolish them—in large part because of their threat to our entire future. But this attention faded away with the end of the Cold War. There is now a similar grassroots movement around climate change, but it also took decades to develop. We may not have decades to raise awareness of the next threat, but we could move much more quickly if people rallied around ending threats to humanity as a whole, rather than needing to develop an entirely separate movement for each one.

What’s so special about humanity?

The definition of existential risk concerns humanity’s longterm potential. But it is not supposed to imply that humanity is the only thing of value: it may be that most of our potential lies in what we can do to protect and preserve other life, or the environment. Instead, the focus on humanity is because humans are the only beings we know of that are responsive to moral reasons and moral argument—the only beings who can examine the world and decide to do what is best. If we humans fail, then that upward force, that capacity to push toward what is best or what is just, will vanish from the world.

Isn’t it more important, and more urgent, to help people here and now?

Both are important and both are urgent. When it comes to many of their challenges, people of the future can help themselves. But just as the present generation are the only people in a position to help those suffering now, so we are the only people in a position to help prevent the existential risks of our time. And because it looks like we are in a rare or unique period of heightened existential risk (the Precipice), this may be one of the only opportunities there will ever be to play such a large role in securing humanity’s future.

But for most purposes there is no need to debate which of these noble tasks is the most important—the key point is just that safeguarding humanity’s longterm potential is up there among the very most important priorities of our time.

Are you saying climate change isn’t important?

Not at all. Even setting aside the most catastrophic possibilities, climate change is expected to result in immense suffering that disproportionately affects the world’s most disadvantaged people. This is enough to make tackling it one of the most pressing moral issues of our time. The fact that climate change could pose an existential risk to humanity further strengthens the case for making it a global priority.

I hear about existential risk all the time—is it really so neglected?

While existential risks are mentioned frequently, it is often in a very shallow way, such as in an action film, or an article designed more to entertain than to prompt serious reflection. So there is a lot of idle talk, but very little real work. If you look at humanity’s spending on existential risk, it is clear that it is not currently a global priority. Consider the possibility of engineered pandemics—often considered one of the largest existential risks this century. The international body responsible for the continued prohibition of bioweapons (the Biological Weapons Convention) has an annual budget of just $1.4 million—less than that of the average McDonald’s restaurant. And while it is difficult to measure global spending on existential risk precisely, we can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us.

What kinds of things can people do to help?

We can all play a role in safeguarding humanity’s future. Two of the most important ways individuals can make a difference are through our careers, and through our charitable donations. 80,000 Hours offers a wealth of free resources on how you can use your career to solve the world’s most pressing problems, including existential risk. Giving What We Can is a community of individuals pledging to donate at least 10% of their incomes to the most effective charities, including those focused on protecting our longterm potential.

And one way in which we can all help is by starting a public conversation about the longterm future of humanity: the breathtaking scale of what we can achieve, and the risks that threaten it all. We need this to be a mature, responsible, and constructive conversation: one focused on understanding problems and finding solutions. We can discuss the importance of the future with the people in our lives who matter to us. We can engage with the growing community of people who are thinking along similar lines: where we live, where we work or study, or online. And we can strive to be informed, responsible and vigilant citizens, staying abreast of the issues and urging our political representatives to take action when important opportunities arise.

For more resources see the page What You Can Do.

Is it depressing working on existential risk all day?

Not in my experience. I find the sheer potential of humanity—the best futures that are in our power to create—very motivating. And it makes me strive to protect humanity’s potential through this most challenging time, in order that our descendants may have a chance to fulfil this potential. I am also moved by the long history of humanity—by the ten thousand generations of humans who came before us and created our wealth of knowledge, culture, and technology. By no means was everyone in the past an angel, but there was a strand of work and thought across deep time that strove towards the continued improvement of humanity. I find dedicating oneself to this project to be a very meaningful way to engage and cooperate with fellow humans across the ages.

Does this have anything to do with existentialist philosophy?

Not much. The terms share a common root—relating to ‘existence’—but have few similarities beyond this.