The Future of Humanity Institute’s mission is to bring excellent scholarship to bear on big-picture questions for humanity. We seek to focus our work where we can make the greatest positive difference. This means we pursue questions that are (a) critically important for humanity’s future, (b) unduly neglected, and (c) for which we have some idea for how to obtain an answer or at least some useful new insight. Through this work, we foster more reflective and responsible ways of dealing with humanity’s biggest challenges.

Our work spans four programmatic areas:

Macrostrategy

It is easy to lose track of the big picture. Yet if we want to intervene in the world and we care about the long-term consequences of our actions, we can't help but place bets on how our local actions will affect the complicated dynamics that shape the future. We therefore think it is valuable to develop analytic tools and insights that clarify our understanding of the macrostrategic context for humanity.

A significant interest of ours is existential risk: risk of an adverse outcome that would either end Earth-originating intelligent life or drastically and permanently curtail its potential for realizing a valuable future. Interventions that promise to reduce the integral of existential risk even slightly may be good candidates for actions with very high expected value.

Our work on macrostrategy involves forays into deep issues in several fields, including detailed analysis of future technology capabilities and impacts, existential risk assessment, anthropics, population ethics, human enhancement ethics, game theory, consideration of the Fermi paradox, and other indirect arguments. Many core concepts and techniques in macrostrategy have been originated by FHI scholars; they are already having a practical impact, such as in the effective altruism movement.

Featured Macrostrategy Publications

Underprotection of unpredictable statistical lives compared to predictable ones

Existing ethical discussion considers the differences in care for identified versus statistical lives. However, there has been little attention to the different degrees of care that are taken for different kinds of statistical lives. Read more.

Strategic implications of openness in AI development

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Read more >>

Superintelligence: paths, dangers, strategies

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. Read more >>

The unilateralist's curse: the case for a principle of conformity

This article considers groups of agents, each motivated purely by an altruistic concern for the common good and each able to unilaterally undertake an initiative. It shows that if every agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will move forward more often than is optimal. This phenomenon is the unilateralist's curse, and the article argues for a principle of conformity in response. Read more.

Existential risk reduction as global priority

This paper discusses existential risks. It argues that, despite the enormous expected value of reducing existential risk, issues surrounding human-extinction risks and related hazards remain poorly understood. Read more.

Global catastrophic risks

In Global Catastrophic Risks, 25 leading experts look at the gravest risks facing humanity in the 21st century, including asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues – policy responses and methods for predicting and managing catastrophes. Read more.

Anthropic bias

Anthropic Bias explores how to reason when you suspect that your evidence is biased by “observation selection effects” – that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to “have” the evidence. Read more.

Probing the improbable: methodological challenges for risks with low probabilities and high stakes

This paper argues that important new methodological problems arise when assessing global catastrophic risks, focusing on a problem regarding probability estimation. Read more.

The reversal test: eliminating status quo bias in bioethics

This paper explores whether we have reason to believe that the long-term consequences of human cognitive enhancement would be, on balance, good, and proposes the reversal test as a way of eliminating status quo bias from such judgments. Read more.

How unlikely is a doomsday catastrophe?

This article considers existential risks and argues that many previous bounds on their frequency give a false sense of security. It derives a new upper bound of one per 10^9 years (99.9% c.l.) on the exogenous terminal catastrophe rate that is free of such selection bias, using planetary age distributions and the relatively late formation time of Earth. Read more.

Astronomical waste: the opportunity cost of delayed technological development

This paper considers how, with very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. It emphasizes that for every year that the development of such technologies and the colonization of the universe is delayed, there is an opportunity cost. Read more.

AI Safety

Surveys of leading AI researchers suggest a significant probability of human-level machine intelligence being achieved in this century. Machines already outperform humans on several narrowly defined tasks, but the prospect of general machine intelligence would introduce novel challenges. Such a system's goals would need to be carefully designed to ensure that its actions would be safe and beneficial.

Present-day machine learning algorithms (if scaled up to very high levels of intelligence) would not reliably preserve a valued human condition. We therefore face a ‘control problem’: how to create advanced AI systems that we could deploy without risk of unacceptable side-effects.

Our research in this area focuses on the technical aspects of the control problem. We also work on the broader strategic, ethical, and policy issues that arise in the context of efforts to reduce the risks of long-term developments in machine intelligence. For an in-depth treatment of this topic, please see Superintelligence: Paths, Dangers, Strategies (OUP, 2014).

Featured AI safety publications

Exploration potential

This paper introduces exploration potential, a quantity that measures how much a reinforcement learning agent has explored its environment class. In contrast to information gain, exploration potential takes the problem's reward structure into account. This leads to an exploration criterion that is both necessary and sufficient for asymptotic optimality (learning to act optimally across the entire environment class). Read more >>

Safely interruptible agents

This paper provides a formal definition of safe interruptibility and exploits the off-policy learning property to show that some agents, such as Q-learning, are already safely interruptible, while others, such as Sarsa, can easily be made so. It shows that even ideal, uncomputable reinforcement learning agents can be made safely interruptible. Read more.
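
As a rough illustration of why the off-policy property matters, here is a minimal sketch (not the paper's formal setup; the environment, parameters, and interruption scheme are invented for the example) of a tabular Q-learning agent whose chosen action is sometimes overridden by an external interruption. Because the update target takes the maximum over next actions rather than the action actually executed, the forced actions do not bias the learned values.

```python
# Minimal, illustrative sketch: tabular Q-learning with occasional external
# interruptions that override the agent's chosen action. Environment and
# parameters are hypothetical.
import random
from collections import defaultdict

ACTIONS = [0, 1]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)              # Q[(state, action)]

def choose_action(state):
    """Epsilon-greedy choice from the agent's own policy."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Hypothetical environment dynamics: returns (next_state, reward)."""
    reward = 1.0 if action == state % 2 else 0.0
    return (state + 1) % 4, reward

def q_learning_episode(interrupt_prob=0.3, safe_action=0, steps=50):
    state = 0
    for _ in range(steps):
        action = choose_action(state)
        # An external operator may override the agent's action.
        if random.random() < interrupt_prob:
            action = safe_action
        next_state, reward = step(state, action)
        # Off-policy target: max over next actions, so the interruption
        # never enters the bootstrapped value estimate.
        target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

for _ in range(200):
    q_learning_episode()
```

An on-policy learner such as Sarsa would instead bootstrap on the action actually taken next, which is why it needs a small modification before interruptions stop distorting its value estimates.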

A formal solution to the grain of truth problem

A Bayesian agent acting in a multi-agent environment learns to predict the other agents’ policies if its prior assigns positive probability to them (in other words, its prior contains a grain of truth). Finding a reasonably large class of policies that contains the Bayes-optimal policies with respect to this class is known as the grain of truth problem. This paper presents a formal and general solution to the full grain of truth problem.  Read more >>

Thompson sampling is asymptotically optimal in general environments

This paper discusses a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. It shows that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean and (2) given a recoverability assumption, regret is sublinear. Read more >>
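
To give a flavour of the algorithm in a much simpler setting than the paper's (a small, hypothetical class of two-armed Bernoulli bandits rather than general stochastic environments), the sketch below samples an environment from the posterior, acts optimally for the sampled environment, and updates the posterior from the observed reward.

```python
# Illustrative Thompson sampling over a finite, hypothetical environment class.
import random

HYPOTHESES = [
    {"arm_probs": (0.2, 0.8)},       # hypothesis 0
    {"arm_probs": (0.7, 0.3)},       # hypothesis 1
]
TRUE_ENV = HYPOTHESES[0]             # environment actually generating rewards
posterior = [0.5, 0.5]               # prior weight on each hypothesis

def best_arm(hyp):
    probs = hyp["arm_probs"]
    return max(range(len(probs)), key=lambda a: probs[a])

for t in range(1000):
    # Sample an environment from the posterior, then act optimally for it.
    idx = random.choices(range(len(HYPOTHESES)), weights=posterior)[0]
    arm = best_arm(HYPOTHESES[idx])
    reward = 1 if random.random() < TRUE_ENV["arm_probs"][arm] else 0
    # Bayesian update: reweight each hypothesis by the likelihood of the outcome.
    likelihoods = [h["arm_probs"][arm] if reward else 1 - h["arm_probs"][arm]
                   for h in HYPOTHESES]
    total = sum(w * l for w, l in zip(posterior, likelihoods))
    posterior = [w * l / total for w, l in zip(posterior, likelihoods)]

# As evidence accumulates, the posterior concentrates on the true environment,
# so the value of the sampled-optimal policy approaches the optimal value.
```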

Learning the preferences of ignorant, inconsistent agents

An analysis of how to learn what people value from their choices when those people may be ignorant or inconsistent, and how this relates to machine learning. Read more.

Off-policy Monte Carlo agents with variable behaviour policies

This paper looks at the convergence properties of off-policy Monte Carlo agents with variable behaviour policies, presenting results about both convergence and lack of convergence. Read more.
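
The toy sketch below (the one-step environment, target policy, and behaviour policies are all hypothetical, and the estimator is plain ordinary importance sampling) illustrates the kind of agent the paper studies: Monte Carlo value estimates for a fixed target policy, computed from episodes generated by behaviour policies that change from episode to episode.

```python
# Off-policy Monte Carlo evaluation with ordinary importance sampling,
# where the behaviour policy varies across episodes. Everything here is
# a made-up, minimal example.
import random

ACTIONS = [0, 1]
TARGET = {0: 0.9, 1: 0.1}                 # target policy: P(action)

def reward(action):
    return 1.0 if action == 0 else 0.0    # hypothetical one-step environment

def sample_return(behaviour):
    """One episode: importance-weighted return for the target policy."""
    action = random.choices(ACTIONS, weights=[behaviour[a] for a in ACTIONS])[0]
    rho = TARGET[action] / behaviour[action]   # importance-sampling ratio
    return rho * reward(action)

estimates = []
for episode in range(10000):
    p = 0.2 + 0.6 * random.random()            # behaviour policy drifts over time
    behaviour = {0: p, 1: 1 - p}
    estimates.append(sample_return(behaviour))

print(sum(estimates) / len(estimates))         # approaches the target value, 0.9
```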

Corrigibility

An introduction to the notion of corrigibility and an analysis of utility functions that attempt to make an agent shut down safely if a shutdown button is pressed, while avoiding incentives to prevent the button from being pressed or to cause it to be pressed, and while ensuring that the shutdown behaviour propagates as the agent creates new subsystems or self-modifies. Read more.
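
The sketch below is one illustrative reading of the underlying idea rather than the paper's construction: a combined utility function that switches to a shutdown utility once the button is pressed, plus a compensation term chosen so that an optimally acting agent gains nothing from influencing whether the button gets pressed. The outcomes and utility values are invented for the example.

```python
# Toy illustration of a "utility indifference"-style correction for a
# shutdown button. All outcomes and numbers are hypothetical.
OUTCOMES = ("make_paperclips", "shut_down")

def U_normal(outcome):
    return {"make_paperclips": 10.0, "shut_down": 0.0}[outcome]

def U_shutdown(outcome):
    return {"make_paperclips": -5.0, "shut_down": 5.0}[outcome]

def combined_utility(outcome, button_pressed, compensation):
    if button_pressed:
        return U_shutdown(outcome) + compensation
    return U_normal(outcome)

# Choose the compensation so the best achievable utility is the same whether
# or not the button is pressed; an agent acting optimally then has no
# incentive to prevent or cause the button press.
best_normal = max(U_normal(o) for o in OUTCOMES)
best_shutdown = max(U_shutdown(o) for o in OUTCOMES)
compensation = best_normal - best_shutdown

assert max(combined_utility(o, True, compensation) for o in OUTCOMES) == best_normal
```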

Learning the preferences of bounded agents

This paper explicitly models structured deviations from optimality when inferring preferences and beliefs. The authors use models of bounded and biased cognition as part of a generative model for human choices in decision problems, and infer preferences by inverting this model. Read more.
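
As a minimal illustration of this inverse-planning approach (the softmax noise model, the options, and the candidate preferences below are hypothetical and far simpler than the models of bounded and biased cognition used in the paper), one can infer a hidden preference by Bayes' rule from choices generated by a noisy, bounded chooser.

```python
# Toy preference inference: invert a softmax (Luce-choice) model of a noisy
# decision-maker. Options, hypotheses, and the noise level are made up.
import math

OPTIONS = ("salad", "donut")
BETA = 2.0                            # assumed choice-noise parameter

HYPOTHESES = {                        # candidate utility functions
    "prefers_salad": {"salad": 1.0, "donut": 0.0},
    "prefers_donut": {"salad": 0.0, "donut": 1.0},
}

def choice_prob(hypothesis, choice):
    """Probability the modelled agent picks `choice` under this hypothesis."""
    utils = HYPOTHESES[hypothesis]
    weights = {o: math.exp(BETA * utils[o]) for o in OPTIONS}
    return weights[choice] / sum(weights.values())

posterior = {h: 0.5 for h in HYPOTHESES}   # uniform prior over hypotheses
for observed_choice in ["donut", "donut", "salad", "donut"]:
    unnorm = {h: posterior[h] * choice_prob(h, observed_choice) for h in posterior}
    z = sum(unnorm.values())
    posterior = {h: v / z for h, v in unnorm.items()}

print(posterior)   # most weight on "prefers_donut", tempered by the noise model
```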

Technology Forecasting and Risk Assessment

A handful of emerging technologies could fundamentally transform the human condition. Advances in biotechnology and nanotechnology may enable dramatic human enhancement but also create unprecedented risks to civilization and the biosphere alike. Near-term narrow machine intelligence will have myriad economic benefits but could also contribute to technological unemployment, ubiquitous surveillance, or institutional lock-in.

Our research in these areas seeks to prioritize among emerging risks and opportunities, determine the interaction effects between emerging technologies, and identify actionable interventions that could improve humanity’s long-run potential.

Featured technology forecasting and risk assessment publications

The future of employment: how susceptible are jobs to computerisation?

An examination of how susceptible jobs are to computerisation. Read more. 

Whole brain emulation: a roadmap, technical report

A roadmap of the scientific research and technological innovations required to eventually completely model the human brain in software. Read More.

Policy and Industry

We collaborate with a variety of governmental and industrial groups from around the world. FHI has worked with or consulted for the US President’s Council on Bioethics, the UK Prime Minister’s Office, the United Nations, the World Bank, the Global Risk Register, and a handful of foreign ministries. We have an ongoing sponsorship with Amlin plc., a major reinsurance company, as well as research arrangements with leading groups in artificial intelligence. We welcome expressions of interest from government and industry. Please contact Andrew Snyder-Beattie for further details.

Featured Policy and Industry Publications

Strategic implications of openness in AI development

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals).  Read more >>

Unprecedented technological risks

Over the next few decades, the continued development of dual-use technologies will provide major benefits to society, but it will also pose significant and unprecedented global risks. This report gives an overview of these risks and their importance, focusing on risks of extreme catastrophe. Read more.

Managing existential risks from emerging technologies

A volume containing evidence for the Government Chief Scientific Adviser’s Annual Report 2014 on existential risks, including the development of engineered pathogens, advanced AI, or geoengineering. It recommends horizon-scanning efforts, foresight programs, risk and uncertainty assessments, and policy-oriented research. Read more.