The alignment problem from a deep learning perspective

Authors: Richard Ngo

License: CC BY 4.0

Abstract: Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. This report makes a case for why, without substantial action to prevent it, AGIs will likely use their intelligence to pursue goals which are very undesirable (in other words, misaligned) from a human perspective, with potentially catastrophic consequences. The report aims to cover the key arguments motivating concern about the alignment problem in a way that's as succinct, concrete and technically-grounded as possible. I argue that realistic training processes plausibly lead to the development of misaligned goals in AGIs, in particular because neural networks trained via reinforcement learning will learn to plan towards achieving a range of goals; gain more reward by deceptively pursuing misaligned goals; and generalize in ways which undermine obedience. As in an earlier report from Cotra (2022), I explain my claims with reference to an illustrative AGI training process, then outline possible research directions for addressing different aspects of the problem.

Submitted to arXiv on 30 Aug. 2022
