Three areas of research on the superintelligence control problem

This is a guide to research on the problem of preventing significant accidental harm from superintelligent AI systems. It is designed to make it easier to get started on work in this area and to understand how different kinds of work could help mitigate risk. I'll be updating this guide with a longer reading list and more detailed […]

The full post can be found at The Global Priorities Project.