These techniques approach the problem of safety from a fairly different angle than the ideas we’ve discussed so far.
Amplification is the idea of bootstrapping from a trusted core system: increasing its capabilities while preserving its safety properties. Paul Christiano and the OpenAI safety team have worked on these ideas. One current proposal for doing this has a lot in common with functional programming. For more discussion, see e.g. https://ai-alignment.com/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf
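The functional-programming flavor can be illustrated with a toy sketch. In Christiano-style amplification, a trusted core agent answers easy questions directly, while harder questions are decomposed into subquestions whose answers are combined, much like a pure recursive function. Everything below (the question format, `core_answer`, `decompose`, the split-on-`+` rule) is a hypothetical illustration, not the actual proposal:

```python
# Hypothetical sketch of amplification as recursive decomposition.
# A weak but trusted "core" handles atomic questions; composite
# questions are split up, delegated, and the answers recombined.

def core_answer(question: str) -> str:
    """The trusted base agent: only handles atomic questions.
    Toy rule: the 'answer' to a token is its length."""
    return str(len(question))

def decompose(question: str) -> list[str]:
    """Split a composite question into subquestions (toy: split on '+')."""
    return question.split("+")

def amplify(question: str, depth: int = 3) -> str:
    """Answer a question by delegating subquestions to copies of itself,
    falling back to the trusted core at the recursion base."""
    parts = decompose(question)
    if depth == 0 or len(parts) == 1:
        return core_answer(question)
    # Recursively answer each subquestion, then combine the results.
    sub_answers = [amplify(p, depth - 1) for p in parts]
    return str(sum(int(a) for a in sub_answers))

print(amplify("foo+bar+bazzz"))  # combines the subanswers 3, 3, and 5
```

The safety intuition is that if the core is trusted and the decompose/combine step is simple enough to inspect, the amplified system inherits the core's trustworthiness while answering questions the core alone could not.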