AXRP – the AI X-risk Research Podcast
by Daniel Filan
October 4, 2023 8:46 am
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it’s been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity’s future potential. You can visit the website and read transcripts at axrp.net.
Recent Episodes
25 - Cooperative AI with Caspar Oesterheld (1 month ago)
24 - Superalignment with Jan Leike (3 months ago)
23 - Mechanistic Anomaly Detection with Mark Xu (3 months ago)
Survey, store closing, Patreon (4 months ago)
22 - Shard Theory with Quintin Pope (5 months ago)
21 - Interpretability for Engineers with Stephen Casper (6 months ago)
20 - 'Reform' AI Alignment with Scott Aaronson (7 months ago)
Store, Patreon, Video (9 months ago)
19 - Mechanistic Interpretability with Neel Nanda (9 months ago)
New podcast - The Filan Cabinet (1 year ago)