AXRP – the AI X-risk Research Podcast
by Daniel Filan
July 7, 2025 8:54 am
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it’s been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity’s future potential. You can visit the website and read transcripts at axrp.net.
Recent Episodes
45 - Samuel Albanie on DeepMind's AGI Safety Approach (2 weeks ago)
44 - Peter Salib on AI Rights for Human Safety (3 weeks ago)
43 - David Lindner on Myopic Optimization with Non-myopic Approval (1 month ago)
42 - Owain Evans on LLM Psychology (1 month ago)
41 - Lee Sharkey on Attribution-based Parameter Decomposition (2 months ago)
40 - Jason Gross on Compact Proofs and Interpretability (4 months ago)
38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future (5 months ago)
38.7 - Anthony Aguirre on the Future of Life Institute (5 months ago)
38.6 - Joel Lehman on Positive Visions of AI (6 months ago)
38.5 - Adrià Garriga-Alonso on Detecting AI Scheming (6 months ago)