AXRP – the AI X-risk Research Podcast
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it’s been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity’s future potential. You can visit the website and read transcripts at axrp.net.

Recent Episodes

  • 45 - Samuel Albanie on DeepMind's AGI Safety Approach

    2 weeks ago
  • 44 - Peter Salib on AI Rights for Human Safety

    3 weeks ago
  • 43 - David Lindner on Myopic Optimization with Non-myopic Approval

    1 month ago
  • 42 - Owain Evans on LLM Psychology

    1 month ago
  • 41 - Lee Sharkey on Attribution-based Parameter Decomposition

    2 months ago
  • 40 - Jason Gross on Compact Proofs and Interpretability

    4 months ago
  • 38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future

    5 months ago
  • 38.7 - Anthony Aguirre on the Future of Life Institute

    5 months ago
  • 38.6 - Joel Lehman on Positive Visions of AI

    6 months ago
  • 38.5 - Adrià Garriga-Alonso on Detecting AI Scheming

    6 months ago