AXRP – the AI X-risk Research Podcast

AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it’s been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity’s future potential. You can visit the website and read transcripts at axrp.net.

Recent Episodes

  • 25 - Cooperative AI with Caspar Oesterheld (1 month ago)
  • 24 - Superalignment with Jan Leike (3 months ago)
  • 23 - Mechanistic Anomaly Detection with Mark Xu (3 months ago)
  • Survey, store closing, Patreon (4 months ago)
  • 22 - Shard Theory with Quintin Pope (5 months ago)
  • 21 - Interpretability for Engineers with Stephen Casper (6 months ago)
  • 20 - 'Reform' AI Alignment with Scott Aaronson (7 months ago)
  • Store, Patreon, Video (9 months ago)
  • 19 - Mechanistic Interpretability with Neel Nanda (9 months ago)
  • New podcast - The Filan Cabinet (1 year ago)