Doom Debates

Liron Shapira

Available episodes

5 of 120
  • Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg
    Michael Ellsberg, son of the legendary Pentagon Papers leaker Daniel Ellsberg, joins me to discuss the chilling parallels between his father’s nuclear war warnings and today’s race to AGI. We discuss Michael’s 99% probability of doom, his personal experience being “obsoleted” by AI, and the urgent moral duty for insiders to blow the whistle on AI’s outsize risks.

    Timestamps
    0:00 Intro
    1:29 Introducing Michael Ellsberg, His Father Daniel Ellsberg, and The Pentagon Papers
    5:49 Vietnam War Parallels to AI: Lies and Escalation
    25:23 The Doomsday Machine & Nuclear Insanity
    48:49 Mutually Assured Destruction vs. Superintelligence Risk
    55:10 Evolutionary Dynamics: Replicators and the End of the “Dream Time”
    1:10:17 What’s Your P(Doom)?™
    1:14:49 Debating P(Doom) Disagreements
    1:26:18 AI Unemployment Doom
    1:39:14 Doom Psychology: How to Cope with Existential Risk
    1:50:56 The “Joyless Singularity”: Aligned AI Might Still Freeze Humanity
    2:09:00 A Call to Action for AI Insiders

    Show Notes
    Michael Ellsberg’s website — https://www.ellsberg.com/
    Michael’s Twitter — https://x.com/MichaelEllsberg
    Daniel Ellsberg’s website — https://www.ellsberg.net/
    The upcoming book, “Truth and Consequence” — https://geni.us/truthandconsequence
    Michael’s AI-related Substack, “Mammalian Wetware” — https://mammalianwetware.substack.com/
    Daniel’s debate with Bill Kristol in the run-up to the Iraq war — https://www.youtube.com/watch?v=HyvsDR3xnAg

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    2:15:50
  • Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?
    Today’s Debate: Should we ban the development of artificial superintelligence until scientists agree it is safe and controllable?

    Arguing FOR banning superintelligence until there’s a scientific consensus that it’ll be done safely and controllably, and with strong public buy-in: Max Tegmark. He is an MIT professor, bestselling author, and co-founder of the Future of Life Institute whose research has focused on artificial intelligence for the past 8 years.

    Arguing AGAINST banning superintelligent AI development: Dean Ball. He is a Senior Fellow at the Foundation for American Innovation who served as a Senior Policy Advisor at the White House Office of Science and Technology Policy under President Trump, where he helped craft America’s AI Action Plan.

    Two of the leading voices on AI policy engaged in a high-quality, high-stakes debate for the benefit of the public! This is why I got into the podcast game — because I believe debate is an essential tool for humanity to reckon with the creation of superhuman thinking machines.

    Timestamps
    0:00 - Episode Preview
    1:41 - Introducing The Debate
    3:38 - Max Tegmark’s Opening Statement
    5:20 - Dean Ball’s Opening Statement
    9:01 - Designing an “FDA for AI” and Safety Standards
    21:10 - Liability, Tail Risk, and Biosecurity
    29:11 - Incremental Regulation, Timelines, and AI Capabilities
    54:01 - Max’s Nightmare Scenario
    57:36 - The Risks of Recursive Self-Improvement
    1:08:24 - What’s Your P(Doom)?™
    1:13:42 - National Security, China, and the AI Race
    1:32:35 - Closing Statements
    1:44:00 - Post-Debate Recap and Call to Action

    Show Notes
    Statement on Superintelligence released by Max’s organization, the Future of Life Institute — https://superintelligence-statement.org/
    Dean’s reaction to the Statement on Superintelligence — https://x.com/deanwball/status/1980975802570174831
    America’s AI Action Plan — https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/
    “A Definition of AGI” by Dan Hendrycks, Max Tegmark, et al. — https://www.agidefinition.ai/
    Max Tegmark’s Twitter — https://x.com/tegmark
    Dean Ball’s Twitter — https://x.com/deanwball

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    1:50:47
  • The AI Corrigibility Debate: MIRI Researchers Max Harms vs. Jeremy Gillen
    Max Harms and Jeremy Gillen are current and former MIRI researchers who both see superintelligent AI as an imminent extinction threat. But they disagree on whether it’s worthwhile to try to aim for obedient, “corrigible” AI as a singular target for current alignment efforts.

    Max thinks corrigibility is the most plausible path to build ASI without losing control and dying, while Jeremy is skeptical that this research path will yield better superintelligent AI behavior on a sufficiently early try. By listening to this debate, you’ll find out if AI corrigibility is a relatively promising effort that might prevent imminent human extinction, or an over-optimistic pipe dream.

    Timestamps
    0:00 — Episode Preview
    1:18 — Debate Kickoff
    3:22 — What is Corrigibility?
    9:57 — Why Corrigibility Matters
    11:41 — What’s Your P(Doom)?™
    16:10 — Max’s Case for Corrigibility
    19:28 — Jeremy’s Case Against Corrigibility
    21:57 — Max’s Mainline AI Scenario
    28:51 — 4 Strategies: Alignment, Control, Corrigibility, Don’t Build It
    37:00 — Corrigibility vs HHH (“Helpful, Harmless, Honest”)
    44:43 — Asimov’s 3 Laws of Robotics
    52:05 — Is Corrigibility a Coherent Concept?
    1:03:32 — Corrigibility vs Shutdown-ability
    1:09:59 — CAST: Corrigibility as Singular Target, Near Misses, Iterations
    1:20:18 — Debating if Max is Over-Optimistic
    1:34:06 — Debating if Corrigibility is the Best Target
    1:38:57 — Would Max Work for Anthropic?
    1:41:26 — Max’s Modest Hopes
    1:58:00 — Max’s New Book: Red Heart
    2:16:08 — Outro

    Show Notes
    Max’s book Red Heart — https://www.amazon.com/Red-Heart-Max-Harms/dp/108822119X
    Learn more about CAST: Corrigibility as Singular Target — https://www.lesswrong.com/s/KfCjeconYRdFbMxsy/p/NQK8KHSrZRF5erTba
    Max’s Twitter — https://x.com/raelifin
    Jeremy’s Twitter — https://x.com/jeremygillen1

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    2:17:48
  • These Effective Altruists Betrayed Me — Holly Elmore, PauseAI US Executive Director
    Holly Elmore leads protests against frontier AI labs, and that work has strained some of her closest relationships in the AI-safety community. She says AI safety insiders care more about their reputation in tech circles than about actually lowering AI x-risk. This is our full conversation from my “If Anyone Builds It, Everyone Dies” unofficial launch party livestream on Sept 16, 2025.

    Timestamps
    0:00 Intro
    1:06 Holly’s Background and The Current Activities of PauseAI US
    4:41 The Circular Firing Squad Problem of AI Safety
    7:23 Why the AI Safety Community Resists Public Advocacy
    11:37 Breaking with Former Allies at AI Labs
    13:00 LessWrong’s Reaction to Eliezer’s Public Turn

    Show Notes
    PauseAI US — https://pauseai-us.org
    International PauseAI — https://pauseai.info
    Holly’s Twitter — https://x.com/ilex_ulmus
    Holly’s Substack — https://substack.com/@hollyelmore
    Holly’s post covering how AI isn’t another “technology” — https://hollyelmore.substack.com/p/the-technology-bucket-error

    Related Episodes
    Holly and I dive into the rationalist community’s failure to rally behind a cause — https://lironshapira.substack.com/p/lesswrong-circular-firing-squad
    The full IABED livestream — https://lironshapira.substack.com/p/if-anyone-builds-it-everyone-dies-party

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    16:17
  • DEBATE: Is AGI Really Decades Away? | Ex-MIRI Researcher Tsvi Benson-Tilsen vs. Liron Shapira
    Sparks fly in the finale of my series with ex-MIRI researcher Tsvi Benson-Tilsen as we debate his AGI timelines. Tsvi is a champion of using germline engineering to create smarter humans who can solve AI alignment. I support the approach, even though I’m skeptical it’ll gain much traction before AGI arrives.

    Timestamps
    0:00 Debate Preview
    0:57 Tsvi’s AGI Timeline Prediction
    3:03 The Least Impressive Task AI Cannot Do In 2 Years
    6:13 Proposed Task: Solve Cantor’s Theorem From Scratch
    8:20 AI Has Limitations Related to Sample Complexity
    11:41 We Need Clear Goalposts for Better AGI Predictions
    13:19 Counterargument: LLMs May Not Be a Path to AGI
    16:01 Is Tsvi Setting a High Bar for Progress Towards AGI?
    19:17 AI Models Are Missing a Spark of Creativity
    28:17 Liron’s “Black Box” AGI Test
    32:09 Are We Going to Enter an AI Winter?
    35:09 Who Is Being Overconfident?
    42:11 If AI Makes Progress on Benchmarks, Would Tsvi Shorten His Timeline?
    50:34 Recap & Tsvi’s Research

    Show Notes
    Learn more about Tsvi’s organization, the Berkeley Genomics Project — https://berkeleygenomics.org

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    52:41

About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
Podcast website
