
Doom Debates

Liron Shapira

Available episodes

5 of 84
  • Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!
    Carl Feynman got his Master’s in Computer Science and B.S. in Philosophy from MIT, followed by a four-decade career in AI engineering. He’s known Eliezer Yudkowsky since the ’90s, and witnessed Eliezer’s AI doom argument taking shape before most of us were paying any attention! He agreed to come on the show because he supports Doom Debates’ mission of raising awareness of imminent existential risk from superintelligent AI.

    00:00 - Teaser
    00:34 - Carl Feynman’s Background
    02:40 - Early Concerns About AI Doom
    03:46 - Eliezer Yudkowsky and the Early AGI Community
    05:10 - Accelerationist vs. Doomer Perspectives
    06:03 - Mainline Doom Scenarios: Gradual Disempowerment vs. Foom
    07:47 - Timeline to Doom: Point of No Return
    08:45 - What’s Your P(Doom)™
    09:44 - Public Perception and Political Awareness of AI Risk
    11:09 - AI Morality, Alignment, and Chatbots Today
    13:05 - The Alignment Problem and Competing Values
    15:03 - Can AI Truly Understand and Value Morality?
    16:43 - Multiple Competing AIs and Resource Competition
    18:42 - Alignment: Wanting vs. Being Able to Help Humanity
    19:24 - Scenarios of Doom and Odds of Success
    19:53 - Mainline Good Scenario: Non-Doom Outcomes
    20:27 - Heaven, Utopia, and Post-Human Vision
    22:19 - Gradual Disempowerment Paper and Economic Displacement
    23:31 - How Humans Get Edged Out by AIs
    25:07 - Can We Gaslight Superintelligent AIs?
    26:38 - AI Persuasion & Social Influence as Doom Pathways
    27:44 - Riding the Doom Train: Headroom Above Human Intelligence
    29:46 - Orthogonality Thesis and AI Motivation
    32:48 - Alignment Difficulties and Deception in AIs
    34:46 - Elon Musk, Maximal Curiosity & Mike Israetel’s Arguments
    36:26 - Beauty and Value in a Post-Human Universe
    38:12 - Multiple AIs Competing
    39:31 - Space Colonization, Dyson Spheres & Hanson’s “Alien Descendants”
    41:13 - What Counts as Doom vs. Not Doom?
    43:29 - Post-Human Civilizations and Value Function
    44:49 - Expertise, Rationality, and Doomer Credibility
    46:09 - Communicating Doom: Missing Mood & Public Receptiveness
    47:41 - Personal Preparation vs. Belief in Imminent Doom
    48:56 - Why Can't We Just Hit the Off Switch?
    50:26 - The Treacherous Turn and Redundancy in AI
    51:56 - Doom by Persuasion or Entertainment
    53:43 - Differences with Eliezer Yudkowsky: Singleton vs. Multipolar Doom
    55:22 - Why Carl Chose Doom Debates
    56:18 - Liron’s Outro

    Show Notes
    Carl’s Twitter — https://x.com/carl_feynman
    Carl’s LessWrong — https://www.lesswrong.com/users/carl-feynman
    Gradual Disempowerment — https://gradual-disempowerment.ai
    The Intelligence Curse — https://intelligence-curse.ai
    AI 2027 — https://ai-2027.com
    Alcor cryonics — https://www.alcor.org
    The LessOnline Conference — https://less.online
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    57:05
  • Richard Hanania vs. Liron Shapira — AI Doom Debate
    Richard Hanania is the President of the Center for the Study of Partisanship and Ideology. His work has been praised by Vice President JD Vance, Tyler Cowen, and Bryan Caplan among others. In his influential newsletter, he’s written about why he finds AI doom arguments unconvincing. He was gracious enough to debate me on this topic. Let’s see if one of us can change the other’s P(Doom)!

    0:00 Intro
    1:53 Richard's politics
    2:24 The state of political discourse
    3:30 What's your P(Doom)?™
    6:38 How to stop the doom train
    8:27 Statement on AI risk
    9:31 Intellectual influences
    11:15 Base rates for AI doom
    15:43 Intelligence as optimization power
    31:26 AI capabilities progress
    53:46 Why isn't AI yet a top blogger?
    58:02 Diving into Richard's Doom Train
    58:47 Diminishing Returns on Intelligence
    1:06:36 Alignment will be relatively trivial
    1:15:14 Power-seeking must be programmed
    1:21:27 AI will simply be benevolent
    1:27:17 Superintelligent AI will negotiate with humans
    1:33:00 Super AIs will check and balance each other out
    1:36:54 We're mistaken about the nature of intelligence
    1:41:46 Summarizing Richard's AI doom position
    1:43:22 Jobpocalypse and gradual disempowerment
    1:49:46 Ad hominem attacks in AI discourse

    Show Notes
    Subscribe to Richard Hanania's Newsletter: https://richardhanania.com
    Richard's blogpost laying out where he gets off the AI "doom train": https://www.richardhanania.com/p/ai-doomerism-as-science-fiction
    Richard's interview with Steven Pinker: https://www.richardhanania.com/p/pinker-on-alignment-and-intelligence
    Richard's interview with Robin Hanson: https://www.richardhanania.com/p/robin-hanson-says-youre-going-to
    My Doom Debate with Robin Hanson: https://www.youtube.com/watch?v=dTQb6N3_zu8
    My reaction to Steven Pinker's AI doom position, and why his arguments are shallow: https://www.youtube.com/watch?v=-tIq6kbrF-4
    "The Betterness Explosion" by Robin Hanson: https://www.overcomingbias.com/p/the-betterness-explosionhtml
    ---
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    ---
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:52:46
  • Emmett Shear (OpenAI Ex-Interim-CEO)'s New “Softmax” AI Alignment Plan — Is It Legit?
    Emmett Shear is the cofounder and ex-CEO of Twitch, ex-interim-CEO of OpenAI, and a former Y Combinator partner. He recently announced Softmax, a new company researching a novel solution to AI alignment. In his recent interview, Emmett explained “organic alignment”, drawing comparisons to biological systems and advocating for AI to be raised in a community-like setting with humans. Let’s go through his talk, point by point, to see if Emmett’s alignment plan makes sense…

    00:00 Episode Highlights
    00:36 Introducing Softmax and its Founders
    01:33 Research Collaborators and Ken Stanley's Influence
    02:16 Softmax's Mission and Organic Alignment
    03:13 Critique of Organic Alignment
    05:29 Emmett’s Perspective on AI Alignment
    14:36 Human Morality and Cognitive Submodules
    38:25 Top-Down vs. Emergent Morality in AI
    44:56 Raising AI to Grow Up with Humanity
    48:43 Softmax's Incremental Approach to AI Alignment
    52:22 Convergence vs. Divergence in AI Learning
    55:49 Multi-Agent Reinforcement Learning
    01:12:28 The Importance of Storytelling in AI Development
    01:16:34 Living With AI As It Grows
    01:20:19 Species Giving Birth to Species
    01:23:23 The Plan for AI's Adolescence
    01:26:53 Emmett's Views on Superintelligence
    01:31:00 The Future of AI Alignment
    01:35:10 Final Thoughts and Criticisms
    01:44:07 Conclusion and Call to Action

    Show Notes
    Emmett Shear’s interview on BuzzRobot with Sophia Aryan (source material) — https://www.youtube.com/watch?v=_3m2cpZqvdw
    BuzzRobot’s YouTube channel — https://www.youtube.com/@BuzzRobot
    BuzzRobot’s Twitter — https://x.com/buZZrobot/
    SoftMax’s website — https://softmax.com
    My Doom Debate with Ken Stanley (Softmax advisor) — https://www.youtube.com/watch?v=GdthPZwU1Co
    My Doom Debate with Gil Mark on whether aligning AIs in groups is a more solvable problem — https://www.youtube.com/watch?v=72LnKW_jae8
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:53:11
  • Will AI Have a Moral Compass? — Debate with Scott Sumner, Author of The Money Illusion
    Prof. Scott Sumner is a well-known macroeconomist who spent more than 30 years teaching at Bentley University, and now holds an Emeritus Chair in monetary policy at George Mason University's Mercatus Center. He's best known for his blog, The Money Illusion, which sparked the idea of Market Monetarism and NGDP targeting. I sat down with him at LessOnline 2025 to debate why his P(Doom) is pretty low. Where does he get off the Doom Train? 🚂

    00:00 Episode Preview
    00:34 Introducing Scott Sumner
    05:20 Is AGI Coming Soon?
    09:12 Potential of AI in Various Fields
    36:49 Ethical Implications of Superintelligent AI
    41:03 The Nazis as an Outlier in History
    43:36 Intelligence and Morality: The Orthogonality Thesis
    49:03 The Risk of Misaligned AI Goals
    01:09:31 Recapping Scott’s Position

    Show Notes
    Scott’s current blog, The Pursuit of Happiness:
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:15:45
  • Searle's Chinese Room is DUMB — It's Just Slow-Motion Intelligence
    John Searle's "Chinese Room argument" has been called one of the most famous thought experiments of the 20th century. It's still frequently cited today to argue AI can never truly become intelligent. People continue to treat the Chinese Room like a brilliant insight, but in my opinion, it's actively misleading and DUMB! Here’s why…

    00:00 Intro
    00:20 What is Searle's Chinese Room Argument?
    01:43 John Searle (1984) on Why Computers Can't Understand
    01:54 Why the "Chinese Room" Metaphor is Misleading

    This mini-episode is taken from Liron's reaction to Sir Roger Penrose. Watch the full episode:

    Show Notes
    2008 Interview with John Searle: https://www.youtube.com/watch?v=3TnBjLmQawQ&t=253s
    1984 Debate with John Searle: https://www.youtube.com/watch?v=6tzjcnPsZ_w
    “Chinese Room” cartoon: https://miro.medium.com/v2/0*iTvDe5ebNPvg10AO.jpeg
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    5:01

About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
Podcast website
