Doom Debates!

Liron Shapira

159 episodes

  • Doom Debates!

    Eliezer Yudkowsky Post-Debate Reaction, Elon's New Frenemy & Liron's Bet on Spencer Pratt!? - Doom Debates Live (5/8/26)

    09/05/2026 | 2 h 11 min
    Liron and Producer Ori explore the community's feedback on Eliezer Yudkowsky's $10,000 debate against an anonymous AI lab director. Plus, we unpack Eliezer's new post on the "irretrievability" of ASI development, Anthropic feasting on xAI's compute, and Liron's 4x Kalshi bet on... Spencer Pratt.
    Timestamps
    00:00:00 — Welcome
    00:02:30 — The $10,000 Debate Post-Mortem
    00:06:01 — Paul Tudor Jones: “Zero Risk Management on AI”
    00:09:49 — Liron’s Jerry Springer Moment
    00:14:19 — Why Yud Wore Steampunk
    00:20:19 — YouTube Reacts: “Five Minutes In, Already Unhinged”
    00:27:19 — 47F’s Anti-Disparagement Legal Theory
    00:32:18 — Yud Wants Round 2
    00:35:37 — Lumpenspace, Ben Goertzel & Early AI Safety Memories
    00:47:06 — Eliezer’s “Irretrievability” Post
    00:56:35 — The Maginot Line and Murphy’s Curse
    01:08:13 — Blockchain vs AI: Earth’s Compute Swing
    01:15:15 — Anthropic Eats Elon’s Compute
    01:25:29 — Is Elon Tweeting as His Own Mom?
    01:30:03 — Waymo Dodges a Fallen Scooter Rider
    01:33:00 — Why Liron Bet on Spencer Pratt
    01:37:13 — Live Twitter Scroll!
    01:38:11 — Mira Murati: “Directionally Very Bad”
    01:50:07 — AI Copies Itself Across Servers
    01:56:44 — Steven Byrnes: LLMs Aren’t the Final Paradigm
    02:07:55 — Wrap-Up
    Links
    LessOnline 2026 (June 5-7, Berkeley, CA) — https://less.online/
    Eliezer Yudkowsky, “Irretrievability; or, Murphy’s Curse of Oneshotness upon ASI” (LessWrong, May 4, 2026) — https://www.lesswrong.com/posts/fbrz9xhKpEeTKw5zL/irretrievability-or-murphy-s-curse-of-oneshotness-upon-asi
    Anthropic, SpaceX announce Colossus 1 compute deal (CNBC) — https://www.cnbc.com/2026/05/06/anthropic-spacex-data-center-capacity.html
    xAI announcement: Compute partnership with Anthropic — https://x.ai/news/anthropic-compute-partnership
    Kalshi: Spencer Pratt LA Mayor market — https://kalshi.com/markets/kxmayorla/la/kxmayorla-26
    Morris worm — Wikipedia — https://en.wikipedia.org/wiki/Morris_worm
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    Debate with @lumpenspace (AI Accelerationist) — Is it GOOD for AI to replace us?

    07/05/2026 | 59 min
    Claude, also known as @lumpenspace on X, is a prominent AI accelerationist. He gives humanity a 30% chance of being superseded by superintelligence, and he’s fine with it! We unpack his lattice of beliefs, and pinpoint the cruxes of our disagreement on the orthogonality thesis and the capabilities of superintelligence.
    Lumpenspace’s appearance comes after a Molotov cocktail was thrown at Sam Altman’s house, an act that he and I completely condemn.
    Timestamps
    00:00:00 — Cold Open
    00:00:57 — Introducing Lumpenspace
    00:03:46 — Is Lumpenspace Team Beff Jezos?
    00:04:26 — What’s Your P(Doom)?™
    00:06:28 — Worthy Successors & the Transhumanist Door
    00:08:29 — The Orthogonality Thesis: Our Core Disagreement
    00:15:14 — The Universality Threshold & David Deutsch
    00:19:22 — Paperclips Won’t Happen Because “It’s Boring”
    00:27:23 — Natural Selection vs Human Engineering
    00:36:26 — Nanobots: Can ASI Build Them Unseen?
    00:46:49 — Identifying the Cruxes of Disagreement
    00:55:01 — Closing Statements
    Links
    Follow Lumpenspace — https://x.com/lumpenspace

    NEW: Watch the Eliezer Yudkowsky vs. Secret AI Lab Director Debate on my other channel!

    05/05/2026 | 0 min
    📢 BREAKING: A man known only as "@47fucb4r8c69323" just paid Eliezer Yudkowsky $10,000 to have a raw, uncut, public debate on my YouTube channel!
    Tensions run high as 47f confronts Eliezer about his "If anyone builds it, everyone dies" rhetoric, warning it could incite unstable individuals to harm AI researchers and their families, while Eliezer maintains that the possibility of extinction from superintelligent AI is too high to *not* speak out about. They also clash over whether we truly understand how LLMs work: Yudkowsky highlights the lack of extracted algorithms that explain their qualitative intelligence, while 47f dismisses these concerns as a fundamental misunderstanding of how text modeling works.
    While this may not be the highest quality debate I’ve ever hosted, I hope it’s a step toward more attention and public discourse on this urgent, high-stakes topic.
    P.S. Please consider supporting my work with a tax-deductible donation. Read more…



    Who Paid $10,000 to Debate Yudkowsky? Plus AI Twitter & Investing Tips - Doom Debates Live (5/1/26)

    02/05/2026 | 1 h 32 min
    Livestream with Producer Ori covering Eliezer Yudkowsky's next BIG debate, why $GOOG is such a steal at $4T, and what's going down on AI Twitter.
    Timestamps
    00:00:00 — Kicking off the Livestream
    00:06:08 — Eliezer Yudkowsky Debate Announcement
    00:13:17 — Who’s Yudkowsky’s Debate Challenger?
    00:25:32 — METR’s Capability Extrapolation Came True
    00:28:25 — Google: The Big Rally I Called
    00:33:06 — The Federal Reserve’s Extinction Chart
    00:36:06 — Liron’s Private Market Alpha Strategy
    00:40:35 — The Quadrillion-Dollar Economy
    00:47:54 — Bernie Sanders Gets It
    00:53:11 — Is Anthropic a Good Investment?
    01:07:42 — The Goblin Discourse
    01:10:52 — Codex, Claude Code & Live Coding Demo
    01:28:48 — Wrap-Up
    Links
    Yudkowsky debate announcement: https://x.com/liron/status/2050035001203229010
    Liron's Yudkowsky interview: https://x.com/liron/status/2050035003900223770
    METR capability extrapolation: https://x.com/nikolaj2030/status/2050276020146880661
    Bernie Sanders on AI risk: https://x.com/SenSanders/status/2048107882760085618
    Federal Reserve extinction chart: https://www.dallasfed.org/research/economics/2025/0624

    Justin Helps (@Primer on YouTube) is Worried about AI Takeover

    30/04/2026 | 55 min
    Justin Helps is the science educator behind Primer Learning with 2M subscribers. We cover how he got into AI safety, debate AGI timelines, and discuss why he puts p(doom) at 70% by 2100 😱.
    Timestamps
    00:00:00 — Cold Open
    00:00:38 — Introducing Justin Helps
    00:02:03 — What's Your P(Doom)?™
    00:03:38 — Justin's First Exposure to AI X-Risk
    00:04:49 — Major Disagreements with Eliezer Yudkowsky
    00:09:46 — Debating the Timeline to AGI
    00:12:24 — Metaculus Prediction Market Estimates AGI by 2032
    00:20:06 — Misguided Conceptions of AI's Limitations
    00:25:23 — Only a 5% P(Doom) by 2040
    00:28:40 — AIs Will Not Care About the Human Species
    00:31:00 — Summarizing Justin's Position So Far
    00:36:17 — High P(Doom), but We're Not Depressed
    00:40:14 — Justin's "Computer Man" Thought Experiment
    00:51:16 — Should We Pause AGI Development?
    00:54:15 — AI Doom Is a Serious Concern
    Links
    Primer’s Video on AI Doom — https://www.youtube.com/watch?v=Qg5QXY_qZuI
    Primer on YouTube — https://youtube.com/@PrimerBlobs
    Primer’s Website — https://primerlearning.org/
    Justin Helps on X — https://x.com/Helpsypoo
    Harry Potter and the Methods of Rationality — https://hpmor.com/
    Feeling Rational by Eliezer Yudkowsky — https://www.lesswrong.com/posts/SqF8cHjJv43mvJJzx/feeling-rational
    Pause AI — https://pauseai.info


About Doom Debates!

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
Podcast website
