Dwarkesh Podcast

Dwarkesh Patel
116 episodes

  • Dwarkesh Podcast

    Elon Musk – “In 36 months, the cheapest place to put AI will be space”

    05/2/2026 | 2 h 49 min
    In this episode, John and I got to do a real deep-dive with Elon. We discuss the economics of orbital data centers, the difficulties of scaling power on Earth, what it would take to manufacture humanoids at high-volume in America, xAI’s business and alignment plans, DOGE, and much more.
    Watch on YouTube; read the transcript.
    Sponsors
    * Mercury just started offering personal banking! I’m already banking with Mercury for business purposes, so getting to bank with them for my personal life makes everything so much simpler. Apply now at mercury.com/personal-banking
    * Jane Street sent me a new puzzle last week: they trained a neural net, shuffled all 96 layers, and asked me to put them back in order. I tried but… I didn’t quite nail it. If you’re curious, or if you think you can do better, you should take a stab at janestreet.com/dwarkesh
    * Labelbox can get you robotics and RL data at scale. Labelbox starts by helping you define your ideal data distribution, and then their massive Alignerr network collects frontier-grade data that you can use to train your models. Learn more at labelbox.com/dwarkesh
    Timestamps
    00:00:00 - Orbital data centers
    00:36:46 - Grok and alignment
    00:59:56 - xAI’s business plan
    01:17:21 - Optimus and humanoid manufacturing
    01:30:22 - Does China win by default?
    01:44:16 - Lessons from running SpaceX
    02:20:08 - DOGE
    02:38:28 - TeraFab


    Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
  • Dwarkesh Podcast

    Adam Marblestone – AI is missing something fundamental about the brain

    30/12/2025 | 1 h 49 min
    Adam Marblestone is CEO of Convergent Research. He’s had a very interesting past life: he was a research scientist at Google DeepMind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics.
    In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya’s question: how does the genome encode abstract reward functions? Turns out, they’re all the same question.
    Watch on YouTube; read the transcript.
    Sponsors
    * Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today gemini.google.com
    * Labelbox helps you train agents to do economically-valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh
    To sponsor a future episode, visit dwarkesh.com/advertise.
    Timestamps
    (00:00:00) – The brain’s secret sauce is the reward functions, not the architecture
    (00:22:20) – Amortized inference and what the genome actually stores
    (00:42:42) – Model-based vs model-free RL in the brain
    (00:50:31) – Is biological hardware a limitation or an advantage?
    (01:03:59) – Why a map of the human brain is important
    (01:23:28) – What value will automating math have?
    (01:38:18) – Architecture of the brain
    Further reading
    Intro to Brain-Like-AGI Safety - Steven Byrnes’s theory of the learning vs steering subsystem; referenced throughout the episode.
    A Brief History of Intelligence - Great book by Max Bennett on connections between neuroscience and AI
    Adam’s blog, and Convergent Research’s blog on essential technologies.
    A Tutorial on Energy-Based Learning by Yann LeCun
    What Does It Mean to Understand a Neural Network? - Kording & Lillicrap
    E11 Bio and their brain connectomics approach
    Sam Gershman on what dopamine is doing in the brain
    Gwern’s proposal on training models on the brain’s hidden states

  • Dwarkesh Podcast

    An audio version of my blog post, Thoughts on AI progress (Dec 2025)

    23/12/2025 | 12 min
    Read the essay here.
    Timestamps
    00:00:00 What are we scaling?
    00:03:11 The value of human labor
    00:05:04 Economic diffusion lag is cope
    00:06:34 Goal-post shifting is justified
    00:08:23 RL scaling
    00:09:18 Broadly deployed intelligence explosion

  • Dwarkesh Podcast

    Sarah Paine – Why Russia Lost the Cold War

    19/12/2025 | 1 h 54 min
    This is the final episode of the Sarah Paine lecture series, and it’s probably my favorite one. Sarah gives a “tour of the arguments” on what ultimately led to the Soviet Union’s collapse, diving into the role of the US, the Sino-Soviet border conflict, the oil bust, ethnic rebellions and even the Roman Catholic Church. As she points out, this is all particularly interesting as we find ourselves potentially at the beginning of another Cold War.
    As we wrap up this lecture series, I want to take a moment to thank Sarah for doing this with me. It has been such a pleasure.
    If you want more of her scholarship, I highly recommend checking out the books she’s written. You can find them here.
    Watch on YouTube; read the transcript.
    Sponsors
    * Labelbox can get you the training data you need, no matter the domain. Their Alignerr network includes the STEM PhDs and coding experts you’d expect, but it also has experienced cinematographers and talented voice actors to help train frontier video and audio models. Learn more at labelbox.com/dwarkesh.
    * Sardine doesn’t just assess customer risk for banking & retail. Their AI risk management platform is also extremely good at detecting fraudulent job applications, which I’ve found useful for my own hiring process. If you need help with hiring risk—or any other type of fraud prevention—go to sardine.ai/dwarkesh.
    * Gemini’s Nano Banana Pro helped us make many of the visuals in this episode. For example, we used it to turn dense tables into clear charts so that it’d be easier to quickly understand the trends that Sarah discusses. You can try Nano Banana Pro now in the Gemini app. Go to gemini.google.com.
    Timestamps
    (00:00:00) – Did Reagan single-handedly win the Cold War?
    (00:15:53) – Eastern Bloc uprisings & oil crisis
    (00:30:37) – Gorbachev’s mistakes
    (00:37:33) – German unification and NATO expansion
    (00:48:31) – The Gulf War and the Cold War endgame
    (00:56:10) – How central planning survived so long
    (01:14:46) – Sarah’s life in the USSR in 1988

  • Dwarkesh Podcast

    Ilya Sutskever – We're moving from the age of scaling to the age of research

    25/11/2025 | 1 h 36 min
    Ilya & I discuss SSI’s strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.
    Watch on YouTube; read the transcript.
    Sponsors
    * Gemini 3 is the first model I’ve used that can find connections I haven’t anticipated. I recently wrote a blog post on RL’s information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at gemini.google
    * Labelbox helped me create a tool to transcribe our episodes! I’ve struggled with transcription in the past because I don’t just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the exact data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to labelbox.com/dwarkesh
    * Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user’s risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at sardine.ai/dwarkesh
    To sponsor a future episode, visit dwarkesh.com/advertise.
    Timestamps
    (00:00:00) – Explaining model jaggedness
    (00:09:39) – Emotions and value functions
    (00:18:49) – What are we scaling?
    (00:25:13) – Why humans generalize better than models
    (00:35:45) – SSI’s plan to straight-shot superintelligence
    (00:46:47) – SSI’s model will learn from deployment
    (00:55:07) – How to think about powerful AGIs
    (01:18:13) – “We are squarely an age of research company”
    (01:20:23) – Self-play and multi-agent
    (01:32:42) – Research taste



About Dwarkesh Podcast

Deeply researched interviews www.dwarkesh.com
