Future of Life Institute Podcast

Future of Life Institute
Latest episode

499 episodes

  • Future of Life Institute Podcast

    Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)

    11/05/2026 | 1 h 36 min
    Anthony Aguirre is the CEO of the Future of Life Institute. He joins the podcast to discuss A Better Path for AI, his essay series on steering AI away from races to replace people. The conversation covers races for attention, attachment, automation, and superintelligence, and how these can concentrate power and undermine human agency. Anthony argues for purpose-built AI tools under meaningful human control, with liability, access limits, external guardrails, and international cooperation.

    LINKS:
    A Better Path for AI
    What You Can Do

    CHAPTERS:

    (00:00) Episode Preview

    (01:03) Attention, attachment, automation

    (13:58) Superintelligence power race

    (26:39) Escaping replacement dynamics

    (40:15) Pro-human tool AI

    (53:30) Guardrails and verification

    (01:03:24) Defining pro-human AI

    (01:10:37) Agents and accountability

    (01:17:28) International AI cooperation

    (01:25:28) Rethinking AI alignment

    (01:32:43) Optimism and action

    PRODUCED BY:

    https://aipodcast.ing

    SOCIAL LINKS:

    Website: https://podcast.futureoflife.org

    Twitter (FLI): https://x.com/FLI_org

    Twitter (Gus): https://x.com/gusdocker

    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • Future of Life Institute Podcast

    How to Govern AI When You Can't Predict the Future (with Charlie Bullock)

    07/05/2026 | 1 h 7 min
    Charlie Bullock is a Senior Research Fellow at the Institute for Law and AI. He joins the podcast to discuss radical optionality: how governments can prepare for very advanced AI without locking in premature rules. The conversation covers why law often trails technology, and how transparency, reporting, evaluations, cybersecurity standards, and expanded technical hiring could help. We also discuss private oversight, state versus federal rules, and the risk of concentrating power in companies or government.

    LINKS:
    Radical Optionality website

    Charlie Bullock

    CHAPTERS:

    (00:00) Episode Preview

    (01:04) The pacing problem

    (06:18) Defining radical optionality

    (11:03) Assumptions under uncertainty

    (16:00) Industry convenience concerns

    (20:41) Political will realities

    (26:48) Private governance limits

    (30:28) Government misuse risks

    (36:29) Balancing institutional power

    (42:25) Transparency and reporting

    (49:35) Evaluations, security, talent

    (58:26) State law preemption

    (01:04:20) Historical nuclear analogies

  • Future of Life Institute Podcast

    Why AI Is Not a Normal Technology (with Peter Wildeford)

    29/04/2026 | 1 h 24 min
    Peter Wildeford is Head of Policy at the AI Policy Network, and a top AI forecaster. He joins the podcast to discuss how to forecast AI progress and what current trends imply for the economy and national security. Peter argues AI is neither a bubble nor a normal technology, and we examine benchmark trends, adoption lags, unemployment and productivity effects, and the rise of cyber capabilities. We also cover robotics, export controls, prediction markets, and when AI may surpass human forecasters.

    LINKS:
    Peter Wildeford Blog

    CHAPTERS:

    (00:00) Episode Preview

    (01:12) AI bubble debate

    (06:25) Normal technology question

    (15:31) National security implications

    (30:47) Robotics and labor

    (40:27) Social economic response

    (48:57) Forecasting methodology

    (59:49) AGI policy timelines

    (01:11:13) Forecasting with AI

  • Future of Life Institute Podcast

    Why AI Evaluation Science Can't Keep Up (with Carina Prunkl)

    17/04/2026 | 54 min
    Carina Prunkl is a researcher at Inria. She joins the podcast to discuss how to assess the capabilities and risks of general-purpose AI. We examine why systems can solve hard coding and math problems yet still fail at simple tasks, why pre-deployment tests often miss real-world behavior, and how faster capability gains can increase misuse risks. The conversation also covers de-skilling, red teaming, layered safeguards, and warning signs that AIs might undermine oversight.

    LINKS:
    Carina Prunkl personal website

    CHAPTERS:

    (00:00) Episode Preview

    (01:04) Introducing the report

    (02:10) Jagged frontier capabilities

    (05:29) Formal reasoning progress

    (12:36) Risks and evaluation science

    (19:00) Funding evaluation capacity

    (24:03) Autonomy and de-skilling

    (31:32) Authenticity and AI companions

    (41:00) Defense in depth methods

    (48:34) Loss of control risks

    (53:16) Where to read report

  • Future of Life Institute Podcast

    Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)

    02/04/2026 | 55 min
    Li-Lian Ang is a team member at Blue Dot Impact. She joins the podcast to discuss how society can build a workforce to protect humanity from AI risks. The conversation covers engineered pandemics, AI-enabled cyber attacks, job loss and disempowerment, and power concentration in firms or AI systems. We also examine Blue Dot's defense-in-depth framework and how individuals can navigate rapid, uncertain AI progress.
    LINKS:
    Li-Lian Ang personal site
    Blue Dot Impact organization site

    CHAPTERS:

    (00:00) Episode Preview

    (00:48) Blue Dot beginnings

    (03:04) Evolving AI risk concerns

    (06:20) AI agents in cyber

    (15:52) Gradual disempowerment and jobs

    (23:26) Aligning AI with humans

    (29:08) Power concentration and misuse

    (34:52) Influencing frontier AI labs

    (43:05) Uncertain timelines and strategy

    (50:18) Writing, AI, and action
About Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Podcast website
