
Linear Digressions

Katie Malone
Latest episode

299 episodes

  • Linear Digressions

    The Hot Mess of AI (Mis-)Alignment

    23/03/2026 | 22 min
    The paperclip maximizer — the classic AI doom scenario where a hyper-competent machine single-mindedly converts the universe into office supplies — might not be the AI risk we should actually lose sleep over. New research from Anthropic's AI safety division suggests misaligned AI looks less like an evil genius and more like a distracted wanderer who gets sidetracked reading French poetry instead of, say, managing a nuclear power plant. This week we dig into a fascinating paper reframing AI misalignment through the lens of bias-variance decomposition, and why longer reasoning chains might actually make things worse, not better.

    - "The Hot Mess Theory of AI Misalignment: How Misalignment Scales with Model Intelligence and Task Complexity" — Anthropic AI Safety. https://arxiv.org/abs/2503.08941
  • Linear Digressions

    The Bitter Lesson

    15/03/2026 | 19 min
    Every AI builder knows the anxiety: you spend months engineering prompts, tuning pipelines, and chaining calls together — then a new model drops and half your work evaporates overnight. It turns out researchers have been wrestling with this exact dynamic for 30 years, and they keep arriving at the same uncomfortable answer. That answer is called the Bitter Lesson — and understanding it might be the most important thing you can do for whatever you're building right now. From Deep Blue to AlexNet to modern LLMs, scale keeps beating sophistication, and knowing which side of that line your work falls on makes all the difference.

    Links

    - Richard Sutton, "The Bitter Lesson"

    - Alon Halevy, Peter Norvig, and Fernando Pereira, "The Unreasonable Effectiveness of Data"

    - Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, "ImageNet Classification with Deep Convolutional Neural Networks"
  • Linear Digressions

    From Atari to ChatGPT: How AI Learned to Follow Instructions

    09/03/2026 | 25 min
    From Atari to ChatGPT: How AI Learned to Follow Instructions by Katie Malone
  • Linear Digressions

    It's RAG time: Retrieval-Augmented Generation

    02/03/2026 | 17 min
    Today we are going to talk about the feature with the worst acronym in generative AI: RAG, or Retrieval-Augmented Generation. If you've ever used something like "Chat with My Docs," if you have an internal AI chatbot that has access to your company's documents, or if you've built one yourself for a personal project and uploaded a bunch of documents for the AI to use — you have encountered RAG, whether you know it or not.
    It's an extremely effective technique: it works remarkably well for taking general-purpose models like ChatGPT or Claude and turning them into AIs that are aware of all the specific information that makes them truly useful in a huge variety of situations. RAG is pretty interesting under the hood, so I thought it would be fun to spend a little while talking about it.
    You are listening to Linear Digressions.
    RAG was first introduced in this 2020 paper from Facebook AI Research: https://arxiv.org/pdf/2005.11401
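The retrieve-then-generate loop described above can be sketched in a few lines. This is an illustrative toy, not code from the episode or the paper: the corpus, query, and bag-of-words "embedding" are stand-ins (real systems use a trained dense encoder and an actual LLM call for the final answer).

```python
# Minimal RAG sketch: retrieve the most relevant document for a query,
# then stuff it into the prompt so the model answers from that context.
import math
import re
from collections import Counter

# Toy document store; in practice this would be your company's docs.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The support team is available Monday through Friday, 9am to 5pm.",
    "Enterprise plans include single sign-on and audit logging.",
]

def embed(text):
    # Placeholder embedding: word counts. Real RAG uses a neural encoder.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # The retrieved passage is injected into the prompt, so the model
    # answers from it rather than from its parametric memory alone.
    context = "\n".join(retrieve(query, k=1))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many days do I have for a refund?"))
```

In a full system the returned prompt would be sent to a model like ChatGPT or Claude; the retrieval step is the part that makes the general-purpose model aware of your specific documents.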
  • Linear Digressions

    Chasing Away Repetitive LLM Responses with Verbalized Sampling

    23/02/2026 | 19 min
    One of the things that LLMs can be really helpful with is brainstorming or generating new creative content. They are called Generative AI, after all—not just for summarization and question-and-answer tasks. But if you use LLMs for creative generation, you may find that their output starts to seem repetitive after a little while.
    Let's say you're asking it to create a poem, some dialogue, or a joke. If you ask once, it'll give you something that sounds pretty reasonable. But if you ask the same thing 10 times, it might give you 10 things that sound kind of the same.
    Today's episode is about a technique called verbalized sampling, and it's a way to mitigate this repetitiveness—this lack of diversity in LLM responses for creative tasks. But one of the things I really love about it is that in understanding why this repetitiveness happens and why verbalized sampling actually works as a mitigation technique, you start to get some pretty interesting insights and a deeper understanding of what's going on with LLMs under the surface.
    The paper discussed in this episode is "Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity":
    https://arxiv.org/abs/2510.01171
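The core move of verbalized sampling — ask the model to verbalize several candidates with probabilities, then sample from that stated distribution instead of taking its single most likely answer — can be sketched as below. The prompt wording is an assumption (the paper's exact template may differ), and the model call is mocked: `candidates` stands in for a parsed LLM response.

```python
# Sketch of verbalized sampling. Instead of "tell me a joke" (which tends
# to collapse onto one or two modes), the prompt asks for a distribution
# of responses, and we sample from the verbalized probabilities.
import random

def verbalized_prompt(task, k=5):
    # Assumed phrasing, not the paper's verbatim template.
    return (
        f"Generate {k} responses to the task below. For each response, "
        f"also state the probability you would assign to it.\n\nTask: {task}"
    )

# Mocked model output: (candidate, verbalized probability) pairs that we
# pretend were parsed from the LLM's reply.
candidates = [
    ("Why did the neuron cross the road? To reach the other side-channel.", 0.4),
    ("I told my model a joke; it overfit to the punchline.", 0.3),
    ("Gradient descent walks into a bar and keeps heading downhill.", 0.2),
    ("A joke about recursion: see 'a joke about recursion'.", 0.1),
]

def sample_verbalized(pairs, rng=random):
    # Draw one candidate in proportion to its stated probability, which
    # restores diversity lost to mode collapse in the default decoding.
    texts, probs = zip(*pairs)
    return rng.choices(texts, weights=probs, k=1)[0]

print(verbalized_prompt("Tell me a joke about machine learning."))
print(sample_verbalized(candidates))
```

Repeated calls to `sample_verbalized` now vary across all four candidates rather than returning the same top answer ten times in a row.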


About Linear Digressions

Demystifying AI for the intelligently curious
