
Large Language Model (LLM) Talk

AI-Talk

Available episodes

5 of 66
  • Kimi K2
    Kimi K2, developed by Moonshot AI, is an open agentic intelligence model built on a Mixture-of-Experts (MoE) architecture: 1 trillion total parameters, with 32 billion active per token during inference. Trained on 15.5 trillion tokens using the MuonClip optimizer for stable optimization, it is tuned for advanced reasoning, coding, and tool use, and it pairs strong performance with significantly lower pricing than many competitors, making cutting-edge AI accessible and fostering innovation. (A toy MoE-routing sketch appears after this episode list.)
    Duration: 15:30
  • Mixture-of-Recursions (MoR)
    Mixture-of-Recursions (MoR) is a unified framework built on a Recursive Transformer architecture and designed to make large language models more efficient. It combines three core paradigms: parameter sharing (reusing a shared stack of layers across recursion steps), adaptive computation (lightweight routers dynamically assign each token its own recursion depth), and efficient Key-Value (KV) caching (KV pairs are selectively stored or shared per recursion step). Together these let MoR deliver large-model quality at significantly reduced compute and memory cost, for both training and inference. (A toy depth-routing sketch appears after this episode list.)
    Duration: 16:43
  • MeanFlow
    MeanFlow models introduce the concept of average velocity to reformulate one-step generative modeling. Unlike Flow Matching, which works with the instantaneous velocity, MeanFlow directly models the average velocity over a time interval, i.e. the displacement across the interval divided by its length. This allows efficient one-step or few-step generation from a single network evaluation. Training is guided by a principled mathematical identity relating average and instantaneous velocities, with no pre-training, distillation, or curriculum learning required, and MeanFlow achieves state-of-the-art one-step generation quality, significantly narrowing the gap to multi-step models. (The identity is written out after this episode list.)
    Duration: 6:47
  • Mamba
    Mamba is a deep learning architecture whose computation and memory scale linearly with sequence length, addressing the quadratic cost of Transformers. Its selective State Space Model (SSM) layer adapts dynamically to the input context, letting it "forget" or "remember" information as needed. A hardware-aware parallel algorithm for its recurrent "selective scan" uses kernel fusion for efficient GPU memory access and recomputation to shrink the training-time memory footprint. The result is significantly faster inference (up to 5x higher throughput) and superior long-context handling. (A reference selective-scan sketch appears after this episode list.)
    Duration: 8:14
  • LLM Alignment
    LLM alignment is the process of steering Large Language Models to operate in a manner consistent with intended human goals, preferences, and ethical principles. Its primary objective is to make LLMs helpful, honest, and harmless, ensuring their outputs align with specific values and are advantageous to users. This critical process prevents unintended or harmful outputs, mitigates issues like specification gaming and reward hacking, addresses biases and falsehoods, and manages the complexity of these powerful AI systems. Alignment is vital to transform unpredictable models into reliable, trustworthy, and beneficial tools, especially as AI capabilities advance.
    Duration: 20:06
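
For the Kimi K2 episode: a minimal sketch of a top-k Mixture-of-Experts layer, showing how a model can store many expert parameters in total while only a few experts run for any one token. The expert count, dimensions, and top-k value below are illustrative assumptions, not Kimi K2's actual configuration.

```python
# Toy top-k MoE feed-forward layer in NumPy (illustrative sizes, not Kimi K2's).
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff = 16, 32
n_experts, top_k = 8, 2          # 8 experts stored, only 2 run per token

# Router and expert weights (total parameters grow with n_experts)
W_router = rng.normal(size=(d_model, n_experts)) * 0.02
W_in  = rng.normal(size=(n_experts, d_model, d_ff)) * 0.02
W_out = rng.normal(size=(n_experts, d_ff, d_model)) * 0.02

def moe_layer(x):
    """x: (tokens, d_model) -> (tokens, d_model), routing each token to top_k experts."""
    logits = x @ W_router                              # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]      # indices of the chosen experts
    # softmax over the selected experts only
    sel = np.take_along_axis(logits, top, axis=-1)
    gates = np.exp(sel - sel.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)

    y = np.zeros_like(x)
    for t in range(x.shape[0]):                        # per-token dispatch (clarity over speed)
        for e, g in zip(top[t], gates[t]):
            h = np.maximum(x[t] @ W_in[e], 0.0)        # expert FFN with ReLU
            y[t] += g * (h @ W_out[e])
    return y

tokens = rng.normal(size=(4, d_model))
print(moe_layer(tokens).shape)   # (4, 16): only 2 of 8 experts ran for each token
```

Growing n_experts increases total capacity while per-token compute stays pinned to top_k experts, which is the 1-trillion-total / 32-billion-active pattern the episode describes.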
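
For the Mixture-of-Recursions episode: a minimal sketch of a recursive block that is reused across steps (parameter sharing) plus a lightweight router that assigns each token its own recursion depth (adaptive computation). The router, the plain tanh block, and all shapes are illustrative assumptions rather than MoR's actual layers; the comment on KV caching only marks where MoR's selective caching would apply.

```python
# Toy recursive forward pass with per-token depth routing (illustrative, not MoR's design).
import numpy as np

rng = np.random.default_rng(1)
d_model, max_recursions = 16, 3

W_shared = rng.normal(size=(d_model, d_model)) * 0.05   # reused at every recursion step
w_router = rng.normal(size=(d_model,)) * 0.05           # lightweight per-token router

def assign_depths(x):
    """Route each token to a recursion depth in {1, ..., max_recursions}."""
    scores = 1.0 / (1.0 + np.exp(-(x @ w_router)))       # sigmoid score per token
    return 1 + np.floor(scores * max_recursions).astype(int).clip(0, max_recursions - 1)

def recursive_forward(x):
    depths = assign_depths(x)                            # (tokens,)
    h = x.copy()
    for step in range(1, max_recursions + 1):
        active = depths >= step                          # tokens that still need computation
        if not active.any():
            break
        # only active tokens pass through the shared block again; in MoR, KV pairs
        # would likewise only be stored or shared for tokens still in the recursion
        h[active] = h[active] + np.tanh(h[active] @ W_shared)
    return h, depths

x = rng.normal(size=(6, d_model))
out, depths = recursive_forward(x)
print(depths)            # per-token recursion depth, e.g. [2 1 3 ...]
print(out.shape)         # (6, 16)
```

In the real architecture the shared block is a full Transformer layer stack and the router is trained jointly; the loop above only illustrates the control flow.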
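
For the MeanFlow episode, the "principled mathematical identity" can be written out explicitly. The notation below (u for average velocity, v for instantaneous velocity, z_t for the state at time t) is my reading of the summary in standard flow-matching conventions, not a quotation from the paper.

```latex
% Average velocity over [r, t]: total displacement divided by the interval length.
u(z_t, r, t) \;\equiv\; \frac{1}{t - r} \int_r^t v(z_\tau, \tau)\, \mathrm{d}\tau

% Differentiating (t - r)\, u(z_t, r, t) = \int_r^t v(z_\tau, \tau)\, \mathrm{d}\tau
% with respect to t yields the identity between the two velocities, where d/dt is
% the total derivative along the trajectory (d/dt\, u = v \cdot \partial_z u + \partial_t u):
u(z_t, r, t) \;=\; v(z_t, t) \;-\; (t - r)\, \frac{\mathrm{d}}{\mathrm{d}t}\, u(z_t, r, t)

% A single evaluation of the learned u_\theta then gives the whole displacement,
% which is what enables one-step generation:
z_r \;=\; z_t \;-\; (t - r)\, u_\theta(z_t, r, t)
```

Because the identity supplies a training target for u_theta directly from the instantaneous velocity and the network's own total derivative, no pre-trained teacher, distillation, or curriculum is needed, as the summary notes.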
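
For the Mamba episode: a plain sequential reference version of a selective state-space recurrence, in which the step size and the B/C projections depend on the current input so the state can selectively remember or forget. This is the naive loop, not the hardware-aware fused parallel scan the episode mentions, and all projections and sizes are illustrative assumptions.

```python
# Toy selective SSM recurrence (sequential reference scan, scalar input channel).
import numpy as np

rng = np.random.default_rng(2)
d_state = 4                                   # size of the hidden SSM state

A = -np.exp(rng.normal(size=d_state))         # negative diagonal A for a stable system
W_delta = rng.normal(size=1) * 0.5            # input-dependent step-size projection
W_B = rng.normal(size=d_state) * 0.5          # input-dependent B projection
W_C = rng.normal(size=d_state) * 0.5          # input-dependent C projection

def selective_scan(x):
    """x: (seq_len,) scalar channel -> y: (seq_len,) via an input-selective recurrence."""
    h = np.zeros(d_state)
    y = np.zeros_like(x)
    for t, x_t in enumerate(x):
        delta = np.log1p(np.exp(W_delta[0] * x_t))     # softplus: positive step size
        B_t = W_B * x_t                                # selection: B depends on the input
        C_t = W_C * x_t                                # selection: C depends on the input
        A_bar = np.exp(delta * A)                      # zero-order-hold discretization (diag A)
        h = A_bar * h + delta * B_t * x_t              # state update: forget/remember per input
        y[t] = C_t @ h                                 # read out
    return y

x = rng.normal(size=8)
print(selective_scan(x))
```

The production kernel computes the same recurrence with a fused, hardware-aware scan that keeps the expanded state in fast on-chip memory and recomputes intermediates in the backward pass, which is where the episode's memory and throughput gains come from.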


About Large Language Model (LLM) Talk

AI Explained breaks down the world of AI in just 10 minutes. Get quick, clear insights into AI concepts and innovations, without any complicated math or jargon. Perfect for your commute or spare time, this podcast makes understanding AI easy, engaging, and fun—whether you're a beginner or tech enthusiast.

