What is llm-d?


llm-d is a Kubernetes-native, open source framework that speeds up distributed large language model (LLM) inference at scale. 

In practice, this means that when an AI model receives complex, data-heavy queries, llm-d distributes the processing work across multiple resources so responses arrive faster.

llm-d was created by Google, NVIDIA, IBM Research, and CoreWeave. Its open source community contributes updates to improve the technology.

How llm-d speeds up inference

LLM prompts can be complex and nonuniform. They typically require extensive computational resources and storage to process large amounts of data. 

llm-d has a modular architecture that can support the increasing resource demands of large, sophisticated reasoning models like LLMs.

A modular architecture allows the different parts of the AI workload to work either together or separately, depending on the model's needs. This helps the model run inference faster.

Think of llm-d like a marathon race: Each runner is in control of their own pace. You may cross the finish line at a different time than others, but everyone finishes when they're ready. If everyone had to cross the finish line at the same time, you'd be tied to the unique needs of other runners, like endurance, water breaks, or time spent training. That would make things complicated.

A modular architecture lets pieces of the inference process work at their own pace to reach the best result as quickly as possible. It makes it easier to fix or update specific processes independently, too.

This specific way of processing models allows llm-d to handle the demands of LLM inference at scale. It also empowers users to go beyond single-server deployments and use generative AI (gen AI) inference across the enterprise.

How does distributed inference work?  

The llm-d modular architecture is made up of: 

  • Kubernetes: an open source container-orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
  • vLLM: an open source inference server that speeds up the outputs of gen AI applications.
  • Inference Gateway (IGW): a Kubernetes Gateway API extension that hosts features like model routing, serving priority, and “smart” load-balancing capabilities. 

This accessible, modular architecture makes llm-d an ideal platform for distributed LLM inference at scale.
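To make the "smart" load-balancing idea behind the Inference Gateway more concrete, here is a minimal sketch in Python. It is an illustrative simplification, not llm-d's actual routing policy: the replica names, the load signals, and the weighting rule are all assumptions made for the example.

```python
# Illustrative sketch only: route a request to the least-loaded vLLM
# replica, in the spirit of IGW's "smart" load balancing. The scoring
# rule and field names below are assumptions, not llm-d's real policy.

from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    queued_requests: int   # requests waiting on this vLLM server
    kv_cache_usage: float  # fraction of KV cache in use, 0.0 to 1.0

def pick_replica(replicas: list[Replica]) -> Replica:
    """Return the replica with the lowest combined load score."""
    # Weighting queue depth against cache pressure is an assumed
    # heuristic chosen for demonstration.
    return min(replicas, key=lambda r: r.queued_requests + 10 * r.kv_cache_usage)

replicas = [
    Replica("vllm-0", queued_requests=4, kv_cache_usage=0.9),
    Replica("vllm-1", queued_requests=6, kv_cache_usage=0.2),
    Replica("vllm-2", queued_requests=2, kv_cache_usage=0.8),
]
print(pick_replica(replicas).name)  # vllm-1 has the lowest combined score
```

A production gateway would track these signals live from each inference server and combine them with model-routing and priority rules; the point here is only that routing decisions use per-replica load, not round-robin alone.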
