Stop Wasting GPU Compute. Build the High-Throughput, Low-Latency AI Infrastructure of 2026.
The "VRAM Wall" is the biggest bottleneck in modern AI. Standard Python wrappers and out-of-the-box runtimes are fine for prototyping, but at scale, memory fragmentation and Global Interpreter Lock (GIL) overhead will destroy your throughput. LLM Inference in C++ is the definitive engineering manual for bypassing Python entirely and building custom, bare-metal inference engines that maximize hardware utilization.
Focusing on the cutting-edge 2026 landscape, this book bridges the gap between high-level AI concepts and low-level GPU execution. You will learn how to implement enterprise-grade features like PagedAttention, FlashAttention-3, and Continuous Batching directly in C++ and CUDA, unlocking massive performance gains for large-scale language models.
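To give a flavor of the techniques covered, here is a minimal, illustrative C++ sketch of the core idea behind PagedAttention: a per-sequence block table mapping logical token positions to fixed-size physical KV-cache blocks, so memory is allocated on demand rather than reserved up front for the maximum sequence length. This is not code from the book; the names (`BlockAllocator`, `Sequence`, `kBlockTokens`) and the block size are hypothetical.

```cpp
// Sketch of PagedAttention-style host-side KV-cache bookkeeping,
// assuming a hypothetical fixed block size of 16 tokens.
#include <optional>
#include <vector>

constexpr int kBlockTokens = 16;  // tokens stored per KV block (assumption)

class BlockAllocator {
 public:
  explicit BlockAllocator(int num_blocks) {
    free_list_.reserve(num_blocks);
    for (int i = num_blocks - 1; i >= 0; --i) free_list_.push_back(i);
  }

  // Hand out one physical block, or nothing if the pool is exhausted.
  std::optional<int> Allocate() {
    if (free_list_.empty()) return std::nullopt;
    int id = free_list_.back();
    free_list_.pop_back();
    return id;
  }

  void Free(int block_id) { free_list_.push_back(block_id); }

 private:
  std::vector<int> free_list_;  // indices into the physical KV pool
};

// Per-sequence block table: logical token position -> physical block id.
struct Sequence {
  std::vector<int> block_table;
  int num_tokens = 0;

  // Append one token; grab a new block only when the last one is full,
  // so waste is bounded by one partially filled block per sequence.
  bool AppendToken(BlockAllocator& alloc) {
    if (num_tokens % kBlockTokens == 0) {
      auto blk = alloc.Allocate();
      if (!blk) return false;  // caller must preempt or evict a sequence
      block_table.push_back(*blk);
    }
    ++num_tokens;
    return true;
  }
};
```

In a real engine the block ids would index into a single pre-allocated GPU tensor, and the CUDA attention kernel would gather K/V pages through the block table; the sketch shows only the host-side bookkeeping that makes continuous batching of variable-length sequences possible without fragmenting VRAM.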
Inside, you will discover: