Build real-world AI assistants, not just toy demos.
LLMs in Practice shows you, step by step, how to build retrieval-augmented generation (RAG) systems with embeddings and vector databases, then turn them into production-ready assistants that search, reason, and take action.
Instead of hand-wavy theory, this book walks through a complete stack: ingesting documents, chunking and embedding them, storing vectors, wiring up retrieval, designing grounded prompts, evaluating quality, logging behaviour, securing data, adding tools, and finally deploying everything as a service. Along the way, you see the same patterns implemented in both Python and TypeScript, so you can work in whichever ecosystem you prefer.
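To make the chunk-and-embed step concrete, here is a minimal self-contained sketch. The character-window `chunk` function is a common RAG baseline, and the toy hashing `embed` is a stand-in for a real embedding model (in practice you would call a provider's embedding API); both names are illustrative, not from the book.

```python
import math

def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping character windows (a simple RAG baseline)."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words hash embedding; a real system would call a model API."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Ingest one document into an in-memory "vector store".
doc = "LLM assistants ground their answers in retrieved context. " * 20
index = [(c, embed(c)) for c in chunk(doc)]
```

Swapping the toy `embed` for a real model and the list for a vector database changes nothing about the shape of this pipeline, which is the point of keeping the pieces small.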
You’ll learn how to take a messy folder of PDFs, wikis, and docs and turn it into:
A searchable knowledge base backed by embeddings and a vector database
A grounded RAG pipeline that cites its sources instead of hallucinating
A tools-enabled assistant that not only answers questions, but can create tickets, trigger workflows, or call APIs
An observable system with traces, logs, and a small evaluation set, so you can improve it over time
A deployable service (FastAPI or Express) that real users can talk to
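The "grounded pipeline that cites its sources" idea can be sketched in a few lines: rank stored chunks by cosine similarity, then build a prompt that forces the model to answer only from the retrieved, labelled sources. The tiny hand-built vectors and the `retrieve`/`grounded_prompt` names here are illustrative assumptions, not the book's API.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the top-k (source, text, vec) entries by similarity to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[2]), reverse=True)
    return ranked[:k]

def grounded_prompt(question: str, hits) -> str:
    """Build a prompt that restricts the model to cited, retrieved context."""
    context = "\n".join(f"[{src}] {text}" for src, text, _ in hits)
    return (
        "Answer using only the sources below; cite them as [source].\n"
        f"{context}\n\nQuestion: {question}"
    )

# Toy index: (source, text, embedding) triples with hand-made 2-d vectors.
index = [
    ("faq.md", "Refunds are issued within 5 business days.", [1.0, 0.0]),
    ("policy.md", "Tickets are escalated after 24 hours.", [0.0, 1.0]),
]
hits = retrieve([0.9, 0.1], index, k=1)
prompt = grounded_prompt("How fast are refunds?", hits)
```

Because every snippet carries its source label into the prompt, the assistant's citations come from retrieval rather than from the model's imagination.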
The focus throughout is on small, composable building blocks you can actually ship: tight retrieval functions, clear prompt templates, thin adapters around model providers, and simple web endpoints that wrap it all together. No heavy frameworks required.
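One of those building blocks, the thin adapter around model providers, can be sketched as a one-method seam. The `ChatModel` protocol and `FakeModel` stand-in below are hypothetical names for illustration; a real adapter would wrap a provider's API client behind the same interface, and the web-framework wiring is omitted.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Thin seam around any provider: one method, plain strings in and out."""
    def complete(self, prompt: str) -> str: ...

class FakeModel:
    """Stand-in provider for tests; a real adapter would call an API client."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt[:30]}"

def answer(question: str, model: ChatModel) -> str:
    """Application code depends only on the protocol, never on a vendor SDK."""
    # In the full pipeline, retrieved context would be prepended here.
    return model.complete(question)

reply = answer("What is RAG?", FakeModel())
```

Keeping the seam this narrow is what lets you swap providers, or inject a fake in tests, without touching retrieval or endpoint code.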
By the end of the book, you’ll have a practical roadmap to go from “I can call an LLM API” to “I have a narrow, grounded assistant in production that my team actually uses”—and a set of patterns you can reuse for the next assistant you build.