
Search filters

Item type

  • All product types
  • Books (16)
  • Magazines & Periodicals (no matching results)
  • Comics (no matching results)
  • Sheet Music (no matching results)
  • Art, Prints & Posters (no matching results)
  • Photographs (no matching results)
  • Maps (no matching results)
  • Manuscripts & Paper Collectibles (no matching results)

Condition

  • New (15)
  • As New, Fine or Near Fine (no matching results)
  • Very Good or Good (1)
  • Fair or Poor (no matching results)
  • As Described (no matching results)

Binding

  • All
  • Hardcover (no matching results)
  • Softcover (16)

Other attributes

  • First edition (no matching results)
  • Signed copy (no matching results)
  • Dust jacket (no matching results)
  • With photos (10)
  • Not print on demand (9)

Language (1)

Price

Custom price range (EUR)

Seller country

Search results (16)

  • Campesato, Oswald

    Language: English

    Publisher: Mercury Learning and Information, 2024

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: Books From California, Simi Valley, CA, U.S.A.

    Seller rating: 5 out of 5 stars

    EUR 37,88

    EUR 4,25 shipping
    Ships within U.S.A.

    Quantity: 1 available

    Paperback. Condition: Very Good.

  • Campesato, Oswald

    Language: English

    Publisher: Mercury Learning and Information 1/1/2025, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: BargainBookStores, Grand Rapids, MI, U.S.A.

    Seller rating: 5 out of 5 stars

    EUR 44,14

    Free shipping
    Ships within U.S.A.

    Quantity: 5 available

    Paperback or Softback. Condition: New. Large Language Models for Developers: A Prompt-Based Exploration of LLMs. Book.

  • Campesato, Oswald

    Language: English

    Publisher: Mercury Learning and Information, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: California Books, Miami, FL, U.S.A.

    Seller rating: 5 out of 5 stars

    EUR 47,35

    Free shipping
    Ships within U.S.A.

    Quantity: More than 20 available

    Condition: New.

  • Oswald Campesato

    Language: English

    Publisher: De Gruyter, US, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: Rarewaves USA, OSWEGO, IL, U.S.A.

    Seller rating: 5 out of 5 stars

    EUR 64,33

    Free shipping
    Ships within U.S.A.

    Quantity: More than 20 available

    Paperback. Condition: New. This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to utilize LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, this book covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies to optimize and deploy LLMs. Beginning with an introduction to generative AI, the book explains distinctions between conversational AI and generative models like GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). Some of the LLMs that are used for generating completions to prompts include Llama-3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of creating effective prompts, covering advanced methods like Chain of Thought (CoT) and Tree of Thought prompts. As the book progresses, it details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods like LoRA and QLoRA, and includes Python code samples for hands-on learning. Readers are also introduced to the transformer architecture's attention mechanism (Chapter 8), with step-by-step guidance on implementing self-attention layers. For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies like dynamic quantization and probabilistic quantization, which help reduce model size without sacrificing performance. FEATURES: Covers the full lifecycle of working with LLMs, from model selection to deployment. Includes code samples using practical Python code for implementing prompt engineering, fine-tuning, and quantization. Teaches readers to enhance model efficiency with advanced optimization techniques. Includes companion files with code and images -- available from the publisher. (An illustrative self-attention sketch appears after the final listing below.)

  • Oswald Campesato

    Language: English

    Publisher: De Gruyter, US, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: Rarewaves.com USA, London, LONDO, United Kingdom

    Seller rating: 5 out of 5 stars

    EUR 87,95

    Free shipping
    Ships from United Kingdom to U.S.A.

    Quantity: More than 20 available

    Paperback. Condition: New. Publisher description as above.

  • Campesato, Oswald

    Language: English

    Publisher: Mercury Learning & Information, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: Revaluation Books, Exeter, United Kingdom

    Seller rating: 5 out of 5 stars

    EUR 76,18

    EUR 17,07 shipping
    Ships from United Kingdom to U.S.A.

    Quantity: 2 available

    Paperback. Condition: Brand New. 1012 pages. 6.00x1.90x9.00 inches. In Stock.

  • Oswald Campesato

    Language: English

    Publisher: De Gruyter, US, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: Rarewaves USA United, OSWEGO, IL, U.S.A.

    Seller rating: 5 out of 5 stars

    EUR 66,34

    EUR 42,57 shipping
    Ships within U.S.A.

    Quantity: More than 20 available

    Paperback. Condition: New. Publisher description as above.

  • Large Language Models for Developers | A Prompt-based Exploration of LLMs

    Oswald Campesato

    Language: English

    Publisher: De Gruyter, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: preigu, Osnabrück, Germany

    Seller rating: 5 out of 5 stars

    EUR 53,65

    EUR 70,00 shipping
    Ships from Germany to U.S.A.

    Quantity: 5 available

    Paperback. Condition: New. Large Language Models for Developers | A Prompt-based Exploration of LLMs | Oswald Campesato | Paperback | 1012 pp. | English | 2025 | De Gruyter | EAN 9781501523564 | Responsible person for the EU: Walter de Gruyter GmbH, De Gruyter GmbH, Genthiner Str. 13, 10785 Berlin, productsafety[at]degruyterbrill[dot]com | Seller: preigu.

  • Oswald Campesato

    Language: English

    Publisher: De Gruyter, US, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: Rarewaves.com UK, London, United Kingdom

    Seller rating: 5 out of 5 stars

    EUR 82,80

    EUR 73,99 shipping
    Ships from United Kingdom to U.S.A.

    Quantity: More than 20 available

    Paperback. Condition: New. Publisher description as above.

  • Oswald Campesato

    Language: English

    Publisher: de Gruyter, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: PBShop.store UK, Fairford, GLOS, United Kingdom

    Seller rating: 5 out of 5 stars

    Print on Demand

    EUR 51,38

    EUR 8,69 shipping
    Ships from United Kingdom to U.S.A.

    Quantity: More than 20 available

    PAP. Condition: New. New Book. Delivered from our UK warehouse in 4 to 14 business days. THIS BOOK IS PRINTED ON DEMAND. Established seller since 2000.

  • Oswald Campesato

    Language: English

    Publisher: de Gruyter, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: PBShop.store US, Wood Dale, IL, U.S.A.

    Seller rating: 5 out of 5 stars

    Print on Demand

    EUR 59,88

    Free shipping
    Ships within U.S.A.

    Quantity: More than 20 available

    PAP. Condition: New. New Book. Shipped from UK. THIS BOOK IS PRINTED ON DEMAND. Established seller since 2000.

  • Oswald Campesato

    Language: English

    Publisher: Mercury Learning And Information, De Gruyter Jan 2025, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany

    Seller rating: 5 out of 5 stars

    Print on Demand

    EUR 58,95

    EUR 23,00 shipping
    Ships from Germany to U.S.A.

    Quantity: 1 available

    Paperback. Condition: New. This item is printed on demand, which takes 3-4 days longer. New stock. Publisher description as above. 1046 pp. English.

  • Oswald Campesato

    Language: English

    Publisher: De Gruyter, New York, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: CitiRetail, Stevenage, United Kingdom

    Seller rating: 5 out of 5 stars

    Print on Demand

    EUR 56,85

    EUR 42,12 shipping
    Ships from United Kingdom to U.S.A.

    Quantity: 1 available

    Paperback. Condition: New. Publisher description as above. This item is printed on demand. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.

  • Oswald Campesato

    Language: English

    Publisher: De Gruyter Mouton, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: moluna, Greven, Germany

    Seller rating: 4 out of 5 stars

    Print on Demand

    EUR 51,60

    EUR 48,99 shipping
    Ships from Germany to U.S.A.

    Quantity: More than 20 available

    Condition: New. This is a print-on-demand item and will be printed for you after you order. Oswald Campesato (San Francisco, CA) specializes in Deep Learning, Python, Data Science, and Generative AI. He is the author/co-author of over forty-five books including Google Gemini for Python, Large Language Models, and GPT-4 for Developers (all Mercury .

  • Oswald Campesato

    Language: English

    Publisher: Mercury Learning And Information, De Gruyter Jan 2025, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany

    Seller rating: 5 out of 5 stars

    Print on Demand

    EUR 58,95

    EUR 60,00 shipping
    Ships from Germany to U.S.A.

    Quantity: 1 available

    Paperback. Condition: New. This item is printed on demand (print-on-demand title). New stock. Publisher description as above. Walter de Gruyter, Genthiner Straße 13, 10785 Berlin. 1046 pp. English.

  • Oswald Campesato

    Language: English

    Publisher: Mercury Learning And Information, De Gruyter Akademie Forschung, 2025

    ISBN 10: 1501523562 ISBN 13: 9781501523564

    Seller: AHA-BUCH GmbH, Einbeck, Germany

    Seller rating: 5 out of 5 stars

    Print on Demand

    EUR 65,89

    EUR 67,40 shipping
    Ships from Germany to U.S.A.

    Quantity: 1 available

    Paperback. Condition: New. Printed after ordering; new stock. Publisher description as above.
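
Illustrative note: the publisher description shared by several listings above mentions step-by-step guidance on implementing self-attention layers. The listings themselves contain no code; the following is only a minimal, self-contained NumPy sketch of single-head scaled dot-product self-attention, not material from the book. All function names, shapes, and the random test data are assumptions made for illustration.

    # Minimal single-head self-attention sketch (illustrative only; not from the book).
    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax along the given axis.
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model) input embeddings.
        # Wq, Wk, Wv: (d_model, d_k) projection matrices (hypothetical parameters).
        Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product similarities
        weights = softmax(scores, axis=-1)        # attention weights over positions
        return weights @ V                        # weighted sum of value vectors

    # Tiny usage example with random data (shapes are arbitrary).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))                   # 4 tokens, d_model = 8
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # -> (4, 8)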