In recent years, transformer-based AI language models have gained prominence due to their powerful capabilities across a variety of tasks, including the generation of images, video, text, and code. Large Language Models (LLMs) now exist with parameter counts of a trillion or more. Such models are proprietary and unavailable for organizations to deploy privately. Even if such deployments were possible, the tremendous resource requirements of LLMs preclude their deployment on infrastructure smaller than enterprise and hyperscale data centers. Small Language Models (SLMs), with far lower parameter counts of a few billion or fewer, are a viable alternative for use on small servers and edge devices, including PCs. While SLMs possess generative capabilities similar to those of LLMs, the reduction in model size is correlated with a decrease in accuracy when evaluated across a broad range of generative applications, including code generation in multiple languages.
To mitigate this shortcoming, an SLM may be fine-tuned on a curated dataset of code examples in a target programming language. This praxis presents results illustrating how two fine-tuned SLM variants were created that improve average accuracy in C++ code generation by more than 9%, and in Rust code generation by more than 14%. A minimal sketch of this kind of fine-tuning workflow is shown below.
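The abstract does not specify the base model, framework, or training configuration used in the praxis. The following is a minimal sketch of supervised fine-tuning an SLM on a curated code dataset, assuming a Hugging Face-style workflow; the model name, dataset file, prompt/completion format, and all hyperparameters are illustrative placeholders, not the author's actual setup.

```python
# Hypothetical sketch: fine-tune a small causal LM on curated code examples.
# Assumes a JSONL file of {"prompt": ..., "completion": ...} pairs in the
# target language (e.g., Rust). None of these names come from the praxis.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "microsoft/phi-2"  # illustrative SLM; the praxis's base model is not named here
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Curated examples, one JSON object per line, e.g.:
# {"prompt": "Write a Rust function that reverses a string.",
#  "completion": "fn reverse(s: &str) -> String { s.chars().rev().collect() }"}
raw = load_dataset("json", data_files="curated_rust_examples.jsonl", split="train")

def tokenize(example):
    # Concatenate prompt and completion into one causal-LM training sequence.
    text = example["prompt"] + "\n" + example["completion"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = raw.map(tokenize, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-rust-ft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        fp16=True,
    ),
    train_dataset=tokenized,
    # mlm=False selects standard next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("slm-rust-ft/final")
```

After training, the saved checkpoint can be evaluated against the base model on a code-generation benchmark in the target language to measure the kind of accuracy gains the abstract reports.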