In recent years, transformer-based language models have gained prominence due to their powerful capabilities across a variety of tasks, including the generation of images, video, text, and code. Large Language Models (LLMs) now exist with parameter counts of a trillion or more. Such models are proprietary and unavailable for organizations to deploy privately. Even if such deployments were possible, the tremendous resource requirements of LLMs preclude their deployment on infrastructure smaller than enterprise and hyperscale data centers. Small Language Models (SLMs), with far smaller parameter counts of a few billion or fewer, are a viable alternative for use on small servers and edge devices, including PCs. While SLMs possess generative capabilities similar to those of LLMs, the reduction in model size is correlated with a decrease in accuracy across a broad range of generative applications, including code generation in multiple languages. To mitigate this shortcoming, an SLM may be fine-tuned on a curated dataset of code examples in a target programming language. This praxis presents results showing that two fine-tuned SLM variants improve average accuracy in C++ code generation by more than 9% and in Rust code generation by more than 14%.
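
The fine-tuning approach the abstract describes corresponds to standard supervised fine-tuning of a causal language model on a curated code corpus. The following is a minimal sketch using the Hugging Face Transformers and Datasets libraries; the model name, dataset file, and hyperparameters are illustrative assumptions, not the praxis's actual configuration.

    # Minimal supervised fine-tuning sketch for an SLM on a curated code dataset.
    # Model choice, dataset path, and hyperparameters are hypothetical.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "microsoft/phi-2"  # assumed SLM (~2.7B parameters)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Curated dataset: one record per code example, e.g. {"text": "<Rust source>"}
    dataset = load_dataset("json", data_files="curated_rust_examples.jsonl")["train"]

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=1024)

    tokenized = dataset.map(tokenize, batched=True,
                            remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="slm-rust-ft",
            per_device_train_batch_size=2,
            gradient_accumulation_steps=8,
            num_train_epochs=3,
            learning_rate=2e-5,
        ),
        train_dataset=tokenized,
        # mlm=False yields next-token (causal) language-modeling labels
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Repeating this recipe once with a C++ corpus and once with a Rust corpus would yield the two language-specific SLM variants the abstract evaluates.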