The definitive guide to LLMs, from architectures, pretraining, and fine-tuning to Retrieval Augmented Generation (RAG), multimodal AI, risk mitigation, and practical implementations with ChatGPT, Hugging Face, and Vertex AI
Transformers for Natural Language Processing and Computer Vision, Third Edition, explores Large Language Model (LLM) architectures, practical applications, and popular platforms (Hugging Face, OpenAI, and Google Vertex AI) used for Natural Language Processing (NLP) and Computer Vision (CV).
The book guides you through a range of transformer architectures, from foundation models to generative AI. You'll pretrain and fine-tune LLMs and work through different use cases, from summarization to question-answering systems that leverage embedding-based search. You'll also implement Retrieval Augmented Generation (RAG) to enhance accuracy and gain greater control over your LLM outputs. Additionally, you'll understand common LLM risks, such as hallucinations, memorization, and privacy issues, and implement mitigation strategies using moderation models alongside rule-based systems and knowledge integration.
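To give a flavor of the embedding-based search that underpins the RAG workflows mentioned above, here is a minimal sketch assuming the sentence-transformers library; the model name and example documents are illustrative assumptions, not code from the book.

```python
# Minimal sketch of embedding-based retrieval for a RAG prompt,
# assuming the sentence-transformers library is installed.
from sentence_transformers import SentenceTransformer, util

# Assumed general-purpose embedding model (illustrative choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Transformers use self-attention to model token relationships.",
    "RAG augments an LLM prompt with retrieved documents.",
    "Vision transformers split images into patches before encoding.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "How does retrieval augmented generation work?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity and keep the best match as context.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best_doc = documents[int(scores.argmax())]

# The retrieved passage is then prepended to the question to form the LLM prompt.
prompt = f"Context: {best_doc}\n\nQuestion: {query}"
print(prompt)
```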
Dive into generative vision transformers and multimodal architectures, and build practical applications, such as image and video classification. Go further and combine different models and platforms to build AI solutions and explore AI agent capabilities.
This book provides you with an understanding of transformer architectures, strategies for pretraining and fine-tuning, and LLM best practices.
This book is ideal for NLP and CV engineers, data scientists, machine learning practitioners, software developers, and technical leaders looking to advance their expertise in LLMs and generative AI or explore the latest industry trends.
Familiarity with Python and basic machine learning concepts will help you fully understand the use cases and code examples. However, hands-on examples involving LLM user interfaces, prompt engineering, and no-code model building ensure this book remains accessible to anyone curious about the AI revolution.
Denis Rothman graduated from Sorbonne University and Paris-Diderot University, designing one of the very first patented word2matrix embeddings and patented AI conversational agents. He began his career by authoring one of the first AI cognitive Natural Language Processing (NLP) chatbots, applied as an automated language teacher for Moet et Chandon and other companies. He authored an AI resource optimizer for IBM and apparel producers, and then an Advanced Planning and Scheduling (APS) solution used worldwide.
EUR 5.76 shipping from the United Kingdom to Germany
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item no.: ria9781805128724_new
Quantity: more than 20 available