Learn how large language models like GPT and Gemini work under the hood in plain English.
How Large Language Models Work translates years of expert research on Large Language Models into a readable, focused introduction to working with these amazing systems. It explains clearly how LLMs function, introduces the optimization techniques to fine-tune them, and shows how to create pipelines and processes to ensure your AI applications are efficient and error-free.
In How Large Language Models Work you will learn how to:
- Test and evaluate LLMs
- Use human feedback, supervised fine-tuning, and Retrieval Augmented Generation (RAG)
- Reduce the risk of bad outputs, high-stakes errors, and automation bias
- Design human-computer interaction systems
- Combine LLMs with traditional ML
How Large Language Models Work is written by some of the best machine learning researchers at Booz Allen Hamilton, including researcher Stella Biderman, Director of AI/ML Research Drew Farris, and Director of Emerging AI Edward Raff. In clear and simple terms, these experts lay out the foundational concepts of LLMs, the technology’s opportunities and limitations, and best practices for incorporating AI into your organization.
The synopsis may refer to a different edition of this title.
Edward Raff is a Director of Emerging AI at Booz Allen Hamilton, where he leads the machine learning research team. He has worked in healthcare, natural language processing, computer vision, and cyber security, alongside fundamental AI/ML research. The author of Inside Deep Learning, Dr. Raff has published over 100 research articles at top artificial intelligence conferences. He is the author of the Java Statistical Analysis Tool library, a Senior Member of the Association for the Advancement of Artificial Intelligence, and has twice chaired the Conference on Applied Machine Learning and Information Technology and the AI for Cyber Security workshop. Dr. Raff's work has been deployed and used by antivirus companies all over the world.
Drew Farris is a Director of AI/ML Research at Booz Allen Hamilton. He works with clients to build information retrieval, machine learning, and large-scale data management systems, and has co-authored Booz Allen's Field Guide to Data Science and Machine Intelligence Primer, as well as Manning Publications' Taming Text, the 2013 Jolt Award-winning book on computational text processing. He is a member of the Apache Software Foundation and has contributed to a number of open source projects, including Apache Accumulo, Lucene, Mahout, and Solr.
Stella Biderman is a machine learning researcher at Booz Allen Hamilton and the executive director of the non-profit research center EleutherAI. She is a leading advocate for open source artificial intelligence and has trained many of the world's most powerful open source artificial intelligence models. She has a master's degree in computer science from the Georgia Institute of Technology and degrees in mathematics and philosophy from the University of Chicago.
From the back cover:
About the reader:
No knowledge of ML or AI systems is required.
"About this title" may refer to a different edition of this title.
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
PAP. Condition: New. New Book. Shipped from UK. Established seller since 2000. Item no. PB-9781633437081
Quantity: 15 available
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Item no. 409256110
Quantity: 2 available
Seller: Romtrade Corp., Sterling Heights, MI, USA
Condition: New. This is a brand-new US edition. This item may be shipped from the US or any other country, as we have multiple locations worldwide. Item no. ABNR-319265
Seller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 176 pages. 9.00 x 7.25 x 0.50 inches. In stock. Item no. xr1633437086
Quantity: 2 available
Seller: Kennys Bookstore, Olney, MD, USA
Condition: New. 2025. Paperback. Books ship from the US and Ireland. Item no. V9781633437081
Quantity: More than 20 available
Seller: Speedyhen, London, United Kingdom
Condition: NEW. Item no. NW9781633437081
Quantity: 1 available
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. New item. Learn how large language models like GPT and Gemini work under the hood in plain English.

How Large Language Models Work translates years of expert research on Large Language Models into a readable, focused introduction to working with these amazing systems. It explains clearly how LLMs function, introduces the optimization techniques to fine-tune them, and shows how to create pipelines and processes to ensure your AI applications are efficient and error-free.

In How Large Language Models Work you will learn how to:
- Test and evaluate LLMs
- Use human feedback, supervised fine-tuning, and Retrieval Augmented Generation (RAG)
- Reduce the risk of bad outputs, high-stakes errors, and automation bias
- Design human-computer interaction systems
- Combine LLMs with traditional ML

How Large Language Models Work is authored by top machine learning researchers at Booz Allen Hamilton, including researcher Stella Biderman, Director of AI/ML Research Drew Farris, and Director of Emerging AI Edward Raff. They lay out how LLM and GPT technology works in plain language that's accessible and engaging for all.

About the Technology: Large Language Models put the "I" in "AI." By connecting words, concepts, and patterns from billions of documents, LLMs are able to generate the human-like responses we've come to expect from tools like ChatGPT, Claude, and DeepSeek. In this informative and entertaining book, the world's best machine learning researchers from Booz Allen Hamilton explore foundational concepts of LLMs, their opportunities and limitations, and the best practices for incorporating AI into your organizations and applications.

About the Book: How Large Language Models Work takes you inside an LLM, showing step-by-step how a natural language prompt becomes a clear, readable text completion. Written in plain language, you'll learn how LLMs are created, why they make errors, and how you can design reliable AI solutions. Along the way, you'll learn how LLMs "think," how to design LLM-powered applications like agents and Q&A systems, and how to navigate the ethical, legal, and security issues.

What's Inside:
- Customize LLMs for specific applications
- Reduce the risk of bad outputs and bias
- Dispel myths about LLMs
- Go beyond language processing

About the Readers: No knowledge of ML or AI systems is required.

About the Authors: Edward Raff, Drew Farris, and Stella Biderman are the Director of Emerging AI, Director of AI/ML Research, and a machine learning researcher at Booz Allen Hamilton.

Table of Contents:
1 Big picture: What are LLMs
2 Tokenizers: How large language models see the world
3 Transformers: How inputs become outputs
4 How LLMs learn
5 How do we constrain the behavior of LLMs
6 Beyond natural language processing
7 Misconceptions, limits, and emergent abilities of LLMs
8 Designing solutions with large language models
9 Ethics of building and using LLMs

Get a free eBook (PDF or ePub) from Manning as well as access to the online liveBook format (and its AI assistant that will answer your questions in any language) when you purchase the print book. Item no. 9781633437081
Quantity: 1 available