
BookDomains.ma

LLM Engineer's Handbook: Master the art of engineering large language models from concept to production, by Paul Iusztin and Maxime Labonne

Regular price: Dh 595.00 MAD · Sale price: Dh 497.00 MAD

Build Your Library & Save!

The more you read, the more you save

  • 📚 Buy any 3 books (start your collection): save 79 DH
  • 📖📚 Buy any 4 books (great readers' choice): save 105 DH
  • 📚📖📚 Buy any 5 books (book lover's bundle): save 140 DH — ⭐ Best value ⭐

Discounts are applied automatically at checkout.

🚚 Fast Delivery · 📦 Safe Packaging

Step into the world of LLMs with this practical guide that takes you from the fundamentals to deploying advanced applications using LLMOps best practices.

Included free with your book: a PDF copy, an AI assistant, and a next-gen reader.

Key Features

  • Build and refine LLMs step by step, covering data preparation, RAG, and fine-tuning
  • Learn essential skills for deploying and monitoring LLMs, ensuring optimal performance in production
  • Utilize preference alignment, evaluation, and inference optimization to enhance performance and adaptability of your LLM applications

Book Description

Artificial intelligence has undergone rapid advancements, and Large Language Models (LLMs) are at the forefront of this revolution. This LLM book offers insights into designing, training, and deploying LLMs in real-world scenarios by leveraging MLOps best practices. The guide walks you through building an LLM-powered twin that’s cost-effective, scalable, and modular. It moves beyond isolated Jupyter notebooks, focusing on how to build production-grade end-to-end LLM systems.

Throughout this book, you will learn data engineering, supervised fine-tuning, and deployment. The hands-on approach to building the LLM Twin use case will help you implement MLOps components in your own projects. You will also explore cutting-edge advancements in the field, including inference optimization, preference alignment, and real-time data processing, making this a vital resource for those looking to apply LLMs in their projects.

By the end of this book, you will be proficient in deploying LLMs that solve practical problems while maintaining low-latency and high-availability inference capabilities. Whether you are new to artificial intelligence or an experienced practitioner, this book delivers guidance and practical techniques that will deepen your understanding of LLMs and sharpen your ability to implement them effectively.

What you will learn

  • Implement robust data pipelines and manage LLM training cycles
  • Create your own LLM and refine it with the help of hands-on examples
  • Get started with LLMOps by diving into core MLOps principles such as orchestrators and prompt monitoring
  • Perform supervised fine-tuning and LLM evaluation
  • Deploy end-to-end LLM solutions using AWS and other tools
  • Design scalable and modular LLM systems
  • Learn about RAG applications by building a feature and inference pipeline

Who this book is for

This book is for AI engineers, NLP professionals, and LLM engineers looking to deepen their understanding of LLMs. Basic knowledge of LLMs, the Gen AI landscape, Python, and AWS is recommended. Whether you are new to AI or looking to enhance your skills, this book provides comprehensive guidance on implementing LLMs in real-world scenarios.

Table of Contents

  1. Understanding the LLM Twin Concept and Architecture
  2. Tooling and Installation
  3. Data Engineering
  4. RAG Feature Pipeline
  5. Supervised Fine-Tuning
  6. Fine-Tuning with Preference Alignment
  7. Evaluating LLMs
  8. Inference Optimization
  9. RAG Inference Pipeline
  10. Inference Pipeline Deployment
  11. MLOps and LLMOps