€45.99
incl. VAT
Expected publication: December 23, 2025
  • Paperback


Product Description
This is the first hands-on guide that takes you from a simple "Hello, LLM" to production-ready microservices, all within the JVM. You'll integrate hosted models such as OpenAI's GPT-4o, run alternatives with Ollama or Jlama, and embed them in Spring Boot or Quarkus apps for cloud or on-prem deployment.
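For a sense of what that first "Hello, LLM" might look like, here is a minimal sketch using LangChain4j's OpenAI integration (class and method names follow the LangChain4j 0.x API and may differ in newer releases; the prompt text and environment-variable name are assumptions, not taken from the book):

import dev.langchain4j.model.openai.OpenAiChatModel;

public class HelloLlm {
    public static void main(String[] args) {
        // Point LangChain4j at a hosted OpenAI model; the API key is read from the environment.
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o")
                .build();

        // Send one prompt and print the model's reply.
        System.out.println(model.generate("Hello, LLM! Say hi back from the JVM."));
    }
}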

You'll learn how prompt-engineering patterns, Retrieval-Augmented Generation (RAG), vector stores such as Pinecone and Milvus, and agentic workflows come together to solve real business problems. Robust test suites, CI/CD pipelines, and security guardrails ensure your AI features reach production safely, while detailed observability playbooks help you catch hallucinations before your users do. You'll also explore DJL, the future of machine learning in Java.
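To make the RAG idea concrete, here is a schematic, store-agnostic sketch of the retrieve-then-generate flow. The VectorStore, Chunk, and ChatModel types are hypothetical stand-ins; in practice the retrieval step would be backed by Pinecone or Milvus and the chat call by LangChain4j or Spring AI:

import java.util.List;
import java.util.stream.Collectors;

public class RagSketch {

    // Hypothetical stand-ins for a vector-store client and a chat model.
    record Chunk(String text, double score) {}

    interface VectorStore {
        List<Chunk> similaritySearch(String query, int topK);
    }

    interface ChatModel {
        String generate(String prompt);
    }

    static String answer(VectorStore store, ChatModel model, String question) {
        // 1. Retrieve the document chunks most similar to the question.
        List<Chunk> context = store.similaritySearch(question, 3);

        // 2. Assemble a grounded prompt from the retrieved text.
        String contextText = context.stream()
                .map(Chunk::text)
                .collect(Collectors.joining("\n"));

        String prompt = """
                Answer using only the context below.

                Context:
                %s

                Question: %s
                """.formatted(contextText, question);

        // 3. Let the LLM answer with the retrieved context in front of it.
        return model.generate(prompt);
    }
}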

This book delivers runnable examples, clean architectural diagrams, and a GitHub repo you can clone on day one. Whether you're modernizing a legacy platform or launching a green-field service, you'll have a roadmap for adding state-of-the-art generative AI without abandoning the language and ecosystem you rely on.

What You Will Learn
  • Establish generative AI and LLM foundations
  • Integrate hosted or local models using Spring Boot, Quarkus, LangChain4j, Spring AI, OpenAI, Ollama, and Jlama (see the sketch after this list)
  • Craft effective prompts and implement RAG with Pinecone or Milvus for context-rich answers
  • Build secure, observable, scalable AI microservices for cloud or on-prem deployment
  • Test outputs, add guardrails, and monitor performance of LLMs and applications
  • Explore advanced patterns, such as agentic workflows, multimodal LLMs, and practical image-processing use cases
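As one hedged illustration of the local-model option above, the following sketch points LangChain4j at an Ollama server on its default local port. The builder follows the LangChain4j 0.x API, and the model name is an assumption that must match a model you have already pulled with the ollama CLI:

import dev.langchain4j.model.ollama.OllamaChatModel;

public class LocalLlm {
    public static void main(String[] args) {
        // Connect to a locally running Ollama server (default port 11434).
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("llama3") // assumed; run `ollama pull llama3` first
                .build();

        // Ask the local model a question, exactly as you would a hosted one.
        System.out.println(model.generate("Summarize Retrieval-Augmented Generation in one sentence."));
    }
}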

Who This Book Is For

Java developers, architects, DevOps engineers, and technical leads who need to add AI features to new or existing enterprise systems. Data scientists and educators will also appreciate the code-first, Java-centric approach.
About the Author
Satej Kumar Sahu is a Principal Engineer at Zalando SE with 15 years of hands-on experience designing large-scale, data-intensive systems for global brands including Boeing, Adidas, and Honeywell. A specialist in software architecture, big-data pipelines, and applied machine learning, he has shepherded multiple projects from whiteboard sketches to production deployments serving millions of users. Satej has been working with Large Language Models since their earliest open-source releases, piloting Retrieval-Augmented Generation (RAG) and agentic patterns long before they became industry buzzwords. He is the author of two previous programming books—Building Secure PHP Applications and PHP 8 Basics—and is a frequent speaker at developer conferences and meet-ups across the world. When he isn’t translating cutting-edge AI research into practical code, you’ll find him mentoring engineering teams, contributing to open-source projects, or tinkering with the newest transformer models in his home lab.