"ModelDB: Experiment Tracking for Machine Learning Workflows" is an authoritative guide to the principles and practicalities of managing complex machine learning experiments at scale. It offers readers a thorough foundation in the motivations for experiment tracking, such as ensuring reproducibility, fostering collaboration, and supporting scalable research and deployment. The book methodically breaks down the architecture, metadata modeling, and lineage tracking required for robust experiment management, illuminating the design choices and trade-offs that inform cutting-edge experiment tracking systems.
Delving into ModelDB's modular architecture, the book explores core components including metadata storage, API design, graph-based lineage tracking, and security controls. Building on these foundations, it presents practical strategies for capturing and managing rich experiment metadata, ranging from datasets and code versions to hyperparameters, artifacts, and evaluation metrics. Integration guides illustrate how ModelDB fits seamlessly into diverse machine learning ecosystems, supporting popular frameworks such as scikit-learn, TensorFlow, and PyTorch, as well as data versioning tools, CI/CD pipelines, and MLOps workflows.
Complete with extensive coverage of analysis, visualization, scalability, and security, the book also offers actionable insights into extending ModelDB via plugins and custom metadata. Case studies, best practices, and lessons learned from real-world deployments underscore the transformative value of systematic experiment tracking. Whether in academia or industry, this book equips practitioners, architects, and researchers with the tools and knowledge to institutionalize reproducibility and drive innovation in modern ML workflows.