Computer Vision - ECCV 2024 Workshops
Milan, Italy, September 29-October 4, 2024, Proceedings, Part XI
Edited by: Del Bue, Alessio; Canton, Cristian; Pont-Tuset, Jordi; Tommasi, Tatiana
- Paperback
The multi-volume set LNCS 15623 to LNCS 15646 constitutes the proceedings of the workshops held in conjunction with the 18th European Conference on Computer Vision, ECCV 2024, which took place in Milan, Italy, during September 29 to October 4, 2024.
These LNCS volumes contain 574 accepted papers from 53 of the 73 workshops. The list of workshops and the distribution of the workshop papers across the LNCS volumes can be found in the preface, which is freely accessible online.
Product details
- Lecture Notes in Computer Science 15633
- Publisher: Springer / Springer Nature Switzerland / Springer, Berlin
- Publisher's item no.: 978-3-031-91978-7
- Number of pages: 388
- Publication date: June 1, 2025
- Language: English
- Dimensions: 235mm x 155mm x 21mm
- Weight: 587g
- ISBN-13: 9783031919787
- ISBN-10: 3031919785
- Item no.: 73807106
- Manufacturer information:
- Springer Nature c/o IBS
- Benzstrasse 21
- 48619 Heek
- Tanja.Keller@springer.com
Table of contents
- DARES: Depth Anything in Robotic Endoscopic Surgery with Self-supervised Vector LoRA of the Foundation Model
- LocalMamba: Visual State Space Model with Windowed Selective Scan
- Compositional Text-to-Image Generation with Feedforward Layout Generation
- PackMamba: Efficient Processing of Variable-Length Sequences in Mamba Training
- Down-Sampling Inter-Layer Adapter for Parameter and Computation Efficient Ultra-Fine-Grained Image Recognition
- Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy
- LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models
- Improving Hyperparameter Optimization with Checkpointed Model Weights
- MagicDec: Breaking the Latency-Throughput Tradeoff for Long Contexts with Speculative Decoding
- Mixed Non-linear Quantization for Vision Transformers
- CycleBNN: Cyclic Precision Training in Binary Neural Networks
- DailyMAE: Towards Pretraining Masked Autoencoders in One Day
- EPTQ: Enhanced Post-Training Quantization via Hessian-guided Network-wise Optimization
- Generalized SAM: Efficient Fine-Tuning of SAM for Variable Input Image Sizes
- LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field
- Giving Each Task What It Needs - Leveraging Structured Sparsity for Tailored Multi-Task Learning
- ERF-NAS: Efficient Receptive Field-based Zero-Shot NAS for Object Detection
- CA3D: Convolutional-Attentional 3D Nets for Efficient Video Activity Recognition on the Edge
- Memory-Optimized Once-For-All Network
- Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion
- Latent Distillation for Continual Object Detection at the Edge
- MCUBench: A Benchmark of Tiny Object Detectors on MCUs
- Optimizing Resource Consumption in Diffusion Models through Hallucination Early Detection