
DetCLIP: Scalable Open-Vocabulary Object Detection via Fine-grained Visual-language Alignment


If you have a question about this talk, please contact Dr Mark Leadbeater.

For online attendance, register at:


We will present an efficient and scalable training framework that incorporates large-scale image-text pairs to achieve open-vocabulary object detection (OVD). Unlike previous OVD frameworks, which typically rely on a pre-trained vision-language model (e.g., CLIP) or exploit image-text pairs via a pseudo-labelling process, DetCLIP learns fine-grained word-region alignment directly from massive image-text pairs in an end-to-end manner. We employ the maximum word-region similarity between region proposals and textual words to guide the contrastive objective. To enable the model to gain localization capability while learning broad concepts, DetCLIP is trained with hybrid supervision from detection, grounding, and image-text pair data under a unified data formulation. By jointly training with an alternating scheme and adopting low-resolution inputs for image-text pairs, DetCLIP exploits image-text pair data efficiently and effectively.
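The maximum word-region similarity mentioned in the abstract can be sketched as follows: each word in a caption is matched to its best-scoring region proposal, and the resulting scores drive an InfoNCE-style contrastive objective over a batch of image-caption pairs. This is a minimal illustrative sketch, not the paper's exact formulation; the mean aggregation over words, the temperature value, and all function names are assumptions.

```python
import numpy as np

def l2norm(x):
    """Normalize feature vectors to unit length for cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def image_text_score(regions, words):
    """Max word-region similarity: match each word to its best region.

    regions: (R, d) region-proposal features; words: (W, d) word features.
    Aggregation by averaging over words is an illustrative choice.
    """
    sims = l2norm(words) @ l2norm(regions).T  # (W, R) cosine similarities
    return sims.max(axis=1).mean()            # best-matching region per word

def contrastive_loss(batch_regions, batch_words, temperature=0.07):
    """InfoNCE-style loss: matched image/caption pairs lie on the diagonal.

    The temperature value is an assumed placeholder, not from the talk.
    """
    scores = np.array([[image_text_score(r, w) for w in batch_words]
                       for r in batch_regions]) / temperature
    # Log-softmax over captions for each image; maximize the diagonal.
    logp = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))
```

With orthogonal toy features, a caption whose words coincide with the image's region features scores 1.0, while unrelated words score lower, so the loss pulls matched pairs together.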


Dr Wei Zhang joined Huawei in 2012. Before that, he was an assistant researcher at the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, and at The Chinese University of Hong Kong (CUHK). He received his Ph.D. degree in computer science from CUHK in 2010, his M.S. degree from Tsinghua University in 2005, and his B.S. from Nankai University in 2002. He co-organized the "Self-supervised Learning for Next-Generation Industry-level Autonomous Driving" workshop at ECCV 2022 and ICCV 2021.

This talk is part of the CAPE Advanced Technology Lecture Series.



© 2006-2024, University of Cambridge.