Query-based Hard-Image Retrieval for Object Detection at Test Time
If you have a question about this talk, please contact Mateja Jamnik.

Hi, I'm one of Mateja's recently graduated PhD students. This is a paper I worked on during my employment at Five AI, a self-driving car company in Cambridge, where I worked on verification of object detectors. An object detector here is a neural network that draws bounding boxes around objects in images (e.g., RetinaNet, Faster-RCNN). The problem we faced was finding, within an unannotated dataset, images on which an object detector performs badly. We needed a solution that let us define what ‘badly’ means for our specific task; in an autonomous vehicle you care much more about missed detections that are close to the vehicle. In this talk I’ll walk through our paper, which contains our simple solution to this tricky problem.

Here is a more formal abstract:

There is a longstanding interest in capturing the error behaviour of object detectors by finding images where their performance is likely to be unsatisfactory. In real-world applications such as autonomous driving, it is also crucial to characterise potential failures beyond simple requirements of detection performance. For example, a missed detection of a pedestrian close to an ego vehicle will generally require closer inspection than a missed detection of a car in the distance. The problem of predicting such potential failures at test time has largely been overlooked in the literature, and conventional approaches based on detection uncertainty fall short in that they are agnostic to such fine-grained characterisation of errors. In this work, we propose to reformulate the problem of finding “hard” images as a query-based hard image retrieval task, where queries are specific definitions of “hardness”, and offer a simple and intuitive method that can solve this task for a large family of queries. Our method is entirely post-hoc, does not require ground-truth annotations, is independent of the choice of detector, and relies on an efficient Monte Carlo estimation that uses a simple stochastic model in place of the ground truth. We show experimentally that it can be applied successfully to a wide variety of queries, for which it can reliably identify hard images for a given detector without any labelled data. We provide results on ranking and classification tasks using the widely used RetinaNet, Faster-RCNN, Mask-RCNN, and Cascade Mask-RCNN object detectors. The code for this project is available at https://github.com/fiveai/hardest.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
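To make the abstract's core idea a little more concrete, here is a rough, self-contained Python sketch of what a query-based Monte Carlo hardness estimate could look like. Everything in it (the Detection type, the probability-equals-score sampling model, the missed_near_pedestrians query, and the toy data) is an illustrative assumption of mine rather than the paper's implementation; the actual code is in the linked repository.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    box: tuple      # (x1, y1, x2, y2) in pixel coordinates
    label: str      # predicted class, e.g. "pedestrian"
    score: float    # detector confidence in [0, 1]

# A "query" scores one image given the detector's output and one sampled
# pseudo-ground-truth; higher means harder under that definition of hardness.
Query = Callable[[List[Detection], List[Detection]], float]

def sample_pseudo_ground_truth(dets: List[Detection]) -> List[Detection]:
    # Illustrative stochastic stand-in for the missing annotations: each
    # detection is treated as a real object with probability equal to its
    # confidence score. (Not necessarily the paper's exact model.)
    return [d for d in dets if random.random() < d.score]

def estimate_hardness(dets: List[Detection], query: Query,
                      n_samples: int = 100) -> float:
    # Monte Carlo estimate of the expected query value for one image.
    total = sum(query(dets, sample_pseudo_ground_truth(dets))
                for _ in range(n_samples))
    return total / n_samples

# Example query: plausible pedestrians low in the frame (i.e. close to the
# ego vehicle) that the detector reports only with low confidence.
def missed_near_pedestrians(dets: List[Detection], pseudo_gt: List[Detection],
                            img_height: int = 720,
                            score_thresh: float = 0.5) -> float:
    return sum(
        1.0 for g in pseudo_gt
        if g.label == "pedestrian"
        and g.box[3] > img_height / 2
        and g.score < score_thresh
    )

# Rank unlabelled images by estimated hardness and retrieve the worst ones.
# `detections_per_image` would come from running the detector on the dataset.
detections_per_image = {
    "img_001.png": [Detection((100, 400, 160, 700), "pedestrian", 0.35)],
    "img_002.png": [Detection((50, 50, 120, 140), "car", 0.92)],
}
ranking = sorted(
    detections_per_image,
    key=lambda name: estimate_hardness(detections_per_image[name],
                                       missed_near_pedestrians),
    reverse=True,
)
print(ranking)  # hardest images first under this query
```

The point the sketch is meant to capture is that no annotations are needed: a simple stochastic model supplies plausible ground truths, and averaging the chosen query over its samples yields a per-image score that reflects the user's own definition of “hardness”, which can then be used for ranking or retrieval.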