
Reinforcement Learning and Learning-guided Search for Generalizability for Multi-agent Mobility Systems


If you have a question about this talk, please contact Amanda Prorok.

Designing transportation systems is an extensive process, involving constant iteration between specifying modeling assumptions and solving for system performance. However, increasing system complexity pushes classical solution paradigms to their limits, inhibiting engineers from understanding and designing future transportation systems. This talk explores the generalizability of alternative data-driven solution paradigms, that is, how gracefully they cope with changes to modeling assumptions. The talk considers two such approaches: deep reinforcement learning (RL) and learning-guided search. Despite the superior performance of deep RL on some problems, experimental findings suggest that these methods are fragile to problem variations and thus presently unsuitable for iterative design. In contrast, new learning-guided search methods effectively accelerate state-of-the-art solvers by factors of 2 to 7, and experiments demonstrate their generalizability across problem variations, indicating promise for iterative design. Applications discussed include mixed-autonomy traffic, traffic signal control, vehicle routing problems, multi-robot warehousing, and integer linear programming.

This talk is part of the Robotics Seminar Series.
