Case Study
Monday, June 30
03:30 PM - 04:00 PM
Live in San Francisco
Autonomous vehicles encounter an enormous variety of problems in the real world, making it crucial to find scalable ways to resolve these challenges. Data-driven paradigms built on emerging end-to-end stacks and world models have demonstrated great potential for scalability, but two key questions remain open. Can we efficiently leverage the huge amounts of unlabeled data collected from the real world toward foundation models for autonomy via self-supervised learning? Can we leverage the capabilities of the language modality and LLMs in traffic scene/behavior understanding and code generation toward a generalizable autonomy stack with an automated development/test pipeline? This talk presents recent works from Berkeley DeepDrive addressing these questions, including self-supervised scene reconstruction for autonomy, a dataset connecting interactive driving behavior with language reasoning, and LLMs for code diagnosis and repair in the autonomy stack.
Wei Zhan serves as Co-Director of Berkeley DeepDrive, a research center at UC Berkeley focused on AI for autonomy, mobility, and robotic applications. He is an Assistant Professional Researcher at UC Berkeley, leading the autonomy group of the MSC Lab with 10+ Ph.D. students and postdocs. His research focuses on AI for autonomous systems, leveraging control, robotics, computer vision, and machine learning techniques to tackle challenges involving a large variety of sophisticated dynamics, interactive human behavior, and complex scenes in a scalable way, enabling autonomous systems to evolve sustainably. He received his Ph.D. from UC Berkeley. His publications received the Best Student Paper Award at IV'18, the Best Paper Award – Honorable Mention of IEEE Robotics and Automation Letters, and the Best Paper Award at ICRA'24. One of his publications was also selected as a notable top-5% oral presentation at ICLR'23. He led the construction of the INTERACTION dataset and the organization of its prediction challenges at NeurIPS'20 and ICCV'21.