
Traffic Camera Pipeline

Xinhe Ren*, David Wang*, Michael Laskey, Ken Goldberg


Learning Driving Behavior at Traffic Intersections via Automated Extraction of Trajectories from Online Video Streams

Abstract

Simulators are useful for developing algorithms and systems for autonomous driving, but it is challenging to model realistic multi-agent driving behavior. We study how to leverage a public online traffic camera video stream to extract driving-behavior data for use in an open-source traffic simulator, \UDS{}. To tackle challenges such as frame skips, perspective distortion, and low resolution, we implement a Traffic Camera Pipeline (TCP). TCP leverages recent advances in deep object detection and filtering to extract trajectories from the video stream and map them to corresponding locations in a bird's-eye-view traffic simulator. After collecting 2618 car and 1213 pedestrian trajectories, we modify the simulator's multi-agent planner to reflect the behaviors learned from the real-world intersection. Specifically, we examine how online traffic videos can increase the plausibility of the simulator's high-level behavioral logic and its generated motion plans for pedestrians and cars.
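As an illustrative sketch (not the authors' implementation), mapping detections from the camera view to bird's-eye simulator coordinates can be done with a planar homography, assuming the intersection surface is roughly planar. The homography matrix here is hypothetical; in practice it would be estimated from correspondences between known intersection landmarks and their simulator coordinates.

```python
import numpy as np

def project_to_birds_eye(points_px, H):
    """Map (N, 2) pixel coordinates to bird's-eye coordinates
    using a 3x3 homography H.

    Hypothetical helper for illustration: H would be fit (e.g. via
    least squares on landmark correspondences) for the actual camera.
    """
    pts = np.asarray(points_px, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # homogeneous coordinates (N, 3)
    mapped = homog @ H.T                    # apply homography
    return mapped[:, :2] / mapped[:, 2:3]   # de-homogenize back to (N, 2)

# Example: the identity homography leaves points unchanged.
H = np.eye(3)
print(project_to_birds_eye([[100.0, 50.0]], H))  # [[100. 50.]]
```

A trajectory is then just the per-frame sequence of projected detection centers for one tracked agent.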