Background Information

Object tracking is a two-stage problem: it requires not only object detection (e.g. with a Convolutional Neural Network, or CNN), but also some notion of permanence between video frames. The detector has no context beyond a single frame, so it cannot tell whether the car detected in the last frame is the same car detected in the current one. That is the job of an Object Tracker, which correlates objects across video frames using one of several approaches. On top of the challenges posed by tracking itself, we must also consider that, in the context of the Testbed, scalability is a high priority.

Our approach offers plug-and-play compatibility with almost any object recognition / detection model (CNN, R-CNN, etc.) and implements a multi-process software architecture with a Kalman-filter-based tracker. The tracker associates detections to “tracklets” (objects already being tracked) by comparing each tracklet’s predicted next location against each detection’s current location. When the two locations match, the tracklet is “re-associated” with the detection, and the object is treated as one continuous event. Once a tracklet can no longer be re-associated, its data is submitted to our real-time publish/subscribe database. This allows us to perform further analysis later, such as the visualizations below:
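The predict/associate/update cycle described above can be sketched in a few dozen lines. This is a minimal illustration, not the Testbed's actual implementation: it assumes detections are (x, y) centre points, uses a simplified per-axis constant-velocity Kalman filter instead of a full state-covariance formulation, and matches greedily by Euclidean distance (a production tracker would typically use IoU and the Hungarian algorithm). All names (`Tracklet`, `associate`, the `gate` threshold) are hypothetical.

```python
import math

class Tracklet:
    """One tracked object. A simplified constant-velocity Kalman filter
    predicts its next (x, y) location; `misses` counts consecutive frames
    with no re-associated detection."""
    _next_id = 0

    def __init__(self, x, y):
        self.id = Tracklet._next_id
        Tracklet._next_id += 1
        self.x, self.y = x, y
        self.vx = self.vy = 0.0   # velocity estimate, pixels/frame
        self.p = 1.0              # scalar position uncertainty (per axis)
        self.misses = 0

    def predict(self):
        # Project the state one frame forward; uncertainty grows with
        # process noise.
        self.x += self.vx
        self.y += self.vy
        self.p += 0.1
        return self.x, self.y

    def update(self, zx, zy, r=1.0):
        # Kalman gain blends prediction and measurement (r = measurement
        # noise). The residual corrects both position and velocity.
        k = self.p / (self.p + r)
        rx, ry = zx - self.x, zy - self.y
        self.vx += k * rx
        self.vy += k * ry
        self.x += k * rx
        self.y += k * ry
        self.p *= (1.0 - k)
        self.misses = 0

def associate(tracklets, detections, gate=50.0):
    """Greedily re-associate predicted tracklet locations with the current
    frame's detections; unmatched detections spawn new tracklets."""
    unmatched = list(detections)
    for t in tracklets:
        px, py = t.predict()
        if not unmatched:
            t.misses += 1
            continue
        best = min(unmatched, key=lambda d: math.hypot(d[0] - px, d[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= gate:
            t.update(*best)       # re-associated: one continuous event
            unmatched.remove(best)
        else:
            t.misses += 1         # no match this frame
    return tracklets + [Tracklet(x, y) for x, y in unmatched]
```

Driving the loop frame by frame, a detection moving steadily across the scene stays bound to a single tracklet ID; once `misses` exceeds some threshold, the tracklet would be retired and its history published downstream.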

Results of tracking, categorized by direction. You can see locations where parallel parking has occurred, as well as pedestrian flow.

Results of tracking, categorized by type. You can see signs of jaywalking, bus stop patterns, and more.

You can see the real-time scalable object tracker in action below:

Georgia 1
Lindsay 2
Douglas 1
Georgia 3