People Tracking

This example shows how to detect and track people in a video sequence with a stationary background using the following process:

1. Use the first few frames of the video to estimate the background image.
2. Separate the pixels that represent the people from the pixels that represent the background.
3. Group the pixels that represent each individual person and calculate a bounding box for that person.
4. Match the people in the current frame with those in the previous frame by comparing the bounding boxes between frames.
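A minimal MATLAB sketch of step 1 follows. The video file name people.avi is a placeholder, and taking a per-pixel median over the first N frames is one common way to estimate a static background; the model's exact method may differ.

    % Estimate the background as the per-pixel median of the first N frames.
    % 'people.avi' is a placeholder file name, not the example's actual video.
    reader = VideoReader('people.avi');
    N = 10;                                          % number of frames to sample
    frames = zeros(reader.Height, reader.Width, N);
    for k = 1:N
        frames(:,:,k) = im2double(rgb2gray(readFrame(reader)));
    end
    background = median(frames, 3);                  % median suppresses moving objects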

Example Model

The following figure shows the People Tracking example model:

Segmentation Subsystem

In the Segmentation subsystem, the Autothreshold block uses the difference in pixel values between the normalized input image and the background image to determine which pixels correspond to the moving objects in the scene.
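In MATLAB terms, the Autothreshold block applies Otsu's method to the difference image. A rough scripting analog, assuming background comes from the sketch above and the same reader is still open:

    % Segment moving pixels by thresholding the difference from the background.
    frame      = im2double(rgb2gray(readFrame(reader)));  % next frame to segment
    diffImage  = abs(frame - background);     % distance of each pixel from the background
    level      = graythresh(diffImage);      % Otsu threshold, as the Autothreshold block computes
    foreground = imbinarize(diffImage, level); % logical mask of candidate people pixels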

Detection Subsystem

In the Detection subsystem, the Close block merges object pixels that are close to each other to create blobs. For example, pixels that represent a portion of a person's body are grouped together. Next, the Blob Analysis block calculates the bounding boxes of these blobs. In the final step, the Detection subsystem merges the individual bounding boxes so that each person is enclosed by a single bounding box.
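A rough scripting analog of this step uses imclose in place of the Close block and regionprops in place of the Blob Analysis block; the structuring-element size and area threshold below are illustrative guesses, and the final box-merging step is omitted:

    % Close small gaps in the mask, then measure the resulting blobs.
    % 'foreground' is the logical mask from the segmentation sketch.
    se     = strel('rectangle', [5 5]);        % size chosen for illustration only
    closed = imclose(foreground, se);          % merge nearby pixels into blobs
    stats  = regionprops(closed, 'BoundingBox', 'Area');
    stats  = stats([stats.Area] > 50);         % discard small, noisy blobs
    bboxes = vertcat(stats.BoundingBox);       % one [x y width height] row per blob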

Tracking Subsystem

In the Tracking subsystem, the Kalman Filter block uses the locations of the bounding boxes detected in the previous frames to predict the locations of these bounding boxes in the current frame. To determine the locations of specific people from one frame to another, the example compares the predicted locations of the bounding boxes with the detected locations. This enables the example to assign a unique color to each person. The example also uses the Kalman Filter block to reduce the effect of noise in the detection of the bounding box locations.
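The following sketch shows the predict/correct cycle for a single track, using configureKalmanFilter and vision.KalmanFilter from the Computer Vision Toolbox; the motion model and noise values are placeholders, not the model's actual settings:

    % Predict where one tracked box should be, then correct with the detection.
    % 'bboxes' comes from the detection sketch above.
    center    = bboxes(1,1:2) + bboxes(1,3:4)/2;  % center of the first detected box
    kalman    = configureKalmanFilter('ConstantVelocity', center, ...
                                      [200 50], [100 25], 100);  % placeholder noise values
    predicted = predict(kalman);                  % predicted location in the current frame
    corrected = correct(kalman, center);          % refined estimate using the detection

In a full tracker, each predicted location would be matched to the nearest detected box before the correct step; that matching is what lets a person keep the same identity, and therefore the same color, from frame to frame.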

People Tracking Results

In the Detected window, the people in the scene are surrounded by bounding boxes. The example assigns each bounding box a color based on the order in which each person is detected. For example, the first person detected has a red bounding box and the second person detected has a green box. Because the Detection subsystem does not track people between frames, the color assigned to a given person can change as the detection order changes.

In the Tracked window, each person has a unique bounding box color for the duration of the video.

Double-click the Edit Parameters box and select the Plot positions of bounding boxes over time check box. Then, run the example again.

In the Positions window, the example plots the coordinates of the bounding boxes over time. The coordinates of each bounding box are defined by the row and column location of its upper-left corner as well as its width and height. Accordingly, each person in the video corresponds to four lines in the plot.
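For example, if the four coordinates of one person's bounding box are collected over T frames into a hypothetical T-by-4 matrix named history, the four lines for that person could be plotted with:

    % Plot the four bounding box coordinates of one track over time.
    plot(history)
    legend('column (x)', 'row (y)', 'width', 'height')
    xlabel('Frame number'); ylabel('Pixels')
    title('Bounding box coordinates over time')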

Because the Kalman Filter block reduces noise, the bounding box positions calculated by the Tracking subsystem have smoother trajectories than those calculated by the Detection subsystem.

Available Example Versions

Floating-point: viptrackpeople.mdl

Fixed-point: viptrackpeople_fixpt.mdl
