Task 2: Multi-Object Tracking

Object tracking aims to associate objects across different spatial positions and temporal frames. PANDA's wide field of view and gigapixel resolution make it naturally suited to long-term multi-object tracking; at the same time, its complex scenes with crowded pedestrians pose considerable challenges.
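
For illustration, the sketch below shows the most basic association step a tracker performs: greedily matching detections in consecutive frames by bounding-box overlap (IoU). It is a minimal, self-contained Python example; the function names and the 0.3 threshold are our own illustrative choices and not part of the PANDA toolkit.

# Minimal sketch of frame-to-frame association by IoU, the core step any
# multi-object tracker performs. Names and the 0.3 threshold are illustrative,
# not part of the PANDA toolkit.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(prev: List[Box], curr: List[Box], thr: float = 0.3) -> List[Tuple[int, int]]:
    """Greedily match previous-frame boxes to current-frame boxes by IoU."""
    pairs = sorted(
        ((iou(p, c), i, j) for i, p in enumerate(prev) for j, c in enumerate(curr)),
        reverse=True,
    )
    matches, used_p, used_c = [], set(), set()
    for score, i, j in pairs:
        if score < thr:
            break
        if i in used_p or j in used_c:
            continue
        matches.append((i, j))
        used_p.add(i)
        used_c.add(j)
    return matches

In crowded PANDA scenes with heavy occlusion, such simple overlap-based matching quickly breaks down, which is part of what makes the benchmark challenging.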

Part 1: Multi-pedestrian tracking

Overview

Given an input video sequence, the Multi-Object Tracking Challenge (Task 2) requires participating algorithms to recover the trajectories of pedestrians in the video. There are two sub-tasks: multi-pedestrian tracking with and without public detection results. The challenge provides 15 challenging sequences: 10 video sequences for training (24,201 frames in total) and 5 sequences for testing (12,968 frames in total), all available on the download page. We manually annotate the bounding box of each pedestrian in every video frame. In addition, we provide two further annotations, i.e., the occlusion degree and face orientation of each person. Annotations for the training set are publicly available. Please see the Download page for more details.
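
To make the annotation description concrete, the sketch below groups per-pedestrian annotations by frame index, as one might do before training or evaluating a tracker. The JSON field names used here ("track id", "frames", "frame id", "rect", "occlusion", "face orientation") are illustrative assumptions only; please consult the download page and the PANDA API for the actual schema.

# Hedged sketch of reading track-based pedestrian annotations. The field names
# below are assumptions mirroring the annotations described above (bounding
# box, occlusion degree, face orientation); check the PANDA API for the real
# format before use.
import json
from collections import defaultdict

def load_tracks(path: str):
    """Group annotated boxes by frame: frame id -> list of (track id, bbox, occlusion, face orientation)."""
    with open(path, "r") as f:
        tracks = json.load(f)

    per_frame = defaultdict(list)
    for track in tracks:
        tid = track["track id"]
        for frame in track["frames"]:
            per_frame[frame["frame id"]].append(
                (tid, frame["rect"], frame.get("occlusion"), frame.get("face orientation"))
            )
    return per_frame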

Challenge Guidelines

The multi-object tracking evaluation page lists detailed information on how submissions will be scored. To limit overfitting while giving researchers more flexibility to test their algorithms, we have divided the test set into two splits: test-dev and test-challenge. Test-dev (3 video clips) is designed for debugging and validation experiments and allows unlimited submissions. Up-to-date results on the test-dev set can be viewed on the multi-object tracking leaderboard.

We encourage participants to use the provided training data, but additional training data is also allowed. The use of external data must be indicated during submission.

The training video clips and their annotations, as well as the video clips of the test-challenge set, are available on the download page. Before participating, every user is required to create an account using an institutional email address. If you encounter any problems during registration, please contact us. After registration, users should submit their results through their accounts. Submitted results will be evaluated according to the rules described on the evaluation page; please refer to that page for a detailed explanation.
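
For reference, the sketch below writes tracking output as comma-separated rows in the widely used MOTChallenge style (frame, id, x, y, w, h, conf, -1, -1, -1). This layout is an assumption on our part; confirm the required submission format on the evaluation page before uploading.

# Hedged sketch of dumping tracking output as comma-separated rows in the
# common MOTChallenge style: frame, id, x, y, w, h, conf, -1, -1, -1.
# The format is assumed, not confirmed; see the evaluation page.
import csv

def write_results(path: str, rows):
    """rows: iterable of (frame_idx, track_id, x, y, w, h, confidence)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for frame_idx, track_id, x, y, w, h, conf in rows:
            writer.writerow([frame_idx, track_id, x, y, w, h, conf, -1, -1, -1])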

Tools and Instructions

We provide extensive API support for the PANDA images, annotations, and evaluation code. Please visit our GitHub repository to download the PANDA API. For additional questions, please find the answers here or contact us.

Part 2: Multi-vehicle tracking

TBD

Citation

When using our datasets in your research, please cite:

@inproceedings{yuan2017multiscale,
  title={Multiscale gigapixel video: A cross resolution image matching and warping approach},
  author={Yuan, Xiaoyun and Fang, Lu and Dai, Qionghai and Brady, David J and Liu, Yebin},
  booktitle={2017 IEEE International Conference on Computational Photography (ICCP)},
  pages={1--9},
  year={2017},
  organization={IEEE}
}

Privacy

This dataset is for non-commercial use only. However, if you find yourself or your personal belongings in the data, please contact us, and we will immediately remove the respective images from our servers.
