FAQ

1. Can external data be used during training?

Answer: We encourage participants to use the provided training data, but we also allow the use of additional training data. The use of external data must be indicated at submission time.

2. Why is there no validation set?

Answer: Because the number of scenarios in the PANDA dataset is still limited, and to ensure sufficient data for both training and final testing, we do not currently provide a separate validation set without annotation files. If needed, researchers are free to split a validation set from the training set.

3. Will objects located in the ignored regions be filtered out automatically during evaluation?

Answer: Yes. Bounding boxes in the submitted results that overlap the ignored regions will be filtered out during evaluation. This is done via the COCO API.
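To illustrate the idea, here is a minimal sketch of ignore-region filtering. This is not the official evaluation code: the function names, the `[x, y, w, h]` box format, and the 0.5 coverage threshold are assumptions chosen for the example. A detection is dropped when a large fraction of its area falls inside an ignored region.

```python
def intersection_area(box_a, box_b):
    """Area of overlap between two [x, y, w, h] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = min(ax + aw, bx + bw) - max(ax, bx)
    ih = min(ay + ah, by + bh) - max(ay, by)
    return max(0.0, iw) * max(0.0, ih)

def filter_ignored(detections, ignore_regions, thresh=0.5):
    """Keep detections whose coverage by every ignore region is below
    `thresh` (coverage = overlap area / detection's own area).

    NOTE: an illustrative sketch, not the official PANDA evaluation logic;
    the actual threshold and matching rule may differ.
    """
    kept = []
    for det in detections:
        area = det[2] * det[3]
        covered = max(
            (intersection_area(det, ig) for ig in ignore_regions),
            default=0.0,
        )
        if area == 0 or covered / area < thresh:
            kept.append(det)
    return kept
```

For example, a detection lying entirely inside an ignored region is removed, while one far from any ignored region is kept unchanged.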

4. In multi-object tracking tasks, do participating teams need to restore pedestrian trajectories based on public pedestrian detection results?

Answer: Unlike MOTChallenge, we do not require participating teams to use public detection results in the multi-object tracking tasks, and no official detections are provided.

5. I found an annotation error in the provided ground truth or a bug in the toolkit.

Answer: Thank you! We are aware of some deficiencies in the provided annotations and are still working to improve them. We appreciate all kinds of feedback, so please don't hesitate to contact us to report any findings.

6. Where could I find the evaluation results of the challenge?

Answer: Teams need to submit their results on the competition page. After a team submits its results and the evaluation is complete, we will send the results back to the team and add them to the leaderboard as soon as possible.
