To obtain the data, please follow the instructions at this link. Once your request is approved, you will be granted access to the Data Download page.
To be eligible for the official ranking, each submission must be described in a corresponding paper (see the section on Paper Submission). A single paper may describe all three allowed submissions per task (see the Submission Instructions).
Participants may use the training data in any way they wish to train their models. Submissions that use additional data (public or private) for training, or that perform unsupervised training on the test data, are not eligible for prizes, since we want to compare models trained on the same data against a held-out test set. However, participants who want to evaluate such approaches can request evaluations on the test set, and they are welcome to report this work in a paper in the official proceedings.
Members of the organizers' research groups may participate in the challenge but are not eligible for prizes.
The results and the winner will be announced publicly. Once participants submit their Docker container for evaluation on the test set via the challenge website, they are considered fully vested in the challenge: their performance results (without identifying the participant, unless permission is granted) may become part of any presentations, publications, or subsequent analyses derived from the challenge, at the discretion of the organizers.
Participating teams are encouraged to publish their results in the LNCS proceedings of the challenge (following the MICCAI proceedings timeline and subject to acceptance). Teams may also publish their results elsewhere, provided they cite the overview paper; in that case, no embargo will be applied.
Participating teams are also strongly encouraged to disclose or share their code, although doing so is not mandatory for eligibility.