The sixth challenge edition is being organized as part of ISBI 2021, to be held virtually in April 2021. In this edition, the primary focus is on methods that exhibit better generalizability and work across most, if not all, of the 13 existing datasets, instead of methods optimized for only one or a few datasets. In addition to this primary track, which admits highly competitive methods only, a secondary track allows anyone to compete over the datasets of their choice, without having to demonstrate generalizability across multiple datasets. The submissions for both tracks will be collected, evaluated, and announced at the corresponding ISBI 2021 challenge workshop according to the following schedule:
October 30th, 2020
The qualification, evaluation, and ranking criteria for the sixth challenge edition are released.
November 1st, 2020
The pre-registration period for the sixth challenge edition is opened
January 15th, 2021 (postponed from November 30th, 2020)
New silver reference segmentation annotations for the 13 existing training datasets are released
December 31st, 2020
The pre-registration period for the sixth challenge edition is closed
January 15th, 2021
Pre-registered competitors who passed the primary track qualification criteria are announced
February 25th, 2021
The registration period for the secondary track of the sixth challenge edition is closed
March 1st, 2021
Deadline for submitting results to both the primary and secondary tracks, including algorithm implementations, detailed descriptions of the approaches, and parameters used
April 13th, 2021
ISBI 2021 Challenge Workshop
May-June 2021
Validation of the results submitted by re-running the respective algorithms on our evaluation servers
Pre-Registration
To pre-register for the ISBI 2021 challenge edition, please send an email to the main challenge coordinator at:
To simplify the pre-registration process, please include your full name and affiliation in your email. Please be aware that pre-registrations are handled on a weekly basis. Therefore, you may have to wait a few days before your pre-registration is confirmed.
Please note that by pre-registering for the challenge, you explicitly agree to the use of the personal data you provide for ISBI 2021 challenge-related activities, but you are not obliged to participate in any way. Your pre-registration is considered an expression of interest in participating in the ISBI 2021 challenge edition. If you neither submit results for evaluation nor meet the qualification criteria, your pre-registration will be canceled and the personal data you provided will be discarded right after the submission deadline on March 1st, 2021.
Thirteen real datasets with complete gold reference tracking annotations are selected for the primary track. These include eight 2D+t and five 3D+t datasets, the training sequences of which will also be complemented with gold and silver segmentation reference annotations. An ideal submission for the primary track then consists of a set of six results per dataset, each created using the same approach, with parameters/models optimized/trained using one of the following six training data configurations: gold segmentation truth per dataset, silver segmentation truth per dataset, a mixture of gold and silver segmentation truths per dataset, gold segmentation truths across all 13 datasets, silver segmentation truths across all 13 datasets, and a mixture of gold and silver segmentation truths across all 13 datasets. Training data configurations other than these six cannot be exploited.
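For illustration only, the six admissible training data configurations can be thought of as the combinations of a training scope and a segmentation truth type, as in the following minimal Python sketch (the identifiers are hypothetical and not part of the official submission format):

    # Hypothetical identifiers for the six admissible training data configurations;
    # each of the six results per dataset must be trained using exactly one of them.
    SEGMENTATION_TRUTHS = ["gold", "silver", "gold+silver"]      # segmentation truth used
    TRAINING_SCOPES = ["per-dataset", "across-all-13-datasets"]  # data pooled per dataset or across all 13

    TRAINING_CONFIGURATIONS = [
        (scope, truth) for scope in TRAINING_SCOPES for truth in SEGMENTATION_TRUTHS
    ]
    assert len(TRAINING_CONFIGURATIONS) == 6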
Although anyone can pre-register for the ISBI 2021 challenge edition, only pre-registrants with highly competitive methods for at least one of the 13 included datasets will be allowed to compete in the primary track. More precisely, a method is considered highly competitive for a particular dataset only if its OPCSB score is not below a dataset-specific threshold, established as the OPCSB of the third-ranked method for that dataset in the Cell Segmentation Benchmark leaderboard as of January 15th, 2021, minus the standard deviation of OPCSB among the individual human annotations for that dataset.
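In other words, the qualification criterion for one dataset reduces to a simple difference and comparison, sketched below in Python (the function and variable names are illustrative only):

    # Qualification threshold for one dataset: OPCSB of the third-ranked method in the
    # Cell Segmentation Benchmark leaderboard (as of January 15th, 2021) minus the
    # standard deviation of OPCSB among the individual human annotations for that dataset.
    def qualification_threshold(opcsb_third_ranked: float, opcsb_std_human: float) -> float:
        return opcsb_third_ranked - opcsb_std_human

    # A method qualifies for the dataset if its OPCSB score is not below the threshold.
    def is_highly_competitive(method_opcsb: float, threshold: float) -> bool:
        return method_opcsb >= threshold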
As of January 15th, 2021, the final qualification thresholds and the qualified methods are:
Dataset          Qualification Threshold (OPCSB)   Qualified Methods
BF-C2DL-HSC      0.851475                          CALT-US, MU-Lux-CZ, CSU-CN, KIT-Sch-GE
BF-C2DL-MuSc     0.827065                          HDU-CN, CALT-US, KIT-Sch-GE
DIC-C2DH-HeLa    0.882132                          CALT-US, MU-Lux-CZ, AC (4), CSU-CN, UVA-NL, AU-TR, TUG-AT, BGU-IL (5)
The performance of a particular method for a given dataset and training data configuration is computed as OPCSB, being set to 0 in case of missing results. The overall performance of a particular method is obtained by averaging its performance over all 13 included datasets and all six specified training data configurations. The final ranking is then determined by the overall performance of the individual competing methods.
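A minimal sketch of the overall performance computation, assuming a mapping from (dataset, training configuration) pairs to OPCSB values (the names are illustrative, not an official evaluation script):

    # opcsb: dict mapping (dataset, training_configuration) -> OPCSB score.
    # Missing results contribute 0, and the overall performance is the average over
    # all 13 datasets and all 6 training data configurations (i.e., 78 results).
    def overall_performance(opcsb, datasets, configurations):
        total = sum(opcsb.get((d, c), 0.0) for d in datasets for c in configurations)
        return total / (len(datasets) * len(configurations))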
The ten best-performing primary track competitors will be invited to contribute to the summarizing challenge manuscript, which will be published in a top-tier journal.
Pre-registrants who do not pass the primary track qualification criteria, or who are not interested in the primary track scope, can compete over the 2D+t and 3D+t datasets of their choice. A secondary track submission is expected to consist of one result per dataset, with the possibility of using more than one method and without any limitations on the training data configurations used.
For the sake of a fair primary track competition for both new and previous participants, the release of the new silver reference annotations has been postponed to the date when all primary track competitors are announced (January 15th, 2021), preventing previous participants with highly competitive methods from starting to work on their primary track submissions while new participants are still striving for primary track qualification.
No, a primary track competitor can submit any subset of the expected results. However, every missing result contributes 0 to the overall performance of the method and thus affects its final ranking.
Up to 42 different parameter configurations can be exploited. These include one parameter configuration for each of the 39 per-dataset scenarios (i.e., 13 datasets times three per-dataset training data configurations) and one parameter configuration for each of the three all-datasets scenarios.
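The count of 42 follows directly from the allowed scenarios, as in this small illustrative check:

    # 13 datasets x 3 per-dataset training data configurations = 39 per-dataset scenarios,
    # plus 3 all-datasets training data configurations = 42 parameter configurations.
    num_datasets = 13
    per_dataset_configurations = 3    # gold, silver, gold+silver (per dataset)
    all_datasets_configurations = 3   # gold, silver, gold+silver (across all 13 datasets)
    assert num_datasets * per_dataset_configurations + all_datasets_configurations == 42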
Yes, but only before January 1st, 2021. No monthly evaluated submissions are allowed for primary and/or secondary track competitors between January 1st, 2021, and the ISBI 2021 challenge workshop.
Yes, but only at the level of parameter configurations and minor refining steps to reflect the specifics of individual primary track datasets that are not fully covered by the subset of datasets analyzed during the qualification phase. The main methodological backbone of the qualified algorithm must remain unchanged.
Every cell in the field of interest of each frame is assigned a gold-truth tracking marker. The majority of such a marker can be covered by at most one gold-truth segmentation mask and at most one silver-truth segmentation mask. If both segmentation masks exist, only the gold-truth one must be taken. Note, however, that the type of the taken segmentation mask may be exploited as a priori knowledge in the learning process.
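A minimal sketch of this mask selection rule, assuming per-marker lookups for the gold and silver masks (the function and variable names are hypothetical):

    # For a given gold-truth tracking marker, take the segmentation mask covering its
    # majority: the gold-truth mask if one exists, otherwise the silver-truth mask.
    # The type of the taken mask ("gold"/"silver") may be used as a priori knowledge.
    def select_segmentation_mask(gold_mask, silver_mask):
        if gold_mask is not None:
            return gold_mask, "gold"
        if silver_mask is not None:
            return silver_mask, "silver"
        return None, None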