The sixth challenge edition was organized as part of ISBI 2021, held virtually in April 2021. In this edition, the primary focus was on methods that exhibit better generalizability and work across most, if not all, of the 13 already existing datasets, instead of methods optimized for only one or a few datasets. Apart from this primary track, which prioritized highly competitive methods only, a secondary track allowed anyone to compete over the datasets of their choice, without pursuing generalizability across multiple datasets. The submissions for both tracks were collected, evaluated, and announced at the corresponding ISBI 2021 challenge workshop according to the following schedule:
October 30th, 2020
The qualification, evaluation, and ranking criteria for the sixth challenge edition are released
November 1st, 2020
The pre-registration period for the sixth challenge edition is opened
January 15th, 2021 (postponed from November 30th, 2020)
New silver reference segmentation annotations for 13 existing training datasets are released
December 31st, 2020
The pre-registration period for the sixth challenge edition is closed
January 15th, 2021
Pre-registered competitors, who passed the primary track qualification criteria, are announced
February 25th, 2021
The registration period for the secondary track of the sixth challenge edition is closed
March 1st, 2021
Deadline for submitting results to both the primary and secondary tracks, including algorithm implementations, detailed descriptions of the approaches, and parameters used
April 13th, 2021
ISBI 2021 Challenge Workshop
May-June 2021
Validation of the results submitted by re-running the respective algorithms on our evaluation servers
Thirteen real datasets with complete gold reference tracking annotations have been selected for the primary track. These include eight 2D+t and five 3D+t datasets, the training sequences of which have also been complemented with gold and silver segmentation reference annotations. An ideal submission for the primary track then consisted of six results per dataset, created using the same approach with its parameters/models optimized/trained under one of the following six training data configurations: gold segmentation truth per dataset, silver segmentation truth per dataset, a mixture of gold and silver segmentation truths per dataset, gold segmentation truths across all 13 datasets, silver segmentation truths across all 13 datasets, and a mixture of gold and silver segmentation truths across all 13 datasets. No training data configurations other than these six could be exploited.
Although anyone could pre-register for the ISBI 2021 challenge edition, only pre-registrants with highly competitive methods for at least one of the 13 included datasets were allowed to compete in the primary track. More precisely, a method is considered highly competitive for a particular dataset only if its OPCSB score is not below a threshold, established as the OPCSB of the third-ranked method for that dataset in the Cell Segmentation Benchmark leaderboard as of January 15th, 2021, minus the standard deviation of OPCSB across individual human annotations for that dataset.
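In other words (with notation introduced here for clarity, not taken from the official rules), if OPCSB_3(d) denotes the score of the third-ranked method for dataset d in the leaderboard and \(\sigma_d\) the standard deviation of OPCSB across individual human annotations of d, the qualification threshold can be written as

\[
T(d) = \mathrm{OPCSB}_{3}(d) - \sigma_{d},
\]

and a method qualifies for dataset d whenever its score satisfies \(\mathrm{OPCSB} \ge T(d)\).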
As of January 15th, 2021, the final qualification thresholds and the qualified methods were:
Dataset        | Qualification Threshold (OPCSB) | Qualified Methods
BF-C2DL-HSC    | 0.851475                        | CALT-US, MU-Lux-CZ, CSU-CN, KIT-Sch-GE
BF-C2DL-MuSc   | 0.827065                        | HDU-CN, CALT-US, KIT-Sch-GE
DIC-C2DH-HeLa  | 0.882132                        | CALT-US, MU-Lux-CZ, AC (4), CSU-CN, UVA-NL, AU-TR, TUG-AT, BGU-IL (5)
The performance of a particular method for a given dataset and training data configuration is computed as its OPCSB score, with missing results counted as 0. The overall performance of a particular method is obtained by averaging its performance over all 13 included datasets and all 6 specified training data configurations. The overall performance of the individual competing methods then yields the final ranking.
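Written out (notation ours), with \(\mathrm{OPCSB}(m, d, c)\) denoting the score of method m on dataset d under training data configuration c (set to 0 for missing results), the overall performance is

\[
P(m) = \frac{1}{13 \times 6} \sum_{d=1}^{13} \sum_{c=1}^{6} \mathrm{OPCSB}(m, d, c).
\]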
All nine primary track competitors have been invited to participate in the summarizing challenge manuscript, which is being finalized and will be submitted to a top-tier journal shortly.
The pre-registrants who did not pass the primary track qualification criteria, or who were not interested in the primary track scope, could compete over the 2D+t and 3D+t datasets of their choice. An expected secondary track submission then consisted of one result per dataset, with the possibility of using more than one method and without any restrictions on the training data configurations used.
For the sake of a fair primary track competition for both new and previous participants, the deadline was postponed to the date on which all primary track competitors are announced, so that previous participants with highly competitive methods could not start working on their primary track submissions while new participants were still striving to qualify for the primary track.
No, a primary track competitor can submit any subset of the expected results. However, every missing piece of results contributes 0 to the overall performance of the method and thus affects its final ranking.
Up to 42 different parameter configurations can be exploited: one for each of the 39 per-dataset scenarios (i.e., 13 datasets times 3 training data configurations) and one for each of the 3 all-datasets scenarios.
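As a minimal sketch of this counting (dataset and truth labels below are illustrative, not the official identifiers):

```python
# Enumerate the 42 admissible parameter configurations.
DATASETS = ["dataset_%02d" % i for i in range(1, 14)]  # the 13 primary track datasets
TRUTHS = ["gold", "silver", "gold+silver"]             # the 3 training data truths

# 39 per-dataset scenarios: one parameter configuration per (dataset, truth) pair.
per_dataset = [(d, t) for d in DATASETS for t in TRUTHS]
# 3 all-datasets scenarios: one parameter configuration per truth, shared across datasets.
all_datasets = [("all-13-datasets", t) for t in TRUTHS]

configurations = per_dataset + all_datasets
assert len(configurations) == 42
```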
Yes, but only before January 1st, 2021. Between January 1st and the ISBI 2021 challenge workshop, no monthly evaluated submissions are allowed from primary and/or secondary track competitors.
Yes, but only at the level of parameter configurations and minor refinement steps that reflect the specifics of individual primary track datasets not fully covered by the subset of datasets analyzed during the qualification phase. The main methodological backbone of the qualified algorithm must remain unchanged.
Every cell in the field of interest of each frame is assigned a gold-truth tracking marker. The majority of such a marker can be covered by at most one gold-truth segmentation mask and at most one silver-truth segmentation mask. In case both such masks exist, only the gold-truth one must be taken. Note, however, that the type of the taken segmentation mask may be exploited as a priori knowledge in the learning process.
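As an illustration, this selection rule could look as follows (a minimal sketch with a hypothetical helper, not the official evaluation code; all arrays are boolean masks of equal shape):

```python
import numpy as np

def select_mask(marker, gold=None, silver=None):
    """Return the segmentation mask matched to a gold-truth tracking marker.

    A mask matches only if it covers the majority of the marker's pixels;
    when both a gold and a silver mask match, the gold one must be taken.
    """
    def covers_majority(mask):
        if mask is None:
            return False
        return np.count_nonzero(mask & marker) > np.count_nonzero(marker) / 2

    if covers_majority(gold):
        return gold, "gold"      # the mask type may serve as a priori knowledge
    if covers_majority(silver):
        return silver, "silver"
    return None, None
```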
No. A single method/model should be submitted for all datasets. A typical solution will contain a 2D method/model working slice by slice in 3D. Alternatively, one can submit a 3D method/model that handles 2D data as a special case (allowing one of the dimensions to have size 1). Note, however, that there are generally no 3D gold segmentation annotations available (only 2D annotations of selected slices); therefore, proper supervised training in 3D is currently not possible.
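For instance (a sketch under the assumption that images are NumPy arrays; the function name is ours), a 3D method can treat a 2D frame as a single-slice volume:

```python
import numpy as np

def to_volume(frame):
    """Lift a (Y, X) frame to a (Z=1, Y, X) volume; pass 3D data through unchanged."""
    return frame[np.newaxis, ...] if frame.ndim == 2 else frame
```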
In the primary track, we want to compare the ability of the methods to learn from 6 different training data configurations. Therefore, if a method does not use the training data at all, it cannot generate valid primary track submissions. Conversely, if a method automatically selects the optimal parameter set from the training data, it can indeed produce 6 different sets of results, one for each of the 6 training data configurations. In that case, the method is eligible for the primary track.
Yes. We plan to perform such a study. Please make sure the submitted method can work with 2D+time as well as 3D+time datasets of arbitrary size (spatially as well as temporally).