To complement the existing segmentation-only and segmentation-and-tracking benchmarks, a new linking-only benchmark was introduced at ISBI 2024 in Athens, Greece, in May 2024, allowing objective evaluation of object-linking methods over standardized, yet imperfect, segmentation inputs. The Cell Linking Benchmark was initially populated with the five algorithms that competed in the ISBI 2024 challenge edition. To conduct and publish a comprehensive and insightful analysis of the technical and biological performance of state-of-the-art cell linking algorithms, we are opening a second call for cell linking submissions. Submissions of two different types will be collected, evaluated, and analyzed simultaneously, according to the following schedule:
June 10th, 2024: Second call for submissions opened
December 1st, 2024: Deadline for submitting results, including command-line executables of the algorithms used
December 2nd, 2024 - December 8th, 2024: The received submissions are checked for completeness and consistency
December 15th, 2024: Deadline for revising incomplete or inconsistent submissions
December 16th, 2024 - January 31st, 2025: Validation and evaluation of the received submissions by rerunning the algorithms on our evaluation servers
January 12th, 2025: Deadline for submitting reusable versions of the competing algorithms, including their detailed descriptions and the parameter configurations used
February-April 2025: Preparation of a manuscript with a detailed analysis of the collected results
The registered participants compete over the set of 13 real datasets (eight 2D+t and five 3D+t ones), with complete gold tracking truth and imperfect segmentation masks available for the training datasets. An expected submission consists of a set of 13 segmentation-and-tracking results for the 13 included training datasets, created using the same approach for all datasets, with no limitations on the training data configurations used. For more detailed information, please check this document.
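Before uploading, it may help to self-check that a submission really contains one result per training dataset. The following sketch is purely illustrative; the placeholder dataset names and the "<name>_RES" directory layout are assumptions, not the official submission structure, so please follow the naming conventions given in the linked instructions.

```python
# Hypothetical pre-submission self-check: verify one result per dataset.
# The placeholder dataset names and "<name>_RES" layout are assumptions only,
# not the official submission structure of the Cell Linking Benchmark.
from pathlib import Path

EXPECTED_DATASETS = [f"dataset_{i:02d}" for i in range(1, 14)]  # 13 placeholders


def missing_results(submission_root: Path) -> list[str]:
    """Return the datasets for which no result directory was found."""
    return [name for name in EXPECTED_DATASETS
            if not (submission_root / f"{name}_RES").is_dir()]


if __name__ == "__main__":
    missing = missing_results(Path("my_submission"))
    print("Submission complete." if not missing else f"Missing results: {missing}")
```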
The performance of a particular algorithm for a given test dataset is primarily evaluated using the LNK, BIO, and OPCLB measures. Furthermore, the detailed biological performance of the algorithm, evaluated using the CT, TF, BC(i), and CCA measures, is provided as complementary information. The overall, measure-specific performance of the algorithm, which is used for its ranking, is then obtained by averaging its measure-specific scores over all the included test datasets.
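As an illustration of this ranking step, the sketch below averages per-dataset scores for each primary measure; the dataset names and score values are fictitious, and the actual measure definitions and test datasets are those specified by the benchmark.

```python
# Illustrative only: overall, measure-specific scores obtained by averaging
# per-dataset scores over all test datasets. Names and values are made up.
from statistics import mean

scores = {  # per-dataset scores of one hypothetical algorithm
    "dataset_A": {"LNK": 0.91, "BIO": 0.84, "OPCLB": 0.88},
    "dataset_B": {"LNK": 0.87, "BIO": 0.79, "OPCLB": 0.83},
    "dataset_C": {"LNK": 0.93, "BIO": 0.88, "OPCLB": 0.90},
}

overall = {
    measure: mean(per_dataset[measure] for per_dataset in scores.values())
    for measure in ("LNK", "BIO", "OPCLB")
}
print(overall)  # one averaged score per measure, used for ranking
```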
Apart from submitting a set of 13 segmentation-and-tracking results for the 13 included training datasets, the participants must provide command-line versions of the algorithms used to produce the submitted results, allowing the challenge organizers to validate all submitted results by rerunning the algorithms on the training datasets and to generate the results for evaluation by running the algorithms on the test datasets. The participants must also disclose descriptions of the algorithms used, including details of the parameter configurations chosen, the training data used, and the training protocols followed, and must prepare their algorithms in a reusable form. The submission instructions are the same as for regular submissions to the Cell Linking Benchmark. For more detailed submission instructions, please check this document.
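The challenge does not tie these command-line versions to a particular language or interface; as a purely hypothetical sketch, a minimal Python entry point for a linking algorithm could look like this (the argument names and the link_cells placeholder are illustrative, not prescribed by the benchmark).

```python
#!/usr/bin/env python3
"""Hypothetical command-line wrapper for a cell linking algorithm (illustrative)."""
import argparse
from pathlib import Path


def link_cells(input_dir: Path, output_dir: Path, max_gap: int) -> None:
    # Placeholder for the actual method: read the provided segmentation masks
    # from input_dir, link objects across frames, and write the resulting
    # segmentation-and-tracking output to output_dir.
    output_dir.mkdir(parents=True, exist_ok=True)
    print(f"Linking {input_dir} -> {output_dir} (max_gap={max_gap})")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Illustrative cell linking CLI")
    parser.add_argument("input_dir", type=Path, help="directory with segmentation masks")
    parser.add_argument("output_dir", type=Path, help="directory for the tracking result")
    parser.add_argument("--max-gap", type=int, default=1,
                        help="hypothetical gap-closing parameter")
    args = parser.parse_args()
    link_cells(args.input_dir, args.output_dir, args.max_gap)
```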
The registered participants compete over 2D+t and 3D+t datasets of their choice, with complete gold tracking truth and imperfect/perfect segmentation masks available for the real/computer-generated training datasets. An expected submission consists of one segmentation-and-tracking result per chosen training dataset, with the possibility of using different algorithms for different datasets and no limitations on the training data configurations used. For more detailed information, please check this document.
The performance of a particular algorithm for a given test dataset is primarily evaluated using the LNK, BIO, and OPCLB measures. Furthermore, the detailed biological performance of the algorithm, evaluated using the CT, TF, BC(i), and CCA measures, is provided as complementary information.
Apart from submitting segmentation-and-tracking results for the chosen training datasets, the participants must provide command-line versions of the algorithms used to produce the submitted results, allowing the challenge organizers to validate all submitted results by rerunning the algorithms on the training datasets and to generate the results for evaluation by running the algorithms on the test datasets. The participants must also disclose descriptions of the algorithms used, including details of the parameter configurations chosen and the training protocols followed, and must prepare their algorithms in a reusable form. For more detailed submission instructions, please check this document.