Submission of Results

Please upload your results to the challenge FTP server (ftps:// on port 21, with explicit TLS/SSL encryption enabled). The results and software executables must comply with the guidelines described in the following two documents:

Submission of results and executables.pdf

Naming and file content conventions.pdf
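As a convenience, an explicit-FTPS upload of the kind described above can be scripted with Python's standard `ftplib` module. This is only a sketch: the host name, credentials, and archive name below are placeholders, not the actual challenge server details, which are provided upon registration.

```python
# Sketch of uploading a results archive over explicit FTPS
# (FTP on port 21, with TLS negotiated via AUTH TLS before login).
from ftplib import FTP_TLS


def upload_results(host: str, user: str, password: str, archive: str) -> None:
    """Upload a single results archive via explicit TLS on port 21."""
    ftps = FTP_TLS(host)        # connects on port 21 over a plain control channel
    ftps.login(user, password)  # FTP_TLS.login() issues AUTH TLS before logging in
    ftps.prot_p()               # switch the data channel to encrypted mode as well
    with open(archive, "rb") as f:
        ftps.storbinary(f"STOR {archive}", f)
    ftps.quit()


if __name__ == "__main__":
    # Placeholder values -- replace with the credentials you received.
    upload_results("ftp.example.org", "team01", "secret", "results.zip")
```

Any FTPS client that supports explicit TLS (e.g. FileZilla or lftp) works equally well; the script merely shows the required connection mode.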

Ideally, all participants would submit results for all datasets, giving a complete picture of the strengths and weaknesses of all algorithms under different scenarios. However, given the varying nature of the datasets (nuclei or cells, different cell types, 2D or 3D, noise levels, and general image quality) and microscopy modalities (fluorescence, phase contrast, DIC, brightfield), you may also submit results for only selected datasets, or submit more than one algorithm, each targeting one or several specific datasets.

All participating teams wishing to be included in the challenge report and in any future publication are required to provide a working version of the algorithm used to produce the submitted results, either as an executable or as compilable source code. The challenge committee reserves the right to perform random checks, verifying submitted results by rerunning the algorithms on the challenge datasets. Unless participants explicitly agree otherwise, the provided software will not be released publicly; it will be used for verification purposes only.

To encourage the participation of groups that might be discouraged by the public display of potentially poor results, the rankings list only the names of the top-three ranked participants for each dataset. Lower-ranked algorithms are not listed, but their authors are informed of the absolute performance of their methods. In any case, the absolute performance of an algorithm is integrated into the rankings only after its authors give the challenge organizers explicit permission to do so. Upon request, we can provide a complete table of a particular method's performance across all analyzed datasets, linked to the participant's method description page.

Please note that all results submitted to the Cell Tracking Benchmark are automatically evaluated within the Cell Segmentation Benchmark too, except for the Fluo-N3DL-DRO, Fluo-N3DL-TRIC, and Fluo-N3DL-TRIF datasets, for which only a subset of cells is evaluated; the two benchmarks therefore differ in how they treat extra detected and segmented cells. For these datasets, participants are encouraged to additionally submit complete segmentation results; the extra cells will automatically be filtered out by our evaluation software and the results used for the Cell Segmentation Benchmark.

The evaluation of submitted results is performed on multi-core workstations with at least 32 GB of RAM, running either Microsoft Windows or a GNU-derived Linux operating system, each equipped with a single NVIDIA Quadro P6000 GPU card with 24 GB of RAM. The challenge organizers reserve the right to exclude from the evaluation any results whose reproduction takes more than a few days on these workstations.

The submissions for both the Cell Tracking Benchmark and the Cell Segmentation Benchmark are evaluated monthly, by processing all results received by the end of each month.