The following table presents the best three results across the 13 included datasets for generalizable submissions, and the best three results per dataset for regular submissions, both evaluated using the OPCTB, SEG, and TRA measures. A detailed evaluation of the technical and biological performance of all generalizable submissions can be found in this document; the corresponding evaluation of all regular submissions is available here. Refer to Participants for detailed descriptions of the individual algorithms, and to this document for the mapping between the tags currently used for individual algorithms and their past alternatives.
2023-11-10: The regular submission AC (6) was updated.
2023-08-01: The benchmark opened for online generalizable submissions.
2023-02-27: The new regular submission AC (6) was included.
2022-06-01: The regular submission AC (5) was renamed to BGU-IL (5). The invited generalizable submissions were transferred.