PHAROS Adaptation, Fairness, Explainability in AI Medical Imaging Workshop (PHAROS-AFE-AIMI)

in conjunction with the International Conference on Computer Vision (ICCV), 2025

13:00 - 18:00 HST, 20 October 2025, Honolulu, Hawaii

About PHAROS-AFE-AIMI

The main target of the Workshop is to present and evaluate novel approaches for predictive modeling of big medical image datasets, focusing on deep learning models and on the transparent, human-centered integration of Generative AI and large models (such as LLMs and VLMs) into health services, creating trust among citizens. It addresses key challenges at the intersection of computer vision and healthcare AI, including multi-disease diagnosis, model explainability, fairness, domain adaptation, and continual learning. Given the increasing interest in trustworthy AI, interpretable deep learning models, and the responsible deployment of AI in sensitive applications, this workshop will foster discussions on critical advancements shaping the future of AI in medical imaging. The Workshop is organised within the framework of the PHAROS AI Factory, one of the first seven AI Factories selected and funded by the European Union, which ensures that the workshop's topics have real-world applicability and a strong foundation in cutting-edge research.

A Competition is also organised, split into two Challenges (the Multi-Source Covid-19 Detection Challenge and the Fair Disease Diagnosis Challenge); it offers two large databases and lets research groups test and compare their state-of-the-art AI approaches.

The PHAROS-AFE-AIMI Workshop is the fifth in the AI-MIA series of Workshops, previously held at the CVPR 2024, IEEE ICASSP 2023, ECCV 2022 and ICCV 2021 conferences.

For any requests or enquiries regarding the Workshop, please contact: stefanos@cs.ntua.gr.

Organisers



General Chair



Stefanos Kollias

National Technical University of Athens & GRNET, Greece
stefanos@cs.ntua.gr


Program Chairs



Dimitrios Kollias

Queen Mary University of London, UK
d.kollias@qmul.ac.uk

Xujiong Ye

University of Exeter, UK
X.Ye2@exeter.ac.uk

Francesco Rundo

University of Catania, Italy
francesco.rundo@unict.it

Data Chairs

Anastasios Arsenos, National Technical University of Athens
Paraskevi Theofilou, National Technical University of Athens
Manos Seferis, National Technical University of Athens

The Workshop



Call for Papers

Original high-quality contributions (in terms of databases, surveys, studies, foundation models, techniques and methodologies) are solicited on, but not limited to, the following topics and technologies:

    Learning methods to leverage knowledge from well-explored domains and gain insights in areas with low data availability, enhancing the accuracy of predictions for underrepresented health conditions;

    AutoML, allowing users to automate hyperparameter selection and feature engineering;

    AI models on multimodal biomedical data (e.g. imaging, sequencing, medical records, open datasets) to enable a holistic approach to patient diagnostics and treatment;

    Domain-specific AI models (e.g. in breast cancer, Alzheimer’s disease);

    AI prediction models for disease progression (focusing on large-scale visual language models for medical imaging, as well as on survival machine learning models for Electronic Health Records), including segmentation, classification, and fair and explainable decision making;

    Interpretable and actionable insights into AI-driven decisions (e.g. by combining enhanced visualizations with natural language explanations, enabling end users to comprehend the models’ outcomes);

    Integration of uncertainty quantification mechanisms (e.g. prediction intervals, variance heatmaps) into intelligent clinical decision interfaces, based on Bayesian methods, Monte Carlo dropout, and ensemble modeling;

    Drift monitoring and out-of-distribution detection for real-time assessment of model performance and identification of data drift, along with actionable recommendations (e.g. human review, adaptation);

    Continual learning and model transportability, ensuring the development of adaptive AI models that evolve with new data while maintaining their performance across diverse environments and over time;

    Domain adaptation for timely model updates and refinements in response to changing data distributions, new clinical contexts, and varying patient demographics.



Workshop Important Dates


Paper Submission Deadline: 23:59:59 AoE (Anywhere on Earth), July 8, 2025

Review decisions sent to authors; Notification of acceptance: August 8, 2025

Camera ready version: August 18, 2025




Submission Information

The paper format should adhere to the submission guidelines and style of the main ICCV 2025 proceedings. Please see the Submission Guidelines section here.

We welcome full long paper submissions (between 5 and 8 pages, excluding references and supplementary material). All submissions must be anonymous and conform to the ICCV 2025 standards for double-blind review.

All papers should be submitted using this CMT website*.

All accepted manuscripts will be part of ICCV 2025 conference proceedings.


* The Microsoft CMT service was used for managing the peer-reviewing process for this workshop. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

The Competition



The PHAROS-AFE-AIMI Workshop includes a Competition which is split into two Challenges: (i) Multi-Source Covid-19 Detection Challenge; (ii) Fair Disease Diagnosis Challenge.



How to participate

To participate, teams must register. Each team may have at most 8 participants. Please follow the procedure below for registration.

The lead researcher should send an email from their official address (no personal emails will be accepted) to d.kollias@qmul.ac.uk with:

      i) subject "PHAROS-AFE-AIMI Competition: Team Registration";

      ii) this EULA filled in, signed and attached;

      iii) the lead researcher's official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);

      iv) the emails of each team member, each one in a separate line in the body of the email;

      v) the team's name;

      vi) the point of contact's name and email address (i.e., which team member will be the main point of contact for future communications, data access, etc.).

As a reply, you will receive access to the dataset's images and annotations.



Competition Contact Information

For any queries you may have regarding the Challenges, please contact d.kollias@qmul.ac.uk.


General Information

At the end of the Challenges, each team will have to send us:

      i) their predictions on the test set,

      ii) a link to a GitHub repository where their solution/source code will be stored,

      iii) a link to an arXiv paper of 4-8 pages describing their proposed methodology, the data used, and the results.

After that, the winner of each Challenge, along with a leaderboard, will be announced.

There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop. All other teams are also able to submit paper(s) describing their solutions and final results. All accepted papers will be part of the ICCV 2025 proceedings.

The Competition's white paper (describing the Competition, the data, the baselines and results) will be ready at a later stage and will be distributed to the participating teams.



General Rules

1) Participants can contribute to either or both of the two Challenges.

2) In order to take part in a Challenge, participants must register as described above.

3) The winner and the two runners-up in each Challenge will also be asked to share their trained models so that the validity of their approach can be verified.



Competition Important Dates


Call for participation announced, team registration begins, data available: May 26, 2025

Test set release: June 26, 2025

Final submission deadline (Predictions, Code and arXiv paper): 23:59:59 AoE (Anywhere on Earth), July 2, 2025

Winners Announcement: July 5, 2025

Final Paper Submission Deadline: 23:59:59 AoE (Anywhere on Earth), July 8, 2025

Review decisions sent to authors; Notification of acceptance: August 8, 2025

Camera ready version: August 18, 2025

Multi-Source Covid-19 Detection Challenge

Description

CT scans have been collected from four distinct hospitals and medical centers. Each scan has been manually annotated to indicate whether it belongs to the Covid-19 or non-Covid-19 category. The dataset is divided into training, validation, and test subsets. Participants will receive the CT scans along with the source identifier for each scan, represented by an ID number from 0 to 3. Competing teams are required to develop AI, machine learning, or deep learning models for Covid-19 classification. Model performance will be evaluated on the test set, which likewise contains CT scans from the same four hospitals and medical centers, and will be based on the average macro F1 score achieved across all four sources, ensuring a fair and robust assessment across diverse data distributions.

Rules

Participating teams may use any publicly available datasets, as long as they report this in their submitted final paper.

Performance Assessment

The performance measure (P) is the average macro F1 score achieved across all four sources:

P = (F1_macro^(0) + F1_macro^(1) + F1_macro^(2) + F1_macro^(3)) / 4,

where F1_macro^(i) is the macro F1 score computed on the test scans from source i.
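For illustration only, below is a minimal sketch of this metric, assuming scikit-learn is available and that per-scan labels, predictions and source IDs are held in simple arrays (the names y_true, y_pred and source_id are our own; this is not an official evaluation script):

```python
import numpy as np
from sklearn.metrics import f1_score

def multi_source_score(y_true, y_pred, source_id, n_sources=4):
    """Macro F1 per source (hospital/medical-center ID 0-3), averaged."""
    y_true, y_pred, source_id = map(np.asarray, (y_true, y_pred, source_id))
    per_source = []
    for s in range(n_sources):
        mask = source_id == s
        # Macro F1 computed only on the scans coming from source s.
        per_source.append(f1_score(y_true[mask], y_pred[mask], average="macro"))
    return float(np.mean(per_source))

# Toy example: 1 = Covid-19, 0 = non-Covid-19; two scans per source.
y_true    = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred    = [1, 0, 1, 1, 1, 0, 0, 0]
source_id = [0, 0, 1, 1, 2, 2, 3, 3]
print(multi_source_score(y_true, y_pred, source_id))
```

Averaging per source (rather than pooling all test scans) means each hospital contributes equally to the score, regardless of how many scans it provides.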


Baseline Results

TBC

Fair Disease Diagnosis Challenge

Description

The dataset consists of chest CT scans of lung cancer, Covid-19 and healthy subjects. Each scan has been annotated to indicate whether it belongs to a healthy subject or to a subject diagnosed with Adenocarcinoma, Squamous Cell Carcinoma, or Covid-19. Each scan is also accompanied by information on whether the subject is male or female. The dataset is divided into training, validation, and test subsets. Participants will receive the CT scans along with the male/female subject information. Competing teams are required to develop AI, machine learning, or deep learning models for fair disease diagnosis. Model performance will be evaluated on the test set and will be based on the average of the per-gender macro F1 scores.

Rules

Participating teams may use any publicly available pre-trained models, provided these models have not been specifically pre-trained to distinguish between healthy individuals and those diagnosed with Adenocarcinoma, Squamous Cell Carcinoma, or Covid-19. However, all model fine-tuning and methodological development must be conducted exclusively using the dataset provided for the competition.

Performance Assessment

The performance measure (P) is the average of the per-gender macro F1 scores. To compute it, first split the test set by gender (Subset A: all male samples; Subset B: all female samples), then compute the macro F1 score on each subset. The final score is therefore:

P = (F1_macro^(A) + F1_macro^(B)) / 2
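Again for illustration only, a minimal sketch of this metric under the same assumptions as above (scikit-learn available; the names y_true, y_pred and is_male are our own, with is_male a boolean array marking Subset A):

```python
import numpy as np
from sklearn.metrics import f1_score

def fair_diagnosis_score(y_true, y_pred, is_male):
    """Average of the per-gender macro F1 scores (Subsets A and B)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    is_male = np.asarray(is_male, dtype=bool)
    f1_a = f1_score(y_true[is_male],  y_pred[is_male],  average="macro")  # Subset A: male
    f1_b = f1_score(y_true[~is_male], y_pred[~is_male], average="macro")  # Subset B: female
    return (f1_a + f1_b) / 2
```

Because the two subsets are weighted equally, a model that performs well on one gender but poorly on the other is penalized relative to one with balanced performance.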


Baseline Results

TBC

References


  If you use the above data, you must cite all of the following papers, as well as the white paper that will be distributed at a later stage: