PHAROS AI Factory


for Medical Imaging & Healthcare


(PHAROS-AIF-MIH)

in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026

Wed June 3 - Sun June 7, 2026, Denver, Colorado

About PHAROS-AIF-MIH

The PHAROS-AIF-MIH Workshop (linked to the healthcare vertical of the PHAROS AI Factory) aims to foster discussion and presentation of ideas on the challenges of analysing CT scans, MRIs, X-rays, EHRs and content metadata in the context of Digital Pathology and Radiology, adopting recent advances in GenAI, multimodal large language models, privacy-preserving architectures, ethics by design, AI models trained on supercomputing infrastructures, knowledge distillation and Agentic AI. A Competition will also be organized, focusing on multi-disease diagnosis across multiple medical image datasets. Baseline models will be provided to participants, who will develop AI models that improve the performance and fairness of the results. The topics covered by the PHAROS-AIF-MIH Workshop are closely related to the focus of the healthcare thematic area (vertical) of the PHAROS AI Factory, one of the first seven AI Factories selected and funded by the European Union in late 2024 (to date, the EU, through the EuroHPC JU, has selected 19 AI Factories operating across Europe).

The Competition is split into two Challenges (the Multi-Source Covid-19 Detection Challenge and the Fair Disease Diagnosis Challenge), each offering a large database and letting research groups test and compare their state-of-the-art AI approaches.

The PHAROS-AIF-MIH Workshop is the sixth in the AI-MIA series of Workshops, following editions held at the ICCV 2025, CVPR 2024, IEEE ICASSP 2023, ECCV 2022 and ICCV 2021 Conferences.

For any requests or enquiries regarding the Workshop, please contact: stefanos@cs.ntua.gr.

Organisers



General Chair



Stefanos Kollias

National Technical University of Athens & GRNET, Greece stefanos@cs.ntua.gr


Program Chairs



Dimitrios Kollias

Queen Mary University of London, UK d.kollias@qmul.ac.uk

Xujiong Ye

University of Exeter, UK X.Ye2@exeter.ac.uk

Francesco Rundo

University of Catania, Italy francesco.rundo@unict.it

Data Chairs

Anastasios Arsenos, National and Kapodistrian University of Athens, Greece
Paraskevi Theofilou, National Technical University of Athens, Greece
Manos Seferis, National Technical University of Athens, Greece

The Workshop



Call for Papers

Original high-quality contributions (in terms of databases, surveys, studies, foundation models, techniques and methodologies) are solicited on - but are not limited to - the following topics and technologies:

    predictive modeling and deep learning models for medical imaging

    transparent, human-centered integration of GenAI (e.g., LLMs, VLMs) in health services

    building citizens' trust in the use of responsible AI for healthcare

    efficient and user-friendly services that respect privacy, taking into consideration the AI Act and the Data Act

    learning methods that leverage knowledge from well-explored domains, gain insight in low-data-availability cases, and enhance prediction accuracy for underrepresented health conditions

    AutoML, allowing automated selection of hyperparameters and automated feature engineering

    AI models on multimodal data (e.g. imaging, sequencing, medical records, open datasets)

    domain-specific AI models (e.g. in breast cancer, Alzheimer’s disease)

    AI prediction models for disease progression (e.g., survival ML for Electronic Health Records)

    segmentation, classification, fair & explainable medical decision making

    interpretable and actionable insights into AI-driven decisions

    integration of uncertainty quantification mechanisms (e.g., prediction intervals, variance heatmaps) - Bayesian methods, Monte Carlo dropout, ensemble modeling

    drift monitoring and out-of-distribution detection, continual learning and model transportability - domain adaptation for timely model updates and refinements

    Agentic AI for medical imaging & healthcare

    High Performance Computing and AI for healthcare



Workshop Important Dates


Paper Submission Deadline: March 18, 2026, 23:59:59 AoE (Anywhere on Earth)

Review decisions sent to authors; Notification of acceptance: April 6, 2026

Camera-ready version: April 10, 2026




Submission Information

The paper format should adhere to the submission guidelines and proceedings style of the main CVPR 2026 conference. Please have a look at the Submission Guidelines section here.

We welcome full-length paper submissions (between 5 and 8 pages, excluding references and supplementary materials). All submissions must be anonymous and conform to the CVPR 2026 standards for double-blind review.

All papers should be submitted using this CMT website*.

All accepted manuscripts will be part of CVPR 2026 conference proceedings.


The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

The Competition



The PHAROS-AIF-MIH Workshop includes a Competition which is split into two Challenges: (i) Multi-Source Covid-19 Detection Challenge; (ii) Fair Disease Diagnosis Challenge.



How to participate

In order to participate, teams must register. Each team may have at most 8 members. Please follow the registration procedure below.

The lead researcher should send an email from their official address (no personal emails will be accepted) to d.kollias@qmul.ac.uk with:

      i) subject "PHAROS-AIF-MIH Competition: Team Registration";

      ii) this EULA filled in, signed and attached;

      iii) the lead researcher's official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);

      iv) the emails of each team member, each one in a separate line in the body of the email;

      v) the team's name;

      vi) the point of contact's name and email address (i.e., which member of the team will be the main point of contact for future communications, data access, etc.).

As a reply, you will receive access to the dataset's images and annotations.



Competition Contact Information

For any queries you may have regarding the Challenges, please contact d.kollias@qmul.ac.uk.


General Information

At the end of the Challenges, each team will have to send us:

      i) their predictions on the test set,

      ii) a link to a GitHub repository where their solution/source code is stored,

      iii) a link to an arXiv paper of 2-8 pages describing their proposed methodology, the data used and the results.

After that, the winner of each Challenge, along with a leaderboard, will be announced.

There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop. All other teams may also submit paper(s) describing their solutions and final results. All accepted papers will be part of the CVPR 2026 proceedings.

The Competition's white paper (describing the Competition, the data, the baselines and results) will be ready at a later stage and will be distributed to the participating teams.



General Rules

1) Participants can contribute to either or both of the two Challenges.

2) In order to take part in any Challenge, participants will have to register as described above.

3) The winner and the two runners-up in each Challenge will also be asked to share their trained models so that the validity of their approach can be verified.



Competition Important Dates


Call for participation announced, team registration begins, data available: January 30, 2026

Test set release: March 9, 2026

Final submission deadline (predictions, code and arXiv paper): March 15, 2026, 23:59:59 AoE (Anywhere on Earth)

Winners announcement: March 17, 2026

Final paper submission deadline: March 18, 2026, 23:59:59 AoE (Anywhere on Earth)

Review decisions sent to authors; Notification of acceptance: April 6, 2026

Camera-ready version: April 10, 2026

Multi-Source Covid-19 Detection Challenge

Description

CT scans have been collected from four distinct hospitals and medical centers. Each scan has been manually annotated as Covid-19 or non-Covid-19. The dataset is divided into training, validation, and test subsets. Participants will receive the CT scans along with the source identifier for each scan, an ID number from 0 to 3. Competing teams are required to develop AI, machine learning, or deep learning models for Covid-19 classification. Model performance will be evaluated on the test set, which also contains CT scans from the same four sources, and will be based on the average macro F1 score achieved across all four sources (hospitals and medical centers), ensuring a fair and robust assessment across diverse data distributions.

Rules

Participating teams may use any publicly available datasets, as long as they report them in their submitted final paper.

Performance Assessment

The performance measure (P) is the average macro F1 score achieved across all four sources:

P = ( F1_macro(source 0) + F1_macro(source 1) + F1_macro(source 2) + F1_macro(source 3) ) / 4,

where F1_macro(source s) is the macro F1 score computed on the test scans originating from source s.
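For reference, the score can be sketched in plain Python as follows. This is an illustrative sketch, not the official evaluation script: the function names and the binary label encoding (1 = Covid-19, 0 = non-Covid-19) are assumptions.

```python
from collections import defaultdict

def macro_f1(y_true, y_pred, labels=(0, 1)):
    # Unweighted mean of per-class F1 scores; a class absent from both
    # y_true and y_pred contributes 0.0.
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

def challenge_score(y_true, y_pred, source_ids):
    # Group test predictions by source hospital (IDs 0-3), compute the
    # macro F1 per source, and average the four per-source scores.
    groups = defaultdict(lambda: ([], []))
    for t, p, s in zip(y_true, y_pred, source_ids):
        groups[s][0].append(t)
        groups[s][1].append(p)
    return sum(macro_f1(t, p) for t, p in groups.values()) / len(groups)
```

Averaging per-source scores (rather than pooling all scans) prevents a single large source from dominating the metric, which is the stated intent of the cross-source assessment.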


Fair Disease Diagnosis Challenge

Description

The dataset consists of chest CT scans of lung cancer patients, Covid-19 patients and healthy subjects. Each scan has been annotated to indicate whether it belongs to a healthy subject or a subject diagnosed with Adenocarcinoma, Squamous Cell Carcinoma, or Covid-19. Each scan also contains information on whether the subject is male or female. The dataset is divided into training, validation, and test subsets. Participants will receive the CT scans along with the male/female subject information. Competing teams are required to develop AI, machine learning, or deep learning models for fair disease diagnosis. Model performance will be evaluated on the test set and will be based on the average of the per-gender macro F1 scores.

Rules

Participating teams may use any publicly available pre-trained models, provided these models have not been specifically pre-trained to classify between healthy individuals and those diagnosed with Adenocarcinoma, Squamous Cell Carcinoma, or Covid-19. However, all model fine-tuning and methodological development must be conducted exclusively using the dataset provided for the competition.

Performance Assessment

The performance measure (P) is the average of the per-gender macro F1 scores. To calculate it, first split the test set by gender (Subset A: all male samples; Subset B: all female samples), then compute the macro F1 score on each subset. The Final Score is:

P = ( F1_macro(Subset A) + F1_macro(Subset B) ) / 2,

where Subset A contains all male test samples and Subset B all female test samples.
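A minimal plain-Python sketch of this fairness metric follows. The gender encoding ("M"/"F") and the four-class label encoding are illustrative assumptions; the official evaluation script may use a different representation.

```python
def macro_f1(y_true, y_pred, labels):
    # Unweighted mean of per-class F1 scores over the given label set.
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical label encoding: 0 = healthy, 1 = Adenocarcinoma,
# 2 = Squamous Cell Carcinoma, 3 = Covid-19.
LABELS = (0, 1, 2, 3)

def fair_score(y_true, y_pred, genders):
    # Subset A: all male samples; Subset B: all female samples.
    male = [(t, p) for t, p, g in zip(y_true, y_pred, genders) if g == "M"]
    female = [(t, p) for t, p, g in zip(y_true, y_pred, genders) if g == "F"]
    f1_a = macro_f1([t for t, _ in male], [p for _, p in male], LABELS)
    f1_b = macro_f1([t for t, _ in female], [p for _, p in female], LABELS)
    return (f1_a + f1_b) / 2
```

Because the two subsets are weighted equally regardless of their sizes, a model that performs well on one gender but poorly on the other is penalized, which is what makes this score a fairness measure rather than a plain accuracy measure.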


References


If you use the above data, you must cite all of the following papers, as well as the white paper that will be distributed at a later stage: