Artificial intelligence (AI) can provide the basis for tools that improve global health care, bringing us closer to the realization of the third sustainable development goal: good health and well-being for all.
However, before AI-based tools are integrated into medical practice and applied to patients, it must be demonstrated that they serve their intended purpose without unintended effects. Most of the AI-based tools currently used in medical practice were developed and regulated for a limited (e.g., national) audience; consequently, the adoption of AI-based tools is fragmented across the globe. As health is an issue that transcends borders, ITU/WHO FG-AI4H encourages a collective effort among stakeholders (including developers, regulators, healthcare practitioners, and public health institutes) from across the globe to ensure the safety and trustworthiness of AI-based tools and to permit their widespread implementation.
This Open Code Project aims to produce the digital building blocks (six software packages) that compose the FG-AI4H Assessment Platform. The assessment platform, which can be distinguished from AI “challenge” platforms through its consideration of regulatory guidelines and the needs of other AI for health stakeholders, supports the end-to-end assessment of AI for health algorithms.
The life cycle of AI for health has several steps. First, annotated health data are compiled. Then, an AI for health model is developed and carefully evaluated. At each step, medical, technological, and regulatory considerations play a critical role. Within this life cycle, we have identified two opportunities to advance the field of AI for health:
- The Arbiter Problem:
a. Challenge: Companies do not want to share their data and solutions; both remain opaque to regulators.
b. Opportunity: The software platform can serve as a safe and neutral arbiter between parties.
- Health AIs at Scale:
a. Challenge: Regulatory compliance of AI for health is a country-dependent process, which brings considerable costs.
b. Opportunity: Map country requirements to automated tests.
The Open Code Project capitalizes on these opportunities to make AI for health usable at scale.
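To make the second opportunity concrete, country-specific regulatory requirements could be expressed as automated checks run against a model's evaluation report. The sketch below is purely illustrative: the country codes, metric names, and thresholds are assumptions for demonstration, not part of any FG-AI4H specification.

```python
from typing import Callable, Dict, List

# An evaluation report is a flat mapping of metric names to values.
Report = Dict[str, float]
Check = Callable[[Report], bool]

# Hypothetical mapping from a country to the automated checks its
# regulator requires; thresholds are illustrative placeholders.
COUNTRY_REQUIREMENTS: Dict[str, List[Check]] = {
    "DE": [
        lambda r: r["sensitivity"] >= 0.95,
        lambda r: r["specificity"] >= 0.90,
    ],
    "KE": [
        lambda r: r["sensitivity"] >= 0.90,
    ],
}

def assess(report: Report, country: str) -> bool:
    """Run every automated check the given country requires."""
    return all(check(report) for check in COUNTRY_REQUIREMENTS[country])
```

With this structure, a single evaluation run can be assessed against many jurisdictions at once: `assess({"sensitivity": 0.93, "specificity": 0.91}, "KE")` passes, while the same report fails the stricter hypothetical "DE" profile.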
Our product vision is the following:
For health AI companies and regulators, who need to prove that a health AI product is fit for purpose, the AI4H Assessment Platform is a software platform that supports the end-to-end process of assessing health AI algorithms on a global level. Unlike EvalAI and other existing AI assessment platforms, our platform focuses specifically on healthcare and covers all additional aspects, including ground-truth annotation, data and metadata management, and reporting for health AI regulators.
The Open Code Project produces the software that forms the foundation of the FG-AI4H Assessment Platform and addresses the aforementioned (and other) challenges in the field of AI for health. It comprises six packages: a Data Acquisition Package (DAP), a Data Storage Package (DP), an Annotation Package (AP), a Prediction Package (PP), an Evaluation Package (EP), and a Reporting Package (RP). The following table highlights the purpose, functionalities, and target groups of each package.
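The six packages mirror the AI-for-health life cycle described above, so an assessment can be pictured as a pipeline through them. The sketch below is a minimal, hypothetical illustration of that flow; the function names echo the package names, but their interfaces and the placeholder data are assumptions, not the actual package APIs.

```python
def data_acquisition():            # DAP: compile health data
    return [{"id": 1, "image": "scan_001"}]

def data_storage(records):         # DP: persist the data, indexed by id
    return {r["id"]: r for r in records}

def annotation(store):             # AP: attach ground-truth labels
    for record in store.values():
        record["label"] = "benign"  # placeholder annotation
    return store

def prediction(store, model):      # PP: run the AI model under assessment
    return {rid: model(rec) for rid, rec in store.items()}

def evaluation(store, preds):      # EP: compare predictions to ground truth
    correct = sum(preds[rid] == rec["label"] for rid, rec in store.items())
    return correct / len(store)

def reporting(accuracy):           # RP: summarize results for regulators
    return f"accuracy={accuracy:.2f}"

# End-to-end assessment run with a trivial stand-in model.
store = annotation(data_storage(data_acquisition()))
report = reporting(evaluation(store, prediction(store, lambda rec: "benign")))
```

The point of the sketch is the separation of concerns: each package owns one stage, and only the evaluation and reporting stages ever see both the predictions and the ground truth, which is what lets the platform act as the neutral arbiter described earlier.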