UK to Pilot Algorithmic Impact Assessment (AIA) for Medical AI
Published on 21/02/2022
Last week, the UK Government announced the pilot of an impact assessment tool to support the ethical development and adoption of artificial intelligence (AI) within healthcare. The Ada Lovelace Institute developed the tool, called an Algorithmic Impact Assessment (AIA), for the NHS AI Lab. This is the latest national-level move to address bias in AI and medical devices. In 2021 the UK Government launched an independent review of the health impact of potential bias in medical devices, including AI and other software for medical purposes. The results are expected in early 2022.
AI promises to revolutionise healthcare, but AI systems must be developed to the highest standards to ensure effectiveness and safety for doctors and patients. It is widely acknowledged that some AI models and medical devices may exhibit bias. Bias, as defined by the ISO/IEC, is “favouritism towards some things, people or groups over others.” In practice, this means a medical device could work less effectively for certain patient groups and, more worryingly, produce incorrect results for them. This includes biases relating to gender and race. Indeed, a research paper found that potential bias was among the reasons why 2,212 machine learning models[1] for detecting and diagnosing Covid-19 were not fit for further development or clinical practice.
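To make this concrete, one common check for this kind of bias is to compare a model's performance across patient subgroups. Below is a minimal, illustrative sketch in Python using entirely synthetic data and a hypothetical subgroup attribute; it is not taken from the AIA or from any tool mentioned in this article.

```python
# A minimal sketch (synthetic data, hypothetical groups) of surfacing one
# kind of algorithmic bias: comparing a model's sensitivity (recall)
# across patient subgroups.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Hypothetical test-set results: true labels, model predictions, and a
# demographic attribute for each patient (e.g. self-reported sex).
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)  # ~85% agreement
group = rng.choice(["A", "B"], size=1000)

# Per-group sensitivity: a large gap between groups is a red flag that
# the model may work less well for some patients.
for g in np.unique(group):
    mask = group == g
    sens = recall_score(y_true[mask], y_pred[mask])
    print(f"group {g}: sensitivity = {sens:.2f}")
```

A large gap in sensitivity between groups would prompt exactly the kind of scrutiny - of data provenance, representativeness and intended use - that the AIA is designed to encourage.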
The AIA is publicly available here. The Ada Lovelace Institute designed the tool to help AI researchers and developers assess potential algorithmic biases. Within the pilot scheme, teams who wish to access the high-quality NHS imaging data in the National Covid-19 Chest Imaging Database (NCCID) and the National Medical Imaging Platform (NMIP) will need to conduct an AIA. The assessment provides a structured way to gather input from people who may be affected by the future AI, so that ethical issues are considered and best- and worst-case scenarios are developed to explore potential harms.
gliff.ai, an innovative company offering software for developing trustworthy AI in healthcare, sees the AIA as a step in the right direction.
“We absolutely agree that taking steps to assess and manage bias in AI models is of critical importance,” says Lucille Valentine, gliff.ai’s Head of Regulation and Compliance. “AI developers must be able to explain where their data came from and show that it is representative of the patient population. They must also carefully consider how the AI will be used, and what they’ve done to mitigate potential harm from algorithmic bias. We need to be able to trust AI in healthcare applications - and to do that, we must ensure that developers can be confident in their data and algorithms.”
The CEO of gliff.ai, Bill Shepherd, wishes to engage further with stakeholders such as the NHS AI Lab and the Ada Lovelace Institute to help move ethical AI from theory into practice. “At gliff.ai, we firmly believe that AI should be fair, accountable, transparent and ethical. That’s why we’re building software tools to help others develop trustworthy AI applications for healthcare. Not a week goes by without our team discussing how to identify and limit bias in AI development. As an industry team with expertise in both medicine and data science, we can offer helpful perspectives on matters such as tackling algorithmic bias.”
[1] Machine learning is a sub-field of AI in which algorithms learn patterns from data, so that the resulting models can, for example, mimic the decisions of human experts.