Building confidence in AI for healthcare

Better clinical decision support, population health interventions, personal patient care and research – these are just a few of the promising use cases for artificial intelligence (AI).

In addition to its benefits, AI can introduce risks that could potentially undermine confidence in AI solutions and need to be addressed. These risks include, among others: the propagation of biases inherent in the source data; a lack of transparency in the underlying algorithms; AI performance in a “lab environment” that does not extend to real-world applications; loss of prediction accuracy over time (model drift) due to poorly understood and uncalibrated model parameters; and cybersecurity risks.

How, then, can the healthcare industry continue the momentum of AI and prevent the next AI winter (flattening the adoption curve) from taking hold? First, there are several organizational prerequisites that need to be in place before substantial investments in AI are made. They include a clear vision of the problems that AI will help solve; internal talent with both technical expertise in AI and an understanding of the health field; and a review process to assess the potential risks and ethical implications of each AI solution. Once these prerequisites are met, additional steps can be taken to ensure the long-term success and ROI of your AI project.

In this article, we will describe how to mitigate three important groups of risks.

Risk 1: Poor management of data and algorithms

Biased outputs from AI models can result from training on data that does not accurately represent the population the solution is designed to support. For example, if an AI solution predicts health outcomes for a general population but the data used to train the algorithm is limited to the elderly, there is a significant risk that the model’s predictions for other age groups will not be valid.

Likewise, selecting inappropriate target variables for prediction can introduce bias. For example, researchers found that a prediction algorithm widely used by health insurers to identify people who may need health interventions had significant bias. In this case, past health expenditure was used as an indicator of “health status” and to predict future needs. However, past use of resources is not an accurate predictor of health care needs for certain segments of the population. The algorithm did not accurately represent the health care needs of minority populations because, for a number of reasons (e.g., lack of insurance coverage), they receive fewer health services and accumulate lower health costs than other segments of the population. Past health care use was therefore not a valid “indicator” of the health status of this minority population segment. To minimize bias in source data, developers should sample large data sets to ensure that their training data accurately represents the population for which predictions are sought.
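As an illustration, here is a minimal sketch of the kind of representativeness check developers might run before training. The age-group labels, target population shares, and tolerance are all hypothetical, not drawn from any particular study:

```python
from collections import Counter

def representation_gaps(training_ages, population_shares, tolerance=0.05):
    """Flag groups whose share in the training data deviates from the
    target population's share by more than `tolerance` (absolute)."""
    total = len(training_ages)
    counts = Counter(training_ages)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical training set skewed toward the elderly,
# checked against a general-population target.
training = ["65+"] * 80 + ["40-64"] * 15 + ["18-39"] * 5
target = {"18-39": 0.35, "40-64": 0.35, "65+": 0.30}
print(representation_gaps(training, target))
# Every group is flagged: 65+ is heavily over-represented,
# the younger groups are under-represented.
```

The same idea extends to any demographic dimension (sex, geography, payer type) for which target-population shares are known.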

In addition, it is essential to keep careful records of the provenance of AI algorithms. These records detail the components, inputs, systems, and processes that affect the data collected. With this information, developers, implementers, and users of AI can clearly understand where relevant data comes from and how it was collected.
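One possible shape for such a provenance record, sketched in Python; the field names and example values are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Minimal provenance log for a training data set."""
    dataset_name: str
    source_system: str       # hypothetical, e.g. an EHR export or claims feed
    collection_method: str   # how the data was gathered
    collected_at: str
    transformations: list = field(default_factory=list)

    def add_step(self, description):
        """Append a processing step with a UTC timestamp."""
        self.transformations.append({
            "step": description,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Hypothetical usage
record = ProvenanceRecord(
    dataset_name="readmission_training_v2",
    source_system="hospital_ehr_export",
    collection_method="batch extract",
    collected_at="2021-06-01",
)
record.add_step("removed records with missing discharge dates")
print(record.transformations[0]["step"])
```

Appending every transformation as it happens, rather than reconstructing the history later, is what makes the record useful to downstream implementers and users.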

Finally, you will need a plan to tie these elements together into a single operating model, supported by compliance and monitoring protocols that include documenting model predictions, incoming data, security requirements, problems and bugs.

One way to set up and automate a repeatable, scalable, and integrated AI system in your organization is to use “AIOps”. Like DevOps for software development, AIOps comprises the processes, strategies, and frameworks for operationalizing AI in response to real-world challenges. AIOps brings the development, data, algorithm, and responsible-AI teams together into an integrated, automated, and documented modular approach to AI development and sustainment, yielding sustainable, high-impact results.

Risk 2: Lack of cybersecurity

Malicious actors regularly and aggressively target healthcare systems. In May and June 2021 alone, we saw four major healthcare ransomware outages. In one incident, the threat actors behind the attack stole a trove of data on more than 150,000 patients.

AI systems offer many benefits, but they are also susceptible to cyber attacks and need to be hardened. All components of the technical delivery stack, the associated data sets, and the enabling infrastructure can be the target of adversarial attacks. Improperly managed access credentials for development and production environments can create additional unintended vulnerabilities. AI also carries an inherent risk of sensitive data leakage, due to the aggregation of data and its widespread use within an organization.

Trust is essential to AI adoption, and nothing puts trust at risk like the prospect of security attacks or data breaches. As the healthcare community as a whole continues to be a prime target for cyber attacks, it is critical to put in place the basic controls and measures necessary to mitigate risk: a properly trained workforce, coupled with governance and data management processes that enable secure access to data and an understanding of how and where the data will be used.

Additionally, AI offers potential benefits for cybersecurity itself: it can be part of the cybersecurity solution, used to analyze and detect potential cybersecurity risks and thereby strengthen the protection of AI use in healthcare.

Risk 3: Lack of continuous monitoring and maintenance

AI tools can “shift” or “drift” over time as parameters and data change. AI models operating in the field are subject to many more variables than when developed in a “lab” setting, and they must be updated to adapt to a changing environment in order to maintain their accuracy and reliability. Large, unanticipated shifts in healthcare utilization, such as the major disruption of usual care patterns caused by the COVID-19 pandemic, are a good example of a global disruption that can affect a model’s results if not taken into account. This example illustrates the importance of transparency in model parameters, which allows corrections to be made in response to major or unforeseen changes.

Organizations can mitigate this risk through systematic monitoring of their models. Regular checks, supplemented by feedback from users who understand how the models operate, can detect significant model changes, feedback loops, or anything beyond established accuracy parameters. Models and algorithms can then be retrained over time as conditions change. It is essential to communicate regularly with the appropriate stakeholders about where the data comes from, how it feeds into the model, and how this relates to the decisions made from the model’s outputs.
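A minimal sketch of such systematic monitoring, assuming a simple rolling-accuracy check against a fixed alert threshold; the window size, threshold, and example data are all illustrative:

```python
from collections import deque

class DriftMonitor:
    """Track the rolling accuracy of a deployed model and flag
    possible drift when it falls below an alert threshold."""
    def __init__(self, window=100, threshold=0.85):
        self.outcomes = deque(maxlen=window)  # True if prediction matched
        self.threshold = threshold

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes)

    def drift_detected(self):
        # Only alert once a full window of recent outcomes is available.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy() < self.threshold)

# Hypothetical usage: the model keeps predicting 1, but actual
# outcomes start diverging, dropping rolling accuracy to 0.7.
monitor = DriftMonitor(window=10, threshold=0.8)
for pred, actual in zip([1] * 10, [1] * 7 + [0] * 3):
    monitor.record(pred, actual)
print(monitor.drift_detected())  # True
```

In practice an alert like this would trigger the human review, recalibration, or retraining steps described above rather than any automatic model change.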

Shaping the future of AI

Now is the time for healthcare organizations to proactively shape the future of AI. One way is to deliberately address AI risks by creating well-calibrated organizational and project controls throughout the AI development and implementation cycle. In doing so, the healthcare industry can maintain user confidence in AI and realize its transformative potential.

About John Tuttle
