Transparency of artificial intelligence/machine learning-enabled medical devices


Workshop participants agreed on the need for transparency of AI/ML devices to more clearly communicate how a device works for an intended population, the presence and management of potential bias, and the role of the device in the clinical workflow. Multiple stakeholders expressed a need to establish a common framework and language around AI/ML devices in healthcare that can be leveraged to educate users, helping to empower them to make informed healthcare decisions.

Researchers at the workshop emphasized the importance of considering each stakeholder's unique needs and tailoring information sharing to those needs, suggesting an opportunity for a human-centered design approach to transparency. This approach draws on the principles and practices of human factors engineering16 and human-centered design17, taking a holistic view of users, their environments, and their workflows — understanding, addressing, and involving them — to address the complex considerations associated with a given use case (such as the use of AI/ML devices).

The subsequent sections expand on some workshop discussions, while Table 1 summarizes additional feedback stakeholders shared unique to their experiences. Taken together, the varied feedback provided by stakeholders reveals the opportunity for a human-centered approach to the transparency of AI/ML devices.

Table 1 Important transparency considerations expressed at the workshop.

Patients

For some patients and caregivers, a gap exists between their awareness of or familiarity with AI/ML, if any, and their knowledge of how a specific AI/ML device could impact their health and healthcare, leading them to feel uncomfortable making decisions and to defer key shared decision making to their healthcare provider. Patients shared concerns applicable both when they use an AI/ML device themselves and when their healthcare provider uses one during their care. One key concern that emerged from patients at the workshop was whether AI/ML devices would inform or replace provider decisions, and how that could impact their care. Opportunities exist to empower patients by sharing educational resources, including questions to ask their healthcare provider (questions that could provide insight into a healthcare provider's experience with and knowledge of an AI/ML device, questions about how a device performs for patients like themselves, questions about where they can find further information about a device, etc.). Building on the need for additional information, patients at the workshop also expressed concern that limits in a user's technical literacy could further impact the quality of care they receive or the safety or effectiveness of the device they are using. Other transparency considerations identified as important to patients included data security and ownership, the cost of the device compared to the current standard of care, insurance coverage of the device, and the need for high-speed internet access or other technical infrastructure requirements.

Healthcare providers

As with patients, some healthcare providers may be unfamiliar with how to use AI/ML devices and effectively incorporate them into their clinical environment. At the workshop, providers expressed a desire to trust these devices at face value, without needing an in-depth review to determine whether they will work for patients from demographic groups and backgrounds that may not be captured in the data used to train or clinically validate the device. Providers said they may feel uncomfortable working with such devices because they currently find that information on AI/ML device training, testing, and real-world performance can be difficult to understand or unavailable. Healthcare providers identified an opportunity for greater transparency in how this information is delivered: not only the data made available and the media through which it is communicated, but also who shares it (device manufacturers, government agencies, professional societies, etc.). They expressed a need for transparency when communicating changes to the device and its performance, and noted the importance of having a reliable mechanism to report device malfunction and performance drift to manufacturers.

Payors

During the workshop, payor participants discussed that while the general trustworthiness of an AI/ML device can be demonstrated by its clinical use and validation, how the device specifically performs may vary by patient population, site, or environment of use. This is particularly important to consider when an algorithm learns continuously rather than being "locked." Given this potential for the AI/ML device to evolve, payors expressed concern about coverage of "unlocked," or continuously learning, algorithms. Payor participants emphasized the importance of employing diversified datasets and discussed the possibility of monitoring the real-world performance of devices to ensure that they are performing as intended and improving patient outcomes.

Industry

Industry members at the workshop shared their thoughts on a risk-based approach to transparency that would maintain the least burdensome regulatory framework for AI/ML devices while mitigating the potential proprietary risk that may arise from sharing information in an effort to be transparent. They expressed that their existing relationships with stakeholders are sufficient to communicate information about AI/ML devices to users through current device manuals, user training, and feedback processes. Workshop participants noted that the FDA is a trusted source of information for patients on manufacturers' AI/ML devices and recommended that manufacturers work with the FDA on transparent communications regarding these devices.
