
New guidance for ensuring AI safety in clinical care published in JAMA by UTHealth Houston, Baylor College of Medicine researchers

Dean Sittig, PhD (Photo by UTHealth Houston)

As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-authored by Dean Sittig, PhD, professor with McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine.

The guidance was published today, Nov. 27, 2024, in the Journal of the American Medical Association.

“We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings,” Sittig said. “It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked.” 

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

“Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes,” Singh said. “All health care delivery organizations should check out these recommendations and start proactively preparing for AI now.”

Recommended actions for health care organizations include:

  •       Review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI’s safety and effectiveness.
  •       Establish dedicated committees with multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. Committee members should meet regularly to review requests for new AI applications, consider their safety and effectiveness before implementing them, and develop processes to monitor their performance.
  •       Formally train clinicians on AI usage and risks, and be transparent with patients when AI is part of their care decisions. This transparency is key to building trust and confidence in AI’s role in health care.
  •       Maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks.
  •       Develop procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes.

“Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients,” Sittig said. “By working together, we can build trust and promote the safe adoption of AI in health care.”

Also providing input to the article were Robert Murphy, MD, associate professor and associate dean, and Debora Simmons, PhD, RN, assistant professor, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics; and Trisha Flanagan, RN, MSN.

Laura Frnka-Davis
