AI in healthcare is kind of like an overenthusiastic intern—it’s full of potential, but someone probably should be watching it a little closer. In this episode, we dive into why artificial intelligence might be more “oops” than “awesome” when it comes to patient safety. A recent ECRI report flagged AI as a top safety concern and offered up smart recommendations like stronger governance and better training. From glitchy decision-making to eyebrow-raising cybersecurity breaches, we’re unpacking why AI still needs some serious adult supervision in the healthcare world.
In this episode:
AI Has A Patient Safety Problem – Ep 503
Today’s Episode is brought to you by:
Kardon
and
HIPAA for MSPs with Security First IT
Subscribe on Apple Podcasts. Share us on social media. Rate us wherever you find the opportunity.
Great idea! Share Help Me With HIPAA with one person this week!
Learn about offerings from the Kardon Club
and HIPAA for MSPs!
Thanks to our donors. We appreciate your support!
If you would like to donate to the cause you can do that at HelpMeWithHIPAA.com
Like us and leave a review on our Facebook page: www.Facebook.com/HelpMeWithHIPAA
When you see a couple of numbers on the left side of the text below, click them to go directly to that part of the audio. Get the best of both worlds from the show notes to the audio and back!
5th OCR SRA Resolution
[03:44] Resolution Agreement with Health Fitness Corporation
Health Fitness is a BA in Illinois. They provide wellness plans for CEs all over the country. Between October 2018 and January 2019, OCR received 4 breach reports from them on behalf of their CEs. That got the attention of OCR, as we would expect. It seems that someone changed the configuration of a web-facing server in August 2015 to allow search engines to index PHI. That continued until the end of June 2018, when someone discovered it. Since we are talking about it, we know they had not done an SRA, right? This is the 5th one.
They agreed to pay $227,816 and go on a 2-year corrective action plan.
“Conducting an accurate and thorough risk analysis is not only required but is also the first step to prevent or mitigate breaches of electronic protected health information,” said OCR Acting Director Anthony Archeval. “Effective cybersecurity includes knowing who has access to electronic health information and ensuring that it is secure.”
AI Has A Patient Safety Problem
[09:48] Our mission at HSCC is stated succinctly with our tagline: Cyber safety is patient safety. That is why this report caught my attention. It comes from a nonprofit named ECRI and makes some interesting points about the impact technology has on patient safety. Two of the top 10 concerns involve technology, not clinical care. The report is titled Ensuring Safe AI Use in Healthcare: A Governance Imperative.
Who is ECRI?
ECRI is a nonprofit organization focused on research-based improvements to patient safety, with the goal of preventing medical accidents wherever they can be avoided. This list is created by their interdisciplinary staff, with experts from across the spectrum of healthcare: from doctors and nurses to risk management and health IT. Those experts nominate concerns for consideration by a team that then reviews them and selects the top 10 based on the following criteria:
- Severity. How serious would the harm be to patients if this safety issue were to occur?
- Frequency. How likely is it for the safety issue to occur?
- Breadth. If the safety issue were to occur, how many patients would be affected?
- Insidiousness. Is the problem difficult to recognize or challenging to rectify once it occurs?
- Profile. Would the safety issue place a lot of pressure on the organization?
Items 2 and 4 in the final 2025 list of the top 10 specifically hit home for us:
- Insufficient Governance of Artificial Intelligence in Healthcare
- Medical Errors and Delays in Care Resulting from Cybersecurity Breaches
The report is really detailed and provides their resources for decision making and recommendations to address these issues.
With all of this in mind, the advancements are great but the concerns are equally broad. We have to point out that AI can save your life—but only if it doesn’t kill you first.
AI Governance – Something we are already worried about here
While they recognize the value AI can bring and the fact that it is rapidly growing within healthcare, they also point out that “common issues with AI technology—such as bias, transparency, and privacy and security concerns—can have unique and dangerous consequences in healthcare.”
Why it matters:
- Bad training data = bad decisions
- AI-related errors can be difficult to trace
- Bias baked into algorithms can worsen disparities
They point out that a 2023 report showed only 16% of hospital executives surveyed said they had an organization-wide AI governance policy for usage and data access. Also, a 2024 survey of over 2,000 RNs found that 60% DISAGREED with the statement:
“I trust my employer will implement A.I. with patient safety as the first priority”
Patients surveyed are also not completely on board with the whole idea of AI making decisions.
[21:51] Recommendations:
- Culture, Leadership, and Governance – Establish strong governance, leadership, and organizational culture to ensure the safe and effective use of AI in healthcare. It emphasizes the need for clear policies, multidisciplinary oversight, staff training, and continuous monitoring of AI-related risks, legal compliance, and impacts on patient safety and health equity.
- Patient and Family Engagement – Maintain transparency and collaboration with patients and families when using AI in healthcare. It includes obtaining informed consent, gathering patient feedback on AI tools, and involving advisory councils to help shape clear, patient-centered communication about AI use.
- Workforce Safety and Wellness – Prioritize workforce safety and wellness when implementing AI technologies. It calls for assessing how AI impacts clinical workflows and staff experiences, and emphasizes the importance of listening to and addressing staff concerns about AI use.
- Learning System – Build a learning system to support safe AI use in healthcare. It stresses the importance of adapting incident reporting systems to capture AI-related issues, reinforcing clinical judgment over AI output, and training staff to recognize and report AI-related errors or biases.
Cybersecurity breaches – This problem is real and being studied at UCSD among other places.
[36:38] We’ve discussed this many times. There are certain things we know a cyberattack causes when hospitals are hit. From their list and our list, we already know one breach can:
- Increase mortality for patients in the hospital when it is hit
- Delay test results
- Interrupt access to medication
- Cause longer hospital stays or worse outcomes
- Overload nearby hospitals (and medical practices) not hit by the attack
- Delay treatment for patients getting regular therapy for anything from cancer to allergies
- Overload staff, creating serious burnout
When patient records are lost in an attack, it can impact patient safety through the potential loss of important findings and access to care history. You no longer know if the records are accurate.
As we always point out, if we want the shiny tech benefits, we have to manage the shadow side too. We spend every day trying to get people to understand that.
Recommendations:
- Culture, Leadership, and Governance – Integrate cybersecurity into organizational culture, leadership, and governance. It encourages the use of frameworks like NIST’s, clearly defined roles, regular compliance monitoring, and an enterprise risk management approach—along with including cyberattack response in emergency preparedness plans.
- Patient and Family Engagement – Educate patients and families about cybersecurity, particularly when they use digital tools like telehealth or patient portals. It encourages sharing practical tips and resources to help them stay informed and secure.
- Workforce Safety and Wellness – Protect and support the workforce in the context of cybersecurity. It stresses the need for effective staff training, preparation for maintaining patient care during system outages, and supporting staff well-being through rest, mental health resources, and stress management during cyber incidents.
- Learning System – Promote a culture of continuous learning and preparedness in cybersecurity by regularly assessing risks, monitoring data access, evaluating security measures, and conducting incident response drills. These proactive efforts help ensure the organization is equipped to detect, respond to, and recover from cyber threats effectively.
If this episode taught us anything, it’s that AI in healthcare is equal parts promising and panic-inducing. The ECRI report didn’t just highlight the problems—it served up a to-do list for healthcare organizations everywhere, urging them to build governance structures, engage patients, and monitor AI like a hawk with a clipboard. Because whether it’s quietly reinforcing biases or loudly crashing mid-diagnosis, the risks are real—and rising. Until the bots learn bedside manner and HIPAA compliance, let’s all agree to proceed with caution…and maybe a little side-eye.
Remember to follow us and share us on your favorite social media site. Rate us on your podcasting apps, we need your help to keep spreading the word. As always, send in your questions and ideas!
HIPAA is not about compliance,
it’s about patient care.™
Special thanks to our sponsors Security First IT and Kardon.