If you thought AI was just about asking ChatGPT for dinner ideas, think again. This episode unpacks the next-level madness of agentic AI—those industrious bots that not only check your emails but might just decide how your healthcare practice runs. We’re talking phishing attacks on steroids, decision-making algorithms with questionable judgment, and the jaw-dropping ways AI is working for—and against—us in cybersecurity. It’s part fascinating, part terrifying, and 100% worth listening to.
In this episode:
Battle of the Bots – AI on Offense and Defense – Ep 520
Today’s Episode is brought to you by:
Kardon
and
HIPAA for MSPs with Security First IT
Subscribe on Apple Podcasts. Share us on Social Media. Rate us wherever you find the opportunity.
Great idea! Share Help Me With HIPAA with one person this week!
Learn about offerings from the Kardon Club
and HIPAA for MSPs!
Thanks to our donors. We appreciate your support!
If you would like to donate to the cause you can do that at HelpMeWithHIPAA.com
Like us and leave a review on our Facebook page: www.Facebook.com/HelpMeWithHIPAA
When you see a timestamp on the left side of the text below, click it to go directly to that part of the audio. Get the best of both worlds, from the show notes to the audio and back!
Reminder: EOL coming for Windows 10
[05:27]Google finds custom backdoor being installed on SonicWall network devices – Ars Technica
Note to self: this kind of legacy tech = you are a sitting duck.
If you’re clinging to old gear because “it still works,” you’re basically holding a digital grenade with the pin halfway out; the boom can happen at any moment. Please do not keep EOL devices as your front door. You need to understand that every lock on that door is broken and hanging open, and anyone driving by can see they can just walk in.
This is why being able to get urgent patches matters.
[09:36]Microsoft Releases Urgent Patch for SharePoint RCE Flaw Exploited in Ongoing Cyber Attacks
A vivid example of why patching now matters — because “we’ll get to it next week” is attacker-speak for “thank you for your cooperation.”
Cybersecurity vendors are leaning on AI to find these vulnerabilities faster (Google’s bot caught one in code used by billions).
Humans can’t keep up alone — not when attackers are moving at machine speed.
Battle of the Bots: AI on Offense and Defense
[12:01]ChatGPT Agent shows that there’s a whole new world of AI security threats on the way we need to worry about | TechRadar
ChatGPT agent | OpenAI Help Center
Since ChatGPT launched? Phishing attacks up 40x, deepfakes up 20x.
AI makes attacks scalable, not necessarily smarter. It’s a quantity game: a whole new level of spray and pray is happening now.
A quote from the NYT article captured what it’s like trying to defend yourself as a lone human against the machines today: you are not in a good position, because they equated it to being outnumbered 1,000 to 1 thanks to how rapidly AI lets attackers scale.
We have to remember that AI is not a fix-all; it is an accelerant that makes everything happen faster and at greater scale than ever before.
Agentic AI – Meet Your New Unsupervised Intern
[21:25]
All of that being said, the world is heading into this new era at breakneck speed. These AI tools will be making decisions on their own with the limited knowledge sets we give them. They don’t know everything someone with decades of experience knows, but they certainly know enough to perform amazing things for us. As with everything, though, there is always a downside, what we call risk. How much are you willing to risk to get the value?
Here are some interesting examples of how these agentic AI tools can help, along with the risks they introduce at the same time.
Automating Patient Communication
Uses:
- Appointment reminders
- Pre-visit instructions
- Answering FAQs via website chat
Risks to Consider:
- PHI Exposure: If the agent isn’t configured properly, PHI might be shared without safeguards.
- Insecure Integrations: Connection to scheduling systems must meet HIPAA requirements.
- No Human Oversight: AI might give incorrect or outdated medical advice if not tightly controlled.
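One way to reduce the PHI exposure risk above is a guardrail that scrubs an agent’s outgoing reply before it leaves your systems. This is a minimal, hypothetical sketch: the patterns and placeholders are made up for illustration and are nowhere near a complete PHI detector.

```python
import re

# Illustrative guardrail: scrub a few common PHI-looking patterns from an
# agent's outgoing chat reply before it is sent to a patient.
# These three patterns are examples only, not a complete PHI detector.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN REMOVED]"),           # SSN-shaped
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE REMOVED]"),   # phone-shaped
    (re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I), "[MRN REMOVED]"),   # record number
]

def scrub_phi(reply: str) -> str:
    """Return the reply with obvious PHI-shaped patterns replaced."""
    for pattern, placeholder in PHI_PATTERNS:
        reply = pattern.sub(placeholder, reply)
    return reply

print(scrub_phi("Your MRN: 1234567 visit is confirmed. Call 555-123-4567."))
```

A real deployment would pair this with human review and vendor-side safeguards; regex scrubbing alone is a seatbelt, not a substitute for configuring the agent correctly.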
Managing Routine Administrative Tasks
Uses:
- Automated follow-ups on paperwork (e.g., missing intake forms)
- Internal reminders for staff credential renewals
- Collecting patient feedback through surveys
Risks to Consider:
- Data Misrouting: Sending reminders to wrong recipients due to poor integration.
- Unauthorized Access: Improper role-based access controls on systems where agents operate.
- Assumed Compliance: Assuming the agent’s vendor is HIPAA-compliant without verifying BAAs.
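The unauthorized-access risk above comes down to deny-by-default permissions for each agent. A tiny sketch of that idea, with role names and permission strings that are entirely hypothetical:

```python
# Hypothetical role-based access map for automation agents.
# Agent names and permission strings are made up for illustration.
AGENT_ROLES = {
    "intake-reminder-bot": {"read:intake_forms", "send:patient_email"},
    "survey-bot": {"send:patient_email"},
}

def agent_may(agent: str, permission: str) -> bool:
    """True only if the agent's role explicitly grants the permission."""
    return permission in AGENT_ROLES.get(agent, set())

print(agent_may("intake-reminder-bot", "read:intake_forms"))  # True
print(agent_may("survey-bot", "read:billing_records"))        # False: deny by default
```

The design choice that matters is the default: an agent not listed, or a permission not granted, gets nothing. Agents should never inherit the broad access of the staff account that set them up.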
Data Insights & Reporting
Uses:
- Analyzing appointment no-show rates
- Identifying trends in billing denials
- Summarizing audit logs for security reviews
Risks to Highlight:
- Access Control: Agents pulling data from EHR or billing systems need proper safeguards.
- Aggregation Risks: Combining datasets might inadvertently create new PHI exposure risks.
- Accuracy Assumptions: AI analysis can look authoritative but may misinterpret data.
Document Drafting & Compliance Support
Uses:
- Drafting privacy notices, consent forms, or office policies
- Preparing reports for HIPAA security risk analysis (SRA)
- Providing recommendations based on 405(d) HICP
Risks to Highlight:
- Over-Reliance on AI: Treating drafts as legally vetted documents without proper review.
- Generic Templates: AI-generated policies might not reflect specific organizational realities or risks.
- Data Handling: Inputting sensitive internal documents into systems without understanding data retention policies.
Internal Knowledgebase & Training
Uses:
- Answering staff questions on procedures (“How do we escalate a privacy incident?”)
- Delivering just-in-time HIPAA or security reminders
- Tracking training completion
Risks to Highlight:
- Outdated Content: AI giving old guidance if not regularly updated.
- Shadow IT: Staff using consumer-grade AI tools without security controls.
- Access Leaks: Making internal knowledge accessible to unauthorized users.
Assisting with Security Monitoring
Uses:
- Summarizing security logs
- Notifying about anomalous activity trends
- Assisting with incident response documentation
Risks to Highlight:
- False Positives/Negatives: AI missing subtle indicators of compromise or over-alerting.
- Tool Overlap: Confusion between AI outputs and formal security platforms.
- Confidentiality: Potential exposure of sensitive infrastructure details via AI.
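To make the false-positive/false-negative risk concrete, here is a deliberately simple sketch of the kind of summarization an AI assistant might do over login logs: count failed logins per account and flag anything over a threshold. The log format and threshold are assumptions for illustration; real indicators of compromise are far subtler than this.

```python
from collections import Counter

def flag_anomalies(events, threshold=5):
    """Return accounts whose failed-login count meets the threshold.

    `events` is assumed to be a list of dicts with "user" and "action"
    keys; both the format and the threshold are illustrative choices.
    """
    failures = Counter(
        e["user"] for e in events if e.get("action") == "login_failed"
    )
    return {user: n for user, n in failures.items() if n >= threshold}

events = [{"user": "drsmith", "action": "login_failed"}] * 6 + [
    {"user": "frontdesk", "action": "login_failed"},
    {"user": "frontdesk", "action": "login_ok"},
]
print(flag_anomalies(events))  # → {'drsmith': 6}
```

Note what this misses: an attacker who spreads four failures each across many accounts never trips the threshold. That gap is exactly why AI summaries should feed a human or a formal security platform, not replace them.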
What do we need to learn from all of this?
[40:31]- AI’s not the hero or the villain — it’s the multiplier.
- If your systems are unpatched or unsupported, AI just makes it easier for the bad guys to find the holes faster than ever.
- Cybersecurity today isn’t about perfect protection — it’s about speed, awareness, and making sure your tech isn’t doing your enemies a favor.
These tools are great assistants — but like any assistant, they need supervision. AI can help you move faster, but if you don’t clearly define the rules, understand the risks, and monitor outputs, you could inadvertently create more work cleaning up privacy and security messes later.
In conclusion, AI isn’t your enemy—or your savior. It’s just your really eager new coworker who works 24/7, never takes lunch breaks, and might accidentally leak patient data if you’re not paying attention. As we rocket into a future full of digital decision-makers, remember: just because your AI can talk like a human doesn’t mean it should be left alone to run the show.
Remember to follow us and share us on your favorite social media site. Rate us on your podcasting apps, we need your help to keep spreading the word. As always, send in your questions and ideas!
HIPAA is not about compliance,
it’s about patient care.™
Special thanks to our sponsors Security First IT and Kardon.


