
ChatGPT Explains Itself – Ep 392 

 February 3, 2023

By Donna Grindle

AI is the latest nerd language spreading wildly across… well, everywhere. ChatGPT, an AI-powered chatbot, is designed to provide automated conversational responses to users in a friendly and natural way. Today, we discuss and show you how ChatGPT explains itself and how it could be used in healthcare.


Today’s Episode is brought to you by:

Kardon

and

HIPAA for MSPs with Security First IT

Subscribe on Apple Podcasts. Share us on social media. Rate us wherever you find the opportunity.

The Privacy and Security Boot Camp

3.5-Day In-Person Event

Mar 12, 13, 14 and 15, 2023

PriSecBootCamp.com

Great idea! Share Help Me With HIPAA with one person this week!

Learn about offerings from the Kardon Club

and HIPAA for MSPs!

Thanks to our donors. We appreciate your support!

If you would like to donate to the cause you can do that at HelpMeWithHIPAA.com

Like us and leave a review on our Facebook page: www.Facebook.com/HelpMeWithHIPAA

If you see a timestamp on the left side of a section, you can click it to go directly to that part of the audio. Get the best of both worlds, from the show notes to the audio and back!



Listener Comments

[05:59] Ken (Still not related to David)

I listened to your program on LastPass. It’s not owned by LogMeIn now; they are independently owned by private equity. Update info: https://www.pcmag.com/news/lastpass-becomes-independent-firm-but-its-still-owned-by-private-equity

For more information, you might want to refer listeners to the Security Now podcast on TWiT.tv: https://twit.tv/sn

Sorry if we created some confusion there. We are aware that they were spun back off as a separate entity, but the announcement involved both LastPass and GoTo in the breach. The article LastPass, GoTo announce security incident explains that the breach occurred in both the development and cloud storage environments, noting that “the third-party cloud storage service in question is shared by GoTo” and LastPass. The vault info was in the cloud storage that is shared with GoTo.

Thanks Ken – we did not make that clear.

[11:47] Another message is from Misty:

I work as the HIPAA Privacy Officer at a university, and there are so many topics that get left out of most HIPAA training that I would love to hear you cover: student medical records and FERPA, and especially student athletes and how HIPAA and FERPA affect communication between the athletic trainers and providers. Thank you!!

Well now, Misty, you don’t ask for much. LOL! You are correct: the finer details behind HIPAA are rarely covered in general training. That is why we have our Boot Camps and Kardon Club available for those who, like you, have to deal with those details in their jobs. We will see what we can do about an episode or segment on the tricky connection, overlap, or whatever you call it, between FERPA and HIPAA.

Notes In The News

[15:23] Ransomware gang cloned victim’s website to leak stolen data – A whole new way to make ransomed-data attacks worse got our attention. The gang cloned the company’s website so it looked like the company was leaking its own data. If this tactic proves successful, add one more item to the list of reasons your company website must be secured. They don’t have to clone your site; they can steal it, too.

[18:29] Shout out to the best breach title I have seen in a while:

Colonoscopy Prep Retail Website Breach Festered for Years, another good one from Marianne McGee:

As if colonoscopies weren’t invasive enough, nearly a quarter-million patients who underwent an intestinal probe since 2019 now must grapple with a data breach tied to a hacking incident at a third-party vendor to gastroenterologists.

Captify Health’s online retail service, Your Patient Advisor, suffered a “malicious code” incident that persisted for nearly three years, from May 26, 2019, to April 20, 2022.

ChatGPT Explains Itself

[20:04] What is the latest nerd language spreading wildly everywhere you look? OpenAI’s ChatGPT.

Why does it matter?

AI Chatbots Now Let You Talk to Historical Figures Like Shakespeare and Andy Warhol

If You’re Not Already Doing These 10 Productivity Hacks in ChatGPT, You’re Definitely Missing Out | Inc.com

Google plans to demo AI chatbot search as it panics about ChatGPT – The Verge

Microsoft to Invest $10 Billion in OpenAI, the Creator of ChatGPT – The New York Times

Health Entities Should Vet Risks of ChatGPT Use

HC3: Artificial Intelligence and Its Current Potential to Aid in Malware Development

Here is what I asked ChatGPT

[36:34] I am going to record a podcast episode about ChatGPT and how it could be used in a healthcare setting. What are the top three things I should include about you and healthcare applications? Also, what concerns should we have about the accuracy of information you provide to healthcare clinicians?

Here is what it said

  1. One important thing to mention about using ChatGPT in a healthcare setting is its ability to generate human-like responses, which can be useful for creating simulated patient interactions for training or research purposes.
  2. Another potential application for ChatGPT in healthcare is its ability to assist with data entry and documentation tasks, which can help reduce the workload of healthcare providers and improve efficiency.
  3. Additionally, ChatGPT’s ability to understand and respond to natural language inputs can make it a useful tool for creating chatbots that can assist patients with finding information and answering questions.

Regarding concerns about the accuracy of information provided by ChatGPT, it’s important to note that the model is only as accurate as the data it was trained on. It may not be able to provide accurate or reliable information about certain rare or complex medical conditions. Additionally, it is important to use caution when interpreting the model’s responses and consider the limitations of the data that it was trained on. It is always recommended to consult a qualified medical professional for any medical questions or concerns.

Then I asked it this

Are there any healthcare privacy and security concerns if a clinician uses you to provide some healthcare information?

This is what it said

Yes, there are privacy and security concerns that should be considered when using a language model like ChatGPT in a healthcare setting.

  1. One concern is that the model may not be able to fully understand and adhere to healthcare privacy regulations such as HIPAA.
  2. Another concern is that the model’s responses may contain sensitive or protected health information (PHI) and it is important to have proper protocols in place to protect and secure this data.
  3. Additionally, if the model is integrated into a clinical workflow or system, there is a risk of unauthorized access to PHI or other sensitive information, it is important to have appropriate access controls and authentication mechanisms in place to prevent this.

To mitigate these concerns, it is important to conduct a thorough risk assessment and implement appropriate controls such as encryption, access controls, and monitoring to protect patient data. It’s also important to train the model on data that has been properly de-identified in compliance with regulations and guidelines, and to validate the model’s output before it’s used in a clinical setting.
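
That advice about properly de-identified data is worth a quick illustration. Below is a minimal sketch, entirely our own and not anything ChatGPT or OpenAI provides, of a naive Python scrubber that masks a few identifier formats before text ever leaves your environment. It also shows why pattern matching alone is nowhere near Safe Harbor de-identification: the patient’s name passes straight through.

```python
import re

# Minimal illustration only -- NOT a compliant de-identification method.
# HIPAA Safe Harbor requires removing 18 categories of identifiers;
# patterns like these catch only a few obvious formats.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(text: str) -> str:
    """Replace identifier-shaped strings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = ("Pt Jane Doe, MRN 4471820, DOB 03/14/1957, "
        "called 404-555-0123 about prep instructions.")
print(scrub(note))
# -> Pt Jane Doe, [MRN REDACTED], DOB [DATE REDACTED],
#    called [PHONE REDACTED] about prep instructions.
# The patient's name sailed right through, which is exactly why
# "just scrub it first" is not a de-identification strategy.
```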

Then I asked

What are the three most important steps to include in a risk assessment that you just mentioned?

And it said this

  1. Data Analysis: A thorough analysis of the data used to train the model and the data that the model will be processing in a healthcare setting should be conducted. This includes understanding the types of data that will be used, where it came from, and how it will be stored and processed. This analysis will help identify any potential risks or vulnerabilities and provide a foundation for the risk assessment.
  2. Threat Identification: Identify the potential threats, risks, and vulnerabilities that could compromise patient privacy or security. This includes identifying the threat actors, their motivations, and the likelihood of them exploiting a vulnerability.
  3. Control implementation and Validation: Based on the analysis and threat identification, implement controls and procedures to mitigate identified risks and vulnerabilities. This includes access controls, encryption, monitoring, and incident response procedures. Once controls are implemented, it’s important to validate that they are working as intended and that they are sufficient to protect patient data. This may involve regular testing and monitoring of the system, conducting penetration testing and vulnerability assessments.

It’s also worth mentioning that it is important to keep the risk assessment up to date as the technology, data and regulations can change over time.

Well, there you have it, folks. It tells you exactly what you should do!
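
We used the public chat interface for the episode, but if you want to script the same questions, here is a rough sketch using OpenAI’s Python client. The model name and settings are assumptions, the pre-1.0 interface is what is shown, and OpenAI’s API changes quickly, so check their current documentation. And whatever you do, keep PHI out of your prompts.

```python
import os
import openai  # the official `openai` package (pre-1.0 interface shown)

# Assumes an API key in your environment; never hard-code keys,
# and never include PHI or other sensitive data in a prompt.
openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "I am going to record a podcast episode about ChatGPT and how it could "
    "be used in a healthcare setting. What are the top three things I "
    "should include about you and healthcare applications?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute a current one
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

print(response.choices[0].message["content"])
```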

[44:17] That’s so cool, right? Yes. But there are several concerns about bringing AI into healthcare, like the potential for security and privacy issues. There are also questions like: how many patients in its training data were my age, with my history, with all of the details that make me unique? It can’t possibly have all the information. Bottom line: we can’t assume it knows everything.

Another concern is eventual overreliance on AI. People will get to the point where they rely on AI for answers or to do things, and they will feel like they don’t need to know the material or how to do the task themselves. There is real potential for a rapid regression of people’s abilities and skills because they are too reliant on AI.

So, finally, we asked ChatGPT for closing words, and it was spot on again with the following:

In conclusion, while ChatGPT has the potential to revolutionize the healthcare industry by assisting with tasks such as data entry, patient interactions and information retrieval, it is important to keep in mind the potential privacy and security risks that come with this technology. Implementing best practices, conducting thorough risk assessments, and validating the system’s controls are key to ensuring patient data is protected. Additionally, it’s important to keep in mind that the model’s responses should be used as a tool and not a definitive answer, and to consult a qualified medical professional for any medical questions or concerns.

AI is like most things: for every good thing it brings to the table, there’s a bad thing that’s going to come out of it, too. Both of those will have to be assessed and dealt with. There are no simple answers. Many security tools already use AI to help protect against threats, but hackers use it too. It all comes back to this: everyone has to be on the privacy and security team.

Remember to follow us and share us on your favorite social media site. Rate us on your podcasting apps, we need your help to keep spreading the word. As always, send in your questions and ideas!

HIPAA is not about compliance,

it’s about patient care.™

Special thanks to our sponsors Security First IT and Kardon.
