    Daniel Williams, MBA, MSEM

    In this episode of the MGMA Insights Podcast, host Daniel Williams welcomes Chris Bevil, Chief Information Security Officer at Tennessee-based InfoSystems, Inc., to discuss the intersection of cybersecurity and generative AI (Gen AI) in healthcare. A leader in health informatics and cybersecurity, Bevil brings more than two decades of experience and offers insights into how healthcare organizations can leverage AI technology while maintaining compliance and security. This episode explores the future of AI in medical practices, how to protect patient data, and actionable strategies to minimize cybersecurity risks.

    AI in Healthcare: Addressing Fears and Embracing Opportunities

    One of the key concerns about AI is the fear that it may replace human jobs. However, Chris Bevil is quick to address this misconception. “Generative AI isn’t going to take your job—unless you refuse to adapt to it. AI is meant to be a tool, not a replacement, as long as you understand how to utilize it effectively and securely.” He emphasizes that AI can streamline administrative tasks, potentially saving 10 to 12 hours a week, according to studies from Microsoft and LinkedIn.

    There is a distinction between AI and generative AI (Gen AI). Bevil explains that while AI covers a broad range of technologies, “Gen AI is what most medical practices are already using in some capacity, often without realizing it, as it's embedded in many existing platforms.”

    Securing Patient Data in an AI-Driven World

    As AI becomes more ingrained in healthcare practices, safeguarding patient data must remain a top priority. “When using tools like ChatGPT or other generative AI platforms, you don’t always know where the data is going,” warns Bevil. Because of this, healthcare organizations must establish clear policies and guardrails for AI use to prevent the exposure of Protected Health Information (PHI).

    Bevil provides a real-world example of how easily AI can be misused in healthcare settings. “At an MGMA state conference, a speaker mentioned using AI for performance reviews, but many didn’t realize the risks of inputting personal data into these tools,” he says. “You need to be cautious and anonymize data to avoid any potential breaches.”
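
    To make the anonymization point concrete, here is a minimal, hypothetical Python sketch (not from the episode) of scrubbing a few common identifiers from free text before it is pasted into a generative AI tool. The patterns and placeholder labels are illustrative assumptions; regex alone cannot catch names and does not satisfy HIPAA Safe Harbor de-identification, which covers 18 identifier categories and typically calls for dedicated tooling or expert review.

        import re

        # Illustrative only: a few regex patterns for common identifiers.
        # This is NOT a complete HIPAA de-identification method.
        PHI_PATTERNS = {
            "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
            "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
            "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
            "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        }

        def redact(text: str) -> str:
            """Replace matched identifiers with bracketed placeholders."""
            for label, pattern in PHI_PATTERNS.items():
                text = pattern.sub(f"[{label} REDACTED]", text)
            return text

        note = "Patient John Doe, MRN: 00123456, seen 04/12/2024. Call 423-555-0147."
        print(redact(note))
        # Patient John Doe, [MRN REDACTED], seen [DATE REDACTED]. Call [PHONE REDACTED].

    Note that the patient's name still slips through in this sketch, which is exactly why Bevil's broader point stands: tooling helps, but clear policies and trained staff are what keep PHI out of these platforms.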

    The Importance of a Defined AI Strategy

    Highlighting the need for a well-defined AI strategy, Bevil encourages medical practices to form a committee that includes various stakeholders to ensure that AI is used in a compliant and secure manner. “You need a governance strategy in place before integrating AI into your practice,” he says. “Make sure your policies and procedures are updated to reflect AI usage, and always have a compliance plan in case of an incident.”

    Without a clear AI policy, organizations risk unintended data exposure. While tools like Microsoft’s Copilot for Microsoft 365 offer enhanced security features, staying updated on any vulnerabilities is still crucial. “Things are changing every day,” Bevil says. “For example, Microsoft Copilot recently had a vulnerability, and it’s important to stay aware of these changes to protect your data.”

    Phishing and Cybersecurity: The Human Factor

    With the rise of AI-generated phishing emails, identifying threats has become more challenging, which makes training staff to recognize such phishing scams a must. “As cybersecurity professionals, we can build all the firewalls we want, but one click on a phishing email can bypass all of it,” says Bevil. “The usual tips — like looking for bad grammar — no longer apply because generative AI can create flawless emails.”

    Bevil suggests regular in-depth training for staff members while underscoring the need for a comprehensive risk assessment, not just a surface-level evaluation of policies and procedures. “Teach your team to question even the most polished emails. If something seems too perfect, it might be suspicious.”

    Ensuring Compliance in an AI Environment

    Remaining compliant while integrating AI technologies can seem daunting, but Bevil reassures listeners that a risk management framework makes it manageable. He advises using resources from NIST (the National Institute of Standards and Technology) to build a strategy that covers the legal and regulatory requirements for AI use. “By forming a committee and mapping out a strategy, organizations can ensure compliance while still benefiting from AI.”

    While highlighting the need for continuous updates in governance policies, Bevil leaves a message of optimism for healthcare leaders. “AI doesn’t have to be scary. Once you understand how to use it securely, it can become a powerful tool in your practice.”

    Be sure to join Chris Bevil at the 2024 Leaders Conference on Monday, Oct. 7, for his session, "Navigating Cyber for Gen AI in Practice," where he will provide deeper insights into the cybersecurity challenges and opportunities presented by generative AI.

    Resources:

    2024 MGMA Leaders Conference: Register Here

    NIST Cybersecurity Framework: Access the Framework

    We Want to Hear From You:

    Let us know if there's a topic you want us to cover, an expert you would like us to interview, or if you'd like to appear on an MGMA podcast. Email us at:

    • Daniel Williams: dwilliams@mgma.com
    • Ryan Reaves: rreaves@mgma.com
    • Colleen Luckett: cluckett@mgma.com

    Thank you for tuning in to the MGMA Podcast Network. Please subscribe and leave us a review. See you next time!



    Written By

    Daniel Williams, MBA, MSEM

    Daniel provides strategic content planning and development to engage healthcare professionals, managers and executives through e-newsletters, webinars, online events, books, podcasts and conferences. His major emphasis is in developing and curating relevant content in healthcare leadership and innovation that informs, educates and inspires the MGMA audience. You can reach Daniel at dwilliams@mgma.com or 877.275.6462 x1298.

