    Insight Article
    Jacqueline Wu
    Leslie R. Jebson, MHA, FACMPE, FACHE

    Chat Generative Pre-Trained Transformer (ChatGPT) has garnered international acclaim for its approach to machine learning (ML) and generative artificial intelligence (AI). Thanks to its advanced neural network architecture rooted in natural language processing (NLP), ChatGPT — developed by OpenAI and continually refined through public use and feedback — is an expansive, user-friendly, conversational tool that can serve as a valuable resource in strengthening the patient experience.

    Healthcare professionals and patients alike look forward to its application in creating a more informative and accessible communication channel. As it continues to evolve, ChatGPT is a clear model for AI’s potential to revolutionize communication, creativity and problem-solving for all fields.

    ChatGPT’s take on its role in patient experience

    To evaluate its effectiveness, we asked ChatGPT how it contributes to the patient experience in medical practices today.

    Here is a summary of its responses:

    1. Personalized Patient Interactions

    ChatGPT states that patients can take an active role in their healthcare through its chat-style conversations, learning about their medical conditions, treatment options and appointment schedules. It also claims to send personalized healthcare tips and reminders based on a patient’s medical history to support medication adherence, follow-up visits and recommended lifestyle changes.

    2. Streamlined Communication

    Acting as a virtual assistant, ChatGPT claims to enhance clinical team coordination on routine administrative tasks and to alleviate patient anxiety through timely responses and the dissemination of general healthcare information such as clinic policies, post-appointment instructions and common procedure information.

    3. Improved Accessibility and Availability

    With patient satisfaction hinging on timeliness, convenience and friendliness, ChatGPT boasts that it can support patients 24/7 from the comfort of home. It also claims to free providers to respond immediately to patient needs, increasing patient satisfaction.

    How much of this is accurate? This article discusses only the free version, ChatGPT-3.5.

    Across these three separate arguments, ChatGPT offers repetitive and overlapping responses, primarily focused on administrative tasks and patient communication. Let’s dissect its claims.

    Can ChatGPT be personalized?

    ChatGPT can streamline patient interactions, but it cannot be considered personalized. The latest version of ChatGPT has only been updated through April 2023 and is not HIPAA compliant. Consequently, it cannot handle patient identifiers or draw on real-time, location-specific information to evaluate diagnoses and suggest updated treatment options. ChatGPT has proven to be a generalized, objective tool for patient interactions that cannot guarantee full accuracy or privacy. References generated by ChatGPT have been insufficient, unsupportive or even fraudulent.1

    Because ChatGPT cannot differentiate between reliable and unreliable sources, it is currently subpar as a clinical decision-making tool for evidence-based practice.2 In 2023, Stanford researchers analyzed 64 queries to ChatGPT-3.5 and GPT-4, finding that outputs were not “so incorrect as to cause patient harm” 91% to 93% of the time, but that accuracy for patient-specific clinical information ranged from only 21% to 41%.3,4 Responses also varied greatly with the wording of queries. A fully functioning “Dr. ChatGPT,” then, may still be some time away.

    Patient engagement tool

    Generative AI can be framed as a tool for engaging patients through interactions, offering healthcare tips and reminders about general health information based on a provider’s diagnoses, and early adopters of generative AI in healthcare are already using it this way. However, providers worldwide have evaluated ChatGPT’s patient education capabilities and found that its responses to diagnosis-related questions often contain inaccuracies and suffer from low readability scores. Much higher scores were given for advice and discharge instructions tied to the clinician’s primary diagnoses.5

    Therefore, ChatGPT can presently best serve as a tool for initial engagement or concluding interactions, streamlining monotonous administrative tasks such as appointment reminders, rescheduling, follow-up emails with appointment information and promotion of lifestyle changes. According to a 2023 MGMA Stat poll, among the 1 in 10 medical groups using generative AI, respondents primarily applied it to patient communications such as triage calls, appointment reminders and marketing efforts.6 Additionally, ChatGPT’s digital nature makes it available 24/7, allowing patients to access it conveniently and promptly from the comfort of home.
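
    Readability concerns like those above can be checked programmatically. As an illustration (not drawn from any cited study), the sketch below computes the standard Flesch Reading Ease score, where higher values indicate easier text; the naive vowel-group syllable counter is an assumption for demonstration, not a clinically validated tool:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

    A practice could run a check like this over draft patient instructions before sending them, flagging any text that scores below a chosen threshold for rewriting in plainer language.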

    Shifting repetitive clinical documentation to QA testing

    Another primary application of generative AI is virtual scribing for clinical documentation via speech recognition platforms. Since clinical documentation requirements took hold under the Affordable Care Act, nearly 75% of physicians with burnout symptoms have identified the EHR as a major contributor.7 According to the American Medical Association, for each hour of care provided, two hours are spent on administrative tasks.8 Physicians have voiced concerns that current clinical documentation consists mainly of redundant, cumbersome data entry, much of it going beyond fundamental patient care.9 Consequently, physicians have turned to alternatives, such as employing scribes or using transcription devices, to facilitate more meaningful face-to-face patient interactions and boost revenue.

    While numerous AI-powered scribing companies claim to significantly reduce physician documentation hours, physicians will always need to quality assurance (QA) test every note. In fact, some physicians have found that proofreading and editing AI-generated notes can be more time-consuming and less satisfying than crafting their own, even as they explore creative ways to reduce data entry time by synthesizing patient visits with services like ChatGPT.10 This emphasis on checking for accuracy and efficiency could permanently shift medical transcription services toward a “human in the loop” QA model, similar to the past use of human scribes. Today, healthcare organizations like DeepScribe have shifted the scribe role to QA, employing the “human in the loop” strategy to ensure up to 95% accuracy in AI-generated visit notes. In a Stanford University lecture, Dr. Diyi Yang explained that natural language processing requires human feedback to address discrepancies between factual or trivial content and human values; human feedback is needed for performance, fairness, explainability and personal belief inputs.11

    While ChatGPT and similar AI tools increasingly benefit clinical documentation, further research is needed to evaluate their accuracy, the degree of human involvement required and whether their primary impact lies in streamlining administrative tasks or in boosting revenue by enhancing patient engagement and reducing physician burnout.
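
    The “human in the loop” workflow described above can be sketched as a simple routing step. This is a hypothetical illustration; the DraftNote structure, its confidence field and the 0.95 threshold are assumptions for demonstration, not DeepScribe’s actual system:

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    """An AI-drafted visit note awaiting review (hypothetical structure)."""
    visit_id: str
    text: str
    confidence: float  # model-reported confidence in [0, 1]; assumed metric

def route_for_review(notes, threshold=0.95):
    """Route high-confidence drafts to auto-accept; send the rest to a human QA queue."""
    auto_accept, human_review = [], []
    for note in notes:
        if note.confidence >= threshold:
            auto_accept.append(note)
        else:
            human_review.append(note)
    return auto_accept, human_review
```

    In a real deployment, the human-review queue would feed a scribe-turned-QA editor, and the corrections would be logged as feedback to improve the model over time.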

    Available health educator

    ChatGPT also has the potential to serve as a mediator between patients and providers. Patients often arrive with little context for their symptoms, but with ChatGPT they can seek guidance on healthcare decisions before seeing a doctor, comparable to consulting a health educator. Paired with its friendly natural language style, ChatGPT has even been shown in some evaluations to express more empathy toward patients than physicians do.12,13 Physicians could use ChatGPT to supplement traditional information-gathering methods and gain insight into a patient’s condition. However, ChatGPT should serve as nothing more than a supplemental tool prior to a physical medical examination.

    In a 2023 Tebra study, patients were surveyed about using AI in their healthcare journeys, specifically for therapy.14 Approximately one in four patients reported being more likely to talk to AI than to attend in-person therapy, and of those who had already turned to ChatGPT for therapy advice, 80% found it an effective alternative.15 These findings highlight the evolving role of AI in healthcare and patients’ increasing willingness to use AI-powered solutions.

    Concluding thoughts

    A primary concern about ChatGPT revolves around its privacy and accuracy in healthcare. ChatGPT can become a more effective and precise tool if customized for various medical specialties, a goal the ChatGPT Plus team at OpenAI is working toward.16 Since April 2023, early users have been able to access specialized versions of ChatGPT tailored to specific cases, departments and proprietary datasets, allowing a given user community to influence AI behavior. Unlike traditional clinical decision-making tools programmed with fixed algorithms, ChatGPT adapts by learning from its users and incorporating information from a wide range of sources beyond peer-reviewed medical journals. Customized GPT models are therefore designed to improve accuracy and specificity to patients’ needs based on feedback from both their creators and the end users who benefit from the insights.

    Clinics could greatly benefit from team-based ChatGPT models that interpret jargon, clinic policies and diagnostic accuracy for individualized clinical care. With the integration of DALL-E for imaging and voice, more accurate anatomical depictions may substantially improve patients’ understanding of their conditions and be more accessible for those who struggle with literacy or have lower levels of education.17 However, this is still a new product, and the research underpinning ChatGPT took decades to develop. Tailoring and perfecting other GPT models to achieve similar or greater results may require additional evaluation and an unknown span of time.

    Notes:

    1. Liu NF, Zhang T, Liang P. “Evaluating Verifiability in Generative Search Engines.” arXiv:2304.09848v2 [cs.CL]. Oct. 23, 2023. Available from: https://doi.org/10.48550/arXiv.2304.09848
    2. Dave T, Athaluri SA, Singh S. “ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations.” Frontiers in Artificial Intelligence. 2023;6:1169595. Available from: https://doi.org/10.3389/frai.2023.1169595.
    3. Mello MM, Guha N. “ChatGPT and Physicians’ Malpractice Risk.” JAMA Health Forum. 2023;4(5):e231938. Available from: https://doi.org/10.1001/jamahealthforum.2023.1938.
    4. Dash D, Horvitz E, Shah N. “How Well Do Large Language Models Support Clinician Information Needs?” Stanford Human-Centered Artificial Intelligence. March 2023. Available from: https://bit.ly/3wVyT2s.
    5. Bresnick J. “Is ChatGPT ready for prime time with patient education?” Digital Health Insights. 2023 Oct 20. Available from: https://bit.ly/49OUPe6.
    6. Harrop C. “Medical groups moving cautiously as powerful generative AI tools emerge.” MGMA. March 30, 2023. Available from: https://www.mgma.com/stat-032823.
    7. Tajirian T, Stergiopoulos V, Strudwick G, Sequeira L, Sanches M, Kemp J, et al. “The Influence of Electronic Health Record Use on Physician Burnout: Cross-Sectional Survey.” Journal of Medical Internet Research. 2020 Jul 15;22(7):e19274. Available from: https://doi.org/10.2196/19274.
    8. AMA. “Allocation of physician time in ambulatory practice.” Available from: https://bit.ly/4a7Zebz.
    9. Budd J. “Burnout Related to Electronic Health Record Use in Primary Care.” Journal of Primary Care & Community Health. 2023;14. Available from: https://doi.org/10.1177/21501319231166921.
    10. Bernard R. “Physician experiences with AI documentation.” Medical Economics. Feb. 5, 2024. Available from: https://bit.ly/49RwdBm.
    11. Yang D. “Lecture 3: Human-in-the-loop.” Stanford CS 329X, Human-Centered NLP. 2023 April 12. Available from: https://bit.ly/3vaPwGQ.
    12. Mok A, Brueck H. “Medical experts prefer ChatGPT to a real physician 78.6% of the time — because it has more time for questions.” Business Insider. 2023 May 2. Available from: https://bit.ly/49NxMAp.
    13. Diaz N. “ChatGPT outperforms physicians when answering patient questions.” Becker’s Hospital Review. May 1, 2023. Available from: https://bit.ly/3IBJFNC.
    14. Noyes J. “Perceptions of AI in healthcare: What professionals and the public think.” The Intake. April 27, 2023. Available from: https://bit.ly/3VleFcB.
    15. Patrick A. “New Survey Shows Perceptions of AI Use in Healthcare Are Changing.” Business Wire. July 27, 2023. Available from: https://bit.ly/3wVzK3a.
    16. OpenAI. “Introducing GPTs.” Available from: https://bit.ly/3VhH2bC.
    17. OpenAI. “ChatGPT — Release Notes.” Available from: https://bit.ly/3x1RERu.

    Written By

    Jacqueline Wu

    Jacqueline Wu, MHA candidate at the University of South Carolina, can be reached at jw238@email.sc.edu.


    Written By

    Leslie R. Jebson, MHA, FACMPE, FACHE

    Email: jebson@tamhsc.edu

