Most of the attendees at the 2025 HIMSS Global Conference have a good sense of what AI is: Many of them build applications and platforms powered by it, or are key players on the health IT teams within health systems that integrate those tools with EHRs and other pieces of a healthcare provider’s digital infrastructure.
But the discussion during an hourlong panel March 3 at the conference’s AI in Healthcare Forum focused more on understanding the guardrails needed when planning and implementing AI solutions, and on the ongoing governance required after deployment.
Moderated by Brian Spisak of Csuite Growth Advisors and Harvard University, the panel featured Graham Walker, MD, an emergency medicine doctor with The Permanente Medical Group; Brenton Hill, JD, MHA, head of operations and general counsel of the Coalition for Health AI (CHAI); Tanay Tandon, chief executive officer of software company Commure; and Rohit Chandra, chief digital officer of Cleveland Clinic.
Defining process in AI
Spisak asked each panelist to help define what “process” means in AI adoption, eliciting some interesting perspectives. Graham Walker framed process as “friction,” explaining that healthcare workflows often create unnecessary resistance that AI can help alleviate. “Process is where that friction exists in the system,” Walker said. “What we’re really trying to do with AI is reduce the friction that is stuck on humans and offload some of that onto AI.” He provided an example of emergency rooms clogged with influenza patients who might only need basic over-the-counter medications, highlighting that AI could help direct these cases more efficiently.
Hill, reflecting on the work he’s seen in CHAI member organizations, emphasized continuous improvement in AI-driven workflows. “There’s not a single organization getting AI 100% right,” he noted. Instead, healthcare organizations must iterate, test, and refine AI applications to enhance outcomes.
Tandon took a more systemic view, likening AI to historical technological shifts. “The way we did things six months ago is completely irrelevant,” he argued, pointing to the need for reimagining processes in healthcare from the ground up.
Chandra introduced change management as a crucial aspect of AI adoption. “Healthcare is a very complex industry,” he noted. “Trying to bring innovation at scale requires discipline and careful risk management.”
Successes and challenges in AI
The panelists highlighted AI’s tangible benefits, particularly in the use of AI scribes. “The capabilities that AI scribes bring to the practice of care is remarkable,” Chandra stated, pointing to the time physicians save when documentation tasks are automated. Walker echoed this, describing how AI scribes allow physicians to spend more time with patients rather than in the EHR. “By the time I’m back from the desk after seeing a patient, I’m done with my note,” Walker said.
However, Walker also cautioned that organizations need to spend the time AI saves wisely: “The easiest way to get more revenue out of your healthcare system is by telling the doctors and the pharmacists to go faster and see just maybe one more patient.” If not managed properly, he warned, AI’s time savings could lead to increased workloads rather than true efficiency gains.
Tandon emphasized that co-development is key to AI’s success. “If you use a cookie-cutter product and you just turn it on in your EHR, [it’s likely that] 80% of physicians are not going to use it,” Tandon said. Instead, providers and AI developers must collaborate closely to refine solutions.
Governance and leadership in AI
Governance emerged as a major theme, with Spisak asking how organizations should structure AI oversight. Hill pointed out that many institutions struggle with where to begin because “there’s not one silver bullet governance structure that you can put out there that says this is going to solve all your problems.” Instead, organizations must establish clear policies, accountability structures, and multidisciplinary oversight committees.
Chandra described Cleveland Clinic’s approach: “We have a governance structure where we actively assess considerations like clinical safety, data privacy, and regulatory compliance before deploying AI.”
Tandon, speaking from a vendor perspective, warned against over-regulation. “I think the biggest mistake we can make as an industry at this point in time is to over-govern and over-regulate,” he argued, citing the U.S. government’s hands-off approach as a reason for healthcare organizations to move quickly on AI adoption.
AI support for smaller, rural providers
A major concern raised was how smaller healthcare providers — especially those in rural areas — can integrate AI without the resources of large health systems. “Not everyone has a team of data scientists,” Spisak noted, asking for insights on how smaller organizations can leverage AI.
Tandon highlighted the abundance of free AI tools available online that smaller providers can access. “Some of the best AI tools out there are free and accessible online,” Tandon said, adding that using them requires neither a big IT team nor a big budget.
Walker emphasized partnerships. “Tech companies can’t approach this as, ‘We’re going to sell to a hospital.’ It needs to be, ‘We’re going to help solve a particular problem.’” He suggested that smaller organizations reach out to AI companies to take part in testing phases.
Chandra suggested that smaller hospitals take a fast-follower approach — observing and learning from early adopters rather than investing in AI too soon. “It’s not clear to me that one small organization should take on the burden of being at the front of the pack,” he noted.
Why AI projects fail and how to avoid it
Spisak closed the discussion with a critical question: Why do AI projects fail?
Chandra pointed to the “last mile” challenge in healthcare: “Technology is the easy part. The hard part is integrating it into the organization and getting people to adopt it.” Without cultural and workflow alignment, even promising AI solutions can falter.
Tandon described bureaucratic inertia as a major obstacle: “The reason most AI deployments fail is death by a thousand paper cuts — committees on committees with no centralized decision maker.” He argued that organizations need clear accountability structures to avoid paralysis.
Hill stressed culture and leadership as the key to AI success. “The technology is powerful, but even more powerful are the leaders that can align the right people, inspire adoption, and make AI work.”
Walker took a different stance, arguing that failure is an essential part of the process: “I actually think there’s huge value in failure. Your organization is going to learn a ton from your failures, probably more than from your successes.” He encouraged healthcare leaders to experiment, iterate, and improve rather than fear setbacks.
Key takeaways
The panel helped anchor the AI forum by emphasizing the need for strategic governance, cultural alignment, and iterative learning. While AI offers significant opportunities to improve efficiency, reduce physician burnout, and enhance patient care, its successful implementation depends on strong leadership, careful change management, and a willingness to learn from successes and failures.