March 03, 2026
ViVE 2026: Practical AI Governance for Healthcare Organizations
At the annual conference, a CDW expert offered advice for health systems on governing their newest artificial intelligence solutions.
Last year, enterprise leaders across industries agreed that their organizations had good ideas for artificial intelligence but found execution challenging.
“I’d say 9 out of 10 organizations that we talk to struggle with where to start, what solutions to use and how to govern and really have a lifecycle management approach to AI,” Ken Drazin, director of digital experience at CDW, says in CDW’s Artificial Intelligence Research Report.
AI governance was a key point of discussion during last week’s ViVE conference in Los Angeles.
Sam Baker, business development manager for the AI Factory program at CDW, shared how healthcare organizations can improve AI governance by working with established structures and setting standards for project requests.
Use What You Already Have
Data undergirds the effective building, use and management of AI, Baker said, so if organizations already have a data governance structure in place, they won’t have to reinvent the wheel.
Take, for example, the adoption of an AI-powered clinical decision support solution. While there are newer aspects to account for, such as the real-time monitoring of a model’s efficacy, organizations still need to address the data stewardship of the solution: Who is monitoring the data quality or output? Who owns the data for that model? Building on the overlapping elements of data and AI governance allows organizations to stay agile.
With effective governance come clear delineations of responsibility. Of course, organizations want their AI deployments to be widely adopted, but someone, or some group, must own each one. There must be well-defined accountability to ensure the continued quality of an AI tool.
Similarly, many organizations are familiar with traditional IT asset management. As more AI solutions become adopted and integrated within an environment, it’s crucial that organizations have visibility into all of the tools they’re using. That can include information such as solution purpose, owner and stakeholders, metadata, risk tier and more.
“Have that in an effective system such as a ServiceNow or other IT asset management platform,” Baker said, so that when a clinician or other staff member requests a new AI solution, teams can confirm “that’s not something that already exists within our environment. If we can reapply licensure or effective tooling that’s already been applied to that problem in a different department, then let’s make sure that we effectively do that.”
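The inventory Baker describes could be sketched as a simple record type plus a lookup that runs before a new request is approved. The field names and helper below (`AIAssetRecord`, `find_existing`) are illustrative assumptions drawn from the fields the article lists, not a real ServiceNow schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One inventory entry for an AI tool; field names are illustrative."""
    name: str
    purpose: str
    owner: str
    stakeholders: list[str]
    risk_tier: str                                   # e.g., "low", "moderate", "high"
    metadata: dict[str, str] = field(default_factory=dict)

def find_existing(registry: list[AIAssetRecord], purpose_keyword: str) -> list[AIAssetRecord]:
    """Before approving a new AI request, surface tools that may already cover it."""
    return [r for r in registry if purpose_keyword.lower() in r.purpose.lower()]

registry = [
    AIAssetRecord("ScribeAI", "ambient clinical documentation", "CMIO office",
                  ["clinicians"], "moderate"),
]

# A new request for "documentation" support would match the existing tool,
# so licensure could be reapplied instead of buying a duplicate.
matches = find_existing(registry, "documentation")
print([m.name for m in matches])  # ['ScribeAI']
```

The point is less the data structure than the workflow: a duplicate-check against the registry becomes a standard gate in the request process.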
Standardize AI Project Requests
Baker presented a simple AI use case statement that IT teams can use to focus requests from other departments:

“If we implemented X and could (predict/categorize/detect/etc.) A, it would tell us B. If we knew B, we could do C. If we do C, the expected impact is D, which is beneficial because E.”

A thoughtful proof of concept applied to a specific challenge can take a solution beyond the planning phase.
He shared areas where AI support has seen growth in healthcare: ambient clinical documentation, generative chart summarization and computer vision in patient monitoring. “Tie effectively to a business value or challenge, and then deploy AI thoughtfully and with a really clear proof of concept and trackable return on investment,” he said.
Keep Monitoring Your AI Solutions
AI governance is not a one-and-done deal. A governance committee that meets only once a quarter can leave an organization behind on model updates and shifting regulatory guidance.
“You want to think about data drift, model decay and changing regulations that may potentially apply if you’re in the payer space, for example,” Baker said. “How do we make sure that our models are actively adjusting, or that we reassess on a regular basis based on that historical responsibility that’s already existed within our organization?”
If widespread adoption is a priority, organizations must keep building trust with key stakeholders and offer continuous training.
“We want to help our clinicians, leaders and staff understand what AI actually is and what AI actually is not. A lot of folks are conflating workflow optimization or management with AI today because of the agentic boom,” Baker said. “Validate on representative and real-world data. Thoughtful proof of concept comes with validation within our own environments, and that is absolutely what needs to happen. That is how you build clinician trust, how you build patient trust for some of the tools that are regularly adopted today or have been effectively popularized today.”
Reach out to a CDW expert to learn how to accelerate your artificial intelligence journey.
Teta Alim
Editor