
AI adoption in healthcare is surging, growing from 3% two years ago to 22% in 2025, according to research from Menlo Ventures. But adoption is not the same as success.
The harsh reality is that too many healthcare providers struggle to move their AI initiatives past the pilot phase. Newly released data from Black Book Research shows that 70% of healthcare executives report at least one AI pilot that never moved beyond limited deployment. The major reasons — data gaps, workflow misalignments, and poorly chosen KPIs — all tie back to a single problem: a lack of governance.
In this article, we’ll get to the heart of the governance issues in healthcare. Read on to learn the opportunity costs of missing AI governance and discover how to move from pilot to production with the right framework and governable AI tools.
Missing governance stops AI from scaling
AI has achieved widespread adoption in healthcare, with 88% of health systems already using AI internally, according to the Healthcare Financial Management Association. Yet of those, only 18% have established a mature governance structure and a fully formed AI strategy, keeping them trapped in a loop of endless pilots and inconsistent outputs.
When AI governance is missing, every new deployment must start from scratch. Teams waste time figuring out how the tool fits into daily workflows, who has to approve it, and how to react if something goes wrong. The result is duplicated effort across IT, compliance, legal, and operations, even for similar use cases, slowing projects and wasting staff time.
Insufficient AI governance also introduces new privacy risks that health systems can’t afford to ignore. Just one healthcare data breach costs an average of $7.42 million, a steep financial consequence that strong governance helps prevent.
Health systems with robust AI governance frameworks are already enjoying AI success, while those without governance are falling behind fast. A Healthcare Information and Management Systems Society (HIMSS) analysis of long-term deployments shows that AI is increasing worker productivity by 10% to 15% and generating immediate cash flow improvements. These specific workflows are driving the biggest ROI:
| Use Case | Annual cases | Automation rate | Hours saved yearly | ROI |
|---|---|---|---|---|
| Patient scheduling | 20,800 | 96% | 1,740 | 495% |
| Consult prep/charting | 7,027 | 99% | 2,297 | 965% |
| Multidisciplinary team prep | 4,800 | 99% | 784 | 148% |
| Emergency department | 19,500 | — | 683 | 430% |
| Oncology EMR integration | — | 94% | 3,600 | 839% |
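To make the ROI math concrete, here’s a minimal Python sketch of how a figure like the scheduling row’s 495% could be derived. The $60 loaded labor rate and $17,500 annual tool cost are illustrative assumptions, not numbers from the HIMSS analysis:

```python
# Hypothetical ROI sketch: the hourly rate and tool cost below are
# illustrative assumptions, not figures from the HIMSS data.
def roi_percent(hours_saved: float, hourly_cost: float, annual_ai_cost: float) -> float:
    """ROI = (annual savings - annual cost) / annual cost, as a percent."""
    savings = hours_saved * hourly_cost
    return (savings - annual_ai_cost) / annual_ai_cost * 100

# Example: 1,740 hours saved at an assumed $60/hour loaded rate,
# against an assumed $17,500 annual tool cost.
print(round(roi_percent(1_740, 60.0, 17_500)))  # prints 497
```

Swapping in your own labor rates and licensing costs turns the same formula into a quick sanity check before a pilot expands.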
The costs of poor AI governance add up quickly
AI spending in healthcare reached $1.4 billion in 2025, nearly tripling 2024’s investment, according to data from Menlo Ventures. But without governance, many of those dollars are wasted. To see why, let’s look at the four steps of successful AI pilots and the costs IT teams might incur if governance is lacking at each phase.
Finding the right workflow
Pilots fail when AI is applied to workflows that are poorly suited for automation. In contrast, successful AI initiatives start with stable and repeatable workflows, such as paper-heavy, rules-based processes like referrals and patient intake. When governance teams evaluate workflow suitability up front, they can clearly define scope, success metrics, and human-in-the-loop requirements. Costs of getting this wrong include:
- Extended pilot timelines caused by unclear success metrics
- Rework when AI is applied to the wrong process
- Budget spent on pilots that never reach production
Choosing the right tool
Each type of AI tool introduces new potential vulnerabilities. AI models not specifically trained for healthcare can struggle with medical terminology, complex document formats, and regulatory requirements. In contrast, healthcare-specialized AI models are designed to understand clinical workflows and maintain compliance standards, using technologies such as large language models (LLMs), natural language processing (NLP), and optical character recognition (OCR) to accurately process medical documents.
Costs of missing governance at this stage include:
- Higher legal and compliance review costs for high-risk tools
- Delays caused by post-launch risk assessments
- Increased exposure to data handling and privacy failures
Implementation and user testing
Strong governance expedites AI integration by answering all the important questions up front: When will staff review AI outputs? Who will review mismatches? How are errors logged? How will the model improve over time? Without this approach, pilots slow down or even stall out. Further problems arise if an AI tool is deployed to production without proper testing. If tools introduce friction by requiring a new UI or an additional login or password, staff may find workarounds after deployment, negating expected productivity gains. Costs at this stage include:
- Extended pilot timelines due to unresolved exception handling
- Productivity losses due to staff using poorly integrated tools
- Loss of user trust that undermines adoption
Establishing a repeatable process
Governance makes it easier to decide which pilots are ready to expand and which should stop. IT leaders should use a consistent framework to evaluate performance, risk, and ROI before a broader rollout. Centralizing these outcomes allows health systems to move forward with proven use cases. Costs of missing governance here include:
- AI pilots stuck in limited deployment indefinitely
- Redundant tools and licenses purchased across departments
- Growing AI sprawl with no centralized oversight
Steps to improve governance and scale AI pilots
Achieving ROI with AI requires putting the right guardrails in place early so pilots can move quickly and scale without introducing unnecessary risks. Two focus areas:
Documentation and decision guardrails
Effective governance starts with clear documentation that makes oversight repeatable, not reactive. CIOs need to know which AI tools exist in the organization, who owns them, and how they were approved. To answer these questions, governance policies should cover:
- Model inventory: A simple, shared list of which AI tools exist in the organization, who owns them, and where they’re running
- Model lineage: A record of where the model originated and how it’s trained over time
- Model signoffs: Documented proof that the right people — IT, compliance, security, operations, or clinical leadership — approved the AI models before they went live
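In practice, an inventory entry can be a lightweight record that carries all three elements. This Python sketch is illustrative only; the field names are assumptions, not a standard schema:

```python
# Minimal model-inventory record sketch; field names are illustrative
# assumptions, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str                 # which AI tool this is
    owner: str                # who owns it
    environment: str          # where it's running
    lineage: list[str] = field(default_factory=list)        # origin and training history
    signoffs: dict[str, str] = field(default_factory=dict)  # approver role -> date

    def approved_by(self, required_roles: set[str]) -> bool:
        """True only if every required role has signed off."""
        return required_roles <= self.signoffs.keys()

record = ModelRecord(
    name="referral-intake-extractor",
    owner="Revenue Cycle IT",
    environment="prod",
    lineage=["vendor base model v2", "fine-tuned on de-identified referrals, 2025-03"],
    signoffs={"compliance": "2025-04-02", "security": "2025-04-05"},
)
print(record.approved_by({"compliance", "security", "clinical"}))  # False: clinical signoff missing
```

Keeping the record machine-readable means the signoff check can run automatically before any deployment, rather than living in someone’s inbox.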
Governance also depends on written gate criteria, documented rules that define when AI is allowed to act on its own and when human intervention is needed. Black Book Research data shows that this approach leads to 28% fewer pilot extensions.
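Gate criteria translate naturally into simple routing rules. The sketch below is hypothetical; the confidence thresholds and the stricter bar for protected health information (PHI) are assumptions for illustration, not figures from Black Book Research:

```python
# Hypothetical gate criteria: thresholds and rules are illustrative
# assumptions, not from Black Book Research.
def route(confidence: float, phi_present: bool, clinical_decision: bool) -> str:
    """Decide whether the AI may act on its own or a human must intervene."""
    if clinical_decision:
        return "human_review"    # clinical judgment always gets a human
    if phi_present and confidence < 0.98:
        return "human_review"    # stricter bar when PHI is involved
    if confidence >= 0.95:
        return "auto"            # AI may act autonomously
    return "human_review"

print(route(confidence=0.97, phi_present=False, clinical_decision=False))  # auto
print(route(confidence=0.97, phi_present=True, clinical_decision=False))   # human_review
```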
Processes for oversight, testing, and feedback
Beyond documentation, health systems also need consistent internal processes to advance their AI initiatives. For example:
Health systems should establish an AI governance council to oversee all AI deployments. Black Book Research finds that health systems taking this approach are twice as likely to achieve ROI within 12 months.
Shadow mode testing lets health systems run AI tools in the background to assess their performance before introducing them into real-world workflows. AI initiatives are 1.75x more likely to advance without safety flags when shadow mode testing is performed, according to Black Book Research.
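Conceptually, shadow mode boils down to logging what the AI would have done and comparing it with what staff actually did, without the AI ever acting. A minimal sketch (the function name and sample data are illustrative assumptions):

```python
# Shadow-mode sketch: the AI's suggestions are logged and compared with
# staff decisions, but never acted on. Names and data are illustrative.
def shadow_agreement(ai_outputs: list[str], staff_decisions: list[str]) -> float:
    """Fraction of cases where the shadowed AI matched the human decision."""
    matches = sum(a == s for a, s in zip(ai_outputs, staff_decisions))
    return matches / len(staff_decisions)

ai = ["schedule", "schedule", "escalate", "schedule"]
staff = ["schedule", "escalate", "escalate", "schedule"]
print(f"{shadow_agreement(ai, staff):.0%}")  # prints 75%
```

An agreement rate tracked this way gives the governance council an objective threshold for deciding when a tool is ready to leave shadow mode.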
Continuous feedback loops like EHR-integrated dashboards can continually track outcomes and measure staff feedback. Programs with dashboards achieve ROI in roughly 7.5 months, compared with 13.5 months for those without dashboards, per Black Book Research.
Choosing easy-to-govern platforms
With effective governance processes in place, health system leaders can select tools that fit into their workflows and reduce friction rather than increase it.
eFax® Clarity is designed to do just that. It’s an Intelligent Document Processing (IDP) solution that processes unstructured inbound content, including faxes, scanned documents, emails, and PDFs. It then converts that information into structured data that can flow into EHR, RIS, and other core systems. The AI performs data extraction instead of content generation; outputs are tied to source documents, so the solution is fully auditable.
Because eFax® Clarity works quietly in the background, fits into existing workflows, and requires no new UI or login, staff disruption is minimal. Integrations are supported by experts skilled at matching the tool to each organization’s HL7 data framework, preventing one-off implementations that are hard to oversee or scale.
With these guardrails in place, health systems can define goals and track KPIs. Common metrics include faster time-to-schedule, lower demographic-related denial rates, increased referral processing throughput, and fewer unassigned or misfiled documents.
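Tracking those KPIs can start small. In this sketch, the metric names mirror the list above, but the data and formulas are illustrative assumptions, not a prescribed methodology:

```python
# KPI sketch: metric names mirror the article's list; the data and
# formulas are illustrative assumptions.
def denial_rate(denied_demographic: int, total_claims: int) -> float:
    """Share of claims denied for demographic-data errors."""
    return denied_demographic / total_claims

def throughput_gain(referrals_after: int, referrals_before: int) -> float:
    """Relative increase in referrals processed per week."""
    return (referrals_after - referrals_before) / referrals_before

print(f"{denial_rate(12, 800):.1%}")       # prints 1.5%
print(f"{throughput_gain(520, 400):.0%}")  # prints 30%
```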
Bust out of pilot purgatory
Before health systems can move forward with AI, they need to take a step back. Defining governance requirements up front can help organizations avoid the sunk costs of endless pilots and achieve ROI faster. With the right framework and low-risk tools in place, leaders can confidently move from experimentation to enterprise rollout.





