
IBM's Rob Thomas argues that as AI matures from standalone product to foundational infrastructure, enterprise organizations must invest in robust governance frameworks to protect margins. The shift demands new strategies for managing AI risk, compliance, and operational efficiency at scale.
As artificial intelligence moves from experimental novelty to mission-critical infrastructure, a new challenge is consuming boardroom conversations across the Fortune 500: how do you govern something that touches every layer of your business without destroying the speed and flexibility that made it valuable in the first place? IBM’s senior leadership is now making the case that robust AI governance isn’t just a compliance checkbox — it’s a margin-protection strategy that enterprise organizations can no longer afford to ignore.
Rob Thomas, IBM’s Senior Vice President and Chief Commercial Officer, recently articulated a framework that reframes how business leaders should think about AI’s evolution. His central argument is deceptively simple: software follows a predictable lifecycle, graduating from a standalone product to a broader platform, and eventually maturing into foundational infrastructure.
Each stage of that lifecycle changes the economic and operational rules. In the early product phase, tightly controlled, proprietary development environments allow companies to iterate rapidly and capture concentrated financial value. But as technology matures into infrastructure — think electricity, cloud computing, or the internet itself — that closed-garden approach breaks down. The governance requirements shift dramatically.
Thomas’s message to enterprise decision-makers is clear: AI has entered its infrastructure era, and the companies that fail to manage it accordingly will watch their margins erode through security gaps, regulatory penalties, and operational inefficiencies.
For years, AI governance was treated as a risk-management concern — something for legal teams and compliance officers to worry about. That framing is rapidly becoming obsolete. Today, the way an enterprise chooses to govern its AI systems has direct, measurable consequences on profitability.
Consider the numbers. According to IBM’s own research, organizations that deploy AI at scale without proper governance frameworks face an average of 25% higher remediation costs when models produce biased, inaccurate, or non-compliant outputs. Meanwhile, the EU’s AI Act — which began phased enforcement in 2024 — introduces potential fines of up to €35 million or 7% of global revenue for certain violations.
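The Act's headline penalty tier is worth pausing on, because it is the greater of a fixed cap and a share of revenue, so exposure scales with company size. A toy sketch of that arithmetic (the two figures come from the Act as cited above; the revenue inputs are illustrative):

```python
# EU AI Act top-tier penalty: the GREATER of a fixed EUR cap and a
# percentage of global annual revenue (figures per the article above).
EUR_CAP = 35_000_000   # fixed cap, EUR
REVENUE_PCT = 0.07     # 7% of global annual revenue

def max_fine_exposure(global_revenue_eur: float) -> float:
    """Upper bound on a top-tier fine for a given global revenue."""
    return max(EUR_CAP, REVENUE_PCT * global_revenue_eur)

# For a EUR 300M-revenue enterprise the fixed cap dominates:
print(max_fine_exposure(300e6))  # 35000000.0
# For a EUR 2B-revenue enterprise the revenue percentage dominates:
print(max_fine_exposure(2e9))    # 140000000.0
```

The asymmetry is the point: past roughly €500 million in revenue, the 7% clause takes over, which is why the largest enterprises have the most to lose from weak governance.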
These aren’t hypothetical risks. They’re balance-sheet threats, and containing them is precisely what a governance framework is for.
For a deeper dive into the regulatory landscape shaping these decisions, our coverage of Why Companies Like Apple Are Building AI Agents With Limits provides additional context.
Thomas’s lifecycle framework deserves closer examination because it explains why so many enterprise AI deployments stall or underdeliver. When AI is treated as a standalone product, organizations tend to manage it the way they would any SaaS tool — procurement evaluates it, IT deploys it, and a small team of specialists operates it.
But AI at the infrastructure level behaves differently. It’s embedded in supply chain logistics, customer service workflows, financial forecasting, HR screening, and dozens of other processes simultaneously. At that scale, you can’t manage governance through ad-hoc policies or one-off audits.
This mirrors what happened with cloud computing a decade ago. Early cloud adoption was product-centric: individual teams spun up AWS instances for specific projects. Over time, as Gartner has documented extensively, enterprise cloud strategy had to evolve into a centralized governance model to manage costs, security, and interoperability. AI is following the same trajectory — only faster.
IBM isn’t alone in sounding this alarm. Across the enterprise technology landscape, a consensus is forming that governance maturity correlates directly with AI ROI. Microsoft has been investing heavily in responsible AI tooling through Azure. Google Cloud has published detailed model cards and governance best practices. And a growing ecosystem of startups — from Credo AI to Holistic AI — is building dedicated platforms to help organizations manage AI risk at scale.
Analysts at Forrester noted in early 2025 that enterprise spending on AI governance tools is expected to grow at roughly three times the rate of overall AI infrastructure spending through 2027. That disparity signals something important: organizations are realizing that deploying AI without governance is like building a factory without quality control. You might ship product faster in the short term, but the long-term cost in defects, liability, and lost customer trust will devastate your margins.
Several developments bear watching closely in the coming months.
IBM’s positioning here is strategic. The company has been investing in AI governance capabilities through its watsonx platform, offering enterprise clients tools for model monitoring, bias detection, and regulatory compliance. Whether IBM captures the lion’s share of this emerging market remains to be seen, but the underlying thesis — that governance is inseparable from profitability — appears increasingly difficult to dispute.
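Bias detection, one of the monitoring capabilities mentioned above, typically reduces to comparing model outcomes across demographic groups. Below is a minimal illustration of one standard metric, demographic parity difference — a generic sketch of what such tooling computes, not the watsonx API:

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for y, g in zip(predictions, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A monitoring job might alert when the gap exceeds a policy threshold:
gap = demographic_parity_difference([1, 1, 0, 1, 0, 0],
                                    ["a", "a", "a", "b", "b", "b"])
print(round(gap, 3))  # group "a" rate 2/3, group "b" rate 1/3 -> 0.333
```

In a centralized governance model, a check like this runs continuously against production traffic rather than once at deployment, which is the shift from product-style to infrastructure-style oversight the article describes.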
The era of treating AI governance as an afterthought is over. As artificial intelligence becomes embedded infrastructure across the enterprise, the organizations that invest in robust frameworks to manage model risk, regulatory exposure, and operational complexity will be the ones that protect — and expand — their margins. IBM’s Rob Thomas has articulated what many industry leaders are beginning to accept: in the infrastructure phase of AI, governance isn’t overhead. It’s a strategic investment that pays for itself many times over.