Over 200,000 JPMorgan Chase employees use AI daily through the bank's LLM Suite, and its fraud-detection AI cut scam losses by 40% in 2025. In medicine, the FDA has cleared 1,104 AI radiology devices as of early 2026, representing 76% of all AI medical device authorisations. In law, 55% of firm-based attorneys and 81% of in-house counsel report using AI tools in their daily work. Public procurement sits at a different number: only 36% of procurement organisations have meaningful generative AI implementations in production, and just 11% feel confident enough to scale.
The pattern is striking. Every high-accountability profession that once feared AI has, in the past 24 months, quietly stopped fearing it. Procurement has not. It is worth walking through how the other professions crossed the chasm, because procurement is about to follow the same path.
Radiology: the profession that should have hated AI the most
Radiology was the textbook case for AI skepticism. A missed finding on a scan can end a career and a life. The profession is conservative, heavily regulated, and its experts spent a decade warning that AI would misread scans and produce confident garbage.
Then it got adopted anyway. A 2025 systematic review in JAMA Network Open tracked 1,104 FDA-cleared radiology AI devices, up from a trickle in 2018. Clinical usage roughly doubled from 20% of surveyed radiologists in 2018 to 48% in 2024. The radiology AI market is forecast to reach $3.71 billion in 2026.
What changed was not the technology, which kept improving incrementally. What changed was the workflow. AI does not read scans alone. It flags what looks abnormal, quantifies and measures it, and hands the image back to a radiologist who decides. The FDA's 510(k) pathway, which clears most AI tools, does not license autonomy. It licenses assistance. Radiologists accepted AI the moment it was clear they were still the ones signing off.
Banking: adoption under real financial risk
Banks had every reason to be cautious. A false positive freezes a customer's card at the worst possible moment. A false negative means real money lost to real criminals. There is no margin to hide in.
AI got adopted anyway. A 2026 analysis by AllAboutAI found that 91% of US banks now run AI-driven fraud detection, and leading systems achieve 90 to 99% accuracy. Mastercard's 2026 global report shows 42% of issuers saved over $5 million in blocked fraud attempts over the past two years thanks to AI. JPMorgan's AI-powered fraud shield cut scams by 40% in 2025, according to The Silicon Review. The bank allocates roughly $2 billion of its $18 billion technology budget specifically to AI, per its 2026 case study.
Consumer trust followed the numbers. 77% of consumers now expect their bank to use AI for fraud prevention. Not tolerate, expect.
The industry went from "will customers accept this?" to "will customers accept a bank that doesn't use it?" in about four years.
Law: the profession most afraid of hallucinations
Lawyers received the loudest warnings about AI: dozens of high-profile cases of fabricated citations. Over 700 court filings worldwide now involve AI hallucinations, with sanctions ranging from warnings to five-figure fines, according to AllAboutAI's 2026 legal statistics. A 2024 Stanford study found error rates of 17% for Lexis+ AI and 34% for Westlaw AI-Assisted Research, branded legal products from established vendors.
And yet. The 2026 Legal Industry Report showed AI use among attorneys more than doubled in a single year. 55% of firm-based attorneys and 81% of in-house counsel now use AI tools in their work. A Rev 2026 survey found 68% of legal professionals trust some AI tools with sensitive client information. Trust increased year over year: more than half of surveyed lawyers reported higher confidence, and only 1% reported decreased confidence.
What lawyers discovered is that the hallucination risk, while real, is manageable when verification stays in the loop. Every cited case gets opened. Every generated draft gets read. Every AI output gets treated as a first draft from a smart but unreliable paralegal. With that mental model, AI moved from threat to tool.
Procurement is the last holdout
Procurement has every quality the other three professions had. High accountability. Heavy regulation. Career risk on a bad decision. Documents nobody has time to read fully. Decisions that affect public money.
But the adoption numbers are different. ProcureAbility's 2026 CPO Report found 73% of procurement organisations are piloting or scaling AI, up from 28% in 2023. That sounds fast until you check the depth. Only 11% say they are ready to scale AI confidently across the enterprise. Only 36% have meaningful generative AI implementations in production, according to Art of Procurement's 2026 state-of-AI report. The top barrier, per an April 2026 EY survey of US federal agencies, is workforce skills (44%), not technology.
Compare that to JPMorgan, where 200,000 staff use AI daily. Or legal, where adoption doubled in 12 months. Or radiology, where usage went from 20% to 48% in six years and is still climbing.
In 2026, procurement sits roughly where medicine was in 2018, banking in 2020, and legal in 2023. The catch-up window is short.
There is a reason for the delay. Procurement sits inside public administration, which McKinsey's 2026 State of AI Trust report describes as having an inverted adoption dynamic: in private industry, AI is pushed top-down by executives, but in government, workers want AI while leaders are wary. Only 1% of government leaders surveyed said more than 60% of their workforce has access to generative AI. In private industry, the comparable number is several multiples higher. Public procurement inherits that caution and adds its own: public money, audit exposure, anti-corruption rules.
What breaks the pattern
The other three professions did not wait until AI was perfect. They waited until two conditions were met.
First, the tool's error modes were predictable and checkable. A radiologist can look at the same scan. A lawyer can open the cited case. A fraud analyst can review flagged transactions. Second, the professional's own decision authority was preserved. Every profession that adopted at scale did so by making AI an input to a human verdict, not a replacement for it.
Procurement meets both conditions right now. The agent's reasoning is checkable, because every finding links to the exact text in the RFP or proposal. The procurement specialist still makes the call, signs the award recommendation, and faces the auditor. AI does the grind of reading 400 pages across 12 proposals. The human does the judgment.
2026 is the year procurement teams start adopting for the same reasons everyone else did. The tools work. Error modes are visible. And once one organisation in a given market proves out the workflow, the rest follow quickly. That is what happened in banking between 2020 and 2024, and in legal between 2023 and 2026.
What we built for this moment
Mitigate Procurement AI reads the full RFP and every proposal page by page. It checks each requirement against what the proposal offers, flags mismatches, and delivers findings with the exact quote from the source document. Critical findings are double-verified by a second model. The specialist reviews the findings, applies their expertise, and makes the award decision. The same workflow that radiology, banking, and law converged on.
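The shape of that workflow can be sketched in a few lines. This is an illustrative sketch only, not Mitigate's actual implementation: the `extract` and `second_opinion` callables stand in for model calls, and every name here is a hypothetical chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    requirement_id: str
    status: str             # "match", "mismatch", or "missing"
    source_quote: str       # exact text from the proposal the finding cites
    severity: str           # "minor" or "critical"
    verified: bool = False  # True only after a second model agrees

def check_proposal(requirements, proposal_text, extract, second_opinion):
    """Check each requirement against the proposal, one by one.

    `extract(req, text)` stands in for a model call returning
    (status, quote); `second_opinion(finding, text)` re-checks a
    critical finding with a different model. Both are assumptions.
    """
    findings = []
    for req in requirements:
        status, quote = extract(req, proposal_text)
        finding = Finding(req["id"], status, quote,
                          severity=req.get("severity", "minor"))
        # Critical findings get double-verified by a second model;
        # disagreements stay unverified and go to the human reviewer.
        if finding.severity == "critical":
            finding.verified = second_opinion(finding, proposal_text)
        findings.append(finding)
    # The specialist, not the system, makes the award decision:
    # findings are inputs, each traceable to its source quote.
    return findings
```

The design point is the one every adopting profession converged on: the output is a list of checkable claims, each anchored to a quote a human can open, never a verdict.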
It is built for a profession that has every reason to be cautious, and every reason to adopt now that the cautious path is clear.
Try your first AI analysis for free.