Building an AI-Ready Procurement Function: Why We Should Be Asking Deeper Questions

The New Risk Frontier in AI Procurement

AI has become everyone's new favourite technology, a supposed panacea now embedded across enterprise functions, from customer onboarding and compliance automation to operational risk management and fraud detection. Procurement teams are increasingly tasked with sourcing AI-powered solutions, often under pressure to move quickly and secure competitive advantage. Yet many enterprises remain unprepared for the specific risks that AI introduces into the organization. These risks are not simply technological; they implicate regulatory, operational, and reputational dimensions.

The regulatory environment is also raising expectations around AI procurement, particularly within financial services. Europe's Digital Operational Resilience Act (DORA), which took effect earlier this year, significantly expands the obligation to manage third-party risks, including AI. Under DORA, firms must ensure that critical ICT providers meet standards for operational resilience, security, and risk management. This obligation naturally extends to AI systems embedded in vendor services.

Unfortunately, today's traditional procurement processes are nowhere near sufficient. The typical focus on functionality, security, and SLAs does not adequately address the continuous risks posed by AI. Procurement functions have also grown accustomed to acting slowly and on a one-off basis. Organizations that fail to adapt and accelerate their procurement approach face significant liabilities, including regulatory exposure, systemic bias, data governance failures, and a loss of operational transparency to the point of not knowing what went wrong, or where.


Data Integrity and Model Transparency

Most recommendations focus on training data, and rightly so: one of the earliest failure points in AI procurement stems from a lack of scrutiny over training data. Enterprises must demand clear disclosures about data sources, quality assurance processes, and the steps vendors take to mitigate bias. If the underlying data is flawed or unrepresentative, the AI system will inevitably produce flawed outcomes, no matter how advanced the algorithms appear. But one must not forget that there are many nuances in the training and fine-tuning process that go beyond training data alone: algorithm choices, sampling, hardware, and human interaction also shape model behaviour.
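One concrete way to scrutinize vendor-disclosed training data is a simple representativeness check: compare the share of each customer segment (or any sensitive category) in the vendor's sample against your own reference population. The sketch below is illustrative only; the function name, segment labels, and 5% tolerance are assumptions, not a standard methodology.

```python
from collections import Counter

def representativeness_gaps(sample, reference, tolerance=0.05):
    """Flag categories whose share in the vendor's disclosed training
    sample deviates from the reference population by more than
    `tolerance` (absolute difference in proportion)."""
    s_total, r_total = len(sample), len(reference)
    s_share = {k: v / s_total for k, v in Counter(sample).items()}
    r_share = {k: v / r_total for k, v in Counter(reference).items()}
    gaps = {}
    for category in set(s_share) | set(r_share):
        diff = s_share.get(category, 0.0) - r_share.get(category, 0.0)
        if abs(diff) > tolerance:
            gaps[category] = round(diff, 3)
    return gaps

# Hypothetical example: the vendor's sample over-represents retail
# customers and under-represents SME and corporate segments.
sample = ["retail"] * 80 + ["sme"] * 15 + ["corporate"] * 5
reference = ["retail"] * 60 + ["sme"] * 25 + ["corporate"] * 15
print(representativeness_gaps(sample, reference))
```

A check like this does not prove fairness, but it turns "tell us about your data" into a question with a measurable answer that can be written into due-diligence requirements.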

Model transparency is equally essential. Firms must not accept "black box" solutions without mechanisms for auditing and explaining AI outputs. Vendors should be able to demonstrate that their models are subject to interpretability frameworks that enable independent audit of decision-making pathways. Transparency is foundational to building trust, ensuring regulatory compliance, and maintaining control over critical enterprise processes.

The Emerging Risks of Foundation Models and Model Supply Chains

An increasingly important dimension of AI procurement involves understanding the model supply chain. Many vendors today build their offerings on top of powerful third-party foundation models such as GPT or Claude. While these models accelerate innovation, they can be costly and ill-suited to the task, and the proliferation of open-source models multiplies the risk surface further.

Data provided to vendors may be absorbed into underlying models unless explicit contractual safeguards are in place. This raises a whole host of privacy, IP, and confidentiality concerns. Procurement teams must demand clarity: will internal data be isolated from model retraining? What technical controls are in place to prevent data leakage? How are foundation model dependencies governed, and what liabilities are accepted if an upstream failure occurs? What is the process for changes or updates to the underlying foundation model?

Buyers must think not only about their direct vendors but about the entire upstream model ecosystem, where issues and failures could propagate downstream into their own operations.


The Case For Continuous Monitoring

Procurement must recognize that AI systems introduce continuous risks, not static ones. The dynamic nature of AI means that new issues can emerge long after deployment. It is therefore crucial to know when vendor models are changed or updated, how retraining is performed, and what oversight exists for post-deployment performance.

Procurement teams must build a framework for continuous monitoring of vendor AI behavior, model outputs, and contractual compliance. Risk assessment cannot stop at onboarding; it must continue throughout the vendor lifecycle. Organizations must develop capabilities to detect when risks evolve, and when vendors change their foundational technologies, models, or data policies and practices.

Without dynamic monitoring, one will only discover problems when it is too late to mitigate.
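A periodic check of this kind can be quite simple in practice: compare the model version the vendor currently reports against the contracted one, and compare the distribution of model outputs against a baseline using a drift statistic such as the population stability index (PSI). The sketch below is a minimal illustration under assumed inputs; the function names, the version strings, and the 0.2 PSI threshold (a common convention, not a rule) are all assumptions.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two output distributions, given as lists of
    proportions over the same bins; a widely used drift signal."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # avoid log(0) on empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

def check_vendor(contracted_version, reported_version,
                 baseline_dist, current_dist, psi_threshold=0.2):
    """Hypothetical periodic check: flag silent model swaps
    and significant drift in output distributions."""
    alerts = []
    if reported_version != contracted_version:
        alerts.append(f"model changed: {contracted_version} -> {reported_version}")
    psi = population_stability_index(baseline_dist, current_dist)
    if psi > psi_threshold:
        alerts.append(f"output drift detected (PSI={psi:.2f})")
    return alerts

# Illustrative run: the vendor has quietly moved to a new model
# version, and the score distribution has shifted noticeably.
alerts = check_vendor("v2.1", "v2.2", [0.5, 0.3, 0.2], [0.2, 0.3, 0.5])
print(alerts)
```

Scheduled against each vendor, even a lightweight check like this turns "continuous monitoring" from an aspiration into a recurring, auditable control.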

Contract Risk: Embedding Governance at the Source

Contracts for AI-powered solutions must evolve to meet the new realities of AI risk. Traditional software contracts rarely address key concerns such as:

  • Ownership and control of data outputs generated by AI
  • Limits on model retraining using enterprise data
  • Requirements for bias testing, fairness auditing, and performance reporting
  • Remedies for compliance failures or unauthorized use of client data
  • Audit rights over both direct vendors and their foundation model providers

Procurement teams must work closely with legal, risk, and compliance functions to ensure that AI-specific governance is embedded into vendor agreements. Pre-contract due diligence must include a careful review of how AI risks are allocated and mitigated through legal frameworks, not just commercial terms. Organizations that fail to govern AI risks contractually at the outset will find it nearly impossible to enforce accountability when failures arise later.

Firms must also invest in systems and processes that enable continuous risk assessment, vendor questioning, and contractual governance enforcement. Procurement needs to become a dynamic function capable of adapting to the evolving risks of AI, rather than a static gatekeeper performing one-off, basic assessments.

Asking Better Questions: Faster and More Often

The landscape of business is changing fast, and new technologies arrive carrying great promises. Enterprises that can deeply and efficiently assess, onboard, and monitor their vendor ecosystem will hold a significant competitive advantage in the new economy.

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.
