AI Vendor Obligations
Vendors providing AI tools to other organizations sit upstream of deployers and downstream of developers. Specific contractual and disclosure obligations depend on jurisdiction and use case.
Framework Controls
Establish, implement, maintain, and continually improve an AI management system (AIMS) per ISO/IEC 42001, covering policies, leadership commitment, roles, and integration with other management systems.
Conduct AI system impact assessments and risk assessments addressing intended uses, deployment context, affected stakeholders, and mitigation of identified risks, per ISO/IEC 42001 Annex A.5 controls.
Maintain documentation throughout the AI system lifecycle, including data management, system development, verification and validation, and deployment, per ISO/IEC 42001 Annex A.6 controls.
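The lifecycle documentation control above lends itself to a machine-readable gap check. Below is a minimal sketch, assuming a simple stage-to-document mapping; the stage names, class, and field layout are illustrative inventions, not ISO/IEC 42001 text.

```python
from dataclasses import dataclass, field

# Illustrative lifecycle stages echoing the Annex A.6 documentation themes
# above; the exact names and granularity here are assumptions.
LIFECYCLE_STAGES = (
    "data_management",
    "system_development",
    "verification_and_validation",
    "deployment",
)

@dataclass
class AISystemRecord:
    """Tracks which lifecycle stages have documentation attached."""
    name: str
    docs: dict = field(default_factory=dict)  # stage -> document reference

    def attach(self, stage: str, ref: str) -> None:
        # Reject stages outside the agreed lifecycle vocabulary.
        if stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.docs[stage] = ref

    def missing_docs(self) -> list:
        """Stages still lacking documentation, for audit gap analysis."""
        return [s for s in LIFECYCLE_STAGES if s not in self.docs]

record = AISystemRecord("resume-screener")
record.attach("data_management", "dm-plan-v2.pdf")
print(record.missing_docs())
```

A register like this makes the "maintain documentation throughout the lifecycle" obligation testable: an empty `missing_docs()` list is a concrete, auditable completion criterion.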
Under the NIST AI Risk Management Framework:
GOVERN function: establish policies, processes, structures, and accountability for AI risk management across the organization, including senior leadership oversight and a risk-based culture.
MAP function: identify the context, intended uses, stakeholders, and risks of each AI system, including categorization of impacts on individuals, communities, and the organization.
MEASURE function: assess, analyze, and monitor AI risks using both quantitative and qualitative methods, including bias evaluation, robustness testing, and explainability assessments.
MANAGE function: prioritize and treat identified risks, allocate resources, and implement risk response strategies including mitigation, transfer, acceptance, or avoidance.
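The functions above can be read as a cycle over a per-system risk register. Here is a minimal sketch of that flow; the function names, 0-10 severity scale, and threshold-based response rule are illustrative assumptions, not NIST AI RMF text.

```python
# Hypothetical sketch of MAP -> MEASURE -> MANAGE over a risk register.
# NIST-style responses include mitigation, transfer, acceptance, avoidance;
# this toy rule only chooses between mitigate and accept.

def map_risks(context):
    """MAP: enumerate risks from intended use and stakeholder context."""
    return [{"id": r, "severity": None, "response": None}
            for r in context["risks"]]

def measure(risks, scores):
    """MEASURE: attach quantitative severity scores (0-10 scale assumed)."""
    for r in risks:
        r["severity"] = scores[r["id"]]
    return risks

def manage(risks, threshold=7):
    """MANAGE: prioritize by severity; high-severity risks get mitigation."""
    for r in sorted(risks, key=lambda r: r["severity"], reverse=True):
        r["response"] = "mitigate" if r["severity"] >= threshold else "accept"
    return risks

context = {"risks": ["bias_in_screening", "data_drift"]}
register = manage(measure(map_risks(context),
                          {"bias_in_screening": 9, "data_drift": 4}))
print(register)
```

GOVERN does not appear as a step because it is cross-cutting: in practice it would set the severity scale, the response threshold, and who signs off on each "accept" decision.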