04/09/25

ISO/IEC 42005:2025 – A New Blueprint for Legal and Commercial Leaders Navigating AI Risk and Governance

Introduction: The Evolving Landscape of AI Governance

There is no doubt that artificial intelligence (AI) is rapidly transforming the commercial landscape, offering unprecedented opportunities for efficiency, innovation, and growth. However, as AI systems become more deeply embedded in business operations, the risks associated with their deployment and use, ranging from legal and regulatory exposure to reputational harm, are coming under increasing scrutiny.

In response to these challenges, the International Organization for Standardization (ISO) has published ISO/IEC 42005 AI System Impact Assessment (“ISO 42005”), a new standard providing comprehensive guidance on AI system impact assessments. Released in May 2025, the standard supports organisations in identifying and managing the potential benefits and harms of the use and deployment of AI systems across their lifecycle.

ISO 42005 is a process standard, meaning it does not require certification but provides a common framework for ensuring consistent and responsible AI across different domains and applications.

This article explores the standard’s key requirements, its practical implications for in-house legal and commercial teams, and how it can be leveraged to support robust, responsible AI governance.

For background on the different types of ISO standards and their role in AI governance, please see our previous Law-Now articles:

  • Managing AI: What businesses should know about the proposed ISO standard (ISO 42001)
  • Governance of AI by organisations: Do’s and don’ts explained in new ISO standard (ISO 38507)
  • Proposed ISO standard on risk management of AI: What businesses should know (ISO 23894)
  • New international standard for AI application, development and use (ISO 5339)

Overview of ISO 42005 – AI System Impact Assessment

ISO 42005 is designed to help organisations systematically assess the impacts of AI systems on individuals, groups, and society at large. The standard sets out a structured process for identifying, documenting, and managing the potential benefits and harms associated with AI, throughout the system’s lifecycle. It is intended for any organisation developing, providing, or using AI systems, regardless of size or sector.

How Does ISO 42005 Fit With Other ISO Standards?

Importantly, ISO 42005 sits within a broader ecosystem of AI governance standards. Together, these standards provide a holistic framework for ensuring that AI systems are trustworthy, transparent, and aligned with both legal obligations and societal expectations.

ISO 42005 complements these standards by focusing specifically on the AI system impact assessment process, which is increasingly required under emerging regulations such as the EU AI Act.

To help clarify how ISO 42005 fits within the broader AI standards landscape, the table below outlines some of the key existing ISO standards specifically relating to AI and their respective focus areas:

Standard | Focus Area
ISO 42001 | AI Management Systems
ISO 23894 | AI Risk Management
ISO 38507 | AI Governance
ISO 5339 | Guidance for AI Applications
ISO 42005 | AI System Impact Assessment

Why This Standard Matters for In-House Legal and Commercial Teams

For in-house lawyers and senior commercial professionals, ISO 42005 is more than a technical guideline; it is a practical tool for managing legal, regulatory, and reputational risk. The standard supports compliance with emerging AI regulations, helps demonstrate due diligence to regulators and business partners, and can provide a defensible framework for addressing stakeholder concerns.

Key drivers for adopting the standard include:

  • Legal and regulatory compliance: anticipating and meeting requirements under data protection, human rights, and AI regulations.
  • Contractual obligations: satisfying client, partner, and supply chain expectations for responsible AI use.
  • Reputational risk: building and maintaining trust with customers, employees, and the wider public.
  • Operational resilience: identifying and mitigating risks that could disrupt business operations or lead to costly disputes.
  • Accountability and transparency: demonstrating these two principles, which regulators increasingly demand.

Key Requirements and Processes Under ISO 42005

The standard sets out a series of interlocking requirements and processes, designed to ensure that AI system impact assessments are robust, repeatable, and integrated into broader organisational governance. The main elements include:


1. Structured assessment process

Organisations must establish a consistent approach for performing and documenting AI system impact assessments, tailored to their context, objectives, and risk appetite. This includes consideration of internal factors (e.g. governance, policies, contractual obligations, intended use, and risk appetite) and external factors (e.g. legal requirements, cultural norms, and competitive pressures). The assessment must reflect the specific environment in which the AI system operates.

2. Integration with risk management

The impact assessment process should be embedded within existing risk, compliance, and management systems, ensuring alignment with broader organisational controls. This includes integration with privacy, human rights, and environmental impact assessments, creating a holistic governance framework. Organisations should document how these processes interconnect.

3. Timing and triggers

Assessments should be conducted at key stages of the AI system lifecycle: design, deployment, and whenever significant changes occur (e.g. changes in use, data, or legal context). Reassessment is required if there are changes in the system’s intended use, users, customer expectations, operational environment, or applicable laws and policies.

4. Defining scope and responsibilities

A clear definition of the assessment’s scope is essential. This includes specifying which systems, components, and stakeholders are covered, and allocating roles and responsibilities across the organisation. Organisations must also define their role in the AI ecosystem, whether as a data provider, model developer, or service provider, and document this within the assessment.

5. Thresholds for sensitive or restricted uses

Organisations should establish criteria for identifying high-risk, sensitive, or restricted uses of AI, along with escalation and approval processes for such cases. Thresholds should be based on legal requirements, stakeholder expectations, ethical frameworks, and the state of the art. Where uses are deemed sensitive or prohibited, next steps must be documented, including escalation or additional review.


6. Documentation and reporting

Comprehensive documentation of the assessment process, findings, and mitigation measures is required, with clear reporting lines to management and, where appropriate, external stakeholders. Documentation should include procedural guidance, templates, training materials, and artefacts from completed assessments. It must be maintained throughout the AI system’s lifecycle.


7. Approval, monitoring, and review

Organisations must implement formal approval processes, ongoing monitoring, and periodic review to ensure assessments remain current and effective. They should define how assessments are recorded and reported, both internally and externally, while protecting sensitive information. Approval procedures should specify who signs off and when external validation is required.

8. Triaging and determining assessment necessity

ISO 42005 also encourages organisations to use triaging tools to determine when a full impact assessment is necessary, particularly focusing on high-risk AI systems. Organisations must clearly document what constitutes a high-risk system and what triggers an assessment.

What Does a Good AI System Impact Assessment Look Like?

A high-quality AI system impact assessment, as envisaged by ISO 42005, is both thorough and practical. Key features include:

  • Comprehensive documentation: detailed records covering the AI system’s description, functionalities, intended and unintended uses, data sources, algorithms, deployment environment, and relevant stakeholders.
  • Identification of impacts: systematic analysis of both positive and negative impacts, including accountability, transparency, fairness, privacy, reliability, safety, explainability, and environmental effects.
  • Stakeholder engagement: consultation with directly and indirectly affected parties, including users, vulnerable groups, and internal teams, to ensure diverse perspectives are considered.
  • Multidisciplinary input: involvement of legal, technical, ethical, and business experts to capture the full range of potential impacts and mitigation strategies.
  • Transparency and accountability: clear communication of assessment outcomes, including limitations, uncertainties, and measures taken to address identified risks.
  • Ongoing review: regular updates to the assessment in response to changes in the AI system, its use, or the external environment.


The assessment should also evaluate the magnitude and likelihood of each impact, and document how multiple objectives may interact. Input from diverse stakeholders is encouraged to ensure a comprehensive perspective.

Aligning AI Impact Assessments with Existing Organisational Processes

One of the strengths of ISO 42005 is its emphasis on integration and efficiency. Rather than duplicating existing efforts, the standard encourages organisations to align AI system impact assessments with other impact assessments, such as:

  • Privacy Impact Assessments
  • Human Rights Impact Assessments
  • Environmental Impact Assessments
  • Security Impact Assessments
  • Business and Financial Impact Assessments


Practical steps for alignment include:

  • Mapping existing assessments: identify where existing assessments cover similar ground (e.g. data quality, stakeholder engagement) and leverage these inputs to inform the AI system impact assessment.
  • Coordinated reviews: where possible, conduct simultaneous or joint reviews to streamline processes and ensure consistency.
  • Centralised documentation: maintain a central repository for impact assessments, enabling cross-referencing and holistic risk management.
  • Appointing a coordination lead: designate a responsible individual or team to oversee the integration of AI impact assessments with other governance activities.

Common Challenges and Practical Tips for Implementation

Implementing ISO 42005 will present some challenges. We expect these to include:

  • Data quality and documentation: ensuring that data used in AI systems is accurate, representative, and well-documented, with clear provenance and quality controls.
  • Cross-functional responsibilities: managing input and accountability across legal, technical, and business teams, and ensuring clear lines of responsibility.
  • Sensitive or high-risk use cases: identifying and appropriately escalating cases where AI systems may have significant adverse impacts, such as discrimination or privacy breaches.
  • Stakeholder engagement: engaging meaningfully with affected parties, particularly where impacts are complex or contested.
  • Keeping assessments current: maintaining up-to-date assessments in the face of rapid technological and regulatory change.

Practical tips for overcoming these challenges include:

  • Start with a pilot: select a representative AI system and conduct a full impact assessment to identify gaps and refine processes.
  • Leverage templates and checklists: use the standard’s example templates to ensure consistency and completeness.
  • Invest in training: build awareness and capability across the organisation, particularly among those responsible for conducting or approving assessments.
  • Establish review cycles: set regular intervals for reviewing and updating assessments, and link these to key business or regulatory milestones.

The Annexes of ISO 42005

Some of the most useful and interesting content is included in the Annexes to ISO 42005.

Annex A: Guidance for use with ISO/IEC 42001

Annex A provides practical guidance on how ISO 42005 aligns with ISO/IEC 42001, the management system standard for AI. It maps the requirements and recommendations of ISO/IEC 42001 to the processes and documentation described in ISO 42005, helping organisations understand how to integrate AI system impact assessments into their broader AI management systems. This annex is particularly useful for organisations already implementing ISO/IEC 42001, as it clarifies how to ensure compliance with both standards and avoid duplication of effort. By following the guidance in Annex A, organisations can streamline their approach to AI governance, ensuring that impact assessments are embedded within their existing management processes.


Annex B: Guidance for use with ISO/IEC 23894

Annex B explains the relationship between ISO 42005 and ISO/IEC 23894, which provides guidance on risk management for AI systems. It distinguishes between general organisational risk management and the more focused AI system impact assessment. Annex B illustrates how impact assessments feed into the overall risk management lifecycle, ensuring that the specific risks and impacts associated with AI systems are properly considered and managed.


Annex C: Harms and benefits taxonomy

Annex C offers a structured taxonomy for analysing the potential harms and benefits of AI systems. It provides a template that organisations can use to systematically identify and evaluate the impacts of their AI systems across a range of objectives, such as accountability, transparency, fairness, reliability, security, privacy, and environmental impact. By using this taxonomy, organisations can ensure that their impact assessments are comprehensive and consistent, covering all relevant dimensions of potential benefit and harm. This annex is particularly valuable for organisations seeking to demonstrate due diligence and thoroughness in their AI system assessments, supporting both internal decision-making and external reporting.

Annex D: Aligning AI system impact assessment with other assessments

Annex D addresses the practical challenge of coordinating AI system impact assessments with other existing organisational assessments, such as privacy, security, human rights, environmental, financial, and business impact assessments. It provides a coordination guide, an alignment guide, and a mapping guide to help organisations identify overlaps, avoid duplication, and ensure that all relevant impacts are considered efficiently.

Annex E: Example of an AI system impact assessment template

Annex E supplies a detailed example template for documenting an AI system impact assessment. The template covers all the key elements required by ISO 42005, including system information, intended and unintended uses, data quality, algorithm and model details, deployment environment, relevant interested parties, and the analysis of benefits and harms. Organisations can customise this template to suit their specific needs, ensuring that their assessments are thorough, well-documented, and aligned with the standard.


Annexes A to E provide a comprehensive suite of tools, guidance, and practical resources to help organisations implement ISO 42005.

Strategic Considerations for Senior Legal and Commercial Leaders

For senior leaders, ISO 42005 is not merely a compliance exercise. It can be a strategic enabler. Robust AI system impact assessments can:

  • Support innovation: by identifying and addressing risks early, organisations can deploy AI systems with greater confidence and agility.
  • Enhance competitive differentiation: demonstrating adherence to international best practice can be a powerful differentiator in client pitches, tenders, and regulatory engagements.
  • Build stakeholder confidence: transparent, well-documented assessments foster trust among customers, partners, and regulators.
  • Future-proof the organisation: proactive adoption of standards positions organisations to respond effectively to evolving legal and regulatory requirements.

Conclusion

ISO 42005 represents a significant step forward in the governance of AI systems, providing a practical, internationally recognised framework for assessing and managing their impacts. For in-house legal and commercial professionals, the standard offers a valuable tool for navigating the complex risks and opportunities presented by AI.

Organisations are encouraged to review their current AI governance practices, benchmark against the new standard, and engage with legal and technical experts to assess readiness and plan for implementation.


Author: Dr Sam De Silva, Partner (CMS London)
Special thanks to Ana-Maria Curavale, Trainee Solicitor at CMS, for her help in writing this article.