<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=229461991482875&amp;ev=PageView&amp;noscript=1">
What your business needs to know about the EU Artificial Intelligence Act

The EU Artificial Intelligence Act will soon come into force to regulate the use of AI systems within the EU. If your business is already using AI systems or is planning to do so, it's time to understand how you might be subject to the Act, and how to get ready for it.

This article provides a guide to everything you need to know, including:

  • A summary of the EU Artificial Intelligence Act
  • How AI systems are classified by risk
  • How to achieve compliance with the Act
  • The consequences of non-compliance
  • Compliance responsibilities of providers and users
  • Contracting for compliance throughout an AI system's lifecycle.

The EU Artificial Intelligence Act Summary

The EU AI Act is a European Union regulation designed to ensure the safe and ethical development and use of artificial intelligence. Key areas of interest in the Act include:

Foundational Concepts (Articles 1-4)

  • Article 1 (Subject Matter): sets the overall goal of the Act - to ensure AI systems placed on the EU market are safe and respect fundamental rights
  • Article 2 (Scope): defines who the Act applies to, including providers placing AI systems on the EU market and businesses deploying them, wherever they are established
  • Article 3 (Definitions): provides clear definitions for key terms like 'AI system', 'provider' and 'deployer', which are crucial for interpreting the other articles
  • Article 4 (AI Literacy): requires providers and deployers to ensure that staff dealing with AI systems have a sufficient level of AI literacy.

Risk-Based Requirements (Articles 5-27 and 50)

  • Article 5 (Prohibited AI Practices): bans AI practices that pose unacceptable risk, such as social scoring and certain forms of real-time biometric identification
  • Articles 6-7 (Classification): establish the rules for classifying AI systems as high-risk, by reference to the use cases listed in Annex III. This classification drives the requirements in subsequent articles
  • Articles 8-15 (Requirements for High-Risk AI Systems): outline specific obligations for high-risk AI systems, including risk management (Article 9), data governance (Article 10), record-keeping (Article 12), transparency (Article 13) and human oversight (Article 14)
  • Articles 16-27 (Obligations of Operators): allocate responsibilities among providers, importers, distributors and deployers of high-risk AI systems
  • Article 50 (Transparency Obligations for Certain AI Systems): imposes lighter-touch disclosure obligations on limited-risk systems such as chatbots and generators of synthetic content.

Enforcement and Governance (Articles 64-101)

  • Articles 64-70: establish the governance framework, including the European AI Office, the European Artificial Intelligence Board and the national competent authorities that oversee the Act's implementation within member states
  • Articles 74-94: define market surveillance, investigation and enforcement powers, including cooperation between member states. These powers rely on the risk classification of AI systems
  • Articles 99-101: set out the penalties for non-compliance, which are described later in this article.

Key Dependencies

  • Classification under Articles 6-7 and Annex III is the foundation for applying subsequent articles and determining the specific obligations for each AI system
  • High-risk AI systems face the most stringent requirements, with Articles 8-15 building upon the foundational concepts
  • Enforcement under Articles 74-101 relies on the risk classification to determine the level of oversight and the sanctions available.

 

AI system risk classification


The AI Act classifies AI systems based on their potential risk, with progressively lighter requirements as the risk level falls. It also prohibits outright a small set of practices deemed to pose unacceptable risk, such as social scoring. Examples from each of the remaining classifications, with a simple sketch following the list, include:

  • High-risk systems: credit scoring systems, facial recognition systems, recruitment tools, safety components of critical infrastructure
  • Limited-risk systems: chatbots for customer service, image enhancement tools, music or video recommendations
  • Minimal-risk systems: basic fitness trackers, games, spam filters, weather forecasting models.
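
As a rough illustration, this classification exercise can be captured in code as a lookup from each system in your inventory to its assessed tier. The sketch below is minimal Python; the system names and tier assignments are hypothetical examples, and a real classification must follow Annex III and legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's risk tiers, plus the prohibited category."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory: each AI system mapped to the tier assigned
# after an Annex III-based assessment.
AI_SYSTEM_TIERS = {
    "credit-scoring-model": RiskTier.HIGH,
    "recruitment-screening-tool": RiskTier.HIGH,
    "customer-service-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

OBLIGATION_NOTES = {
    RiskTier.UNACCEPTABLE: "must not be placed on the EU market",
    RiskTier.HIGH: "full requirements: risk management, data governance, human oversight",
    RiskTier.LIMITED: "transparency obligations: disclose AI use to users",
    RiskTier.MINIMAL: "no specific obligations beyond existing law",
}

def obligations_for(system_name: str) -> str:
    """Return a one-line reminder of the obligations attached to a system's tier."""
    tier = AI_SYSTEM_TIERS[system_name]
    return f"{system_name}: {tier.value} risk - {OBLIGATION_NOTES[tier]}"

print(obligations_for("customer-service-chatbot"))
```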

Achieving Compliance with the AI Act

Achieving compliance with the Act, whether as a provider and/or a user of AI systems, will require a multi-pronged approach involving various stakeholders within your business. The individual tasks, and the likely primary and secondary participants in each, are:

Stakeholder Engagement and Awareness of the AI Act

  • Identify key stakeholders: this includes legal and procurement teams, IT and data security personnel, risk management teams, human oversight teams, HR, marketing and product development, and potentially change management and user training teams (depending on the AI system's use case)
  • Raise awareness: educate these stakeholders on the relevant sections of the AI Act based on their roles and responsibilities. Workshops, training sessions or informative materials can be helpful tools
  • Primaries: Legal Team leads training sessions on relevant AI Act provisions and compliance requirements; Human Resources/Training Department assists in developing and delivering awareness training materials
  • Secondaries: IT and Data Security Teams can contribute to training on data security protocols; Risk Management Team explains risk classification

Risk Assessment and Classification

  • Identify all AI systems: create a comprehensive inventory of all AI systems used by your business for any operations conducted within the EU (one possible record format is sketched after this list)
  • Classify risk levels: assess each AI system based on the criteria outlined in the Act (Annex III) to determine its risk category as high, limited or minimal. Consider factors like the system's purpose, potential impact on individuals and fundamental rights
  • Primaries: Risk Management Team leads the assessment; IT and Data Security Teams provide technical expertise on data usage and security
  • Secondaries: Legal Team provides legal interpretation of the Act and potential risks; the Business Unit using the AI system provides context on the system's purpose and potential impact
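
For teams that keep this inventory in a spreadsheet or internal tool, the sketch below shows what one inventory record might look like. The field names are illustrative assumptions, not a format mandated by the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI-system inventory (illustrative fields only)."""
    name: str
    business_unit: str          # who uses the system and can explain its context
    purpose: str                # the system's intended purpose
    provider: str               # external vendor, or "internal" if built in-house
    risk_tier: str              # "high", "limited" or "minimal" after assessment
    assessment_date: date       # when the risk assessment was last performed
    impacts_individuals: bool   # potential impact on individuals or fundamental rights
    notes: list[str] = field(default_factory=list)

record = AISystemRecord(
    name="recruitment-screening-tool",
    business_unit="HR",
    purpose="shortlisting job applicants",
    provider="Acme HR Tech",    # hypothetical vendor
    risk_tier="high",           # recruitment tools appear in Annex III
    assessment_date=date(2024, 6, 1),
    impacts_individuals=True,
)
print(record.name, record.risk_tier)
```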

Contract Review and Adaptation (if applicable)

  • Review existing contracts: if you procure AI systems from external providers, review existing contracts to ensure they address the Act's requirements (e.g. data governance, human oversight)
  • Negotiate contract amendments: work with providers to amend contracts if necessary, reflecting obligations under the Act (e.g. warranties regarding the AI system's compliance)
  • Primaries: Legal Team leads contract review and negotiation; Procurement team provides insights into existing contracts and contracting practices
  • Secondaries: IT Team offers technical insights on the procured AI system

Data Governance and Transparency

  • Data minimisation: implement practices to minimise the amount of data collected and used by the AI system, adhering to GDPR principles (a simple sketch follows this list)
  • Data security measures: establish robust data security protocols to protect personal data used by the AI system, aligned with GDPR requirements
  • Transparency measures: develop mechanisms for informing users about the AI system's purpose, functionalities, limitations, and potential biases (especially for high-risk systems such as facial recognition)
  • Primaries: Data Security Team implements data security measures in accordance with GDPR and AI Act requirements; IT Team implements data minimisation and transparency mechanisms
  • Secondaries: Legal Team ensures data practices comply with regulations; Business Units using the AI system provide context on data collection and use
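
To make the data-minimisation point concrete, the sketch below passes on only the fields an AI system needs for its stated purpose and drops everything else before processing. The field names and allow-list are hypothetical:

```python
# Hypothetical allow-list: the only fields this AI system needs for its purpose.
ALLOWED_FIELDS = {"applicant_id", "skills", "years_experience"}

def minimise(record: dict) -> dict:
    """Return a copy of the record containing only the allowed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "applicant_id": "A-1001",
    "skills": ["python", "contract drafting"],
    "years_experience": 7,
    "date_of_birth": "1987-03-14",  # not needed for the purpose: dropped
    "home_address": "redacted",     # not needed for the purpose: dropped
}
print(minimise(raw))
# {'applicant_id': 'A-1001', 'skills': [...], 'years_experience': 7}
```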

Human Oversight Procedures

  • Define intervention triggers: establish clear criteria for when human intervention is necessary in the AI system's decision-making process (see the sketch after this list)
  • Develop escalation procedures: create a clear structure for escalating complex situations requiring human oversight to higher authorities within your business
  • Train human oversight personnel: train personnel designated for human oversight on the AI system's functionalities, limitations, and intervention procedures
  • Primaries: Human Oversight Team leads development and implementation of oversight procedures; Risk Management team provides support in identifying potential risks requiring human intervention
  • Secondaries: Legal Team ensures procedures comply with the Act; IT Team assists with the technical aspects of integrating oversight mechanisms with the AI system
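
One common pattern for an intervention trigger is a confidence threshold: decisions the model is unsure about are routed to a human reviewer rather than automated. The threshold and scoring inputs below are assumptions for illustration; real triggers should reflect your own risk assessment:

```python
CONFIDENCE_THRESHOLD = 0.85  # hypothetical: below this, a human must review

def route_decision(case_id: str, score: float, confidence: float) -> str:
    """Route an AI decision either to automation or to human review."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Escalate: log the case and queue it for the human oversight team.
        print(f"Escalating {case_id} (confidence={confidence:.2f}) to human review")
        return "human_review"
    return "accept" if score >= 0.5 else "reject"

print(route_decision("A-1001", score=0.72, confidence=0.64))  # -> human_review
```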

Compliance Monitoring and Recordkeeping

  • Monitoring plan: develop a plan for ongoing monitoring of your AI systems to ensure they continue to function as intended, comply with the Act's requirements (e.g. data governance, performance, potential risks), and are secure from cyber attacks
  • Recordkeeping practices: maintain detailed records of your AI systems, risk assessments, compliance measures, and any incidents related to the AI system's use (see the logging sketch after this list)
  • Primaries: Risk Management Team establishes the overall compliance monitoring and recordkeeping process; IT and Data Security Team monitors the AI system’s performance and data security aspects
  • Secondaries: Legal Team guides legal interpretations of compliance requirements and recordkeeping practices; all stakeholders involved in AI system use report any compliance concerns
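
Record-keeping is easiest when every notable AI event leaves a structured, timestamped trail. Below is a minimal sketch using Python's standard logging module; the event fields are assumptions, not a prescribed format:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def log_ai_event(system: str, event_type: str, detail: dict) -> None:
    """Append one structured audit record per notable AI event."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event_type": event_type,  # e.g. "decision", "incident", "human_override"
        "detail": detail,
    }))

log_ai_event("recruitment-screening-tool", "human_override",
             {"case": "A-1001", "reason": "low model confidence"})
```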

Continuous Improvement

  • Regular review: periodically review your compliance measures and adapt them as needed based on changes in the AI system, the Act's interpretations, or technological advancements
  • Stay informed: keep your stakeholders updated on any changes or developments related to the EU AI Act and its implications for your business
  • Primaries: Leadership drives the culture of continuous improvement; Risk Management Team leads the ongoing review of risk assessments and compliance measures
  • Secondaries: Legal team advises on the legal implications of improvements; All Stakeholders involved in using the AI system provide feedback on existing procedures and potential improvements

The Consequences of Non-compliance With The AI Act

Financial Penalties

  • Prohibited AI practices: fines can reach up to €35 million or 7% of a business's global annual turnover (whichever is higher) for violations of the Act's prohibitions on unacceptable-risk AI
  • Other obligations: fines can reach up to €15 million or 3% of a business's global annual turnover for non-compliance with most other obligations, including the requirements for high-risk AI systems
  • Incorrect information: fines can reach up to €7.5 million or 1% of a business's global annual turnover for providing incorrect or misleading information to authorities (a small calculation sketch follows this list)
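
Because each cap is expressed as "whichever is higher", larger businesses should compute both figures. A one-line illustration, assuming a hypothetical turnover:

```python
# Illustration only: the cap for a prohibited-practice violation.
turnover = 2_000_000_000  # assumed global annual turnover in euros
fine_cap = max(35_000_000, 0.07 * turnover)
print(f"Maximum fine: EUR {fine_cap:,.0f}")  # EUR 140,000,000 - 7% exceeds EUR 35m
```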

Reputational Damage: Public disclosure of non-compliance can damage a business's reputation and erode consumer trust.

Contractual Issues: Failure to comply with the Act might lead to contractual disputes with partners or providers of AI systems, impacting business relationships and potentially incurring additional costs.

Loss of Market Access: Non-compliant AI systems could be banned from the EU market, hindering a business's ability to operate within the region.

Operational Disruptions: The need to rectify non-compliant practices could lead to operational disruptions and delays, impacting efficiency and productivity.

Potential Legal Action: In severe cases, legal action from authorities or affected individuals could be pursued.

Compliance Responsibilities with the AI Act

The responsibility for compliance with the AI Act falls on the system provider and your business as the system user (the 'deployer' in the Act's terms), but to varying degrees. Here's a breakdown:

Provider Responsibilities

High-risk AI: the provider takes on a significant compliance burden. They are responsible for:

  • Conducting a risk assessment and classifying the AI system accurately
  • Designing and developing the AI system according to the Act's requirements (e.g., transparency, bias mitigation, human oversight)
  • Providing clear documentation, including information on the AI's training data, algorithms and limitations
  • Undergoing a conformity assessment by a notified body (for high-risk AI) to verify compliance

Limited-risk and Minimal-risk AI: the provider's responsibilities are less stringent, but still involve ensuring the AI system is developed and documented in a responsible manner.

User Responsibilities

  • Understanding the system: users are obligated to understand the AI system they are procuring, including its functionalities, limitations and potential risks as outlined by the provider
  • Deployment in compliance: users must deploy the AI system in accordance with the provider's instructions and the AI Act's requirements. This might involve using the AI system only for its intended purpose and ensuring relevant data quality
  • Monitoring and reporting: users have a responsibility to monitor the AI system for potential issues like biases, unexpected outcomes or security vulnerabilities. They may also be required to report incidents to authorities
  • Internal compliance: users should implement internal procedures to ensure the business is compliant with the Act's broader principles (e.g., data governance, human oversight) when using the AI system

Shared Responsibility

Transparency and user information: providers and users share responsibility for ensuring clear and accessible information is available to end-users about the AI system's capabilities and limitations.

Uncertain Responsibility

Note that uncertainty about responsibilities can arise. Consider the situation where a software house develops and sells a vendor and contract lifecycle management (VCLM) system which provides access to a third-party AI system for the purpose of extracting key data from contracts and creating summaries of them.

Should the software house be designated as a provider or a user of the AI system under the AI Act?

The software house would likely only be considered a provider if it modifies, trains or controls the functionalities of the AI system, or allows users of its VCLM software to customise the AI system for other tasks.

Resolving such uncertainties requires consideration of all the relevant facts by a legal professional specialising in AI and the AI Act.

While some contractual negotiations may influence the balance of responsibilities, both providers and users have a role to play in ensuring procured AI systems comply with the AI Act. Collaboration and communication are key to achieving responsible AI adoption.

Contracting for Compliance with the AI Act

The following summary covers contract clauses that can help your business ensure an AI system complies with the EU's AI Act throughout its lifecycle: development, deployment and maintenance.

Development Phase

If your business is having a bespoke AI system developed by a third party with the expectation that it will be used within the EU, the contract covering the work should address the following areas:

  • Data management practices: include clauses outlining the provider's data management practices during development. This should cover data collection procedures, data quality standards, and robust data security measures
  • Documentation and transparency: mandate the provider to maintain thorough documentation throughout development, outlining the AI's design, training data, and algorithms used. This documentation should be accessible for potential audits
  • Human oversight integration: specify how human oversight will be integrated into the AI's development process. This could involve requirements for human review of design choices, training data selection, and algorithm testing
  • Risk assessment and mitigation: require the provider to conduct a comprehensive risk assessment of the AI system during development. The contract should specify the methodology for the assessment and outline a plan for mitigating identified risks

Deployment Phase

Whether the procured AI system was purchased off the shelf or specifically developed for your business, your contract should contain the following clauses regarding its deployment:

  • Compliance with training data specifications: confirm that the deployed AI uses the same training data and parameters as those used during risk assessment and development
  • Data ownership: clarify ownership of the data used by the AI system and its source (e.g., provided by your business, sourced from third-party vendors)
  • Incident reporting and monitoring: outline procedures for reporting incidents related to the AI system's operation (e.g., biases, errors, unexpected outcomes). The contract can also specify monitoring requirements to identify potential issues proactively
  • Transparency and user information: provide clear and understandable information for users about the AI system's capabilities, limitations and potential risks

Maintenance Phase

During its operational life, the contract should require the AI system provider to deliver the following capabilities:

  • Continuous bias monitoring: require the provider to monitor the AI system for potential bias creep over time and implement corrective actions if necessary (a monitoring sketch follows this list)
  • Data deletion procedures: outline clear data deletion procedures for the data used by the AI system once it reaches the end of its operational life
  • Ongoing risk management: maintain a risk management program for the system throughout its lifecycle. This program should include regular risk assessments and updates to mitigation strategies as needed
  • Software updates and security patches: specify the provider's responsibility to provide timely software updates and security patches to address vulnerabilities or emerging risks
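
Continuous bias monitoring can be as simple as recomputing a fairness metric on each batch of decisions and alerting when it drifts past an agreed bound. The sketch below uses the demographic parity gap as the metric; both the metric choice and the 0.10 bound are illustrative assumptions a contract might pin down, not values taken from the Act:

```python
def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rates across groups.

    outcomes maps a group name to a list of 0/1 decisions for that group.
    """
    rates = [sum(decisions) / len(decisions) for decisions in outcomes.values()]
    return max(rates) - min(rates)

BIAS_BOUND = 0.10  # hypothetical contractual threshold

batch = {
    "group_a": [1, 0, 1, 1, 0, 1],  # positive-outcome rate ~0.67
    "group_b": [0, 0, 1, 0, 0, 1],  # positive-outcome rate ~0.33
}
gap = demographic_parity_gap(batch)
if gap > BIAS_BOUND:
    print(f"ALERT: parity gap {gap:.2f} exceeds bound - trigger corrective action")
```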

Additional Considerations

Other worthwhile clauses that might be included in your contract cover:

  • Access rights for audits: grant your Legal Team and relevant authorities access to the AI system and its documentation for potential audits to ensure ongoing compliance
  • Adaptability to new guidance: allow the contract to be updated to reflect any new guidance or clarifications issued by the European Commission regarding the AI Act's interpretation and implementation
  • Dispute resolution procedures: incorporate clauses outlining the process for resolving any potential disputes arising from non-compliance with the Act or the contract terms. This could involve mediation or arbitration procedures
  • Guarantees: include clauses demonstrating the provider’s commitments to handling the data used in developing and operating its AI system, identifying and mitigating potential biases in the system, and specifying how human oversight is incorporated into the system’s development and use
  • Incident reporting and corrective actions: outline procedures for reporting incidents related to the AI system's operation (e.g., biased outputs, security breaches). Specify clear timelines for corrective actions
  • Liability: specify clear terms regarding the provider’s liability in case of non-compliance with the AI Act by, or malfunctions of, its AI system
  • Overlap with other EU regulations: consider any overlap between the AI Act and existing EU regulations such as GDPR and product safety laws. Contracts need to ensure data collection, processing and user rights comply with every applicable regime, and may need to address safety risks alongside the Act's requirements. User information about the AI system (capabilities, limitations) should align with both the AI Act's transparency requirements and GDPR's right to information, and the contract should outline how to resolve conflicts arising from these overlapping compliance requirements
  • Regular reviews and updates: include provisions for regular contract reviews to assess the AI system's ongoing compliance with the Act, especially as the technology evolves and the regulatory landscape might change
  • Termination options: allow for contract termination if the AI system is found to be non-compliant with the AI Act's requirements, or poses unacceptable risks
  • Warranties: consider including warranties from the provider stating their AI system complies with the Act’s relevant risk category

Finally, the contract should demonstrate a clear link between its clauses and the specific articles of the AI Act. Referencing specific articles avoids ambiguity and ensures both parties understand the legal basis for each contractual obligation. It also establishes a framework within the contract for ensuring compliance with the AI Act.

As the AI Act evolves, referencing articles by number allows the contract to be adapted to any future amendments without rewriting the entire document.

By incorporating these extra elements into contracts, your Legal Team can help ensure the AI system adheres to the EU's AI Act throughout its lifecycle, from development (where relevant) through deployment and maintenance over its operational life.

Wrap-up

The EU AI Act represents a significant step towards ensuring responsible AI system development and use within the EU, presenting challenges and opportunities for businesses using AI systems.

Interpreting the Act and effectively incorporating its articles into meaningful contracts can be time-consuming and complicated. The information presented in this article is for general awareness and shouldn't be taken as legal advice.

It's highly recommended that your business's Legal and Contracting teams consult with legal professionals specialising in EU AI regulations, particularly where high-risk AI systems or complex contracts are involved.

Remember, the AI Act is still new, so staying updated on any official guidance or clarifications from the European Commission is essential for crafting comprehensive and effective contracts.

To learn how Gatekeeper uses AI systems to take some of the guesswork out of managing your vendors and your contracts with them, don't hesitate to get in touch with us.

Rod Linsley

Rod is a seasoned Contracts Management and Procurement professional with a senior IT Management background, specialising in ICT contracts

