The European Union member states reached a unanimous decision in favor of the AI Act, setting the stage for its formal enactment in March or April of this year. In essence, the Act is Europe's General Data Protection Regulation (GDPR) of 2016 tailored to artificial intelligence: it imposes stringent requirements on organizations that develop or use AI within the European Union, backed by substantial penalties. Breaches can draw fines starting at €15 million or 3% of annual global turnover and rising to €35 million or 7% of annual global turnover for violations involving prohibited AI systems, such as those that employ manipulative techniques or use biometric data to infer private information.
For any enterprise operating within the EU, ensuring compliance is paramount.
Most organizations seeking compliance should begin with a gap analysis to identify what must be added to existing governance structures, policies, procedures, risk classifications, metrics, and so on, both to achieve compliance and to respond quickly and accurately to regulatory inquiries. That first step is relatively straightforward. The real challenge is implementing what is needed to close those gaps in a way that also serves internal goals. In this article, we lay out how to do exactly that.
Before proceeding, it's important to note two key points.
Firstly, the guidance provided here is necessarily general. Despite claims to the contrary, including from consulting firms promoting their proprietary frameworks, a significant degree of customization is unavoidable. Organizations differ in their infrastructures, cultures, and ways of operating. Our clients, for example, have different strategic priorities, different rates of innovation adoption, and different appetites for reputational and regulatory risk.
Secondly, just as the GDPR or California’s Consumer Privacy Act (CCPA) do not comprehensively tackle ethical and reputational risks associated with data privacy, an EU AI Act compliance program alone does not cover the full spectrum of ethical and reputational risks posed by AI.
Organizations that have already established AI ethical risk or responsible AI programs — a step many have taken recently — are at an advantage here. They likely already address a broader range of risks than those introduced by the Act and may only need to make minor adjustments to align with its requirements. However, organizations that have yet to develop, implement, and scale such programs must decide whether their goal is mere compliance with the Act or safeguarding their brand's integrity against potential trust erosion due to legally permissible yet ethically dubious AI practices.
Here's what boards, C-suite executives, and managers need to understand and execute to ensure compliance with the impending law.
Roles and Responsibilities
Establishing an AI Act compliance program or an AI ethical risk/responsible AI program — hereafter referred to as "the program" — is a cross-functional initiative. Therefore, the board, C-suite, and managers all bear distinct responsibilities concerning the conception, execution, expansion, and upkeep of the program. Each group also encounters common pitfalls that warrant careful consideration. Naturally, organizations may delegate these responsibilities differently among the three entities, and they must be mindful of their existing norms and cultures.
Board Responsibilities
The board holds ultimate accountability for shielding the organization from both immediate and long-term ethical, reputational, and regulatory hazards. Concurrently, they are tasked with prioritizing other strategic imperatives, such as the nature and velocity of innovation, the timing and suitability of mergers and acquisitions, and other financial allocations. Thus, the primary decision facing the board is whether to pursue a specialized AI Act compliance program or adopt a broader AI ethical risk/responsible AI program.
Ideally, guardians of a brand would choose the latter, since it offers a more comprehensive approach to preserving the brand's credibility and safeguarding the interests of stakeholders. In some cases, however, competing priorities and resource constraints may force a focus solely on regulatory compliance, which can be viewed as the bare minimum.
When confronting this decision, boards commonly make two types of errors.
Firstly, particularly in non-tech-centric organizations, boards sometimes shy away from assuming responsibility for navigating the ethical, reputational, and regulatory risks associated with AI, perceiving them as overly complex or "too technical" to grasp. This assumption is misguided. Given the magnitude and scope of these risks — including the substantial penalties for noncompliance with the Act — it is imperative for the board to grasp these issues.
This does not mean becoming experts in the technical details of AI; it means knowing the right questions to ask. These should include:
Which member of the C-suite bears the responsibility for orchestrating, ensuring compliance with, and supervising the implementation of the program?
Are there training programs in place to educate our workforce on detecting signs of ethical or regulatory concerns related to AI?
Are these programs tailored to integrate guidance on our protocols and procedures for escalation?
When evaluating whether an AI model falls within the prohibited or high-risk AI classifications outlined in the AI Act, do we monitor the uniformity of these evaluations across different teams and markets?
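One lightweight way to support that last kind of uniformity is to codify the classification logic in a single shared artifact that every team and market applies. Below is a minimal, illustrative sketch: the risk tiers are those named in the Act, but the use-case attributes and the rules mapping them to tiers are hypothetical placeholders, not legal guidance.

```python
# Illustrative sketch of a shared AI Act risk-classification helper.
# The attributes and rules below are hypothetical, not legal advice.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"


@dataclass
class AIUseCase:
    name: str
    uses_manipulative_techniques: bool
    infers_private_traits_from_biometrics: bool
    used_in_hiring_or_credit_decisions: bool
    interacts_directly_with_users: bool


def classify(use_case: AIUseCase) -> RiskTier:
    """Apply one shared rule set so every team classifies use cases alike."""
    if (use_case.uses_manipulative_techniques
            or use_case.infers_private_traits_from_biometrics):
        return RiskTier.PROHIBITED
    if use_case.used_in_hiring_or_credit_decisions:
        return RiskTier.HIGH_RISK
    if use_case.interacts_directly_with_users:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK


if __name__ == "__main__":
    screening_model = AIUseCase(
        name="resume-screening",
        uses_manipulative_techniques=False,
        infers_private_traits_from_biometrics=False,
        used_in_hiring_or_credit_decisions=True,
        interacts_directly_with_users=False,
    )
    # Prints: resume-screening high_risk
    print(screening_model.name, classify(screening_model).value)
```

Because every team imports the same rule set, divergent classifications surface as code changes rather than as inconsistencies discovered during an audit.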
Boards should also be aware of the key metrics required to monitor the implementation, compliance, and effectiveness of the program, which they delegate to the C-suite for design and execution (further details in the subsequent section).
The second common mistake is when boards assume that the absence of ethical breaches signifies safety. Unfortunately, this assumption is flawed. Not only do new regulations introduce new risks, but factors like mergers and acquisitions, emerging AI technologies, AI-enhanced workflows, new vendors, and more create fresh opportunities for ethical lapses.
Responsibilities of the C-Suite
The C-suite bears the primary responsibility for devising and supervising the program. This endeavor should commence with a comprehensive gap analysis. While different clients may refer to this process by various names such as gap analysis, risk assessment, or maturity assessment, the crucial aspect remains unchanged: identifying existing resources essential for the efficient design and implementation of the program.
This assessment serves a critical political purpose as well. Often, organizations encounter internal conflicts when one department initiates a program without consulting others or when conflicting ownership claims arise. Such conflicts frequently involve IT, data/AI, legal, and risk and compliance departments. A well-executed assessment facilitates collaboration among these stakeholders to address alignment issues.
Moreover, a gap analysis initiates the customization of the framework. Numerous AI ethical risk or responsible AI frameworks exist, ranging from generic to industry-specific. Regardless of the framework chosen, the priority lies in tailoring it to align with the organization's approach. This customization should encompass the program's rollout schedule, risk classifications, RACI matrices, policies, procedures, workflows, quality assurance/quality improvement measures, and performance enhancement strategies.
Key decisions for the C-suite include who must be notified when issues are flagged (for example, when an AI model falls under the AI Act's high-risk category), when new AI technologies are piloted, or when risk-mitigation strategies fail. A common pitfall is delegating such decisions to a single function or business unit, which is why a cross-functional team is essential. Even so, a single C-suite executive should ultimately own the program.
The choice of the executive responsible for this role — whether a new hire or an existing executive — depends on various factors. In practice, responsibilities are often assigned to an existing executive, while a new executive role is introduced at a milestone-specific phase outlined in the program's roadmap. Reasons for appointing a new executive include expertise, authority, and the limited bandwidth of existing executives. Additionally, having a chief AI ethics officer addresses conflicts of interest and prioritization issues that other executives may face.
The C-suite should avoid three pitfalls, one of which is particularly significant:
Firstly, there might be a tendency to prioritize purchasing a "solution" or an automation platform to streamline parts of the risk identification and mitigation processes. While automation is a valid objective, it should not be the primary focus initially. Establishing a robust foundation encompassing people, processes, and technology is essential for long-term success, as fixating solely on technology may undermine the program's effectiveness.
Secondly, the program requires ongoing attention and monitoring from the C-suite; ensuring that managers and frontline staff adhere to quality assurance and improvement processes is vital to the program's success.
Lastly, and this is the most significant and pervasive pitfall: the C-suite should not assume that metrics, Key Performance Indicators (KPIs) and Objectives and Key Results (OKRs), have no place in managing ethical, reputational, and regulatory risks. Senior leaders often believe that ethical matters cannot be quantified. In fact, metrics that measure the program's implementation, adherence, and impact are indispensable for quality assurance and improvement, as well as for internal and external audits. Customization matters here too: metrics should align with existing methodologies, tools, and units of measurement. Harmonization metrics are especially important, tracking the program's influence on other organizational metrics such as operational efficiency.
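As one illustration of how such program metrics might be tracked, here is a minimal sketch. The KPI names, figures, and targets are hypothetical assumptions; in practice they would be aligned with the organization's existing reporting tools and definitions.

```python
# Hypothetical program KPIs; names, counts, and targets are illustrative only.
from dataclasses import dataclass


@dataclass
class ProgramKPI:
    name: str
    numerator: int      # e.g., models with a completed risk review
    denominator: int    # e.g., all models in production
    target: float       # target share agreed with the board

    @property
    def value(self) -> float:
        return self.numerator / self.denominator if self.denominator else 0.0

    @property
    def on_track(self) -> bool:
        return self.value >= self.target


kpis = [
    ProgramKPI("models risk-classified", 42, 50, 1.00),
    ProgramKPI("high-risk models with documented mitigations", 7, 9, 1.00),
    ProgramKPI("staff who completed AI-risk training", 480, 600, 0.90),
]

for kpi in kpis:
    status = "on track" if kpi.on_track else "needs attention"
    print(f"{kpi.name}: {kpi.value:.0%} (target {kpi.target:.0%}) -> {status}")
```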
Managerial Responsibilities
The EU AI Act was formulated without direct input from operational personnel. Consequently, organizations must undertake considerable efforts to operationalize its requirements along with other ethical and reputational standards outlined in internal AI policies. Integrating these requirements into existing workflows without disrupting core business operations is imperative. Managers must meticulously adapt workflows based on various factors, including the types of AI being developed, tested, procured, and deployed, as well as the associated risk levels.
A common pitfall is managers failing to recognize that risk levels evolve throughout the AI lifecycle, an oversight that can have significant consequences. For instance, an AI designed for one purpose may later be used in unforeseen contexts, creating ethical and regulatory issues if it is not reassessed. Continuously evaluating the data an AI learns from and is optimized against is likewise essential to catch biases and other ethical concerns.
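As one concrete example of that kind of continuous evaluation, the sketch below computes a simple disparate impact ratio over a model's recent decisions and flags when reassessment is warranted. The dataset, column names, and the four-fifths (0.8) threshold are illustrative assumptions drawn from common fairness practice, not requirements of the Act.

```python
# Illustrative recurring bias check on a hypothetical decision log.
import pandas as pd


def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of favorable-outcome rates between the least- and most-favored groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Hypothetical recent decisions: 1 = favorable outcome (e.g., approved).
    scored = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0,   1],
    })
    ratio = disparate_impact_ratio(scored, "group", "approved")
    if ratio < 0.8:
        print(f"Disparate impact ratio {ratio:.2f} is below 0.8: trigger a reassessment")
```

Running such a check on a schedule, rather than only at initial deployment, is one way to catch the risk-level drift described above before it becomes an ethical or regulatory problem.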
Data engineers and data scientists will shoulder much of the compliance-related responsibilities, necessitating role-specific learning and development initiatives. Similar to the approach with the C-suite, while managers should trust their teams, they must also verify compliance.
Managers themselves will require extensive learning and development efforts, particularly those lacking backgrounds in AI, ethics, or regulations.
Ready or Not, Here it Comes
The EU AI Act marks a significant milestone in the ongoing drive for AI ethics and responsible AI. While not without flaws, it represents a robust regulatory framework aimed at safeguarding society from the potential harms of emerging technologies like AI. It also serves as a means for companies to protect their brands and financial interests, even if they didn't seek such regulation. Amidst the exciting landscape of breakthrough technologies, companies must prioritize regulatory compliance and trust preservation to navigate the evolving AI landscape successfully. This imperative holds true for senior leaders in AI, irrespective of their geographic location within or outside the EU.