by Eduardo Magrani, Senior Consultant in the TMT practice area
Artificial intelligence (AI) is no longer a novelty, including for legislators, who continue to draft regulations to govern this new reality, such as the Artificial Intelligence Act (AI Act or AIA) and the Digital Services Act (DSA).
According to the European Parliament, artificial intelligence is "the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity". In other words, it is a computer programme developed with one or more of the techniques and approaches listed in Annex I to the AI Act which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions that influence the environments with which it interacts.
AI systems can offer enormous benefits to individual users and companies alike. However, their use also poses inevitable risks, which make it all the more important to guarantee robust and secure systems, accountability, transparency and respect for fundamental rights. For this reason, the AIA, in Title III on high-risk AI systems and in Title IV on certain systems such as those "intended to interact with natural persons", imposes transparency obligations on the persons and entities covered by these rules.
Although these rules enable companies to monitor artificial intelligence safely before, during and after its implementation, a proper AI governance model still needs to be established, i.e. practices, policies and structures that aim to promote and guarantee the development, implementation and use of AI in an ethical and responsible manner.
This urgent need has prompted various governance proposals, including the 2019 "OECD Recommendation on Artificial Intelligence", which calls on Members and Non-Members of the organisation to implement, cumulatively, principles such as transparency and explainability, robustness and security, and accountability; the same principles also appear in the 2022 United Nations "Principles for the Ethical Use of Artificial Intelligence in the United Nations System". In 2021, UNESCO likewise published its "Recommendation on the Ethics of Artificial Intelligence", in which it recommends that member states should, among other things, "credibly and transparently monitor and evaluate policies, programmes and mechanisms related to ethics of AI, using a combination of quantitative and qualitative approaches".
A particularly relevant publication is capAI, launched in 2022, which contains a comprehensive checklist to be strictly followed by those responsible for AI systems throughout the entire life cycle (design, development, evaluation, operation and retirement), and which also suggests creating specific roles for each step of the process.
Despite the various existing AI governance alternatives, the idea of setting up an Ethics Committee - a group capable of, among other things, preventing, detecting and mitigating the potential risks emerging from AI - still stands. This structure, referred to in the UNESCO recommendation, can take different forms, hold a range of powers and include members from numerous specialities. At the same time, it can support the other forms of AI governance mentioned above, and vice versa.
The importance of an Ethics Committee is accentuated in the context of large companies or those working with high-risk artificial intelligence, as these systems require constant human supervision during and after implementation and are governed by the AIA.
According to the recent paper published by the Getúlio Vargas Foundation ("Framework para Comitês de Ética em IA", a framework for AI ethics committees), the Committee can be an in-house body, subordinate to the management or to a specific department of the company, or an external body. An external body may be linked to the organisation, to a network of organisations with similar interests or to the public authorities, or it may be completely independent. In-house bodies will be more familiar with and more closely connected to the company, while external bodies, despite their distance from the organisation, tend to be more impartial.
With regard to the Committee's powers and purpose, the aforementioned paper suggests that it can play various roles, some more active than others, either individually or in combination. At the less active end, the Committee may have a purely explanatory role, informing people of what is defined as good practice and which principles are most beneficial in the business context; a guiding role, proposing the behaviours and ethical principles to be adopted by the organisation, backed up by direct monitoring of the company's management; or a preventative role, avoiding and mitigating the ethical risks of the technology, whether implemented or not, which includes a supervisory component aimed at ensuring that there is no deviation.
At the more active end, a Committee with normative power draws up mandatory rules, rather than merely supplementary guidance, to be complied with by all members of the company. Finally, a Committee with decision-making power can prevent a project from going any further and assess the conduct of those within the company.
Ethics committees are not new, and some companies have already established governance bodies holding some of the powers mentioned above. Adobe's Ethics Committee and AI Ethics Review Board, for example, combine the power to make recommendations with the power to halt the implementation of any technology that fails to comply with the principles of accountability and transparency. The Panasonic Group's Ethics Committee, by contrast, plays a more preventative role, intervening primarily in the risk assessment of all projects involving AI.
Once the basic concept of an Ethics Committee and its inherent powers have been grasped, its composition must also be understood. Regardless of whether the members are in-house (from the company itself), external or a mixture of both, the most important aspect is each member's speciality and how their individual expertise can make a difference. A computer scientist, for example, offers a perspective on the algorithms themselves and their likelihood of success, while a lawyer or legal expert helps with compliance; an ethics professional would be key to detecting possible risks and helping to prevent them, while a specialist in different social contexts would be able to detect possible social impacts.
With these experts on board, harms and benefits can be properly weighed, fostering a proportionate and informed decision-making process. In addition, it would be advantageous to include a person with no company-related expertise, to offer the perspective of a potential target of the technology to be implemented. The more multidisciplinary the Committee, the more effective and comprehensive it will be.
With regard to its formation, there is no rule on the number of Committee members, on whether they join voluntarily or through a selection process, or on whether they should be remunerated. These matters are left to the company and the Committee in question.
In any event, there is no denying the importance of a body of this kind, or the value of dialogue between the various forms of AI governance, all of which aim to reinforce the essential requirements of transparency, accountability and safety.