
2024 | Book

YSEC Yearbook of Socio-Economic Constitutions 2023

Law and the Governance of Artificial Intelligence


About this book

Artificial intelligence (AI) has the potential to radically transform our society. It may lead to a massive increase in the capabilities of humankind and allow us to address some of our most intractable challenges. It may also entail profound disruption to structures and processes that have sustained our society over centuries. These developments present a unique challenge to the socio-economic constitutional arrangements which govern our world at national, regional and international level. The deployment of increasingly powerful AI systems, able to function with an increasing degree of autonomy, has led to concerns over loss of human control of important societal processes, over the disruption of existing economic, social and legal relationships, and over the empowerment of some societal actors at the expense of others, together with the entrenchment of situations of domination or discrimination. It has also made increasingly clear how tremendous the potential benefits of these technologies are to those who successfully develop and deploy them. There is therefore great pressure on governments, international institutions, public authorities, civil society organisations, industry bodies and individual firms to introduce or adapt mechanisms and structures that will avoid the potentially negative outcomes of AI and achieve the positive ones. These mechanisms and structures, which have been given the umbrella term ‘AI governance’, cover a wide range of approaches, from individual firms introducing ethical principles which they volunteer to abide by, to the European Union legislating an AI Act, which will prohibit certain types of AI applications and impose binding obligations on AI developers and deployers.
The fast pace of innovation in the development of AI technologies is mirrored by the fast pace of development of the emerging field of AI governance, where traditional legislation by public bodies is complemented with more innovative approaches, such as hybrid and adaptive governance, ethical alignment, governance by design and the creation of regulatory sandboxes.

The chapter “AI and Sensitive Personal Data Under the Law Enforcement Directive: Between Operational Efficiency and Legal Necessity” is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Table of contents

Frontmatter
Law and the Governance of Artificial Intelligence
Abstract
The 2023 Yearbook of Socio-Economic Constitutions, dedicated to ‘Law and the Governance of Artificial Intelligence’, explores the timely and complex issues surrounding the emergence of artificial intelligence (AI) technologies. The volume delves into the transformative potential of AI in various sectors such as law enforcement, healthcare, and recruitment, while also highlighting the growing global concern regarding the risks posed by AI. The contributors examine the evolving landscape of AI governance, discussing the challenges of regulating AI, the role of law as a governance tool, and the disruptive effects of AI on existing governance regimes. The volume scrutinizes the proposed EU AI Act and its implications, as well as the need to reconceptualize fundamental legal principles in response to AI advancements. Furthermore, it addresses the concentration of power among private corporations in AI development and the associated risks to individual rights and the rule of law. Ultimately, the volume underscores the crucial role of law in navigating the complexities of AI governance and calls for a nuanced understanding of the implications of AI for society.
Andreas Moberg, Eduardo Gill-Pedro

Part I

Frontmatter
AI Regulation in the EU: The Future Interplay Between Frameworks
Abstract
The purpose of this contribution is to analyse existing and proposed AI regulations and to identify the critical implications of their future interplay, that is, how the regulatory framework as a whole will function in the future. Given the abundance of legislation, it is essential that all measures dovetail into each other; otherwise, legal uncertainty, fragmentation, and regulatory gaps would be inevitable.
Béatrice Schütte
The AI Act’s Research Exemption: A Mechanism for Regulatory Arbitrage?
Abstract
This paper argues that by failing to acknowledge the complexity of modern research practices, which are shifting from a single discipline to multiple disciplines involving many entities, some public, some private, the proposed AI Act creates mechanisms for regulatory arbitrage. The article begins with a semantic analysis of the concept of research from a legal perspective. It then explains how the proposed AI Act addresses the concept of research by examining the research exemption as set forth in the current draft of the forthcoming law. After providing an overview of the proposed law, the paper explores the research exemption to highlight whether there are any gaps, ambiguities, or contradictions in the law that may be exploited by either public or private actors seeking to use the exemption as a shield to avoid compliance with duties imposed under the law.
To address whether the research exemption reflects a coherent legal rule, it is considered from five different perspectives. The paper begins by examining the extent to which the research exemption applies to private or commercial entities that may not pursue research in a benevolent manner to solve societal problems, but nevertheless contribute to innovation and economic growth within the EU. Next, the paper explores how the exemption applies to research that takes place within academia but is on the path to commercialisation. The paper goes on to consider the situation where academic researchers invoke the exemption and then provide the AI they develop to their employing institutions or other public bodies at no cost. Fourth, the paper examines how the exemption functions when researchers build high-risk or prohibited AI, publish their findings, or share them via an open-source platform, and other actors copy the AI. Finally, the paper considers how the exemption applies to research that takes place “in the wild” or in regulatory sandboxes.
Liane Colonna

Part II

Frontmatter
Governance of AI or Governance by AI: Limits, New Threats, and Unnegotiable Principles
Abstract
This chapter analyses the dynamics of AI governance and advances a few ideas that should help us ensure an AI compliant with human dignity and freedom, human rights, democracy, and the rule of law. Human beings are shedding their ability to govern and exercise regulatory authority over their own affairs, ceding power to nonhumans; but as much as this trend may be owed to the advance of AI technology, also complicit in it is the regulatory activity of the companies that develop these technologies. On top of that, there is the question of AI in the hands of governments, whose use of the technology for their own (legal) regulation gives them a power over citizens that can be misused or abused. How, then, are we to confront the danger that AI and its use pose? This chapter argues that one tool we can use to protect ourselves is that of human rights, but to that end—if we are to make these rights effective—we must bring them up to date in light of the advancements made in AI. Due to the very nature of AI, the focus is on privacy and data protection, where I identify three cases—namely, group privacy, biometric psychography, and neurorights—and with each of these cases I show that AI can be governed by introducing a suite of rights designed to be AI-responsive or otherwise by reinterpreting existing rights to make them so.
Migle Laukyte
A Horizontal Meta-effect? Theorising Human Rights in the AI Act and the Corporate Sustainability Due Diligence Directive
Abstract
The indirect horizontal effect of human and fundamental rights has dominated European constitutional practice. In recent years, fractures have appeared in this doctrine, both in the courts and in regulatory practice. The EU has introduced multiple legislative initiatives that have pushed the rights towards apparent direct horizontal effect.
This article analyses the AI Act and the Corporate Sustainability Due Diligence Directive as examples of a novel human and fundamental rights strategy. The article argues that the instruments first weaken the rights and then deploy them to normatively guide and condition intra-firm sense-plan-act cycles. The rights are first recast as adverse human rights impacts and fundamental rights risks, serving as objects of concern in corporate information processing. The planning and acting stages then transport the rights into real-world reductions in human and fundamental rights violations. While weak on its face, the novel strategy is likely an adaptation to political pressures, yet it contains the seeds of a possible progressive end-game.
Mika Viljanen
Everybody Wants To Rule the World: The Relevance of the Rule of Law for Private Law in the Context of Algorithmic Profiling of Online Users
Abstract
The rule of law is an elusive concept, and its fluidity lends itself to multiple interpretations. Different accounts connect various core elements—‘desiderata’—under this universally recognised concept. However, there is a consensus (albeit implicit) that the rule of law is essentially a public law concept, of only marginal concern to private law. This paper departs from this understanding and suggests that this presumption is a misperception. The rule of law does not concern only the regulation of powers and arbitrariness between individuals and the State, but it operates also in the relationships between private individuals.
In particular, with the surge in recent years in the use of machine learning algorithms to profile online users (in order to predict their behaviour and tailor recommendations and searches to their preferences), private actors (i.e. online platforms) have obtained a super-dominant position, both in the collection of data and in the development of the technology, within the digital (eco)systems in which they operate.
This paper aims to prospectively assess the twofold relevance that algorithmic profiling has for the protection of fundamental rights from a private law perspective (e.g. the right to privacy, the right not to be discriminated against, freedom of expression) and for the self-appointed power of online platforms to self-regulate their contractual relationships with users in the digital markets. Conversely, it also discusses the relevance of the rule of law for private law relationships in its function as a bulwark for the protection of fundamental rights. This value, on the one hand, creates legal guardrails around private self-regulation by online platforms and, on the other hand, secures respect for the fundamental rights of users against algorithmic profiling by online platforms. The paper concludes with the need to re-evaluate the State’s power to limit private freedom and interfere in parties’ autonomy in cases where fundamental rights are seriously at stake.
Silvia A. Carretta
AI-Based Decision-Making and the Human Oversight Requirement Under the AI Act
Abstract
In the EU, software classified as artificial intelligence (AI) is considered to categorically pose a high risk to human health, safety and fundamental rights if it is applied in relation to products covered by the system for public surveillance and enforcement known as the New Legislative Framework, and when implemented in certain areas beyond the general product safety regime. Whereas private parties carry the enforcement costs regarding low- or no-risk AI, the EU Member States must, according to the AI Act, have a dialogue with those who provide and apply high-risk AI professionally to ensure compliance prior to marketing and as long as the system is in use. Since the AI system learns and adapts, Article 14 of the AI Act provides that a natural person shall be designated by the professional user to oversee the high-risk AI system continuously. However, the AI Act does not define the rights and interests that it is intended to safeguard. A case in point is the right to data protection, which is mainly particularised in the General Data Protection Regulation (GDPR). Measures typically need to be taken ex ante to ensure that data processing by a high-risk AI system complies with the data subject’s right under Article 22 of the GDPR “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. In that context, the relationship between the system for impact assessment established primarily by Article 35 of the GDPR and the compliance regime introduced by the AI Act is explored.
Claes G. Granmar

Part III

Frontmatter
Algorithmic Hiring Systems: Implications and Recommendations for Organisations and Policymakers
Abstract
Algorithms are becoming increasingly prevalent in the hiring process, as they are used to source, screen, interview, and select job applicants. This chapter examines the perspectives of both organisations and policymakers on algorithmic hiring systems, drawing examples from Japan and the United States. The focus is on discussing the drivers underlying the rising demand for algorithmic hiring systems and four risks associated with their implementation: the privacy of job candidate data; the privacy of current and former employees’ workplace data; the potential for algorithmic hiring bias; and concerns surrounding ongoing oversight of algorithmically assisted decision-making throughout the hiring process. These risks serve as the foundation for developing a risk management framework based on management control principles to facilitate dialogue within organisations to address the governance and management of such risks. The framework also identifies areas policymakers can focus on to help balance (1) granting organisations unfettered access to the personal and potentially sensitive data of job applicants and employees to develop hiring algorithms and (2) implementing strict data protection laws that safeguard individuals’ rights yet may impede innovation. It further emphasises the need to establish an intra-governmental AI oversight and coordination function that tracks, analyses, and reports on adverse algorithmic incidents. The chapter concludes by highlighting seven recommendations to mitigate the risks organisations and policymakers face in the development, use, and oversight of algorithmic hiring.
Jason D. Schloetzer, Kyoko Yoshinaga
AI Gender Biases in Women’s Healthcare: Perspectives from the United Kingdom and the European Legal Space
Abstract
This paper engages with a key debate surrounding artificial intelligence in health and medicine, with an emphasis on women’s healthcare. In particular, the paper seeks to capture the lack of gender parity where women’s health is concerned, a consequence of systemic biases and discrimination in both historical and contemporary medical and health data. A review of the existing literature demonstrates that there is not only a gender data gap in AI technologies and data science fields, but also a gender data gap in women’s healthcare that results in algorithmic gender bias, impacting negatively on women’s healthcare experiences, treatment protocols, and, finally, rights in health. On this basis, the article seeks to offer a concise exploration of the gender-related aspects of medicine and healthcare, shedding light on the biases encountered by women in the context of AI-driven healthcare. Subsequently, it conducts a doctrinal comparative law examination of the existing legislative landscape to scrutinise whether current supranational AI regulations or legal frameworks explicitly encompass the protection of fundamental rights for female patients in the realm of health AI. The scope of this analysis encompasses the legal framework governing AI-driven technologies within the European Union (EU), the Council of Europe (CoE), and, to a limited extent, the United Kingdom (UK). Lastly, this paper explores the potential utility of data feminism (which draws on intersectionality theory) as an additional tool for advancing gender equity in healthcare.
Pin Lean Lau
The Role of AI in Mental Health Applications and Liability
Abstract
The COVID-19 pandemic has affected the entire area of health care, including care provided to patients with mental health problems. Due to the stressful nature of the pandemic, the number of patients experiencing mental health problems, especially depression or anxiety, has increased. Even well before the pandemic, Europe struggled with a shortage of mental health care, reflected especially in long waiting times. The problem appears to have been addressed by the plethora of mental health applications that are freely available on the market. Given how accessible these applications are to users, I decided to scrutinise the safety of using AI in these health apps, with a particular focus on chatbots. I examined whether existing European legislation may protect users from possible harm to their health and require these mental health applications to be certified as medical devices.
After analysing the Product Liability Directive and the upcoming legislation focused on liability associated with AI, I must state that there is insufficient transparency and protection for users of these applications. Based on experience from the user’s perspective, I have identified the lack of (1) scheduling an appointment with a healthcare professional, (2) human oversight, and (3) transparency as regards the type of AI used. Due to the ‘black box problem’, it is likely that a harmed user will not be able to obtain compensation because of the difficulty of proving causation between the defect and the damage.
Petra Müllerová
Trustworthy Artificial Intelligence and Public Procurement
Abstract
The use of artificial intelligence by public administrations raises major legal challenges, given its potential to harm citizens’ rights and freedoms. The risks that artificial intelligence systems may entail must be assessed when AI systems are procured, both in the design of the tender and in the establishment of the obligations of the contract.
In this context, even in the absence of a European legal framework on the use of artificial intelligence systems, a number of soft law documents have already been drafted to ensure that administrations procure trustworthy AI systems. From these, it is possible to extract a series of guidelines that should be incorporated in the public procurement of AI systems, as these clauses derive without difficulty from the general legislation applicable to the use of new technologies by the administration.
Isabel Gallego Córcoles

Open Access

AI and Sensitive Personal Data Under the Law Enforcement Directive: Between Operational Efficiency and Legal Necessity
Abstract
In constitutional theory, the requirement of necessity is an integral part of a wider proportionality assessment in the limitation of constitutional rights. It fulfils a function of sorting out measures that restrict rights beyond what is required to fulfil the intended purpose. Within data protection, the requirement varies in strictness and interpretation—from ‘ordinary’ necessity to ‘strict necessity’. Recently, the European Court of Justice (ECJ) has introduced what appears to be an even stricter requirement of ‘absolute necessity’ relating to the processing of biometric information under the EU Law Enforcement Directive (LED). In practice, however, the implications of those respective levels of strictness tend to vary, from a strict ‘least restrictive means’ test to an analysis of whether a measure is necessary for a more effective or a more efficient fulfilment of the intended purpose. In this contribution, the principle of necessity as applied by the ECJ is analysed as it pertains to the LED and the Charter, more specifically in the context of implementing AI-supported analysis of biometric data. The gradual development of the interpretation of necessity is traced in the data protection case law of the ECJ. The study shows the increased emphasis placed on proportionality over time, highlighting both strengths and potential weaknesses of the requirement in relation to the use of AI-supported decision-making in the law enforcement context.
Markus Naarttijärvi

Open Access

Correction to: AI and Sensitive Personal Data Under the Law Enforcement Directive: Between Operational Efficiency and Legal Necessity
Markus Naarttijärvi
Metadata
Title
YSEC Yearbook of Socio-Economic Constitutions 2023
Edited by
Eduardo Gill-Pedro
Andreas Moberg
Copyright year
2024
Electronic ISBN
978-3-031-55832-0
Print ISBN
978-3-031-55831-3
DOI
https://doi.org/10.1007/978-3-031-55832-0