Teisė ISSN 1392-1274 eISSN 2424-6050

2025, Vol. 134, pp. 27–47 DOI: https://doi.org/10.15388/Teise.2025.134.3

Legal Regulation of AI and Morality: The Artificial Intelligence Act in the Context of Natural Law and Legal Positivism

Artūras Grumulaitis
https://orcid.org/0000-0001-5356-5425
PhD Student
Vilnius University, Faculty of Law
https://ror.org/03nadee84
Saulėtekio 9 – I block, LT-10222 Vilnius, Lithuania
E-mail: arturas.grumulaitis@tf.stud.vu.lt

Legal Regulation of AI and Morality: The Artificial Intelligence Act in the Context of Natural Law and Legal Positivism

Artūras Grumulaitis
(Vilnius University (Lithuania))

This paper analyses the relationship between the proposed EU regulation on Artificial Intelligence (AI) and morality, looking from the perspective of two legal paradigms: natural law and legal positivism. The categories of ‘ethics’ and ‘morality’ are being increasingly discussed in the context of advanced technologies, raising the question of whether everything that AI presents is acceptable and tolerable. Based on the essential characteristics of natural law and legal positivism paradigms, legal doctrine, and the newest AI regulation initiatives in the EU, the paper seeks to clarify how the intrinsic morality of natural law influences the legal regulation on AI, and how deep this morality is reflected in the AI Act.
Keywords: artificial intelligence, legal positivism, natural law, ethics, moral, legal paradigm.

Dirbtinio intelekto teisinis reguliavimas ir moralė: Dirbtinio intelekto aktas prigimtinės teisės ir teisinio pozityvizmo kontekste

Artūras Grumulaitis
(Vilniaus universitetas (Lietuva))

Straipsnyje analizuojamas siūlomo Europos Sąjungos dirbtinio intelekto teisinio reguliavimo ir moralės santykis, žvelgiant iš dviejų teisės paradigmų – prigimtinės teisės ir teisinio pozityvizmo – perspektyvos. Pažangių technologijų kontekste vis dažniau kalbama apie vertybines etikos ir moralės kategorijas, keliamas klausimas, ar viskas, ką mums suteikia dirbtinis intelektas, yra priimtina ir toleruotina. Remiantis esminėmis prigimtinės teisės ir pozityviosios teisės paradigmų charakteristikomis, teisės doktrina ir naujausiomis Europos Sąjungos dirbtinio intelekto reglamentavimo iniciatyvomis, darbe siekiama išsiaiškinti prigimtinei teisei būdingų moralinių nuostatų įtaką dirbtinio intelekto teisiniam reguliavimui ir kiek plačiai šios nuostatos yra atspindėtos Dirbtinio intelekto akte.
Pagrindiniai žodžiai: dirbtinis intelektas, teisinis pozityvizmas, prigimtinė teisė, etika, moralė, teisinė paradigma.


Received: 09/01/2025. Accepted: 31/03/2025
Copyright © 2025 Artūras Grumulaitis. Published by
Vilnius University Press
This is an Open Access article distributed under the terms of the
Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

Scientific research problem. The dynamics of investment in Artificial Intelligence (AI) and the rapidly growing use of this technology pose many new ethical and legal challenges. Even though the new technology stimulates creativity and contributes to the optimization of business processes, it can violate human privacy, infringe copyright, create deep fakes, and mislead users by generating false or unrealistic information. With the popularity of generative AI models, the categories of ‘ethics’ and ‘morality’ are increasingly being discussed, raising the question of whether everything that AI provides us is acceptable and tolerable. This echoes the natural law paradigm’s idea that morality is the foundation needed for proper legal regulation of AI. Based on the essential characteristics of natural law and legal positivism, legal doctrine, and the newest AI regulation initiatives in the EU, this paper seeks to clarify how the morality inherent in natural law influences the legal regulation of AI1.

The paper consists of three parts. Firstly, the work highlights the role of ethics and morality in the regulation of technology by examining specific applications of AI and the ethical challenges they pose. Secondly, the role of morality in the legal system is discussed from the perspective of natural law and legal positivism. The final part of the paper examines how the moral attitudes inherent in natural law influence the legal regulation of AI, and whether they are sufficiently reflected in the AI Act. Based on the performed analysis, conclusions are formulated at the end of the paper.

Literature review. Despite the rapid development of AI, the relationship between AI regulation and morality in the context of legal paradigms has received relatively little attention from legal scholars in Lithuania. Therefore, the topic under consideration is timely and original. Lithuanian authors focus mainly on the impact of AI on different areas of law: A. Juškevičiūtė-Vilienė (2024) examines the impact of AI on legal education, research, and professional practice, whereas I. Breskienė (2024) studies the risks of employee surveillance through AI systems in the field of labor law. In the area of legal theory, a valuable interdisciplinary analysis of the transformation of philosophy, ethics, and values in an increasingly technological world was conducted by I. Kalpokas and J. Kalpokienė (Kalpokas and Kalpokienė, 2023). Another study exploring the relationship between law, morality, and justice in the context of AI regulation was conducted by N. Gaubienė (2024). It must be acknowledged that the issues of ethical AI regulation have been studied much more extensively in foreign literature2. One of the first legal researchers to study the ethical questions raised by AI technology was N. A. Smuha, the coordinator of the High-Level Expert Group on AI at the European Commission. She emphasizes the importance of ethics in AI regulation and identifies four ethical imperatives that should be considered in the context of AI governance: (1) respect for human autonomy, (2) prevention of harm, (3) fairness, and (4) explicability (Smuha, 2019). Meanwhile, E. Magrani (2019) analyzes how increasing connectivity and symbiotic interaction among humans and intelligent machines influence the rule of law and contemporary ethics. A. T. Dahraj (2023) examines the implications of natural law and legal positivism for AI regulation, highlighting the challenge of balancing these theories to ensure that AI development aligns with societal values. A distinctive and critical approach to the AI Act from an ethical perspective is taken by M. M. Anderson (2022). He argues that “there are a number of reasons to reject the claim of the AI Act regulation as being ethically grounded”. For instance, the AI Act embeds a ‘speed paradigm’, which is arbitrary and profoundly counter-ethical when examined in the light of ethical practice. L. Hogan and M. Lasek-Markey (Hogan and Lasek-Markey, 2024) discuss the human rights-based approach and its importance for ethical AI governance. The authors examine the extent to which human rights-based regulation has been achieved by the primary example of AI governance legislation (the AI Act). In addition to the above-mentioned authors, general questions about AI regulation and ethics have been examined by Corinne Cath, Luiza Jarovsky, Georgios Pavlidis and others.

The object and the aim. The object of the study is the relationship between the legal regulation of AI (AI Act) and morality. The aim of the work is to assess the influence of moral values on the regulation of AI and to determine whether the AI Act sufficiently reflects the fundamental moral principles. The work purposefully refers to the provisions of two legal paradigms (natural law and legal positivism), characterized by different approaches to the role of morality in the legal system. Due to the limited scope of the work, this study does not examine legal realism or any other social paradigms.

The objectives. To achieve the main goal of this research, the following objectives are set: 1) to reveal the most important ethical issues raised by AI technology and the need for AI regulation; 2) to present the role of morality in the legal regulation of AI from the point of view of natural law and legal positivism; 3) to assess whether the proposed EU legal regulation (the AI Act) sufficiently reflects fundamental moral principles.

Research methods. Several methods are used in this research to examine the intersection of AI regulation and morality through the lens of natural law and legal positivism. The linguistic method is applied to define and interpret the key legal and philosophical concepts, ensuring their precise usage within the framework of competing legal paradigms; the comparative method is applied to present the distinctions and interactions between natural law and legal positivism; the systematic method is employed to examine the scientific literature and legal acts, and to reveal the content and meaning of legal norms. By integrating these methods, the study aims to contribute to the ongoing discourse about the morality of AI regulation, assessing whether the AI Act sufficiently reflects fundamental moral principles.

Main sources. The aim of the work and the object of the study determined that the main sources were the fundamental works of representatives of the two legal paradigms (J. Finnis, L. Fuller, H. L. Hart and H. Kelsen). In preparing the work, scientific articles and specialist literature by Lithuanian researchers (M. Baltrimienė, N. Gaubienė, A. Juškevičiūtė-Vilienė, J. Kalpokienė, E. Kūris, G. Lastauskienė, A. Navickas, J. Randakevičiūtė, D. Valančienė, L. Baublys, A. Vaišvila) and foreign authors (A. T. Dahraj, L. Hogan, M. Lasek-Markey, B. C. Stahl, J. Jowitt, N. A. Smuha, M. Miernicki et al.) were used. To reveal the concept of AI and the proposed regulation, the AI Act, resolutions of the European Parliament, guidelines and communications of the European Commission, and studies of the European Parliament on the ethics of AI were analyzed.

1. Artificial Intelligence and Ethical Challenges

The rapid development of AI undoubtedly offers many advantages, but this technology also poses serious ethical challenges. The European Commission, back in 2020, noted that AI can “lead to breaches of fundamental rights, including the rights to freedom of expression, freedom of assembly, human dignity, non-discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation, as applicable in certain domains, protection of personal data and private life” (White Paper COM(2020) 65 final, p. 11). This may happen due to defects in the deployment or design of the AI system, the autonomy, complexity, unpredictability of the system, or the use of biased and non-objective data.

It is worth noting that the ethical issues of AI were first addressed in the 1970s, but they became even more relevant with “the growing availability of computing resources and the increasing amounts of data that can be used for analysis” (Stahl et al., 2023, p. 1). According to a study conducted by scientists representing the United States and the United Kingdom, within fifteen years, AI models reached and exceeded the level of human abilities in many fields: in 2014, this breakthrough occurred in image recognition, in 2015 – in identifying human speech and writing, in 2017 – in understanding text, and in 2019 – in understanding language (Kiela et al., 2021). Such evolution of AI has led to the emergence of specific ethical and legal problems: illegal discrimination against individuals, violations of the right to privacy and protection of personal data, manipulation, etc. In this chapter, we shall review the main ethical challenges related to AI. At the same time, some of these challenges are also legal, being regulated by international or national legal acts3.

Discrimination against individuals4. AI systems can discriminate against individuals based on age, race, gender or disability. A discriminatory violation usually occurs when an AI system operates on real data that differs from the training data. For example, if people from certain ethnic groups were not included in the input database, the AI application will not be able to take this category of people into account and will produce false results, even though technically it will function flawlessly. Discrimination can also be caused by AI systems that were created for one purpose and are used for another (when one AI system is integrated into another, or different selection or evaluation criteria are applied within them). A practical example of such systems is AI-based personnel recruitment used by large corporations. In 2014, the well-known US company Amazon tested an AI application for personnel recruitment, training it on the previous ten years of recruitment data linked with the outcomes of such recruitment (Amazon ditched AI..., 2018). The company sought to speed up the recruitment process, which requires extensive human resources. The problem appeared when the system rejected female candidates as ineligible, because the historical data reflected the predominance of successful male candidates. The algorithm also identified female-specific hobbies, and this, too, became a basis for ranking such candidates lower. Programmers tried to modify the algorithm, but in the end the company abandoned the tool in order to provide equal opportunities to candidates of both genders. Thus, the use of historical data for training algorithms may lead to the problem of discrimination. Similar cases of discrimination can arise when using facial recognition systems, if the data used for training the algorithms does not correspond to the facial features characteristic of the relevant population (for example, if it is obtained in Europe or the United States and used in systems deployed in Asia, or vice versa). To avoid such cases, AI systems should be clearly specified, with an understanding of how the algorithm works and how it reaches its decisions.
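
How historical bias propagates into a trained model can be illustrated with a minimal sketch (the data is synthetic and hypothetical; the actual Amazon system has never been published). Even when the protected attribute itself is excluded from the inputs, the model learns to penalize a correlated proxy, such as a gendered hobby keyword:

```python
# Minimal illustrative sketch with synthetic data: a classifier trained on
# biased historical hiring decisions reproduces the bias through a proxy
# feature, even though gender itself is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)            # 0 = male, 1 = female (latent, unseen)
skill = rng.normal(0, 1, n)               # genuinely job-relevant feature
proxy = gender + rng.normal(0, 0.3, n)    # gender-correlated signal, e.g., a hobby keyword
# Historical labels: past recruiters favored male candidates regardless of skill.
hired = (skill + 1.5 * (gender == 0) + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([skill, proxy])       # note: gender itself is NOT a feature
model = LogisticRegression().fit(X, hired)
print(model.coef_)                        # negative weight on the proxy column:
                                          # female-associated signals lower the ranking
```

The sketch makes the point in the text concrete: the system can be ‘technically flawless’ and still discriminate, because the bias resides in the historical training labels rather than in the code.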

Breaches of privacy and personal data protection. It should be noted that not all AI systems use personal data, and so a breach of privacy can occur only in certain specific cases (for example, when an AI program uses personal data without complying with the requirements of the General Data Protection Regulation (GDPR)). This is illustrated by the Facebook data leak scandal5, when the data (Facebook ID, e-mail and telephone number) of 530 million Facebook users were exposed due to gaps in the IT system, thus breaching their privacy. If the security of such data is not ensured, it can also be used for criminal purposes.

A specific case of a potential breach of privacy by AI is presented by UK scientists analyzing the use of genetic data processing algorithms in Saudi Arabia (Stahl et al., 2023, p. 28–29). These data could lead to improved detection of rare diseases in new studies, but could also lead to a serious breach of privacy if the identities of those behind the input data are exposed. Despite the confidentiality of medical data enshrined in the laws of many countries around the world, questions arise about the ethical use of this type of data, as well as about its storage and its fate if the private company controlling it goes bankrupt. Ethical issues also arise in relation to the transfer of such data between countries belonging to different jurisdictions, as they may have different approaches to the protection of sensitive personal data. The authors note that genetic information is important not only for individuals, but also for related family members, and its disclosure to them is therefore necessary.

Another problem with AI algorithms is related to the generated output (estimations and predictions). It arises from the fact that the GDPR imposes legal responsibility on companies for the accuracy of the entered and collected data, but does not establish requirements for the accuracy of ‘derivative’ data (estimations or forecasts). This means that AI systems, by using initial (input) data, can generate inaccurate derivative data without being legally obliged to correct it. This can lead to privacy breaches and discrimination. Automatic algorithms use large amounts of data, and this raises the issue not only of their legal collection, storage, and updating, but also of their legal use for various types of predictions, insights, and decisions6. Specific AI-based products and services, i.e., automated solutions used in crediting, insurance, and marketing, face exactly this problem. AI algorithms, while analyzing personal data in one area (for example, when assessing a person’s interests, health status, or credit score), can draw conclusions and make suggestions in completely different areas (for example, offering insurance services), and such an automatic process can eventually turn into a chain of assumptions that are not always correct. It is highly probable that a person will not be able to access and evaluate how their personal data is used in such a process due to high financial and time costs, and thus will not be able to properly defend their rights. The responsibility of the entities using the data can be legally demanded only if the input data was incorrect, or if the entities used the data without the person’s consent. However, even after suffering damage, it is difficult to prove that it resulted from a violation of data protection laws. Article 3 of the Republic of Lithuania Law on Legal Protection of Personal Data likewise only prohibits the use of the personal identification number for direct marketing purposes, but does not regulate the protection of derivative data. Ethical considerations are therefore of particular importance. Finally, it should be noted that a person’s right to privacy is not absolute. A restriction on the right to privacy may be justified for reasons of public interest. Public authorities collect population data for tax, health, security, and border control purposes. They can also implement AI systems that process this data, and the use of such systems at the state level can thus also pose considerable ethical challenges. To address such challenges, the status of ‘derivative’ data should be defined in the GDPR, and the continuous evaluation of AI systems should be ensured, taking into account not only the protection of personal data, but also the impact of AI systems on the economy and on human rights and freedoms.
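
The ‘chain of assumptions’ problem can be made concrete with a toy sketch (the rules and data below are entirely hypothetical). Only the input record falls under the GDPR’s accuracy obligations; each downstream inference is derivative data that no one is currently obliged to verify or correct:

```python
# Hypothetical sketch: verified input data is chained into unverified
# 'derivative' data, which current accuracy obligations do not reach.
input_record = {                 # covered by GDPR accuracy requirements
    "age": 47,
    "gym_member": True,
    "purchases": ["insulin", "running shoes"],
}

# Step 1: a profiling model infers a health condition from purchase history.
likely_diabetic = "insulin" in input_record["purchases"]   # may be wrong (a gift?)

# Step 2: a second system consumes the first inference, never the raw facts.
insurance_risk = "high" if likely_diabetic else "low"

# Step 3: a third system acts on the accumulated chain of assumptions.
offer = "expensive health plan" if insurance_risk == "high" else "standard plan"
print(likely_diabetic, insurance_risk, offer)
```

Each step is individually plausible, yet the person concerned has no practical way to trace, evaluate, or contest the chain, which is precisely the gap described above.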

Unauthorized (unlawful) collection and use of data for commercial purposes. As processes become increasingly digitized, companies and organizations accumulate more and more personal data. This makes it easier to reach and serve customers, while storing data has become cheaper and less space-consuming. Companies such as Amazon, Google, Microsoft or Meta accumulate terabytes of data about their users, their buying habits and payment methods. The use of such data not only for direct marketing purposes, but also in the interests of third parties, can cause serious ethical problems, even if the data is not used in AI applications but merely stored in the company’s information systems. Another ethical issue is related to the use of ‘free-of-charge’ services in the online space, where users provide their personal data in exchange for ‘free’ services. The majority of users are not aware of why companies use these data, what the rules for revoking access to personal data are, etc.

A well-known case of illegal data collection, accumulation and use involves Clearview AI, a US company specializing in the development of facial recognition software. The company has faced accusations in France, Austria, Greece, Italy and the United Kingdom over the arbitrary use of photos of individuals from Instagram, LinkedIn and YouTube to train AI algorithms. Such actions were recognized as illegal, the company was obliged to remove the illegally collected photos, and the United Kingdom supervisory authority imposed a fine of USD 9.4 million on it (Clearview AI..., 2022).

It can be stated that ethical and legal problems regarding the unauthorized (illegal) collection and use of data in AI systems arise primarily due to: (i) the dominant position of economic entities, which is further exacerbated by the application of the AI technology; (ii) a failure to comply with the requirements of GDPR (collect, store and accumulate data legally, for a limited time and for a defined purpose, confidentially, etc.); (iii) lack of transparency and explainability in the use of data; (iv) breaches of data volume and content (accumulation of a disproportionate amount of data for various purposes); (v) unclear status of personal data (lack of possibility to dispose of them as a specific asset).

Manipulation of opinions and choices. The Cambridge Dictionary of English defines manipulation as “the control of someone or something in order to get an advantage, often unfairly or dishonestly” (Cambridge Dictionary, 2023). AI technology makes it possible to manipulate the opinion not only of individuals but also of larger groups of society. Examples include the manipulation of election results (influencing the freedom of decision of individuals) and influencing the choices of users in real time by controlling the data collected about them. Politically motivated manipulation is one of the most complex ethical challenges, because it undermines the autonomy of individual decisions and democratic values. It can affect the outcome of elections and change the balance of political forces in a particular country. The control of consumer choices occurs when companies, for example, offer sports goods because they have data that a person attends a sports club, or offer medicines and/or medical treatment when a person is sick at the time. Perhaps this provides some convenience to the user, but it raises the question of whether it is ethical. Cases of manipulation increased with the popularity of generative AI models, when it became possible to imitate a person’s voice or image, thus misleading the public or a specific individual. Forbes magazine describes several cases of ‘deep fakes’ created about the former US President B. Obama, the Speaker of the House of Representatives N. Pelosi, the head of Meta M. Zuckerberg, and many other politicians and famous persons (The Best (And Scariest)..., 2019). Images of the former US President D. Trump being arrested by the police, or of Pope Francis wearing a white down jacket, recently circulated on the Twitter platform and were so realistic that many Internet users did not even doubt that they were real (Fake-Fotos vom Papst..., 2023). The purpose of such false reports is to influence public opinion and mislead the target audience, with both political and pragmatic goals. Deep fake messages spread massively on social networks, are mass-forwarded, and are therefore difficult to remove. Naturally, individuals are not always able to keep up with technological developments, and the proposed legislation should therefore provide ways of protecting them against the negative consequences of manipulation.

Unauthorized use of works in input databases. These specific ethical challenges arising from the popularity of generative AI concern the unauthorized use of authors’ works for training algorithms. Developers of AI applications should ensure that the data used to train their models is obtained legally, i.e., from legitimate sources, in agreement with the holders of the intellectual property rights, and with fair remuneration. However, as practice shows, input databases are composed of freely available data on the Internet, including literary, scientific or artistic works protected by copyright. It is the inclusion of works of art in input databases without the express consent of their legitimate owners that is currently raising the most questions. Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market (hereinafter referred to as Directive 2019/790), the provisions of which have been transposed into Lithuanian law and entered into force on 1 May 2022, provides for two exceptions under which legally accessible works may be reproduced for the creation of such input databases (text and data mining) without the permission of the author of the work or any other copyright holder and without royalties. The first exception grants the right to reproduce legally available works for non-commercial purposes (for scientific research organizations and cultural heritage institutions), whereas the second exception grants the right to reproduce legally available works for ‘commercial’ purposes (Articles 22(1) and 22(2) of the Law on Copyright and Related Rights, respectively). The second exception applies only “if copyright holders do not expressly indicate that they reserve the right to use those works by appropriate means (in the case of content publicly available on the Internet by computer-readable means)”. Such a reservation can be expressed in the metadata of the files, or in the privacy policy uploaded on the website. Thus, in the EU, the use of works for databases and algorithm training of commercial AI applications without the author’s consent and royalties is possible only on the basis of the second exception. Therefore, for the sake of transparency, developers of AI applications should clearly state how they have constructed the input databases for training algorithms. This provision is enshrined at the EU level in the AI Act as a duty of transparency and content disclosure. Unfortunately, as long as there is no enforced obligation to disclose what data application developers used to train the algorithm, only individual companies voluntarily provide this information. Other companies are likely to take advantage of the situation and abuse it until mandatory disclosure of the data origin is regulated.
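
As a purely illustrative aside, one common computer-readable way of expressing such a reservation on the open web is a robots.txt rule addressed to a known AI training crawler (GPTBot is OpenAI’s published crawler name; the site URL below is hypothetical). The sketch, using only Python’s standard library, shows how a data harvester could check for such a rule before collecting content:

```python
# Illustrative sketch: checking a machine-readable rights reservation
# (robots.txt) before harvesting web content for AI training purposes.
# Note: rp.read() fetches the file over the network.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # hypothetical site
rp.read()

# "GPTBot" is OpenAI's published crawler name; a "Disallow" rule for it is
# one common computer-readable way of opting content out of AI training.
print(rp.can_fetch("GPTBot", "https://example.com/some-article"))
```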

In summary, it can be said that AI poses various ethical challenges. Some of them are clear-cut (for example, discrimination based on gender or age), while others are more ambiguous (for example, a nursing robot in a hospital can help people, but at the same time violate the dignity of the patient). This is largely due to the complexity, opacity and autonomy of AI systems. Therefore, when considering the AI regulation model, major attention should be paid to the detailed analysis and control of these characteristics in order to protect fundamental human rights. The goal of creating a human rights-based, ethical framework for AI regulation and of protecting fundamental rights in the context of the values of human dignity, freedom, democracy, equality and the rule of law is incorporated in the AI Act. The process of integrating a moral dimension into the AI Act reflects the ongoing tension between natural law (the moral foundation of law) and legal positivism (regulatory pragmatism). Therefore, in order to fully understand the implications of this tension, it is essential to examine the fundamental principles of natural law and the key features of the legal positivism paradigm.

2. The Role of Ethics and Morality in the Legal Regulation of AI

With the popularization of generative AI models, which allow misleading or discriminatory content to be created much faster and at higher quality, value-based levers are increasingly being sought to address such emerging challenges, turning to the categories of ‘ethics’ and ‘morality’.

The question arises whether moral norms can be the basis for the development of effective legal regulation of AI (or technologies in general). If so, which moral norms and how many of them should be used? How can we find the right balance between law and morality, especially when there is a conflict between the two? How can we properly integrate moral attitudes into legal regulation, while knowing that morality is perceived individually, changes historically, and is not constant? Is it at all possible to create a morally perfect regulation, when morality is determined by the economic relations and material interests of society, and it differs in cultural and religious aspects? These and similar questions are important when examining the role of morality in the legal regulation of AI. They will be discussed on the basis of the provisions of two legal paradigms7 manifesting different approaches to the relationship between law and morality.

2.1. Morality and Law in the Theory of Natural Law

In legal literature, two theories of natural law can be distinguished, namely, classical8 and modern9. Both theories “are based on the principles of human and social behavior, primarily moral ones, which have been formed during the development of society over many years” (Lastauskienė et al., 2020, p. 25). We can consider the sophists who lived in ancient Greece to be the pioneers of the classical theory (Baublys et al., 2012, p. 79). They were looking for an answer to the question of whether there are any rules that should be followed, without exception, by all people living in the state. According to the sophists, these are rules that arise from human nature and are therefore binding upon all people. Yet they soon realized that people respond differently to different things, and therefore their statements can at the same time be both right and wrong, moral and immoral. Hence, the main criterion for determining whose truth is superior is ‘practical benefit’ and mutual agreement on it. Such rules arising from an agreement (convention) are characteristic of state norms, while other, non-contractual (natural or voluntaristic) rules are characteristic of a person and incompatible with anything else (Baublys et al., 2012, p. 82). This distinction of rules is considered the greatest achievement of the sophists but, in the long run, the pursuit of personal gain overshadowed the importance of the conventional rules. This led another school of ancient Greek philosophy – the stoics – to present a different concept of natural law, considering the world as one vast whole governed by the same law (natural law). The content of this concept of natural law consisted of personal morality, social virtue, and the reasonable and harmonious development of the world, and the rules regulating social relations had to express these provisions (Baublys et al., 2012, p. 84). Unlike positive law, natural law “does not recognize any other than moral differences between people. All people are equal before this law” (Baublys, 2005, p. 175). The importance and significance of natural law in social life, as formulated by the stoics, influenced ancient Roman law, from which most modern legal systems are derived.

Classical ideas of natural law continued to develop during the medieval period. The theory was greatly influenced by the representatives of Christian philosophy, the most prominent of whom are St. Augustine and St. Thomas Aquinas. St. Augustine was the first philosopher to speak of the necessity of different kinds of laws. He distinguished three types of laws (eternal, natural, and temporal), assigning higher power to the order of the world established by God and emphasizing the primacy of will over reason. In his view, a law that does not conform to natural law is not law and, being contrary to God’s law, cannot be binding (Arlauskas, 2011, p. 19). Later, the ideas of St. Augustine were taken up by the Italian philosopher St. Thomas Aquinas, who formulated the theological-ethical concept of law, according to which not just any, but only the virtuous pursuit and use of temporal goods is justified (Baublys et al., 2012, p. 92). Thus, the moral principle was established as the most important principle of law. Thomas Aquinas linked the concept of justice to law, calling it “the ordinance of reason for the common good” (Aquinas, 2005, p. 39). It is the pursuit of the common good that makes natural law morally binding. It should be noted that Thomas Aquinas not only began to divide laws into separate, interlinked categories, but also examined the question of the content of the law from a moral point of view, which is still relevant today – should the law prohibit all vices and command all virtuous actions? In the view of Thomas Aquinas, the law must prohibit only the most grievous vices from which it is possible for the majority to abstain – “those that are to the hurt of others, without the prohibition of which human society could not be preserved: thus human law prohibits murder, theft and such like” (Aquinas, 2005, p. 177). Prohibitions must be introduced step by step, so that those who are unable to comply with them do not indulge in even greater evils. According to Thomas Aquinas, “all law proceeds from the reason and will of the lawgiver; the Divine and natural laws from the reasonable will of God; the human law from the will of man, regulated by reason” (Aquinas, 2005, p. 207). He perceives the law as “a set of moral principles”, and every law must be based on these principles. There is a deterministic relationship between morality and law, which creates the moral authority necessary for legal norms. “Every human law has just so much of the nature of law, as it is derived from the law of nature. But if in any point it deflects from the law of nature, it is no longer a law but a perversion of law” (Aquinas, 1998, p. 60). As circumstances change, the form of government and various norms of public life may change, but changes in the political order must not destroy its links with the natural law, because this, among other things, is the most important source of legitimacy of every political order (Navickas, 2005, p. 81).

The classical concept of natural law based on such provisions existed for many centuries and had a profound influence on legal science. However, significant scientific discoveries, “religious divisions, philosophical rationalism and political individualism” affected natural law during the Renaissance period (Baublys et al., 2012, p. 111). These developments contributed to the formation of the concept of natural law developed by the Dutch scholar Hugo Grotius, who sought to separate law from theology. According to him, “natural law is the voice of right reason, stating that one or another action, because it conforms to or does not conform to rational nature, is morally despicable or morally necessary <…>” (Sabine and Thorson, 1995, p. 440). Grotius believed that the power of reason alone could create a legal system, first breaking a problem down into simpler elements that no longer required religious authority to grasp. This methodological approach, which allows law to be treated like mathematics, was significant for representatives of the exact sciences. Grotius considered the security of property, respectability, justice, and the universal correspondence of the consequences of human behavior to their merits as the minimum natural values (Baublys et al., 2012, p. 111). In his view, they should be the basis for seventeenth-century law, both domestic and international.

The scientific developments that began during the Renaissance period encouraged people to trust their own reason and to distance themselves more and more from old traditions and religious postulates. At that time, two new philosophical trends, namely, rationalism and empiricism, emerged. The most famous representatives of empiricism of that time – T. Hobbes, J. Locke, and D. Hume – constructed their concepts of natural law by relying on the analysis of social life events and the “specificity of human nature” (Vaišvila, 2004, p. 72). Thomas Hobbes believed that the basic motive behind the behavior of every individual is the instinct of self-preservation (Baublys et al., 2012, p. 117). The moral order, for Hobbes, is not something we discover by a thoughtful observation of human nature, but rather something that helps to reconstruct the natural individual, a kind of program of political socialization (Vaišvila, 2005, p. 81). Unlike Aquinas, Hobbes does not believe that laws are bound by moral principles. He therefore perceives natural law not as a set of moral principles, but rather as a rational construct that offers comfortable conditions for political life, on which it is useful for people to agree (Navickas, 2005, p. 81). Hobbes distinguishes between the concepts of right and law: “RIGHT consisteth in liberty to do or to forbeare; Whereas LAW, determineth, and bindeth to one of them” (Hobbes, 1999, p. 141–142). Right operates in the natural state, and the observance of laws is the condition and method of transition from the natural state to the social state (Baublys et al., 2012, p. 119). Natural laws create the basis for the existence of society and ensure survival, and Hobbes therefore gives them priority (Hobbes names nineteen such natural laws). Thus, it can be said that Hobbes fundamentally transforms the theory of natural law by giving morality a much weaker role than Thomas Aquinas did.

Today, the secular theory of natural law is dominant, “basing natural law solely on the postulates of mind” (Kūris, 2002, p. 24). It is dominated by the fundamental principles of justice, humanity, and honesty. The representatives of modern natural law (L. L. Fuller, J. Finnis et al.) are also looking for criteria of the ‘morality’ of law. For example, Fuller argued that law which is ‘moral’, i.e., which meets the respective requirements10, is good. According to Fuller, the essence of law is to achieve social order and, with the help of these rules, to guide human behavior in the right direction. A failure to comply with such requirements would deprive the rules of the character of law.

Another representative of the modern school of natural law, J. Finnis, claimed that the object of natural law is fundamental goods11, which are common to all cultures and all ages (Lastauskienė, 2020, p. 26). According to Finnis, the principles of natural law “justify the operation of government in the community. They also require that government should act in such a way, […] that it reasonably respects human rights, which embody the requirements of justice, and that it seeks to contribute to the common good, which also include respect for rights” (Finnis, 2014, p. 55). On the basis of these principles, it is possible to assess whether positive laws are flawed. Thus, the purpose of law is to create rules that promote the protection of the common good.

In conclusion, it can be stated that the school of natural law is a fundamentally important part of the Western legal tradition. It “encouraged the pursuit of the humanity and justice of positive law, promoted the establishment and protection of economic freedom of an individual, had a huge influence on constitutionalism and the development of democracy, and laid the foundations for fairer international law” (Kūris, 2002, p. 24). The natural law theory is not homogeneous. If, in antiquity or in the Middle Ages, non-observance of the principles of natural law and lack of morality were considered essential criteria for invalidating positive law, in the modern theory of natural law, the morality of law (assessed by formal criteria) is evaluated only as an indicator and does not necessarily invalidate positive law. Despite these differences, all conceptions of natural law share a fundamental relationship between law and morality. The question is whether that relationship remains relevant in today’s technological age. In analyzing the challenges posed by AI, we see that some of them concern matters of value, and so the moral assessment of law is significant and could serve to create a more humane, fairer law. The debate over AI and natural law is ongoing and will likely continue as AI technology advances and becomes increasingly integrated into society (Dahraj, 2023, p. 5). The biggest problem in applying the principles of natural law to today’s legal regulation is that “it is impossible to establish a coherent and consistent system of moral standards that can be reliably relied upon when dealing with one or another significant issue of law (creating law or solving a legal dispute)” (Lastauskienė et al., 2020, p. 72). Therefore, the answer to the question of how to do this most likely lies in certain successful examples of expressing the principles of natural law through the establishment of human rights in positive law12. Naturally, with the rapid development of technology, “as society changes, its attitude towards fundamental moral principles and their content also changes” (Lastauskienė et al., 2020, p. 26); therefore, in the long term, it is impossible to ensure the morally best law and to create perfect laws. Adapting positive law to new technological challenges is a permanent process.

2.2. Morality and Law in the Paradigm of Legal Positivism

The global developments of the 19th and 20th centuries provided impetus for the development of a legal concept independent of moral standards. Science continued to develop, seeking a ‘universal morality and law’ and the ‘autonomy of art’, basing everything on the internal logic of life and science (Lastauskienė et al., 2020, p. 27). New schools of philosophical positivism and legal positivism were formed. J. Austin, J. Bentham, H. Kelsen, H. L. Hart, and J. Raz should be considered the most prominent representatives of legal positivism. One of the main theses of this school states that the most important thing is to analyze what the law is and how it is, and not what it should be from a moral point of view. “No matter how wrong, immoral positive law is, no matter how much criticism it deserves, it is precisely what all legal communities are creating, what makes politicians and lawyers cross swords” (Kūris, 2002, p. 25).

The first legal philosopher to develop a fundamental theory of ‘pure law’ was Hans Kelsen. In 1960, he published the expanded and updated edition of the Pure Theory of Law, the most significant work of his career13. In this work, Kelsen examines many classic issues of legal theory – starting with the concept of law and the hierarchy of legal rules, and ending with the issues of legislation, the application of law, and loopholes in law. Since his approach to the relationship between law and morality is the most important for the present work, we shall examine this aspect in more detail.

Kelsen recognized that moral standards exist alongside legal rules and other social norms, but advocated a strict separation of legal rules and moral standards, as the lack of clear boundaries threatens the “methodological purity of legal science” (Kelsen, 2002, p. 83). “The fundamental difference between law and morality is that law is a coercive order, [...] while morality is a social order without any sanctions” (ibid., p. 85). According to Kelsen, the question of the relationship between law and morality is not a question of the content of law, but rather the question of its form (ibid., p. 87). As regards the relationship between law and morality, Kelsen emphasizes that the validity of a positive legal order cannot depend on a single, absolute moral order, since moral values are relative. The second important point is that “ultimately all positive laws owe their validity to a non-positive law, a law not created by human action” (Kelsen, 2002, p. 88). For these reasons, “when evaluating a positive legal order from the point of view of morality (as good or bad, right or wrong), it is necessary to understand that the evaluation criterion is relative, that an evaluation based on a different moral system is not rejected, in addition to the fact that the legal order, which is based on one moral system is regarded as wrong in relation to another moral system can at best be regarded as right in relation to another moral system” (Kelsen, 2002, p. 88–89). Kelsen emphasizes that “if the moral order does not prescribe to obey the positive legal order under all circumstances, if … then the postulate to separate law and morals, science of law and ethics, means that the validity of positive legal norms does not depend on their conformity with the moral order; … a legal norm may be considered valid even if it is considered at variance with the moral order” (Kelsen, 2002, p. 89). The Pure Theory of Law, as developed by Kelsen, rejects the thesis that law must be inherently moral, and that an immoral social order is not a legal order. Otherwise, it would imply the existence of an absolute moral order. Moreover, if put into practice, morality would become “a tool for the uncritical justification” (Kelsen, 2002, p. 90). Kelsen speculates that such a system might be politically convenient, but logically unacceptable.

The relationship between law and morality, and between natural law and legal positivism, was also examined by another famous legal philosopher, H. L. Hart. Like Kelsen in continental Europe, Hart brought a unique and constructive approach to this area in the common law tradition. He emphasized that there are two types of norms in every legal system: primary and secondary. Their combination constitutes the essence of the law: “[W]e base the granting of this most important place to the combination of primary and secondary norms not on the fact that they will perform the function of a dictionary here, but on the fact that they are characterized by great explanatory power” (Hart, 1997, p. 261). Primary norms are intended to regulate human actions or to require refraining from some actions; secondary norms are intended to introduce new norms of the first type, cancel or modify the old ones, and define their scope or operation (Hart, 1997, p. 163). Thus, the norms of the first type establish duties, while those of the second type confer powers (public or private). However, the norms of the first type have disadvantages – they are uncertain, static and ineffective (Baublys et al., 2012, p. 168). The second type of norms is needed precisely to address these shortcomings. Alongside these legal norms, there are also moral norms.

In his work, Hart discusses how moral norms differ from legal norms, distinguishing four cumulative features14: (i) importance; (ii) immunity from deliberate change; (iii) the voluntary character of moral offences; and (iv) the form of moral pressure (Hart, 1997, p. 280). Moral norms can be attributed to the category of non-legal norms. The violation of some of them merely indicates a failure to act correctly (for example, rules of etiquette or of correct language), while the violation of others can lead to condemnation or contempt. “The concept of the relative importance attached to these different types of norms is reflected in both the degree of sacrifice of private interest they require and the strength of the social pressure to obey them, although no precise scale of this importance can be established” (Hart, 1997, p. 283). Hart emphasizes the very important category of ‘accepted’ or ‘conventional’ morality. This is the kind of morality that is characteristic not of a single individual, but of the majority. Only such conventional morality can influence the development of the law. “At the core of accepted morality are such norms, which we have called primary duty-determining norms” (ibid., p. 281). Yet, Hart does not say that law must conform to moral ideals. He claims that “law and morality are connected by many different types of connection, but there is no connection that can be unequivocally distinguished for analytical purposes as their only connection” (ibid., p. 303). On that basis, he formulates his concept of positivism: “[W]e shall take legal positivism to mean the simple contention that it is in no sense a necessary truth that laws reproduce or satisfy certain demands of morality, though in fact they have often done so” (Hart, 1997, p. 304). It is by asserting the existence of an actual, albeit not necessary, connection between law and morality, and by analyzing its impact on the content of law, that Hart distances himself from radical legal positivism (Baublys et al., 2012, p. 169). This connection manifests itself in the five well-known truisms that make up the content of natural law: human vulnerability, approximate equality, limited altruism, limited resources, and limited understanding and strength of will. In his view, the doctrine of natural law is constantly reborn because its appeal is determined not by the authority of God or man, but because “it asserts certain elementary truths without which neither morality nor law can be understood” (Hart, 1997, p. 307). For this peculiar approach to the relationship between law and morality, and between natural law and positivist law, Hart is called a ‘soft positivist’.

In conclusion, it can be stated that legal positivism is likewise not a homogeneous school of legal theory. Kelsen’s contribution to the school of positivism is indisputable. He strictly separated law from morality, emphasized the relativity of any moral system, and recognized that a legal norm can be considered valid even if it contradicts morality. He regarded legal science as a science of norms. Kelsen emphasized the importance of the sanction and of the hierarchical system of legal norms resting on the basic norm (in German: Grundnorm). However, he overlooked the fact that such a system is quite static and detached from social interests. Hart brought a more modern approach to legal positivism. He constructed a legal system based on a combination of primary and secondary norms, and he considered the connection between law and morality to be actually existing, although not necessary. He emphasized the category of conventional morality as a means of influencing law. Hart delved more deeply into social aspects than Kelsen, and he did not isolate law from society.

In assessing today’s legal regulatory challenges in the field of AI, both natural law and legal positivism have a significant impact. The natural law theory reflects universal moral principles and values, while legal positivism argues that the regulation of AI should be based on the laws and regulations created by the state (Dahraj, 2023, p. 9). The real challenge is to balance these two approaches. It requires a more thorough justification of the relationship between law and morality, “from which a clear role of morality in legislation can be derived, while law remains an independent discipline” (Arlauskas, 2011, p. 24).

3. Reflection of Moral Principles in the AI Act

In April 2024, the committees of the European Parliament agreed on a compromise version of the AI Act15. The AI Act classifies AI systems according to the potential risks they pose16. High-risk AI systems that operate in the public space and pose a risk to many individuals (for example, infrastructure management, biometric identification, education, employment, credit, law enforcement and border control systems, as well as systems in individual sectors – medicine, industrial robots and the like) will receive the most attention. The regulation aims to ensure the development of human-centered and ethical AI in Europe by establishing new rules for the transparency and risk management of AI systems (AI Act: a step..., 2023). Thus, the purpose of this chapter is to reveal how moral provisions are reflected in the AI Act17 and to assess whether these provisions will help at least partially solve the ethical challenges discussed in the first part of this work. It should be noted that, in 2019, the High-Level Expert Group on AI prepared the “Ethics Guidelines for Trustworthy AI” on behalf of the European Commission. They state that ‘trustworthy’ AI is lawful, ethical and robust (Ethics Guidelines, 2019, p. 2). The Guidelines set out seven criteria with which companies developing, deploying and using AI should comply: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability. The requirement of ethical AI meant that (i) the design of AI systems should comply with the principles of the EU Charter of Fundamental Rights; (ii) the ethical principles of the Charter should be transformed into the seven criteria mentioned above, which companies must ensure throughout the entire life of the system; and (iii) AI systems should be evaluated accordingly (Ethics Guidelines, 2019, p. 7). The Guidelines placed great emphasis on the protection of fundamental human rights by emphasizing universal moral and legal principles (Smuha, 2019, p. 102). The development of ethical AI is based on the respect for human rights and fundamental freedoms enshrined in the Treaty on European Union18 and the Charter of Fundamental Rights of the European Union19. These documents reflect a ‘human-centered approach’ in which the person has a unique and unquestionable moral priority, both in law and in other areas of life. The guidelines developed by the High-Level Expert Group therefore transform this moral aspect of human rights into the following ethical principles of trustworthy AI20:

(i) Respect for human dignity – the person must be treated with respect as a subject and not become an object to be filtered, sorted, evaluated or manipulated. Therefore, AI systems should be designed in a way that respects the needs, cultural identity, physical, mental and emotional health of people;

(ii) Freedom of individual decisions – human beings should remain free to make their own decisions, without manipulation of their opinions, excessive control, restrictions on engaging in certain activities, interference in their personal lives, limitations on active social participation, etc.;

(iii) Respect for democracy, justice and the rule of law – the implemented AI systems should ensure and promote the development of democratic processes, with particular emphasis on the right to free vote. AI systems must also have safeguards against undermining the fundamental principles underpinning the rule of law, binding laws and regulations, and ensuring transparency and equality before the law;

(iv) Equality, non-discrimination and solidarity – equal respect for the moral self-worth and dignity of all people must be ensured. In the context of AI, equality means that systems cannot generate unfairly biased results (e.g., data used to train AI systems should include as wide a range of individuals as possible, representing different populations). Systems must ensure the equal treatment of men and women, ethnic minorities, children, and persons with disabilities;

(v) Citizens’ rights – these encompass many rights, including the right to vote, the right to good administration, the right to access public documents, and the right to petition. They are enjoyed not only by citizens of EU countries, but also by third-country nationals legally residing in the EU. AI systems can provide public services more efficiently; at the same time, they can negatively affect the rights of citizens, and so these rights must be protected when developing and using AI systems.

These guiding principles should enable AI systems to be designed so as to respect human autonomy, avoid harm, and be transparent and non-discriminatory. Human-centeredness, technical reliability and security, accuracy, data protection, transparency, non-discrimination, sustainability, accountability and other important features should be integrated into dynamically changing AI systems.

The AI Act itself also incorporates references to the aforementioned High-Level Expert Group’s Guidelines (Recital 7). The Preamble states that the Regulation aims to ensure “a high level of protection of health, safety, fundamental rights…, to protect against the harmful effects of AI systems in the Union” by setting requirements for trustworthy AI and proportionate obligations on all value chain participants, promoting the protection of the rights enshrined in the Charter of Fundamental Rights of the European Union: the right to human dignity (Charter, Article 1), respect for private life and protection of personal data (Charter, Articles 7 and 8), non-discrimination (Charter, Article 21), equality between women and men (Charter, Article 23), the rights to freedom of expression (Charter, Article 11) and freedom of assembly (Charter, Article 12), and the rights to an effective remedy and to a fair trial, the rights of defense and the presumption of innocence (Charter, Articles 47 and 48). The proposed AI regulation seeks, among other things, to ensure that AI systems placed on the Union market and used there are safe and respect the existing law on fundamental rights and the Union’s values.

The AI Act has been revised several times since its first presentation to the European institutions and the public. In April 2024, the main committees of the European Parliament agreed on a compromise version of the AI Act. In addition to the protection of moral values and fundamental human rights already mentioned, the latest proposals also introduced additional safeguards for the development of more transparent and secure AI. In this context, “the AI Act diverges somewhat from the traditional concept of separating rights from values as it not only focuses on rights but also incorporates explicit justifications for these rights” and “is grounded in the natural law approaches” (Hogan, Lasek-Markey, 2024, p. 10).

Firstly, the AI Act prohibits certain AI systems that threaten human dignity and autonomy. In order to protect against the potential harm that AI can cause to fundamental rights, the AI Act bans AI systems which manipulate human behavior, evaluate individuals on the basis of personal traits and behaviors, or carry out human profiling and biometric surveillance (AI Act, Article 5). Additionally, Articles 6 and 7 classify high-risk AI systems that can significantly impact fundamental rights and impose strict compliance obligations on such systems. These prohibitions align with natural law principles by preventing AI from undermining fundamental human rights.

Secondly, the AI Act provides for transparency and human oversight requirements, particularly for high-risk AI systems: (1) transparency obligations for AI deployers to provide clear information on AI capabilities, limitations, and decision-making processes (Article 13); (2) a mandatory duty for the deployers of high-risk AI systems to monitor and supervise these systems, so that the people working with those systems are trained in time and have the necessary qualifications, and so that AI decisions can be monitored and corrected by human operators (Article 14); (3) data governance rules to prevent bias and discrimination in AI training datasets (Article 10); (4) record-keeping and documentation duties for high-risk AI systems to ensure that AI decisions remain traceable and explainable (Articles 16–20); (5) a mandatory transparency obligation for generative AI models (e.g., Midjourney or ChatGPT) that use diffusion models or Large Language Models (LLMs) to create images, music or text, which includes the obligation to disclose the origin and legality of the data used to train the algorithm, a prohibition on generating illegal content, indications that a work has been created by AI (thus protecting the legitimate interests of copyright holders), etc. These provisions align with the natural law and moral principles of fairness, accountability, and transparency, ensuring that AI remains under human control.

Thirdly, the AI Act not only imposes restrictions but also promotes ethical AI innovation: (1) possibilities for scientists, researchers and businesses to test their systems in AI regulatory sandboxes have been defined, in order to detect and eliminate errors in a timely manner and to prevent possible discrimination or other human rights violations (Articles 57–62); (2) innovation is promoted through open-source licenses for AI systems and their components; (3) the opportunities for EU citizens to submit complaints and receive clarifications regarding high-risk AI systems that may significantly affect fundamental human rights have been strengthened (Article 85). This reflects an attempt to balance legal positivism (strict rules) with moral principles (encouraging ethical AI development).

The above-mentioned provisions should strengthen the protection of human rights in the age of modern technology. The choice of a regulation, rather than a directive, is also an advantage in this case, as the provisions of a regulation apply directly in the EU Member States without additional transposition into national law.

The main question is whether the incorporation of such ‘theoretical’ moral imperatives into the AI Act automatically guarantees a fundamental, human rights-based regulation of AI in the EU, and what practical problems arise in implementing such a regulation.

When analyzing the AI Act from the perspective of morality, several shortcomings of the regulation should be mentioned. First of all, the regulatory focus falls only on the group of high-risk systems and does not cover ‘mixed’ systems, which are not currently listed expressis verbis in Article 6(1) and Annex III to the AI Act, but which have interconnected high-risk and low-risk components that together determine the purpose of the entire AI system. The operation of such systems may have negative consequences for the protection of basic human rights. Greater attention could also be paid to ensuring the legality of the data used to train AI systems (the transparency obligation currently provided for is still too abstract), the status of derived data, and the disclosure of such data in connection with the requirements of the GDPR. According to the European Parliament’s proposal, the regulation does not apply to low-risk AI systems, so as not to burden business with additional requirements; this, however, has been criticized from the perspective of consumer rights protection.

As a critical weakness from the ethical side, one must highlight the fact that the AI Act restricts certain uses of AI systems by private entities, but provides broad exemptions for State law enforcement authorities. For example, the AI Act provides that a ‘real-time’ remote biometric identification system in publicly accessible spaces “shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49”, yet “in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay” (Article 5(2)). This contradicts the moral principle of equality, as the risks posed by governmental use of AI remain the same as those posed by business use. Accordingly, the same ethical obligations should apply to both types of organizations.

Although the Act promotes human oversight (Article 14), automation bias remains a risk in hiring, healthcare, and credit-scoring AI systems. Humans become overly reliant on AI recommendations and thus lose personal autonomy. The AI Act lacks strong mechanisms for individuals to challenge AI decisions (only some exceptions are outlined in Article 85). This could be the main problem in the practical implementation of the AI Act.

The AI Act acknowledges the risks of AI bias, but it focuses more on technical documentation and compliance (Articles 8–10; Articles 72–73) than on strong anti-discrimination safeguards. Major emphasis is placed on internal compliance audits rather than on external ethical review mechanisms. From this perspective, the AI Act supports the legal positivism paradigm and contradicts the principles of natural law.

The position that the AI Act should focus on legal certainty (its main goal) rather than on ethical justifications is supported by some legal scholars. For instance, Marc M. Anderson (Anderson, 2022) emphasizes that legal positivism offers more certainty than ethical principles, because ethics cannot be legislated. He argues that the main objective of the AI Act is “to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union” (AI Act, Recital 1), and that the AI Act therefore gives priority to utilitarianism over natural law. Another important argument is speed: ethical transformations take time, while AI regulation requires speed. “Ethical time is analogous to geological time. Legal time in its regulatory aspect is analogous to anthropocentric time” (Anderson, 2022). The ‘speed paradigm’ contradicts the slow nature of ethics. Anderson suggests that AI regulators should be guided by legal positivism rather than by purely ethical principles. One might disagree with this position, given the ethical challenges posed by AI technology: a strict separation of law and morality (legal positivism) would not be an advantage for AI regulation.

In summary, it can be stated that the AI Act seeks to find a balance between natural law and legal positivism. Moral principles are reflected in the AI Act through the prohibition of the riskiest AI systems that manipulate human behavior, strict compliance obligations for high-risk AI systems, transparency and human oversight requirements, data governance rules, and the opportunities to test AI systems in regulatory sandboxes, to submit complaints, etc. At the same time, strict regulatory rules dominate in the form of compliance and documentation requirements, exemptions for State law enforcement authorities, and centralized enforcement mechanisms.

Of course, there is some concern that the AI regulation was adopted late, and that some of its provisions will come into effect only gradually over the next two years. By then, part of the products (especially physical ones, which cannot be adapted as quickly as software) will already have reached the markets, and their compliance with the new regulation may therefore raise new legal and ethical challenges.

Conclusions

Having analyzed whether the proposed legal regulation (the AI Act) sufficiently reflects fundamental moral principles, the following concluding statements can be presented:

1. The rapid development of AI undoubtedly brings many advantages, but this technology also raises serious ethical challenges: impermissible discrimination against individuals; violations of privacy and personal data protection; unauthorized (illegal) collection and use of data for commercial purposes; manipulation of opinions and choices, etc. These challenges arise from the complexity, opacity and autonomy of AI systems. The aim, therefore, is to create a human rights-based, ethical and effective AI regulation. Integrating a moral dimension into AI regulation is not easy, and this process reflects the ongoing tension between natural law (the moral foundation of law) and legal positivism (regulatory pragmatism).

2. Most of the challenges posed by AI are related to human values, and thus the moral aspect of legal assessment is significant and could serve to create a more humane, fairer law. The natural law paradigm is very important when regulating such rapidly changing technologies as AI. The analysis shows that the AI Act reflects the main moral principles – fairness, transparency, and accountability – thus ensuring that AI remains under human control. The biggest problem in applying the principles of natural law to today’s legal regulation is that morality is individual to each person, and therefore it is practically impossible to create a consistent and coherent system of moral norms that could be transposed into positive law and, once transferred, maintained as universal.

3. The AI Act also strongly reflects legal positivism, which emphasizes clear, codified laws, regulatory certainty, and procedural compliance. The Regulation classifies AI systems into categories, ensures a harmonized legal framework across the EU Member States, sets uniform legal standards (eliminating moral interpretations), and provides strict compliance and technical documentation requirements, with legal penalties for non-compliance. The AI Act establishes centralized enforcement mechanisms that leave little room for ethical debate.

4. The AI Act seeks to find a balance between natural law and legal positivism. AI regulation is based on respect for the human rights and fundamental freedoms enshrined in the Treaty on European Union and the EU Charter of Fundamental Rights. However, the regulation should give priority to human rights and pay more attention to the fairness requirements for algorithms, the status of output data, and important liability issues. The AI Act lacks stronger ethical oversight, as well as independent AI ethics review bodies tasked with evaluating AI systems. There is also some concern that the regulation was adopted late, while AI technology continues to develop. Nevertheless, by integrating basic moral principles into the regulation and strengthening the protection of basic human rights, we should be able to overcome the main challenges posed by AI technology.

Bibliography

Legal acts

International conventions

The Universal Declaration of Human Rights (1948). United Nations [online]. Available at: https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf [accessed on 15 January 2024].

The Convention for the Protection of Human Rights and Fundamental Freedoms (1950). Valstybės žinios, 1995-05-16, Nr. 40-987.

European Union legal acts

Consolidated version of the Treaty on European Union. OJ C 326, 26.10.2012, p. 13–390.

Charter of Fundamental Rights of the European Union (version of 7th December 2000). 2016/C 202/02, p. 391–405.

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). OJ L, 2024/1689, 12.7.2024.

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). OJ L 119, 4.5.2016, p. 1–88.

National legal acts

The Constitution of the Republic of Lithuania. Valstybės žinios, 1992, Nr. 33-1014. Official translation [online]. Available at: https://e-seimas.lrs.lt/portal/legalActPrint/lt?jfwid=rivwzvpvg&documentId=TAIS.211295&category=TAD

Republic of Lithuania Law on Legal Protection of Personal Data. Valstybės žinios, 1996, Nr. 63-1479. Official translation [online]. Available at: https://e-seimas.lrs.lt/portal/legalAct/lt/TAD/ef70b5d2f14811e78f3dc265493430ae [accessed on 3 March 2024].

Special literature

Anderson, M. M. (2022). Some Ethical Reflections on the EU AI Act, [online]. Available at: https://CEUR-WS.org/Vol-3221/IAIL_paper5.pdf [accessed on 8th March 2024].

Aquinas, T. (2005). Apie įstatymus: Teologijos suma I-II. 90-97 klausimai. Vertė: G. Vyšniauskas. Vilnius: Logos.

Aquinas, T. (1998). Žmogaus veikla dorovės požiūriu: Teologijos suma I-II. 18-21 klausimai. Vertė: A. Šilanskienė. Vilnius: Logos.

Arlauskas, S. (2011). Šiuolaikinės teisės filosofija. Monografija. Vilnius: Charibdė.

Baublys, L. et al. (2005). Antikinė teisingumo samprata. Vilnius: Mykolo Romerio universitetas.

Baublys, L. et al. (2012). Teisės teorijos įvadas. Vilnius: Mes.

Breskienė, I. (2024). Darbuotojų stebėjimas naudojant algoritminį valdymą, grįstą dirbtinio intelekto sistemomis. Teisė, 133, p. 103–117, https://doi.org/10.15388/Teise.2024.133.7

Cath, C. (2018). Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges [online]. Available at: https://doi.org/10.1098/rsta.2018.0080 [accessed on 10th March 2024].

Dachraj, A.T. (2023). Theory of Natural Law, Legal Positivism and Its Implications for AI Regulation [online]. Available at: https://ssrn.com/abstract=4357131 [accessed on 8th March 2024].

Finnis, J. (2014). Prigimtinis įstatymas ir prigimtinės teisės. Vertė: A. Plėšnys. Vilnius: Aidai.

Fuller, L. L. (1964). The Morality of Law. New Haven and London: Yale University Press.

Gaubienė, N. (2024). Can Artificial Intelligence Engage in the Practice of Law as the Art of Good and Justice? Filosofija. Sociologija, 35, p. 54–63, https://doi.org/10.6001/fil-soc.2024.35.2priedas.special-issue.6

Hart, H. L. A. (1958). Positivism and the Separation of Law and Morals. Harvard Law Review, 71(4), p. 593–629.

Hart, H. L. A. (1997). Teisės samprata. Vertė: E. Kūris. Vilnius: Pradai.

Hegel, G. W. (2000). Teisės filosofijos apmatai. Vertė: L. Anilionytė. Vilnius: Mintis.

Hobbes, T. (1999). Leviatanas. Vertė: K. Rastenis. Vilnius: Pradai.

Hogan, L., Lasek-Markey, M. (2024). Towards a Human Rights-Based Approach to Ethical AI Governance in Europe, p. 1–15 [online]. Available at: https://doi.org/10.3390/philosophies9060181 [accessed on 20th December 2024].

Juškevičiūtė-Vilienė, A. (2024). Legal Positivism, AI, and the Modern Legal Landscape: Challenges in Education, Research, and Practice, p. 25–41 [online]. Available at: https://doi.org/10.18778/0208-6069.109.02 [accessed on 30th December 2024].

Kalpokas, I., Kalpokienė, J. (2023). Intelligent and Autonomous: Transforming Values in the Face of Technology. Leiden: Koninklijke Brill.

Kanišauskas, S. (2009). Moralės filosofijos pagrindai. Vilnius: Mykolo Romerio universitetas.

Kelsen, H. (2002). Grynoji teisės teorija. Vertė: A. Degutis ir E. Kūris. Vilnius: Eugrimas.

Kiela, D. (2021). Dynabench: Rethinking Benchmarking in NLP, p. 1–15 [online]. Available at: https://doi.org/10.48550/arXiv.2104.14337 [accessed on 2nd February 2024].

Kūris, E. (2002). Grynoji teisės teorija, teisės sistema ir vertybės: normatyvizmo paradigmos iššūkis. Iš Kelsen, H. Grynoji teisės teorija. Vilnius: Eugrimas, p. 11–41.

Lastauskienė, G. et al. (2020). Teisės teorija. Vadovėlis. Vilnius: VU leidykla.

Magrani, E. (2019). New perspectives on ethics and the laws of artificial intelligence, p. 1–19 [online]. Available at: https://doi.org/10.14763/2019.3.1420 [accessed on 5th February 2024].

Navickas, A. (2005). Prigimtinis įstatymas ir prigimtinės teisės: nuo Tomo Akviniečio iki Thomaso Hobbeso. Problemos, 67, p. 75–87.

Randakevičiūtė, J. (2016). Moralės vaidmuo teisinėje sistemoje Vakarų teisės tradicijos kontekste. Teisė, 101, p. 145–165, https://doi.org/10.15388/Teise.2016.101.10449

Sabine, G., Thorson, T. (1995). Politinių teorijų istorija. Vilnius: Pradai.

Smuha, N. A. (2019). The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence, p. 97–106 [online]. Available at: https://doi.org/10.9785/cri-2019-200402 [accessed on 8th January 2024].

Stahl, B.C., Schroeder, D., Rodrigues, R. (2023). The Ethics of Artificial Intelligence: An Introduction. In: Ethics of Artificial Intelligence. Springer Briefs in Research and Innovation Governance. Springer, Cham [online]. Available at: https://doi.org/10.1007/978-3-031-17040-9_1 [accessed on 8th March 2024].

Vaišvila, A. (2004). Teisės teorija. Vadovėlis. Vilnius: Justitia.

Travaux préparatoires

Ethics Guidelines for Trustworthy AI, High-Level Expert Group on AI, (2019) [online]. Available at: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419 [accessed on 8th January 2024].

The ethics of artificial intelligence: Issues and initiatives, European Parliament, Panel for the Future of Science and Technology (STOA), (2020) [online]. Available at: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf [accessed on 10th January 2024].

19 February 2020 White Paper on Artificial Intelligence – A European approach to excellence and trust, COM(2020) 65 final [online]. Available at: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligencefeb2020_en.pdf [accessed on 9th January 2024].

21 April 2021 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Brussels, COM(2021) 206 final, 2021/0106 (COD) [online]. Available at: https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:52021PC0206 [accessed on 9th January 2024].

2 March 2022 European Parliament. Draft Opinion of the Committee on Legal Affairs (JURI), 2021/0106(COD) [online]. Available at: https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2021/0106(COD)&l=en [accessed on 10th January 2024].

25 November 2022 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, Interinstitutional File: 2021/0106(COD), Nr. 14954/22 [online]. Available at: https://data.consilium.europa.eu/doc/document/ST-14954-2022-ADD-1/en/pdf [accessed on 8th March 2024].

16 May 2023 DRAFT Compromise Amendments on the Draft Report Proposal for a regulation of the European Parliament and of the Council on harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) [online]. Available at: https://www.europarl.europa.eu/resources/library/media/20230516RES90302/20230516RES90302.pdf [accessed on 10th March 2024].

22 May 2023 Report on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) [online]. Available at: https://www.europarl.europa.eu/doceo/document/A-9-2023-0188_EN.html [accessed on 12th March 2024].

Other sources

Amazon ditched AI recruiting tool that favored men for technical jobs (2018). The Guardian [online]. Available at: https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine [accessed on 12 April 2024].

Cambridge Dictionary (2024) [online]. Available at: https://dictionary.cambridge.org/dictionary/english/manipulation [accessed on 12 January 2024].

Clearview AI Fined $9.4 Million In U.K. For Illegal Facial Recognition Database (2022). Forbes [online]. Available at: https://www.forbes.com/sites/roberthart/2022/05/23/clearview-ai-fined-94-million-in-uk-for-illegal-facial-recognition-database/?sh=31018d2d1963 [accessed on 12 March 2024].

Facial recognition firm Clearview AI tells investors it’s seeking massive expansion beyond law enforcement (2022). The Washington Post [online]. Available at: https://www.washingtonpost.com/technology/2022/02/16/clearview-expansion-facial-recognition/ [accessed on 12 April 2024].

Fake-Fotos vom Papst: Warum es immer mehr Deepfakes gibt (2023). WDR1 [online]. Available at: https://www1.wdr.de/nachrichten/schieb-ki-deepfake-papst-100.html [accessed on 20 March 2024].

Legislative Train Schedule (2023) [online]. Available at: https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence [accessed on 16 April 2024].

The Best (And Scariest) Examples Of AI-Enabled Deepfakes (2019). Forbes. [online]. Available at: https://www.forbes.com/sites/bernardmarr/2019/07/22/the-best-and-scariest-examples-of-ai-enabled-deepfakes/?sh=525a89892eaf [accessed on 12 April 2024].

23 November 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence [online]. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000381137_lit [accessed on 12 April 2024].

Newman, L. H. (2021). What Really Caused Facebook’s 500M-User Data Leak? [online]. Available at: https://www.wired.com/story/facebook-data-leak-500-million-users-phone-numbers/ [accessed on 25 February 2024].

Wikipedia (2024) [online]. Available at: https://lt.wikipedia.org/wiki/Moralė [accessed on 10 February 2024].

Artūras Grumulaitis yra Vilniaus universiteto Teisės fakulteto Privatinės teisės katedros doktorantas. Pagrindinės jo mokslinių interesų sritys – dirbtinio intelekto teisinis reguliavimas, deliktų teisė, intelektinės nuosavybės teisė.

Artūras Grumulaitis is a doctoral student at the Faculty of Law of Vilnius University. His main scholarly interests involve the Regulation of Artificial Intelligence, Tort Law, and Intellectual Property Law.


1. The legal regulation is examined within the scope of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (hereinafter – the AI Act).

2. The list of authors and publications provided is not exhaustive, and it indicates only a small selection of foreign publications relevant to the topic under consideration.

3. For example, the right to life, liberty and personal security (which may be violated by a malfunctioning AI system) is guaranteed by the Universal Declaration of Human Rights (Article 3), as well as by the Constitution of the Republic of Lithuania (Articles 18–21). Violation of this right leads to legal responsibility, not only moral condemnation. However, legal challenges arise in adapting civil and criminal liability regimes to the complexity and unpredictability of AI technology.

4. Note: The prohibition of discrimination is enshrined both in international conventions and in the constitutions and laws of many countries. For example, the Universal Declaration of Human Rights (Article 7) says that “all are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination”.

5. Newman, L. H. (2021). What Really Caused Facebook’s 500M-User Data Leak? [online]. Available at: https://www.wired.com/story/facebook-data-leak-500-million-users-phone-numbers/ [accessed on 25 February 2024].

6. Article 4(4) of the GDPR defines this problem as ‘profiling’: profiling refers to any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.

7. According to T. Kuhn, a legal paradigm is a set of research approaches, values, principles and rules accepted or agreed upon by scientific communities, on which not only the choice of research objectives, but also the interpretation of the obtained data depends (Lastauskienė et al., 2020, p. 45).

8. This theory is supported by Confucius, Aristotle, Cicero, St. Augustine, and St. Thomas Aquinas.

9. The most famous representatives of this theory are H. Grotius, T. Hobbes, J. Locke, J. Finnis, Lon L. Fuller, and Morris R. Cohen.

10. Fuller identified eight requirements for rules of law: 1) general character; 2) public announcement; 3) acting in the future; 4) clarity; 5) mutual compatibility; 6) non-contradiction; 7) constancy over time; 8) correspondence of declared rules and real actions (Fuller, L. L. (1964). The Morality of Law. New Haven and London: Yale University Press, p. 46–91).

11. The goods identified by Finnis are the following: life, knowledge, play, aesthetic experience, sociability (friendship), practical reasonableness, and religion.

12. For example, in the Universal Declaration of Human Rights (1948), the European Convention for the Protection of Human Rights and Fundamental Freedoms (1950), the Constitution of the Republic of Lithuania (1992), etc.

13. Note: This segment is prepared according to the facts of the biography of H. Kelsen specified by E. Kūris in the introductory article of the book Pure Theory of Law, p. 17–22.

14. Note: 1) importance – an essential feature of a moral norm or standard is that it is considered important and accepted, whereas a legal norm may be considered irrelevant; 2) immunity from deliberate change – the legal system is characterized by the fact that new norms can be introduced and old ones changed or abolished by a deliberate act, whereas moral norms cannot be created, changed, or eliminated in this way; 3) voluntary character of moral offences – after a moral transgression has been committed, the fact that the person committed it unintentionally is a circumstance that exempts this person from moral condemnation, whereas violations of the law may entail ‘strict liability’ or ‘no fault’ liability, so that a penalty may still follow; 4) form of moral pressure – a person intending to violate a legal norm can be dissuaded only by the threat of punishment, whereas in the case of morality pressure takes the form of an appeal to respect the norms, in the expectation that this appeal will evoke a feeling of shame or guilt.

15. Note: At the date of the submission of this paper, the AI Act had already been adopted (Regulation (EU) 2024/1689) and had entered into force on 1 August 2024.

16. Risk-based approach to AI systems: 1) unacceptable-risk AI – harmful uses of AI (e.g., real-time biometric identification, profiling or emotion recognition systems); 2) high-risk AI – a number of AI systems that create an adverse impact on people’s safety or their fundamental rights (Annex III to the AI Act; Article 6); 3) limited-risk AI – systems with specific transparency obligations; 4) minimal-risk AI – systems that can be developed and used in the EU without additional requirements.

17. Analyzed documents: the AI Act of 21 April 2021, COM(2021) 206 final; the Proposal for a Regulation of the European Parliament and of the Council of 25 November 2022 on amendments to the AI Act, interinstitutional file 2021/0106(COD), No. 14954/22; and the final version of Regulation (EU) 2024/1689.

18. Article 2 of the Treaty on European Union: “[T]he European Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities, and whereas these values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail”; and Article 3: “The Union’s aim is to promote peace, its values and the well-being of its peoples”.

19. The Preamble and text of the Charter enshrine basic rights: dignity, freedom, equality, solidarity, civil rights, justice.

20. Prepared by the author by summarizing the principles developed by the High-Level Expert Group, specified on pages 10–11 of the Guidelines.