
Ethical issues of artificial intelligence

In the era of technological innovation, it is no longer news that artificial intelligence (henceforth – AI) systems are capable of operating across a wide range of industries, including transportation, retail, customer service and the judiciary.

Although AI systems make our lives considerably easier, they can also have serious adverse consequences when their ethical implications are ignored. In particular, the broad use of AI systems raises the following questions:

  • Are AI systems free of discrimination and bias, and do they operate in accordance with ethical norms?
  • Are they able to understand where the boundaries of human rights begin?
  • Are AI systems able to understand which actions should be avoided in order to prevent pernicious consequences?
  • Who is responsible for the actions taken by AI systems?
  • Is there a guarantee that AI systems remain fully under human control?

The abovementioned ethical issues intersect with the following legal matters:

  • data protection;
  • surveillance;
  • bias;
  • decision-making;
  • liability.

Firstly, the collection of our personal data and the use of smart devices are unavoidable if we want to be fully engaged in the 21st century, but don't we feel as if we are under surveillance 24/7? Most of us do. The use of information technologies has become an inevitable part of our reality. Anyone using digital devices, social media pages, online banking apps and the like has no choice but to share his or her personal data for those systems to operate fully. Moreover, the scope of that data has broadened so much that even our faces, voices, preferences and behaviour are known to the devices and programs we use. For example, Apple's Siri and Amazon's Echo (Alexa) collect so much data about their users that it feels as if they are constantly interfering in our daily lives: they know everything one did and does, and can even predict what one may do in the future. Their abilities extend to the point where these technologies can suggest actions that complement one's usual behaviour on the web, and sometimes even in real life. In a word, it is like a real-life embodiment of "Big Brother" (also known as the "Big Eye").

Secondly, bias. As the operation of AI systems is based on human-generated data, situations that perpetuate unfairness and discrimination are not rare in AI products and applications. Thus, their decision-making will not always appear lawful: even though these systems are highly intelligent and continue to develop and "learn", they still have a long way to go before they can weigh public moral and ethical principles and existing legal regulations in their decisions, rather than simply operate on the data uploaded to them.
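
To make this mechanism concrete, the following is a minimal, hypothetical sketch in Python (the dataset, group labels and decision rule are invented for illustration and do not come from any real system) showing how a model trained on biased historical decisions simply reproduces that bias:

    # Hypothetical illustration: a frequency-based "model" trained on
    # synthetic, biased hiring records. All data below is invented.
    from collections import defaultdict

    # Historical decisions: (group, qualified, hired). Group B candidates
    # were hired less often even when equally qualified.
    history = [
        ("A", True, True), ("A", True, True), ("A", False, False),
        ("B", True, False), ("B", True, True), ("B", False, False),
    ] * 50  # repeated to simulate a larger dataset

    # "Training": estimate P(hired | group, qualified) from the records.
    counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, total]
    for group, qualified, hired in history:
        counts[(group, qualified)][0] += int(hired)
        counts[(group, qualified)][1] += 1

    def predict_hire_rate(group, qualified):
        hired, total = counts[(group, qualified)]
        return hired / total if total else 0.0

    # Equally qualified candidates receive different predicted outcomes:
    print("Qualified, group A:", predict_hire_rate("A", True))  # 1.0
    print("Qualified, group B:", predict_hire_rate("B", True))  # 0.5

Because the model has no notion of fairness, it treats the historical disparity as a pattern worth learning, which is precisely the concern described above.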

The Personal Data Protection Law of the RA leaves much to be desired with regard to AI systems, and Armenian legislation currently contains no legal framework for changing the existing situation. At the same time, global initiatives concerning AI ethics (e.g. the EU guidelines on ethics in artificial intelligence and the OECD Principles on AI), as well as the readiness of some countries to make sweeping changes to their legislation in this sphere (according to the Oxford Insights survey (2019)), raise high hopes that this legal vacuum will be filled. For now, the only way of supervising the actions of autonomous systems is to apply the general regulations governing the natural persons, legal entities and state bodies that use AI.

Another question arising from the abovementioned issues is: who is responsible for the decisions of AI systems and their consequences? Someone has to be accountable: the developers, the operators or someone else. When an AI product pursues the goal it has been tasked with, the result of its calculations may not satisfy the user and may even cause far more harm than the initial problem did. For example, in the case of driverless cars the possibility of fatal consequences for human life is real: suppose the car, with no time to stop, has to choose between the safety of other road users and that of its passengers, and it chooses to avoid the road users by endangering the lives of the passengers. So who is responsible for the harmful decision of the AI? The RA Criminal Code establishes criminal liability only for natural persons; the RA Civil Code, for natural persons and legal entities; the RA Code on Administrative Offences, for natural persons and public officials.

In essence, any kind of offence has four elements: conduct, concurrence, causation and a guilty mind. As the latter cannot be present in the actions of autonomous systems, it is not yet clear who can be held responsible for the actions of AI and their consequences: it may be the creator, the developer, the user or someone else, depending on the particular situation.

All things considered, AI systems are becoming increasingly involved in our lives, and shedding light on these ethical questions should be the first step towards confronting the challenges that these intelligent systems may bring.


Author: Lilit Harutyunyan / Associate


Disclaimer: This material is produced by Legelata LLC. The material contained in this newsletter is provided for general information purposes only and does not contain a comprehensive analysis of each item described. Before taking (or not taking) any action, readers should seek professional advice specific to their situation. No liability is accepted for acts or omissions taken in reliance upon the contents of this material.
