As the pace of AI development continues unabated, with rapid adoption worldwide, governments are moving quickly to ensure that existing laws, regulations, and legal constructs remain relevant amid technological change, and that they can deal with AI’s new challenges.
Artificial Intelligence is raising a barrage of questions for legal systems around the globe. As Forbes notes, it is not surprising that the majority of governments are taking a “wait and see” approach to regulations and laws pertaining to AI.
So let’s take a look at key findings from Cognilytica’s report, “Worldwide AI Laws and Regulations 2020”:
- The EU is the most active in putting forward new rules and regulations, with proposed or existing rules in 7 of the 9 categories where regulation could apply to AI.
- 24 countries and regions have established soft laws governing autonomous vehicle operation, and 8 more are currently discussing whether to permit the operation of self-driving vehicles.
- When it comes to restricting the use of LAWS (lethal autonomous weapons systems), 13 countries have advanced the discussion to some degree. (Thus far, Belgium is the only country to have agreed on legislation to prevent the development or use of LAWS.)
- The United States has adhered to a light-touch regulatory posture, hence the lack of widespread Artificial Intelligence regulations and laws at the federal level; some states, however, have taken a more robust approach to regulation.
Civil & criminal law
At present, there is already a large body of research on the implications of Artificial Intelligence for civil law (particularly with regard to liability law). Moreover, the criminal law literature is giving the topic increasing attention.
The processes running in AI systems cannot all be measured against duties of care designed for human conduct.
The European Commission defines AI as: “systems that display intelligent behaviour by analysing their environment and taking actions — with some degree of autonomy — to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones […]).”
This description highlights some of Artificial Intelligence’s inherent capabilities, including the power to:
- Accrue and analyse data from its surroundings.
- Initiate action to fulfil the machine’s particular goals (which are normally determined by a human being).
Such abilities are on a par with what is normally classed as ‘intelligence’ (or ‘rationality’). The phrase ‘some degree of autonomy’ is also crucially important because, regardless of human operators’ inputs, Artificial Intelligence systems are able to act independently: that is to say, they can choose the most appropriate course of action to fulfil their goals.
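The sense-and-act pattern in the Commission’s definition can be illustrated with a deliberately simple sketch: a human sets the goal, and the system then chooses its own actions without step-by-step human commands. (The `ThermostatAgent` class, its thresholds, and its action names are hypothetical illustrations, not taken from the source.)

```python
from dataclasses import dataclass


@dataclass
class Observation:
    """A reading the agent accrues from its environment."""
    temperature: float


class ThermostatAgent:
    """A toy autonomous system: it analyses its environment and
    takes actions toward a human-set goal on its own."""

    def __init__(self, target: float):
        # The goal is determined by a human being...
        self.target = target

    def decide(self, obs: Observation) -> str:
        # ...but the choice of action, 'some degree of autonomy',
        # is made by the system itself.
        if obs.temperature < self.target - 0.5:
            return "heat"
        if obs.temperature > self.target + 0.5:
            return "cool"
        return "idle"


agent = ThermostatAgent(target=21.0)
print(agent.decide(Observation(temperature=18.0)))  # -> heat
```

Even in this trivial form, the sketch shows where the legal questions of the following sections arise: the human specifies only the goal, while the concrete conduct is selected by the machine.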
Does AI have e-personalities within the scope of the law?
In recent years, the European Parliament has set out what it regards as an “Electronic Personality.”
Lexology notes that: “It is controversial that AI, which has human-specific abilities, can be logical in the same way a human can. AI has many abilities through the data it collects, and rapidly changing tech developments are not subject to legal norms. Further, in the future, these can become a decision-making mechanism of humans so that artificial intelligence is not given any legal status in the current legal regulations.”
The concept of AI’s personality & legal responsibility
The rise of AI raises questions about liability for the crimes it commits, mainly because it acts autonomously with limited control from human beings.
While it can be said that software programmers and the developers of associated robotic systems are responsible for errors due to their negligence in building AI-enhanced robotic tools, one has to ask:
“Should the robot itself participate in these responsibilities together with the progress in robot technology?”
Criminal & legal responsibility
To bear such a legal obligation, a subject, whether a person, an AI, or anything else, must have rights and the capacity to act. Thus far, however, this has applied only to humans. And this is where it gets complicated…
The Legal Low Down on UAVs
At present, UAVs (unmanned aerial vehicles) form part of many nations’ military defence systems. These self-moving autonomous vehicles use AI and derivative algorithms to help conduct vital military operations, and they can carry out missions without being commanded, and without being bound to human command or intervention, relying instead on their pre-formed algorithms.
But looking to the future, who will be responsible for any damage arising from the AI’s decisions, decisions made without any human input? Further, with regard to legislation, will the company officials who sold the final product, coded the AI, or mechanised the coded technology be held responsible?
To appreciate the vast complexity of this question, we need to be mindful that, given the outstanding technical developments in the sphere of AI, artificial intelligence, just like its human counterparts, can be biased. Indeed, it can multiply, acquire new information, and process that information with zero human assistance.
Moreover, it has the capacity to make its own decisions and to choose targets as and when it decides. To that end, when it comes to the law, it could be argued that if robots and other autonomous devices and vehicles exhibit human characteristics, then AI could be regarded as having the standing of a human.
Aside from the norms of civil law, it could be said that:
“If artificial intelligence has become a subject, and not the object of legal norms, then it may be taken for granted that people who have developed autonomous systems have assumed that they have committed crimes in all these processes. Yet, to be held responsible would not be equitable.” Hence, this issue is clearly highly controversial.
One perspective is that: “robots do not have the ability to act in terms of criminal law and civil law norms, therefore, unjust acts that arise from AI should be evaluated separately in the light of criminal provisions apart from civil law norms.”
Of note, for a person to be held responsible for an action classed as a crime, criminal law requires that the individual have criminal capacity, and for this to be acknowledged, the individual must have the means to act. This is measured along the lines of the person’s age, perception, and so on.
The need for an alternative capacity
Bearing the aforementioned in mind, the concept of an ‘alternative capacity’ to act must be outlined; moreover, the boundaries of legal responsibility need to be clearly drawn, and should reflect the expanding limits of what AI will be able to do in the foreseeable future.
Emerging digital technologies make it difficult to apply fault-based liability rules, due to the lack of well-established models of how these technologies are supposed to function, and the possibility of their developing through learning without direct human control.
European Parliament: Decisions relating to personal assessment & legal responsibility of AI
In February of this year (2020), the European Commission announced new strategies for the long-term use of robots and AI within the EU. In fact, a whopping 200 billion euros has been earmarked for the development of robot tech and AI over the next 10 years.
Europe is well positioned to exercise global leadership in building alliances around shared values and promoting the ethical use of AI. The EU’s work on AI has already influenced international discussions.
In a similar vein, the EU is also fully aware of the vital work on AI under way in other multilateral forums. These comprise: the International Telecommunication Union, the World Trade Organisation, the Organisation for Economic Co-operation and Development, the UN Educational, Scientific and Cultural Organization, and the Council of Europe.
However, while the main components of the EU’s future regulatory framework for AI are designed to generate a unique “ecosystem of trust,” what will happen if, or when, the EU capsizes?
Britain has already left the pack, and others could well follow the trend…
Damage related to the use of AI/algorithms in the financial market
According to the European Commission, this financial-market scenario is currently subject to compensation under traditional fault-based regimes.
However, certain jurisdictions allow the claimant to rely on financial regulations (administrative law) to determine the standard against which the perpetrator’s conduct is to be judged.
Contractually: “information imbalance resulting from the use of AI may justify the application of a (statutory or case law) pre-contractual liability regime.”
Yet, it seems more probable that: “the reaction of the legal system to potential irregularities in contracting with the use of algorithms will rely on contract law tools for assessing and challenging the validity of contracts (vitiated consent, lack of fairness, etc.).”