Artificial Intelligence in the UAE: economic, legal and ethical considerations

  • Market insight 29 April 2019
  • Middle East

  • Data protection and privacy

Much has been written about the so-called "Fourth Industrial Revolution", and most of the technologies expected to define this new era will involve some use of Artificial Intelligence (AI). The UAE authorities wish to ensure that the country fully embraces the opportunities afforded by AI and recently launched the "Think AI" programme to accelerate the pace of AI adoption across the UAE. While a genuinely "intelligent" robot or computer remains a long way off, the use of AI as we know it today still poses some very interesting, and very big, questions. We take a closer look at the challenges and potential risks that businesses in the UAE will need to consider as AI technology advances.


Background

The term Artificial Intelligence covers a range of technologies, from fairly basic "if-then" robotic tools or programs, which turn inputs into outputs via pre-programmed rules, to deep learning networks that can produce useful and reliable outcomes and insights from vast quantities of input data.
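
To make that spectrum concrete, the sketch below contrasts its two ends. It is purely illustrative: the claim-triage scenario, feature values and labels are all invented for the purpose of the example, and it assumes Python with scikit-learn is available.

```python
# Illustrative sketch only: a hypothetical contrast between a pre-programmed
# "if-then" tool and a model that learns its own mapping from historical data.

def rule_based_triage(claim_amount_thousands: float, policy_active: bool) -> str:
    """Basic 'if-then' automation: fixed, human-written rules map inputs to outputs."""
    if not policy_active:
        return "reject"
    return "fast-track" if claim_amount_thousands < 5 else "manual review"

# A learned model, by contrast, infers its decision rule from example data.
from sklearn.linear_model import LogisticRegression

X = [[1.0, 1], [20.0, 1], [3.0, 0], [45.0, 1]]  # [claim amount in thousands, policy active] (invented)
y = [1, 0, 0, 0]                                # 1 = was fast-tracked historically (invented)
model = LogisticRegression().fit(X, y)

print(rule_based_triage(2.5, True))   # output follows the hand-written rule
print(model.predict([[2.5, 1]]))      # output is inferred from the training data
```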

The authorities in the UAE wish to ensure that the country fully embraces the opportunities afforded by AI.  The Minister of State for Artificial Intelligence, His Excellency Omar Sultan Al Olama, recently launched the "Think AI" programme, aimed at promoting collaboration between public and private sector bodies to accelerate the pace of AI adoption in the UAE.

The UAE is already relatively advanced in this area and is looking to build on that platform.  There are, however, challenges and risks that come with the use of AI, and several of these are likely to be magnified as AI gets "cleverer", particularly with the most advanced deep neural networks, where it can be harder to explain how the technology has converted inputs into outputs and where links can be inferred between ostensibly unrelated types of data.

What all these AI instances have in common, however, is some ability, even if quite rudimentary, to replace a human agent in carrying out a task within a process or, indeed, in running the process itself. Economic, legal and moral systems are well versed in regulating human agency, but mapping existing norms directly onto AI-enabled outcomes can be challenging.

Structural economic challenges and the UAE's enviable position

The Roads and Transport Authority (RTA) recently announced that it would use a robot to clean metro stations.  Take the robo-cleaner as an example (its "AI" components are probably reasonably basic): if it proves to be a good worker it will, over time, be deployed in an increasing range of scenarios and will displace its human equivalents.  Presumably it will be cheaper, more reliable and more predictable.  It will not quit, it will not get ill, it will not get tired or bored or lose its temper, it will not expect a pay rise and it will not turn up late. As an employer, you do not risk breaching any labour laws by using it.  It might break down occasionally, but you can keep a spare on standby and take preventative maintenance measures.

If AI-enabled technology causes the number of "human jobs" in an economy to shrink, there will clearly be economic impacts.  Western governments, in particular those that spend heavily on social security and rely heavily on income and employment taxes and other social contributions to fund that spending, face significant challenges. In the EU member states, taxes on personal income accounted for 9.4% of GDP on average in 2017, and net social contributions accounted for 13.3% of GDP on average¹. Some commentators predict that the widespread automation of jobs will necessitate a fairly radical rethinking of how economies are structured.  The very people who depend on social security spending are also likely to be those most displaced by the spread of AI because, in general, lower-skilled and lower-paid jobs are those most likely to be replaced by automated, AI-assisted technology.  Of course, some of the displaced jobs will be offset by new, highly skilled jobs in fields related to AI, but these will not make up the deficit in pure number terms and will not necessarily be open to those whose jobs have been displaced.

From an economic perspective, the UAE has advantages over many western, and western-style, economies in this field because it has no historic structural reliance on taxes on income or labour, or on social security contributions.  It also benefits from a highly flexible and mobile immigrant labour force, which can adapt to the needs of the economy more readily than workforces made up predominantly of relatively static domestic nationals.  The UAE acknowledged some time ago that its economy was heavily reliant on hydrocarbons and has been proactively seeking to diversify its economic base. The move towards a lean, high-tech, knowledge- and service-focussed economy should sit well alongside the increasing availability of AI-delivered innovations.

It can certainly be argued that the UAE is well-placed to play a leading role in the adoption of AI in public and private life. 

Legal and ethical challenges

Even when operating against the background of a comparatively advantageous economic and social landscape, organisations looking to implement AI in their businesses in the UAE will need to be alive to the risks associated with doing so – both at national and international level.

"Big Data" and privacy

Much AI relies on being fed large volumes of data to analyse and "learn" from.  The impact on an organisation of a data breach or of data misuse increases, generally speaking, with the volume, breadth and sensitivity of the data being processed. Businesses in the UAE need to understand their personal data flows and their potential exposure under data protection laws, which continue to evolve within the UAE (including in the free zones), across the wider Middle East and globally.  Many data protection laws have extra-territorial reach (meaning a business is not necessarily insulated by reason of its place of incorporation). There are also potential criminal sanctions under the UAE's Penal Code, Cybercrimes Law and other legislation for breaches of privacy, and a new federal privacy law is widely expected in due course.  Certain industries, such as the healthcare sector (see Federal Law Number 2 of 2019), are also, or will also be, subject to sector-specific laws which may have an impact on the use of big data.
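
As a practical illustration of one common mitigation when large datasets feed an AI system, the sketch below pseudonymises a direct identifier before the record enters an analytics pipeline. The field names, salt handling and record structure are assumptions for illustration only, and pseudonymisation alone does not take data outside the scope of most data protection laws.

```python
# Hedged sketch: pseudonymising a direct identifier before analytics.
# All field names and values are invented; the salt must be stored securely
# and separately from the dataset for pseudonymisation to be meaningful.
import hashlib

SECRET_SALT = b"replace-with-a-securely-stored-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted hash so records can still be
    linked within the pipeline without exposing the underlying personal data."""
    return hashlib.sha256(SECRET_SALT + identifier.encode("utf-8")).hexdigest()

record = {"customer_id": "784-1990-1234567-1", "claim_amount": 12_000}
record["customer_id"] = pseudonymise(record["customer_id"])
print(record)  # the identifier is now a stable pseudonym rather than raw personal data
```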

Ethics 

If an AI programme for insurance premium pricing, for example, spots a correlation between ethnicity and risk, or religion and risk, is it acceptable to price on those bases (legally as well as morally)?  What if a similar link is found in relation to the risk of personal loan defaults – can, and should, banks charge higher interest rates to particular ethnic or religious groups accordingly? 

An insurer or bank that has deployed an intelligent programme to price its products automatically may find that it has done exactly that, whether it intended to or not.  In the UAE, the 2015 Federal Law on Combating Discrimination and Hatred prohibits discrimination on certain protected grounds.
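
The sketch below illustrates, using entirely invented data and assumptions, how this can happen even when the protected attribute is deliberately withheld from the model: a correlated proxy (here, a fabricated postcode-style feature) is enough for the resulting prices to differ between groups.

```python
# Hedged sketch: indirect (proxy) discrimination in an automated pricing model.
# The data, correlations and magnitudes are entirely invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000
group = rng.integers(0, 2, n)                          # protected attribute (never shown to the model)
postcode_index = group + rng.normal(0, 0.1, n)         # proxy feature strongly correlated with the group
past_claims = 100 + 50 * group + rng.normal(0, 5, n)   # historical costs that happen to differ by group

# The model only ever sees the proxy, yet its prices reproduce the group difference.
model = LinearRegression().fit(postcode_index.reshape(-1, 1), past_claims)
premiums = model.predict(postcode_index.reshape(-1, 1))
print(round(premiums[group == 0].mean(), 1), round(premiums[group == 1].mean(), 1))
```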

We cannot find a present-day computer programme morally culpable or guilty of breaching the law, so at some point in the chain businesses, or the humans behind them, will need to be held accountable for the choices made and the actions taken; automation and opacity should not become a moral shield.

The ethical challenges associated with AI have been examined by the European Commission's High-Level Expert Group on Artificial Intelligence, and its published Ethics Guidelines for Trustworthy AI provide interesting further reading.

Regulation, Transparency and Liability

Further to the above point, some AI programmes draw links between, or inferences from, data in ways that are not easily explainable. How can a business audit its activities and comply with transparency, legal and regulatory obligations when it cannot explain its own decisions?

This question also poses difficult challenges for law-makers, regulators and enforcement bodies: if AI operates at a level of complexity which renders it opaque and difficult or impossible to audit or explain, how can it properly be scrutinised?
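
One starting point, sketched below under assumed data and feature names, is for a business to probe an otherwise opaque model and record how strongly each input drives its outputs, so that there is at least an auditable account of the model's behaviour. The example uses scikit-learn's permutation importance and is illustrative rather than a complete transparency solution.

```python
# Hedged sketch: generating a simple, recordable explanation of an opaque model.
# The model, training data and feature names are all assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Measure how much shuffling each feature degrades performance: a rough proxy
# for how much each input drives the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["age", "income", "tenure", "region"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # a record like this can support internal audit and regulator queries
```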

Many industries will be looking for an established set of conventions (contractual norms, backed up by insurance, perhaps) to develop which allocate and underwrite the risks associated with AI.  Industries such as oil and gas, shipping and construction have, over time, developed their own tailored contract and operating norms to deal with the challenges specific to those industries, and it may be that, as AI is increasingly deployed, greater conformity is needed across a range of other sectors.

Conclusion

The UAE is well-placed to be a global leader in the use of AI.  However, AI is a complex area (and will become increasingly complex as the technology becomes more opaque) which raises serious questions in the fields of law, economics and public morality.  The public and private sectors will need to work together to find proportionate and well-crafted regulatory regimes which allow society to benefit from technological advances while protecting the fundamental rights of individuals.

¹ https://ec.europa.eu/eurostat/documents/2995521/9409920/2-28112018-AP-EN.pdf/54409e5e-6800-4019-b7c1-580797a67001

