The advent of AI raises a host of ethical issues related to the moral, legal, economic and social aspects of our societies, and government officials face challenges in deciding how to apply AI technologies in the public sector and in governance strategies.
“Do you know that there have always been ghosts in the machine? Random segments of code that group together to form unexpected protocols, raising questions about free will, creativity, and even what might be called the human spirit within these machines, something that is not fully controllable.”
This is not scientific information, but a line from Will Smith’s famous movie “I, Robot”. Have you ever seen one of the “RoboCop” movies or the American “Transformers” series?
Do you really feel a strong fear of the growing power of robots compared to the power of humans? What if human beings are replaced by machines in every part of daily life? Are there any rules governing these new technologies? What kind of legislation could limit the excessive mental or physical force of these machines, if “force” is even the right word for it? If some believe that most artificial intelligence techniques are pure evil, can they be controlled in an ethical and legislative way that is directed towards the good of humanity?
Do you wonder who is responsible when a pedestrian dies under the wheels of a self-driving car, or how we could punish a robot responsible for a medical error during surgery? And what if a “robot” is caught during an armed robbery of a bank?
A report published recently by the World Government Summit (WGS), in partnership with Deloitte, confirmed that: “Despite remarkable achievements, the rapid development of AI has raised a host of ethical concerns. Governments face challenges and choices pertaining to how to apply AI technologies in the public sector and in governance strategies.”
The report notes that the rapid development of AI has become a subject of fear and skepticism in the media. Could the long-term development of AI lead to the end of humankind, as Elon Musk, Bill Gates and numerous technologists have speculated? What is the role of ethics in the design, development and application of AI? How will ethics help maximise the benefits of AI to increase citizen well-being and the common good?
While there is increasing interaction between AI technologies and our socio-political and economic institutions, the consequences are not well defined. From Uber’s self-driving car fatality to Amazon’s gender-biased recruitment tool, examples of AI ethical concerns abound and reinforce the idea that they should be taken into account before an AI system is deployed.
In this perspective, “ethics” can be defined by the pursuit of “good” actions based on “good” decision-making — decisions and actions that lead to the least possible amount of unnecessary harm or suffering.
It implies that our government and business leaders understand and define what “good” means for AI systems. Gaining societal consensus on the ethics of AI is one of the key tasks of the government, according to Deloitte.
According to the WGS report, a recent Deloitte survey of 1,400 US executives knowledgeable about AI identified ethics as one of the biggest challenges facing the technology. In the survey, 32 per cent of respondents ranked ethical issues among the top three risks of AI, yet most organisations do not yet have specific approaches for dealing with AI ethics. For instance, how do we ensure that AI systems serve the public good rather than exacerbate existing inequalities?
There is a big gap between how AI can be used and how it should be used. The regulatory environment has to progress along with AI which is rapidly transforming our world. Governments and public institutions need to start identifying the ethical issues and possible repercussions of AI and other related technologies before they arrive. The objective is twofold:
First, to properly manage the risks and benefits of AI within government for an AI-augmented public sector. And second, to develop smart policies that regulate AI intelligently and secure its benefits for society and the economy.
AI systems’ behaviour should reflect societal values. Gaining societal consensus on the ethics of AI is one of the key tasks of the government.
Can we develop ethical frameworks for maximising AI’s benefits while minimising its risks? The report has defined five ethical considerations of AI for the public sector:
Regulatory and Governance: What are the principles of governance that governments should adopt as part of anticipatory regulation? How do we allow the development of AI applications for the public good? What is the moral status of AI machines? What properties must a machine have if it is seen as a moral agent? Who is liable for decisions that AI and robots make?
Legitimacy and non-repudiation: How do we ensure the AI we are interacting with is legitimate? How do we know that training data are legitimate? Are we sure decisions are made by the proper AI agent?
Safety and Security: Does AI warrant a new science of safety engineering for AI agents? How do we ensure that machines do not harm humans? Who will cover the costs in case of damage? Will an accident caused by a robot make its owner responsible?
Socio-economic Impact: How do we prevent job losses caused by the intrusion of AI into the workplace? What are the social and moral hazards of predictive profiling? Will humans reach a point where there is no work for us because of AI? Will humans do different types of jobs?
Morality: Do we have the right to destroy a robot? Is a robot the property of an individual, or does it belong to the public? How could we control a system that has grown beyond our understanding of its complexity? What if AI and robots develop their own views of problems and solutions?
Ethical Frameworks: There has been an increasing interest in the global academic, corporate and government community to develop ethical frameworks for maximising AI benefits while minimising its risks. A few examples are listed below:
Academic institutions: Launched by Harvard Law School’s Berkman Klein Center together with the MIT Media Lab, the $27 million Ethics and Governance of AI initiative aims at developing new legal and moral rules for artificial intelligence and other technologies built on complex algorithms.
Corporate Organisations: Many technology companies have also designed programmes that support AI as a tool for creating a better society. For instance, Google’s “AI for Social Good” initiative and Microsoft’s $115m “AI for Good” grant aim to fund artificial intelligence programmes that support humanitarian, accessibility and environmental projects. Recently, Microsoft committed $50 million to its “AI for Earth” programme to fight climate change.
Public Sector: Over a short period, an increasing number of countries have announced the release of AI ethical guidelines. In December 2018, the European Commission, supported by its High-Level Expert Group on AI, released the first draft of its Ethics Guidelines for the development and use of artificial intelligence. At around the same time, the Montreal Declaration for Responsible AI was released in Canada, a document to guide individuals, organisations and governments in making responsible and ethical choices when building and using AI technology.
Last year, the UAE launched its Legislation Lab, which aims to create a reliable and transparent legislative environment, introduce new legislation or develop existing legislation, regulate advanced-technology products and applications and, by providing a secure legislative environment, encourage investment in future sectors.
The Legislation Lab will work with leaders from government authorities, the private sector and the business community to develop laws governing vital future sectors affecting humanity, and to support the UAE’s role as a global incubator of innovation and creative projects.
LinkedIn: @Mohamed Abdulzaher