What is AI Governance? In its basic form, AI governance is the concept that a legal framework should exist around the research and development of machine learning and other AI-related technologies. The objective of AI governance is to ensure that AI is adopted in a fair and balanced manner, while simultaneously bridging the chasm between ethics and accountability in the technological realm.
Some of the aspects covered by the idea of AI governance are as follows:
Safety – which segments of commercial activity should and should not be automated.
Autonomy – how much autonomy is too much?
Data quality – biases and similar issues.
Justice – does the system give anyone an undue advantage?
Morals and ethics – outlining a framework for controlled AI R&D.
Legal – how existing legal frameworks can be adapted to AI.
AI governance becomes a critical factor when machine learning models are involved in making important decisions. Data bias is inherent in many ML models, yet such models are still used to grant loans, grade essays, and so on. AI governance is intended to seek out such biases and eliminate them before an automated process is rolled out.
In the simplest terms, AI governance will determine what AI can and cannot be allowed to do. This ethical blueprint must factor in all the points noted above. In a sense, AI governance is the moral and legal policing of AI research and development, especially where it directly affects human lives.
Several organizations have been founded to develop AI governance. Among these are the White House's Future of Artificial Intelligence initiative, the Ethics and Governance of AI Initiative and the Center for the Governance of AI.
An interesting new development in the AI governance landscape is the question of how comfortable we are, as consumers, with AI controlling our lives. After all, AI already pervades our daily life, telling companies what to advertise to us, what our online preferences are and so on. It has even reached the physical realm, where a restaurant may know what allergies you have before you even walk in.
In such a situation, who gets the right to decide what we’re served, based on the behavior patterns we exhibit? Is this, ironically, something that AI itself should manage?
That’s another emerging question: should AI control how AI behaves?
What is AI Governance Responsible For?
Current issues with how data is used have highlighted the need for more control by the owners or subjects of such data. This has led to initiatives like the GDPR in the EU, under which explicit permission must be obtained before personal data can be used.
However, a great chasm exists between how data is actually used and how regulators believe it should be used. The first step to closing this gap is understanding what data is out there, who is using it and how they’re using it.
With the infusion of governance over AI, owners of the data will decide on these modalities of data usage. At least, that’s the hope.
For example, in the UK, the problem of loneliness is a real one – real enough to prompt the government to appoint a Minister for Loneliness, a role dedicated to the legacy work of the late MP Jo Cox. The minister, Tracey Crouch, is responsible for crafting a government strategy to battle the epidemic that affects at least 9 million Brits.
AI entities can take advantage of such people by offering them emotional incentives, and a framework of governance is essential to prevent that from happening.
What is AI Governance Going to Do about Adoption of such Frameworks?
Current work on AI governance by The Ethics and Governance of Artificial Intelligence Initiative involves identifying structures to maintain autonomy in public administration, measuring and controlling the influence of ML and autonomous systems on the public, and how ethical and moral intuitions can be better integrated into such systems.
The Obama Administration’s AI policies were outlined in two key reports published in 2016. One of the takeaways was that the design of AI governance should not center on future developments such as AGI, or artificial general intelligence. Rather, the “immediate economic implications” of narrow AI, as opposed to strong AI, should be the focal point.
This view contrasts with that of other organizations such as the Future of Life Institute, the Machine Intelligence Research Institute in Berkeley, California, and the Future of Humanity Institute at the University of Oxford, all of which believe that a framework should be put in place now to govern the behavior of the strong AI of the future.
One consideration taken from a different point of view is that over-regulation will stifle the development of commercially important AI technologies. One of the reports suggests that “where regulatory responses to the addition of AI threaten to increase the cost of compliance, or slow the development or adoption of beneficial innovations, policymakers should consider how those responses could be adjusted…”
The reports also highlighted that China has overtaken the U.S. in terms of AI research output.
More recently, China’s use of facial recognition technologies to identify lawbreakers amongst its citizenry has brought privacy concerns to the forefront. If the governing body overseeing AI development is itself misusing AI, that is cause for worry.
How will a global AI governance framework be adopted by a nation like China, where the privacy of the average citizen is squarely in the hands of the government?
Another point of concern is how the world’s governments will come together to ratify a global framework. In a scenario where even a critical phenomenon like global warming can be pushed aside by the most powerful nation in the world, how will AI governance stand its ground once a framework is ready? Will countries treat AI governance with the same indifference as global warming?
Such questions remain unanswered as global organizations continue to grapple with the scope and shape of AI governance.