Although America is a full two years behind China in terms of a national AI strategy, the Trump administration has now revealed that it is “leading many of those conversations” with “like-minded international allies” to help shepherd the direction of AI development and application across the globe.
Most of these conversations are reportedly happening within closed groups like the G7 and the G20, as well as U.N. organizations and the Organisation for Economic Co-operation and Development (OECD).
According to Dr. Lynne E. Parker, Assistant Director for AI in the Office of Science and Technology Policy:
“All of these activities are activities that we are deeply involved in, and again, we’re really promoting an international environment that’s supportive of American innovation, but certainly with a focus on AI that is trustworthy and that we are comfortable adopting.”
Possibly the most important point made by Parker at the National Academy of Public Administration’s Forum on Artificial Intelligence held in Washington was that the United States was not planning on introducing new “workstreams, programs or activities,” but merely going to try and guide the global AI narrative to a point where it is “consistent with our values, while also not being so scared of AI that we don’t even let anyone benefit from it because we are afraid that something might go wrong.”
She pointedly referred to a state-controlled system not being the ideal solution:
“…we don’t want to turn into an authoritarian state-like use of AI where we all feel like we have Big Brother looking over us at all times, that’s definitely not what liberal democracies want.”
This move reeks of posturing. Even if U.S. policy on AI is merely to act as a steward for AI development, where is the framework to guide that stewardship? Several moral positions have been staked out on AI ethics, but the U.S. government still doesn’t know what it wants.
Vague terminology like “leading the conversation”, “promoting an international environment” and “AI that is trustworthy and that we are comfortable adopting” does not indicate any sort of control over the narrative, which is precisely the control the White House should be exercising in the first place.
The United States is still one of the largest economies in the world, with a GDP (PPP) upwards of $20 trillion. It should lead the pack in every aspect of AI, not just the conversations around where it’s headed. It already has some of the best AI schools and the biggest tech companies in the world willing and eager to contribute. What it does not have is the political will to dominate the AI space across research, development, testing, certification, and deployment.
Think about it: there’s no meaningful regulatory certification required for AI products beyond standard software, security and communications protocols. Baby toys are under tight regulatory control because they can do harm, so why aren’t AI man-toys that are capable of destroying the earth regulated at the highest levels?
Meanwhile, China’s progress in AI is unimpeded by this sort of ambiguity, which in America goes all the way up to the Commander-in-Chief. Granted, China’s moral compass might be a lot more flexible, allowing for the development of privacy-intrusive products and such, but its purpose and resolve to lead the AI game is much stronger than America’s, at least at this point in time.
What America needs is a clear vision (or a Presidency that has one) for the future of AI, and that vision is obviously lacking. Is the administration simply trying to piggyback on the efforts of other nations and take the credit by “leading the conversation”? That’s what Parker’s words make it sound like. In her own words:
“But that’s a conversation, right: where is the line?”
Indeed, where is it?