While the US and China are fierce technology trade rivals, they appear to have found new common ground: concerns about accountability for, and possible misuse of, AI. On Tuesday, the governments of both countries issued announcements related to regulations for AI development.
The National Telecommunications and Information Administration (NTIA), a branch of the US Department of Commerce, put out a formal public request for input on what policies should shape an AI accountability ecosystem.
The request includes questions about data access, how to measure accountability, and how approaches to AI might vary across industry sectors such as employment and health care.
Written comments in response to the request must be provided to NTIA by June 10, 2023, 60 days from the date of publication in the Federal Register.
The news comes on the same day that the Cyberspace Administration of China (CAC) unveiled a number of draft measures for managing generative AI services. Under the proposals, providers will be responsible for the validity of the data used to train generative AI tools and must take measures to prevent discrimination when designing algorithms and training data sets, according to a report by Reuters. Firms will also be required to submit security assessments to the government before launching their AI tools to the public.
If inappropriate content is generated by their platforms, companies must update the technology within three months to prevent similar content from being generated again, according to the draft rules. Failure to comply will result in providers being fined, having their services suspended, or facing criminal investigations.
Any content generated by generative AI must be in line with the country's core socialist values, the CAC said.
China's tech giants have AI development well under way. The CAC announcement was issued on the same day that Alibaba Cloud announced a new large language model, called Tongyi Qianwen, that it will roll out as a ChatGPT-style front end to all its business applications. Last month, another Chinese internet services and AI giant, Baidu, announced a Chinese language ChatGPT alternative, Ernie bot.
AI regulation vs. innovation
While the Chinese government has set out a clear set of regulatory guidelines, other governments around the world are taking a different approach.
Last month, the UK government said that in order to “avoid heavy-handed legislation which could stifle innovation,” it had opted not to give responsibility for AI governance to a new single regulator, instead calling on existing regulators to come up with their own approaches that best suit the way AI is being used in their sectors.
However, this approach drew criticism from some industry experts, who argued that existing frameworks may not be able to regulate AI effectively given the complex and multilayered nature of some AI tools, making overlap between different regulatory regimes inevitable.
Furthermore, the UK’s data regulator issued a warning to tech companies about protecting personal information when developing and deploying generative AI large language models, while Italy’s data privacy regulator banned ChatGPT over alleged privacy violations. A group of 1,100 technology leaders and scientists has also called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.
When it comes to technology innovation and regulation, there’s a natural path that most governments and legislators follow, said Frank Buytendijk, an analyst at Gartner.
“When there is new technology on the market, we learn how to use it responsibly by making mistakes,” he said. “That’s where we are right now with AI.”
After that, Buytendijk said, regulation starts to emerge, allowing developers, users, and the legal system to learn about responsible use through the interpretation of the law and case law. In the final phase, technologies have responsible use built in.
“We learn about responsible use through those inbuilt best practices, so it’s a process,” Buytendijk said.