China is enacting new regulations to limit the creation of ‘deepfakes’: media made or manipulated by artificial-intelligence software that can make individuals appear to say and do things they never did.
The Cyberspace Administration of China (CAC), Beijing’s internet regulator, will begin enforcing regulations on what it deems “deep synthesis” technology, including AI-powered image-, audio-, and text-generation software, on January 10.
The CAC said the move is intended to curb the risks posed by platforms that use deep learning or virtual reality to alter online content, which the regulator terms “deep synthesis service providers,” while also encouraging the industry’s healthy development.
Beijing’s efforts to rein in the dominance of internet businesses intensified when four key internet authorities released new restrictions on the use of algorithms, the technology that powers popular products like ByteDance’s news aggregator Toutiao and the microblogging site Weibo. China’s restrictions, the first of their kind anywhere, are a direct attempt to limit algorithms’ influence over user behavior, a trend that has governments around the world concerned.
What exactly is the policy?
It is intended to prevent applications like Toutiao from recommending manipulated media to their users via “Read next” or “Watch next” suggestions in timelines and news feeds. An earlier rule barred online news providers from using methods such as machine learning and virtual reality to create or spread fake material, and obliged them to flag such material when it was identified. The new guidelines state that platforms that use large quantities of data to tailor content are not permitted to employ their advanced algorithms to manufacture fake news.
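The “Read next” feeds at issue are driven by content-tailoring algorithms of the kind the rules now regulate. As an illustration only (this is a toy sketch, not how Toutiao or any real platform works; all names and the tag-overlap scoring are hypothetical), a minimal content-based recommender might rank candidate items by how well their tags match a user’s engagement history:

```python
from collections import Counter

def recommend(user_history_tags, candidates, k=2):
    """Rank candidate items by overlap with the tags a user has engaged with.

    Toy sketch: real recommendation systems use far richer signals
    (watch time, collaborative filtering, learned embeddings).
    """
    profile = Counter(user_history_tags)  # how often the user engaged with each tag

    def score(item):
        # Sum the user's engagement counts over the item's tags.
        return sum(profile[t] for t in item["tags"])

    ranked = sorted(candidates, key=score, reverse=True)
    return [c["id"] for c in ranked[:k]]

# Hypothetical data: the user read mostly tech articles.
history = ["politics", "tech", "tech"]
items = [
    {"id": "a", "tags": ["sports"]},
    {"id": "b", "tags": ["tech", "politics"]},
    {"id": "c", "tags": ["tech"]},
]
print(recommend(history, items))  # → ['b', 'c']
```

The regulatory concern maps onto the `candidates` list: if manipulated media enters the candidate pool, a purely engagement-driven scorer will happily surface it, which is why the rules target the recommendation layer rather than individual posts.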
What are the difficulties in following through?
Deepfakes and altered media are notoriously difficult to detect. Artificial intelligence requires a large dataset of examples to learn from, so only the larger platforms will have the tools and databases required to recognize synthetic material. Even the most sophisticated detection systems are not very accurate.
Automated deepfake identification has become practical only in the last three to four years, and it works best when the detector knows how the deepfake was made and when it operates on high-quality media.
Under the new requirements, companies and technologists using the technology must first notify, and obtain consent from, any individual before editing their voice or image. The rules, officially titled the Administrative Provisions on Deep Synthesis for Internet Information Services, respond to government fears that advances in AI technology could be used by bad actors to run scams or defame people by impersonating them.
In delivering the guidelines, the regulators acknowledge areas where these technologies could be useful. Rather than banning the technology outright, the regulator says it will promote its lawful use and “give powerful legal protection to ensure and support” its growth.
If China’s new guidelines are successful, they may establish a policy framework on which other countries can build and adapt. It would not be the first time China has led the way in stringent technological reform. Last year, China enacted new data privacy rules that severely limited the ways in which private enterprises may collect an individual’s personal information.