The dangers of platforms that use recommendation algorithms, such as TikTok, Instagram and YouTube, have slowly been pushed into public consciousness over the last half-decade. Given the platforms' enormous user bases, reports of their harmfulness are frightening. Whistleblowing by former Silicon Valley executives, documentaries such as “The Social Dilemma” and the work of investigative journalists have revealed the extent to which these platforms nurture addiction in users: addiction driven by stimulating emotional responses while disregarding entirely the reputability of the sources of information. Left unregulated, these platforms can have devastating social effects, polarising users as they seek to commodify viewer attention.
EU and US Trepidation
The slower governments are to act, the longer tech giants are able to capitalise on the social and individual chaos they manufacture, and the more extensive the damage they cause. However, as with any legislative proposal, it is crucial that the approach taken is both feasible and well-informed enough to be reliable. The EU Artificial Intelligence Act took a first step in setting legal precedent on recommendation algorithms, one that will be useful to future lawmakers. However, the open-debate and international-dialogue approach taken by EU and US legislators makes both the implementation of legislation and the proposal of more specific rules a much lengthier process. China, by contrast, has already passed new legislation, in force as of March 1st, that restricts tech companies from hiding their recommendation algorithms and requires a function allowing consumers to toggle the advanced algorithms on and off.
The new legislation was designed in cooperation between the Cyberspace Administration of China (CAC) and four other government departments. Its stated goal is to “regulate the algorithm recommendation activities… [in order to] protect the legitimate rights and interests of citizens… and promote the healthy development of Internet information services.” The proposal was published in January and came into force in early March. Beyond regulating the parasitic relationship between hidden algorithms and consumers, the legislation also sets out broader requirements meant to mitigate the social chaos the platforms create and to give the Chinese state greater control over its powerful tech industry. These include crackdowns on fake accounts and bots (Art 14) and on the dissemination of “false news information” (Art 13), as well as the seemingly deliberately ambiguous Article 6, which requires platforms to “actively disseminate positive energy, and promote the application of algorithms to be good.” The breadth of the legislation has led Western critics to question whether it can be enforced and how effective the new policy will be. Nevertheless, it is clearly a major step in the big-tech-versus-big-government conflict looming in China’s future.
This unprecedented move by Chinese authorities is a promising step towards protecting consumers from the predatory business models of large tech companies. However, while the limits placed on tech companies’ autonomy are crucial to protecting individual rights and freedoms, the control granted to the Chinese government under Articles 13 and 6 could be instrumental in disseminating propaganda, since the Chinese government holds uncontested control over the definitions of “fake information” and “positive energy.” The Russian government, which recently passed similar fake-news laws, has been attempting to use them to repurpose TikTok as a channel for Kremlin propaganda: in Russia, TikTok is flooded with pro-war content for every individual within Russian territory.
Regardless of the legislation’s potential for misuse, the framework it offers as an example to future legislators, and the opportunity other countries now have to observe how effective such laws can be, are invaluable.