Read part 1 here:

The main issue surrounding this question is that there is no unified legal framework for how facial recognition technology is used and data is collected, as observed by Michael Tan at Taylor Wessing. The heavy government use of the technology is made possible by the work of the “National AI Team” which, although it bears the name ‘national’, is made up of privately funded and operated companies.[1] The main takeaway here is the urgent need for law and legal studies to catch up with the rapidly growing tech industry, in order to ensure that the future of technology is in safe hands and is safely regulated.

Chinese oppression of the Uighurs is made possible by the sensitive data connected to a person’s digital facial identity, which reveals far more to the government than we are accustomed to. While commenting on the issues surrounding FR as a whole, officials at the People’s Bank of China suggested that the heightened use of facial recognition in areas such as banking should be curbed, or that there should be alternative options for those who want to keep such sensitive information to themselves.

Following the aforementioned (see part 1) recent lawsuits and public concern over the use of FR amongst Chinese citizens in general, the government promised to “curb and regulate” FR use. These efforts fall into two categories: private and public.

The issue of consent was brought to light by the National People’s Congress Decision on Strengthening Network Information Protection (the “NPC Decision”) in 2012, which called for the disclosure of purposes, the obtainment of consent and the safeguarding of the information collected.

The Chinese government’s efforts to broaden the NPC Decision culminated in the Cybersecurity Law of the People’s Republic of China, passed in 2016 and in effect since 2017.[2] Its policies focus on the interconnected relations between the government and private companies on the matter. The significant step taken under this law was the definition of personal information in Article 76. A more concentrated approach to FR concerns was taken in the Personal Information Security Specification issued under the law, which proposed suggestions to strengthen the safety of the private data collected. As the country’s first major data privacy rule, the Specification suggested that the collection of PI should be for “legal, justified, necessary and specific purposes.”

This policy narrows down the opportunities for the government to collect data on citizens. In a constantly monitored state, if this policy is implemented correctly, it could mean more freedom for Uighurs in Xinjiang. Specifically, the mention of “legal” reasons could give citizens the right to challenge the collection of their data. However, the authorities could find a loophole by justifying such heavy monitoring through their “reeducation” and “transformation” campaigns. “Keeping societal peace” or “monitoring of suspicious activity” could still be used as legal excuses to continue using facial recognition under this law, as they are deemed necessary and justified. Further, there is still no mention of consent in the policy, which was the basis of the zoo lawsuit brought to light last year.

The other suggestions include, but are not limited to:

  • Separate disclosure and consenting process before the collection of data
  • Adopting security measures such as encryption
  • Separation of the storage of personal biometric information and personal identification information
  • No public disclosure of the information

The last two propositions carry great hope for a somewhat effective solution. The separation of personal biometric information and personal ID information could mean that the detection of a Uighur person does not give the government, or any other private authority, the means to access sensitive personal information about that person. Such information can range from previous locations, the whereabouts and activities of relatives, their own beliefs and religious status, and more. Properly implemented, this could mean that unless the person has a specific record of suspicious activity, their presence will not alert the authorities. However, the definition of “suspicious activity” remains unregulated and undefined.
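To make the separation principle concrete, here is a minimal, purely hypothetical sketch of how such a system could be structured: biometric templates and identifying details live in separate stores, linked only by an opaque token, so a face match alone reveals nothing about who the person is. All names and the matching logic are illustrative assumptions, not any real system’s design.

```python
# Hypothetical sketch of the "separation of storage" suggestion: a face match
# yields only an opaque token; resolving that token to a person requires a
# separate, explicitly authorised step.
import secrets

biometric_store = {}  # token -> biometric template (e.g. a face embedding)
pii_store = {}        # token -> identifying details, behind separate access control

def enroll(template, pii):
    token = secrets.token_hex(16)  # opaque link; reveals nothing by itself
    biometric_store[token] = template
    pii_store[token] = pii
    return token

def match(template):
    """A face match returns only the opaque token, never the PII."""
    for token, stored in biometric_store.items():
        if stored == template:  # stand-in for a real similarity comparison
            return token
    return None

def resolve_identity(token, authorized):
    """Accessing PII is a separate, auditable authorisation step."""
    if not authorized:
        raise PermissionError("separate consent/authorisation required")
    return pii_store[token]
```

Under this kind of design, an unauthorised caller who obtains a match still cannot learn the person’s identity, which is exactly the protection the proposition aims at.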

The last proposition offers hope in light of the recent worrying leaks of personal information brought to light by activists online. The information collected is easy to hack, and given the very personal nature of the data, this poses a great risk to the privacy of the monitored citizens.

March 3rd 2020 also saw the release of the updated Personal Information Security Specification, which introduced the concept of “sensitive personal information”.[3] The emphasis on the sensitivity of the data connected to one’s biometric identity underlines the effects on personal reputation tied to the use and abuse of such data. Requiring apps to “be granted explicit consent to collect and use biometrics” could be useful for the Uighurs, as their personal chats on applications such as WeChat could not be used against them as easily. However, the concept of “consent” is not explained in detail.

Private regulation is undertaken by the companies themselves. 27 of the major FR-developing companies, including Tencent, Xiaomi and SenseTime, have started an initiative to draft industry standards for FR. Such industry standards, set by the private companies themselves, operate only under self-regulation, as they are not bound by government regulations. Nevertheless, the initiative led by SenseTime offers a glimmer of hope for the industry.

What would ideal regulation in China look like?

Though the current situation in Xinjiang is worrying, it is not and will not be the only example. This is not just a Chinese issue, but a global one. The dangers of FR technology’s use in Xinjiang are evidence of the flaws of legal policy and regulation surrounding the area, not just in China but around the world in general. The rise of AI and FR is not specific to China, as nations all around the world are striving to achieve dominance and efficiency in this technological field. The concerns surrounding the oppression of Uighurs are only voiced confidently by politicians worldwide because China had the chance to do this before anyone else, thanks to its technological advancement. What is to say that other nations would not do the same to their respective minorities had they had the chance and the necessary technology? This is why any further regulation on the issue should act as a model for the rest of the world.

Considering that the condemned actions of the government are made possible by the work of the private companies that provide and develop the technology, the most effective way of curbing the fatal flaws of FR technology that lead to such oppression is a balance between private and public regulation. The regulatory efforts of the 27 leading tech companies themselves could prove more useful than regulation imposed by an international organisation.

On the private companies’ part, the most significant struggle is the decision “between social responsibility and market success”, as Microsoft puts it.[4] Due to intense market competition, companies tend to favour the path that will earn them success and money rather than prioritising the wellbeing of the greater public. The fact that a majority of the powerful companies are working together on an industry standard is promising, as it supports healthy market competition. For the regulation efforts of the Chinese companies to work, there needs to be great emphasis on “transparency, fairness, accountability and non-discrimination”. Notice and consent for the collection of data has already been addressed by the previous government policy, and will depend on how the government chooses to define such concepts in detail. The idea of lawful surveillance also depends on government policy, which is why a balance is needed.

The power that such companies gift to the government has no limits. The companies must retain a certain degree of authority to ensure that this power is not abused. Limiting the ongoing government surveillance is not easy, as the surveillance required to achieve the ideal of a “Safe and Smart City” is a constant one, which gives way to examples of oppression.[5] What the initiative led by SenseTime should do is clearly outline what the companies’ products can and cannot do, in order to provide a detailed understanding of what they are capable of. There need to be safeguards created by the companies, under which transparency on how the technology is being used is exercised. Microsoft’s own article on the general issue of FR use stresses the need for third-party testing and comparison of its applications.[6] In the case of Xinjiang, a council led by private or public organisations could test the ethical and moral limits of the use of FR on Uighurs.

The most important thing that needs to be done, however, is for the government to draft new legislation, or update the CSL, defining precisely what “consent” and “legitimacy” mean in the context of Article 27. Maya Wang underlines that the “lack of an overarching law on such technology lets companies gain access to vast quantities of an individual’s personal data”, which is then used against them for the government’s own policies.[7] If vague clauses such as “necessity”, “justification” and “personal information” are defined to the maximum of the government’s capabilities, the unsolicited gathering of data that leads to “reeducation” and fear in Xinjiang could be curbed to a certain degree.

An example of international politicians voicing their concerns on the issue is the letter written to Mike Pompeo, the Secretary of State, by members of Congress on March 4th 2019. The letter urged the government to take measures in response to the FR abuse happening in Xinjiang. It is mostly motivated by the fact that many US companies hold investments in the leading tech companies rolling out such facial recognition technology, such as Hikvision. For a country to morally and ethically condemn such oppression, it must also eradicate all kinds of support being provided to the use and development of its tools.

What does this mean internationally?

Xinjiang is also most definitely not the only example of racial or ethnic oppression in the world. Human Rights Watch underlines that there is “a complete lack of effective privacy protections”, making the dangers posed by the abuse of FR technology prominent for everyone, everywhere. Considering the recent protests under the banner of “Black Lives Matter” in many countries, the oppression of peoples through technology is not impossible anywhere. In May 2019, US authorities attempted to identify protesters for punishment through facial recognition technology, which led protesters to come up with ways to trick the cameras into not recognising their faces. These included specially designed make-up techniques, various methods of covering up their faces, and changes to the bodily behaviours they adopt in their daily lives, which are also monitored and stored as data through surveillance. This only goes to show that legal regulation is urgently needed before the world catches up with China’s advanced FR technology and has the chance to use it in the same malicious way on different minorities.

What would happen if the US companies sold their shares and stopped funding such advancement? It is not a definite solution to the problem, as the main reason such shareholding exists in the first place is the desire of US companies to develop and use the same technologies themselves. The threat this poses to the ordinary citizen will still stand, as the privacy of their data, and the safety of the livelihoods of minorities everywhere, will be at the mercy of the next nation that successfully develops the technology on its own.

Taylor Wessing’s Michael Tan has underlined that there is “no unified legal framework for data protection on this”, and highlighted the need for one.[8] Proper regulation and international support on the issue would not only curb China’s ethnicity-based oppression, but would also show other nations that they will not get the chance to do the same. The regulation should be a decentralised one, proportionately basing its limits on the cultural and political atmosphere of the countries themselves while still creating a global authority. One might immediately think of a body like the UN; however, looked at realistically, it is evident that public international law and international organisations more often than not fail to serve the real needs of the international public, often being corrupted by political power struggles within the organisations themselves.

It is going to take a long time for governments to catch up with the advancements in technology, which is why San Francisco has chosen to ban city government use of FR technology. This could be seen as a trial period for city officials to see whether proper regulation can be drafted for its safe and democratic use.

On June 8th, IBM’s CEO Arvind Krishna published a letter to members of the US Congress saying that the company “no longer offers general purpose IBM facial recognition or analysis software” and that it “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency”. Forbes reports that Krishna has called for a reconsideration of the use of FR by governments, especially law enforcement. IBM’s move is fuelled by a deep understanding of the misuse of FR, as seen in Xinjiang, and a desire to prevent such atrocities from happening anywhere else.

[1] Xue Yujie, “27 Companies Drafting China’s First National Facial Recognition Standard”, Sixth Tone, November 27 2019.

[2] Rogier Creemers, Paul Triolo & Graham Webster, “Translation: Cybersecurity Law of the People’s Republic of China (Effective June 1, 2017)”, New America, June 29 2018.

[3] Mingli Shi, Samm Sacks, Qiheng Chen & Graham Webster, “Translation: China’s Personal Information Security Specification”, New America, February 8 2019.

[4] Brad Smith, “Facial recognition: It’s time for action”, Microsoft on the Issues, December 6 2018.

[5] VICE News, “How China Tracks Everyone”.

[6] Brad Smith, “Facial recognition: It’s time for action”, Microsoft on the Issues, December 6 2018.

[7] Isobel Cockerell, “Inside China’s Massive Surveillance Operation”, Wired, September 5 2019.

[8] Dr. Michael Tan, “China: Facial recognition and its legal challenges”, DataGuidance, April 2020.