TikTok to set up a centre in Europe following child safety and data privacy concerns.
TikTok announced on April 27 that it will open a centre in Europe, the European Transparency and Accountability Centre (TAC), where visiting experts will be shown how it approaches content moderation and recommendation, as well as platform security and user privacy. The move follows the opening of a U.S. centre last year and is similarly being billed as part of the company's "commitment to transparency".
Shortly after announcing its U.S. TAC, the short-form video platform also formed a content advisory council in that market, and it replicated the advisory body in Europe this March with a different mix of experts. Now it is replicating the U.S. approach in full with a dedicated European TAC.
As its popularity has surged, TikTok has faced intense scrutiny of its content policies and ownership structure in recent years. U.S. concerns have centred mainly on censorship risk and user data security, given that the platform is owned by a Chinese tech giant and is subject to internet data laws defined by the Chinese Communist Party. In Europe, lawmakers, regulators, and citizens have raised concerns that also include child safety and data privacy.
Earlier this year, Italy's data protection regulator made an emergency intervention after the death of a local girl who had reportedly been taking part in a TikTok challenge; the intervention prompted TikTok to recheck the age of all its users in Italy.
TikTok also said the European TAC will operate virtually for now, owing to the ongoing pandemic, with a physical centre planned for 2022 in Ireland, where its regional headquarters is located.
Recently, EU lawmakers proposed a set of updates to digital legislation that place greater emphasis on the accountability of AI systems, including content recommendation engines. In the same vein, TikTok said its European TAC will offer detailed insight into its own recommendation technology.
A draft AI regulation presented by the Commission a week before TikTok's announcement also proposes an outright ban on subliminal uses of AI that manipulate users' behaviour in ways that could harm them or others. Content recommendation engines that, for instance, nudge users towards self-harm by promoting pro-suicide content or dangerous challenges could therefore fall under the prohibition, with fines of up to 6% of global annual turnover for breaches.
The company wrote in a press release, "The Centre will provide an opportunity for experts, academics and policymakers to see first-hand the work TikTok teams put into making the platform a positive and secure experience for the TikTok community." It added that visiting experts will also receive detail on how technology is used "to keep TikTok's community safe," how content review teams make decisions about content based on its Community Guidelines, and "the way human reviewers supplement moderation efforts using technology to help catch potential violations of our policies."