TikTok will open a center in Europe where outside experts will be given insight into how it approaches content moderation and recommendations, as well as platform security and user privacy, it announced today.

The European Transparency and Accountability Centre (TAC) follows the opening of a U.S. center last year — and is similarly being billed as part of its “commitment to transparency”.

Soon after announcing its U.S. TAC, TikTok also created a content advisory council in the market — and went on to replicate the advisory body structure in Europe this March, with a different mix of experts.

It’s now fully replicating the U.S. approach with a dedicated European TAC.

To date, TikTok says more than 70 experts and policymakers have taken part in a virtual tour of the U.S. center, where they have been able to learn operational details and pose questions about its safety and security practices.

The short-form video platform has faced growing scrutiny over its content policies and ownership structure in recent years, as its popularity has surged.

Concerns in the U.S. have largely centered on the risk of censorship and the security of user data, given the platform is owned by a Chinese tech giant and subject to Internet data laws defined by the Chinese Communist Party.

In Europe, meanwhile, lawmakers, regulators and civil society have been raising a broader mix of concerns, including around child safety and data privacy.

In one notable development earlier this year, the Italian data protection regulator made an emergency intervention after the death of a local girl who had reportedly been taking part in a content challenge on the platform. TikTok agreed to recheck the age of all users on its platform in Italy as a result.

TikTok said the European TAC will start operating virtually, owing to the ongoing COVID-19 pandemic. But the plan is to open a physical center in Ireland — where it bases its regional HQ — in 2022.

EU lawmakers have recently proposed a swathe of updates to digital legislation that look set to dial up emphasis on the accountability of AI systems — including content recommendation engines.

A draft AI regulation presented by the Commission last week also proposes an outright ban on subliminal uses of AI technology to manipulate people’s behavior in a way that could be harmful to them or others. So content recommender engines that, for example, nudge users into harming themselves by suggestively promoting pro-suicide content or risky challenges may fall under the prohibition. (The draft law suggests fines of up to 6% of global annual turnover for breaching prohibitions.)

It’s certainly interesting to note, then, that TikTok specifies its European TAC will offer detailed insight into its recommendation technology.

“The Centre will provide an opportunity for experts, academics and policymakers to see first-hand the work TikTok teams put into making the platform a positive and secure experience for the TikTok community,” the company writes in a press release, adding that visiting experts will also get insights into how it uses technology “to keep TikTok’s community safe”; how trained content review teams make decisions about content based on its Community Guidelines; and “the way human reviewers supplement moderation efforts using technology to help catch potential violations of our policies”.

Another component of the EU’s draft AI regulation sets a requirement for human oversight of high-risk applications of artificial intelligence, although it’s not clear whether a social media platform would fall under that specific obligation, given the current set of categories in the draft regulation.

However, the AI regulation is just one piece of the Commission’s platform-focused rule-making.

Late last year it also proposed broader updates to the rules for digital services, under the Digital Services Act (DSA) and Digital Markets Act (DMA), which will place due diligence obligations on platforms and require larger platforms to explain any algorithmic rankings and hierarchies they generate. TikTok is very likely to fall under that requirement.

The UK, which is now outside the bloc post-Brexit, is also working on its own Online Safety regulation, due to be presented this year. So, in the coming years, there will be multiple content-focused regulatory regimes for platforms like TikTok to comply with in Europe. Opening up algorithms to outside experts may then become a hard legal requirement, not just soft PR.

Commenting on the launch of its European TAC in a statement, Cormac Keenan, TikTok’s head of trust and safety, said: “With more than 100 million users across Europe, we recognise our responsibility to gain the trust of our community and the broader public. Our Transparency and Accountability Centre is the next step in our journey to help people better understand the teams, processes, and technology we have to help keep TikTok a place for joy, creativity, and fun. We know there’s lots more to do and we’re excited about proactively addressing the challenges that lie ahead. I’m looking forward to welcoming experts from around Europe and hearing their candid feedback on ways we can further improve our systems.”
