Facebook expanded a ban on QAnon-related content on its various social platforms Tuesday, deepening a previous prohibition on QAnon-related groups that had “discussed potential violence,” according to the company.

Today’s move by Facebook to ban not only violent QAnon content but “any Facebook Pages, Groups and Instagram accounts representing QAnon” is an escalation of the social giant’s efforts to clean up its platforms ahead of an increasingly contentious election.

QAnon is a sprawling set of interwoven pro-Trump conspiracy theories that has taken root inside swaths of the American electorate. Its more extreme adherents have been charged with terrorism after acting out in violent and dangerous ways, spurred on by their adherence to the unusual and often incoherent belief system. BuzzFeed News recently began calling QAnon a “collective delusion,” another apt label for the theory’s inane, fatuous and dangerous beliefs.

Facebook’s effort to rein in QAnon is helpful, but likely too late. Over the course of the last year, QAnon swelled from a fringe conspiracy theory into a shockingly mainstream political belief system — one that even has its own Congressional candidates. That growth was powered by social networks inherently designed to connect like-minded people to one another, a feature that has been found time and time again to spread misinformation and usher users toward increasingly radical beliefs.

In July, Twitter took action of its own against QAnon, citing concerns about “offline harm.” The company downranked QAnon content, removing it from trending pages and algorithmic suggestions. Twitter’s policy change, like Facebook’s previous one, stopped short of banning the content outright but did move to contain its spread.

Other companies, like Alphabet’s YouTube, have come under similar censure from external observers. (YouTube says it reworked its algorithm to better filter out the darker corners of its content mix, but the results of that experiment are far from conclusive.)

Social platforms like Facebook and Twitter have also changed their rules after being confronted with a willfully mendacious administration ahead of an election — an administration that has propagated lies and disinformation about voting security and about the virus that has killed more than 200,000 Americans. The pair’s work to limit those two particularly risky strains of misinformation is worthy, but by taking a reactive posture instead of a proactive one, most of those policy choices have come too late to control the viral spread of dangerous content.

Facebook’s new rule comes into force today, with the company saying in a release that it is now “removing content accordingly,” but that the effort to purge QAnon “will take time.”

What drove the change at Facebook? According to the company, after it yanked violent QAnon material, it saw “other QAnon content tied to different forms of real world harm, including recent claims that the west coast wildfires were started by certain groups.” In Oregon, where wildfires recently raged, misinformation on Facebook led misinformed state residents — who believed that antifa, a term for those opposed to fascism that is used as an unironic pejorative, was torching the state — to set up illegal roadblocks.

How effective Facebook will be at clearing QAnon-related content from its various platforms remains unclear, but it is something we’ll be tracking.