WASHINGTON – Russia's social media blitz to influence the 2016 U.S. election was part of a global "phenomenon" in which a broad spectrum of governments and political parties used internet platforms to spread junk news and disinformation in at least 48 countries last year, an Oxford University study has found.

The authors estimated that about $500 million has been spent worldwide since 2010 on research, development or implementation of social media "psychological operations," a figure that includes U.S. government programs aimed at countering extremists such as Islamic radicals.

"The manipulation of public opinion over social media platforms has emerged as a critical threat to public life," the researchers wrote. They warned that, at a time when news consumption is increasingly occurring over the internet, this threatens "to undermine trust in the media, public institutions and science."

In an earlier analysis covering 2016, the researchers found governments and political parties had deployed social media to manipulate the public in 28 countries.

"Disinformation during elections is the new normal," co-author Philip Howard told McClatchy. "In democracies around the world, more and more political parties are using social media to spread junk information and propaganda to voters.

"The largest, most complex disinformation campaigns are managed from Russia and directed at democracies. But increasingly, I'm also worried about copycat organizations springing up in other authoritarian regimes."

In about a fifth of the countries evaluated, the researchers reported disinformation campaigns are occurring on chat applications, even encrypted platforms such as WhatsApp, Signal or Telegram. Howard said young people in poorer nations "develop their political identities" on those sites, "so that's where the disinformation campaigns will go."

Russia's stealthy 2016 social media campaign was part of a broad cyberoffensive that U.S. intelligence agencies say was aimed at helping Donald Trump win the White House. It originated at a so-called "troll farm" in St. Petersburg, where Russian operatives, a number of whom now face U.S. criminal charges, allegedly placed Facebook and Twitter ads that carried fake or harshly critical news about Democratic presidential candidate Hillary Clinton or sought to sow divisions among voters on issues such as race, gun rights and immigration. The impact of some of those ads was amplified by automated accounts, known as "bots," whose messages reached millions of Americans.

Facebook and Twitter, facing pressure from the House and Senate intelligence committees, each took significant steps to tighten monitoring of activity on their platforms and to remove fake accounts and bots. Mark Zuckerberg, Facebook's chairman and chief executive, ordered the hiring of thousands of employees to police activity on the platform and announced that the firm would require all future political ads to disclose the identity of their sponsors.

But the latest Oxford study suggests that use of social media to carry propaganda or misleading political messages may still be expanding faster than the rising numbers of cyber cops.

In five countries — Brazil, Germany, Mexico, Taiwan and the United States — cyber operatives have found ways to complicate the tracking and disabling of bot accounts, the researchers found. Operatives there occasionally inject comments or typographical errors into the bot streams to signal human involvement, Howard said.

Based on a canvass of publicly available data, the researchers estimated that in China, 300,000 to 2 million people were used in 2017 as "cyber troops" engaged in social media campaigns that are largely directed internally.