Pro-Russian Bots Sharpen Online Attacks for 2018 U.S. Vote

lightbright

By Nafeesa Syeed (@NafeesaSyeed)
September 1, 2017, 4:00 AM EDT (updated 2:24 PM EDT)

  • Charlottesville comments led to online effort to target McCain
  • Twitter, Facebook have bolstered efforts to find fake accounts

After violent protests rocked Charlottesville, Virginia last month, Republican Senator John McCain took to Twitter to condemn hatred and bigotry and urge President Donald Trump to speak out more forcefully.

Then pro-Russian bots got activated on social media.

Within hours, an online campaign attacking McCain -- a frequent Trump critic -- began circulating, amplified with the help of automated and human-coordinated networks known as bots and cyborgs linking to blogs on “Traitor McCain” and the hashtag #ExplainMcCain.


After the 2016 U.S. presidential race was subject to Russian cyber meddling, analysts say the ferocity of more recent assaults is a preview of what could be coming in the 2018 elections, when Republicans will be defending their control of both chambers of Congress.

“They haven’t stood still since 2016,” said Ben Nimmo, a senior fellow in information defense at the Digital Forensic Research Lab at the Atlantic Council in Washington, which tracked the activity. “People have woken up to the idea that bots equal influence and lots of people will be wanting to be influencing the midterms.”

While special counsel and former FBI chief Robert Mueller keeps investigating the 2016 race, Nimmo’s work is among a number of initiatives cropping up at think tanks, startups, and even the Pentagon seeking to grasp how bots and influence operations are rapidly evolving. Blamed for steering political debate last year, bots used for Russian propaganda and other causes are only becoming more emboldened, researchers say.

They’re preparing “and sowing seeds of discord” and “potentially laying the groundwork for what they’re going to do in 2018 or 2020,” said Laura Rosenberger, senior fellow and director of the Alliance for Securing Democracy at the German Marshall Fund.

The alliance last month unveiled Hamilton 68, an online dashboard designed to track Russian influence operations on Twitter with the hope of better highlighting sources of information.

The site culls real-time data from 600 Twitter users, analyzing trending hashtags, topics and links. The dashboard’s developers say the accounts they selected include those likely controlled by Russian government influence operations. Others are pro-Russia users who may be loosely connected to the government, and some belong to people influenced by the first two groups who actively bolster Russian media themes. Some are bot accounts.
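
As a rough illustration of the kind of aggregation such a dashboard performs, the Python sketch below counts the hashtags and links shared by a monitored set of accounts over a batch of tweets. The account handles, the tweet record format and the ranking cutoff are illustrative assumptions, not Hamilton 68's actual account list or methodology.

# Minimal sketch of dashboard-style aggregation over a monitored account list.
# The handles and tweet record format are hypothetical; this is not the
# Hamilton 68 methodology, just an illustration of ranking hashtags and links.
from collections import Counter
from typing import Iterable, List, Tuple

MONITORED = {"example_account_1", "example_account_2"}  # placeholder handles

def trending(tweets: Iterable[dict], top_n: int = 10) -> Tuple[List, List]:
    """Rank the hashtags and links used by monitored accounts."""
    hashtags, links = Counter(), Counter()
    for tweet in tweets:
        if tweet["user"] not in MONITORED:
            continue
        hashtags.update(tag.lower() for tag in tweet.get("hashtags", []))
        links.update(tweet.get("urls", []))
    return hashtags.most_common(top_n), links.most_common(top_n)

sample = [
    {"user": "example_account_1", "hashtags": ["ExplainMcCain"], "urls": ["http://example.com/blog"]},
    {"user": "example_account_2", "hashtags": ["explainmccain"], "urls": []},
]
print(trending(sample))  # ([('explainmccain', 2)], [('http://example.com/blog', 1)])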

“Our view is that exposure is a really important element of beginning to push back on some of these efforts,” said Rosenberger, who served at the National Security Council and the State Department in the Obama administration.

Cyborgs Versus Bots
Short for “robot,” internet bots come in a couple of forms. There are automated versions in which software pumps out posts from social media accounts, often at rates no human could conceivably match. Others are dubbed cyborgs -- some of their content is spit out automatically, but a person also takes over posting at times. They can also be human-run accounts that are hacked or taken over by a robot.
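
Posting rate is one of the simplest signals researchers use to separate automated accounts from human ones. The toy heuristic below flags an account whose average daily volume exceeds an assumed human ceiling; the 72-tweets-per-day cutoff is a rule of thumb sometimes cited by researchers, not a standard, and the data format is hypothetical.

# Toy rate-based heuristic for flagging likely-automated accounts.
# The 72-tweets-per-day ceiling is an assumed rule of thumb, not a hard standard.
from datetime import datetime, timedelta

HUMAN_CEILING_PER_DAY = 72  # assumed cutoff; tune for your own analysis

def likely_automated(timestamps: list) -> bool:
    """Return True if the account's average daily posting rate exceeds the ceiling."""
    if len(timestamps) < 2:
        return False
    span_days = (max(timestamps) - min(timestamps)).total_seconds() / 86400
    span_days = max(span_days, 1e-9)  # avoid division by zero for identical timestamps
    return len(timestamps) / span_days > HUMAN_CEILING_PER_DAY

# Example: roughly 300 posts spread over a single day would be flagged.
start = datetime(2017, 8, 15)
burst = [start + timedelta(minutes=5 * i) for i in range(300)]
print(likely_automated(burst))  # True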

Not all bots are nefarious. Although researchers say pro-Russian operatives exploiting social media have made headlines lately, the use of bots is broadening as they prove they can be influential in moving narratives from niche circles and the fringes of the internet to a wider audience by spreading links to blogs and news sites, as well as popularizing memes and hashtags. That will make them a potentially potent tool for competing interests trying to influence U.S. political debate in 2018 and beyond.

It’s hard to determine where bots originate. Analysts are able to monitor the messaging that bots latch on to, such as advocating for Russian and alt-right narratives or anti-NATO stances. Nation-states or groups helping political campaigns might look to employ bots given their power to shift debates.

And while many online campaigns are clearly fake, bots are also used in more sophisticated efforts that start from a basis in truth.

Ukraine Unrest
A top theme users boosted the week after the Charlottesville clashes was “alt-right alarmism” about the left-wing anti-fascist movement, known as Antifa, according to the dashboard findings. The most-tweeted link in the Russian-linked network followed by the researchers was a petition to declare Antifa a terrorist group.

On Twitter, pro-Russian bots and cyborgs helped promote accusations that McCain allied with neo-Nazis in the past, such as during Ukraine’s civil unrest in 2013. At the time, the Arizona Republican, who is known for his tough stance against Russian meddling in Ukraine, met with and appeared on a stage with nationalist leader Oleh Tyahnybok, whose group has neo-Nazi roots.

McCain’s office didn’t respond to repeated requests for comment on his appearance with Tyahnybok.

One Twitter account tracked by Nimmo’s lab, @TeamTrumpRussia, is what the researchers call a “pro-Kremlin cyborg site.” It averages more than 220 tweets a day, including memes about McCain in the week after the Charlottesville unrest, which left one person dead.

In a series of Twitter posts Friday, @TeamTrumpRussia rejected accusations that it is a “cyborg site,” saying “I am just a Russian. Deal with it.”

Putin’s Rejection
Top Russian officials, including President Vladimir Putin, have repeatedly rejected accusations that the country meddled in the U.S. election, a denial at odds with the conclusions of the U.S. intelligence community. In January, the nation’s top intelligence agencies agreed that Russia interfered in the election to discredit Hillary Clinton and boost Trump, who has often appeared reluctant to embrace the findings. Trump’s intelligence chiefs, including CIA Director Mike Pompeo and Director of National Intelligence Dan Coats, have agreed with the conclusions.

Putin told NBC News in June that there’s “no proof” of any involvement by Russia at the “state level.” But he did say that “patriotically minded” Russians could have been behind intrusions into Clinton’s campaign.

The drumbeat of news about Russia’s role in the election has only helped push relations with the U.S. to post-Cold War lows. Nonetheless, analysts say Russia’s longer-term goal is less focused on Trump than on helping disrupt or undermine U.S. democratic institutions -- an effort that has been under way for decades but which now has a more technological edge.

Researchers say Twitter isn’t the only domain for bots. They’re increasingly expanding to other platforms like YouTube, Instagram and LinkedIn. They even operate interactive “chatbots” on mobile applications available on Facebook, said Nitin Agarwal, an information science professor at the University of Arkansas at Little Rock.

Mimicking Human Behavior
“The level of sophistication among these bots is increasing and becoming more and more advanced to try to evade bot detection and suspension from Twitter and other platforms,” said Agarwal, who’s spent a decade studying the use of social media for influence operations. They’re also trying to “mimic human behavior so that they can gain your trust and they can influence your behaviors,” he said.

Because the use of bots is still new, trying to understand how they operate has become a cutting-edge field. It’s even caught the attention of the Pentagon’s Defense Advanced Research Projects Agency, known as DARPA.

In May, the agency awarded Agarwal and Intelligent Automation Inc., a Rockville, Maryland-based technology company, a contract of up to $1.5 million over three years -- if research milestones are met -- to study the classification of “social bots,” what their intent is and how they’re applied on social media.

For researchers, Twitter is a data gold mine because users’ accounts are usually publicly available. It’s harder to access private content on Facebook.
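
For readers who want to see what publicly available data looks like in practice, the sketch below uses the third-party tweepy library against the Twitter v2 API to pull an account's recent public tweets. The bearer token, the username and the access tier are assumptions you would have to supply; rate limits and available endpoints depend on your developer plan.

# Hedged sketch: fetching a public account's recent tweets with tweepy (Twitter v2 API).
# BEARER_TOKEN and the username are placeholders; endpoint access depends on your plan.
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # assumed credential, supplied by you

client = tweepy.Client(bearer_token=BEARER_TOKEN)
user = client.get_user(username="example_account")            # look up the public profile
tweets = client.get_users_tweets(id=user.data.id, max_results=100)
for tweet in tweets.data or []:                                # .data is None if no tweets
    print(tweet.id, tweet.text)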

‘Powerful Antidote’
When asked how it was responding to growing sophistication by bots, a Twitter spokeswoman referred to a June 14 blog post by Colin Crowell, the company’s vice president of public policy, government and corporate philanthropy. Crowell outlined how Twitter is curbing “bots and other networks of manipulation,” including growing its team and resources and working “hard to detect spammy behaviors.”

“Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information,” Crowell wrote. “This is important because we cannot distinguish whether every single Tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth.”

Facebook said it created a software algorithm to flag stories that may be suspicious and send them to third-party fact checkers. But bots are also getting savvier at dodging detection. That poses a challenge to social media companies trying to crack down on fake accounts -- and fake news.

And with bot activity accelerating as the U.S. heads into another election season in 2018, social media companies could face further risks from these networks.

A challenge for social media companies is “how good their algorithms are at weeding out bot strikes,” Nimmo said. “That’s something that they need to be thinking of.”


https://www.bloomberg.com/news/arti...ers-urge-trump-not-to-end-dreamers-protection

Russia’s State-Run RT News Network Developed and Federal Security Service Operated the Artificial Intelligence-Enhanced Bot Farm to Disseminate Disinformation to Sow Discord in the United States and Elsewhere

The Justice Department today announced the seizure of two domain names and the search of 968 social media accounts used by Russian actors to create an AI-enhanced social media bot farm that spread disinformation in the United States and abroad. The social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives, according to affidavits unsealed today.

In conjunction with the domain seizures and search warrant announced today, the FBI and the Cyber National Mission Force (CNMF), in partnership with Canadian Centre for Cyber Security (CCCS), the Netherlands General Intelligence and Security Service (AIVD), Netherlands Military Intelligence and Security Service (MIVD), and Netherlands Police released a joint cybersecurity advisory detailing the technology behind the social media bot farm, including details regarding how the bot farm’s creators leveraged their bespoke AI system in furtherance of the scheme. The advisory will allow social media platforms and researchers to identify and prevent the Russian government’s further use of the technology. In addition, X Corp. (formerly, Twitter) voluntarily suspended the remaining bot accounts identified in the court documents for terms of service violations.

“With these actions, the Justice Department has disrupted a Russian-government backed, AI-enabled propaganda campaign to use a bot farm to spread disinformation in the United States and abroad,” said Attorney General Merrick B. Garland. “As the Russian government continues to wage its brutal war in Ukraine and threatens democracies around the world, the Justice Department will continue to deploy all of our legal authorities to counter Russian aggression and protect the American people.”

“Today’s action demonstrates that the Justice Department and our partners will not tolerate Russian government actors and their agents deploying AI to sow disinformation and fuel division among Americans,” said Deputy Attorney General Lisa Monaco. “As malign actors accelerate their criminal misuse of AI, the Justice Department will respond and we will prioritize disruptive actions with our international partners and the private sector. We will not hesitate to shut down bot farms, seize illegally obtained internet domains, and take the fight to our adversaries.”

“Today’s actions represent a first in disrupting a Russian-sponsored Generative AI-enhanced social media bot farm,” said FBI Director Christopher Wray. “Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government. The FBI is committed to working with our partners and deploying joint, sequenced operations to strategically disrupt our most dangerous adversaries and their use of cutting-edge technology for nefarious purposes.”
https://www.justice.gov/opa/pr/just...ral-international-and-private-sector-partners
 
They've been doing this for decades.

What hurts my heart is how our community continues to fall for the banana in the tailpipe.
 
They could be trying to get truth out. I have had U.S. state media attacks that were not true at all. The next thing I know it is assassination attempts, overt surveillance, terrorism.

Disinformation? More like information in many cases.
 