NATIONAL VIEW: Aggrieved Swifties, lovers of AI memes unite

THE POINT: This election season, fighting online disinformation by actors foreign and domestic is everybody’s business.

So Donald Trump has had his fun with those AI-generated images of crowds of Taylor Swift fans in “Swifties for Trump” T-shirts, which reportedly sparked a backlash among actual Swifties.

And who didn’t love the AI-generated scenes of Trump and Kamala Harris walking hand in hand along the shore in deep-fake love — a video clip now making its way around the globe courtesy of Elon Musk?

But artificial intelligence, disinformation, and hacking — all the tools of our internet age — are only funny until they’re not, until they are used maliciously by foreign actors to disrupt our democratic processes and to set us against each other for reasons based not in fact but in propaganda.

Users themselves are the first line of defense: We all need to develop stronger skepticism about everything social media companies feed us. But the government and tech companies have to step up, too.

“Iran and Russia have employed these tactics not only in the United States during this and prior federal election cycles but also in other countries around the world,” the FBI said in a recent statement in conjunction with the Office of the Director of National Intelligence and the Cybersecurity and Infrastructure Security Agency.

The statement confirmed in this case that Iran was responsible for recent attempted hacks into the Trump and Joe Biden presidential campaigns.

“The (intelligence community) is confident that the Iranians have through social engineering and other efforts sought access to individuals with direct access to the Presidential campaigns of both political parties,” the statement said. “Such activity, including thefts and disclosures, are intended to influence the U.S. election process.”

In fact it was Microsoft that first went public with its announcement not simply of the campaign hacking but of a major disinformation campaign launched by Iran, especially in key battleground states.

“One Iranian group has been launching covert news sites targeting U.S. voter groups on opposing ends of the political spectrum,” the report said, with one in particular targeting left-leaning groups with anti-Trump messages. “The evidence we found suggests the sites are using AI-enabled services to plagiarize at least some of their content from U.S. publications.”

Another “group may be setting itself up for activities that are even more extreme, including intimidation or inciting violence against political figures or groups, with the ultimate goals of inciting chaos, undermining authorities, and sowing doubt about election integrity.”

This was, of course, in addition to the successful spear-phishing email sent in June to longtime Trump confidant Roger Stone, which netted some vetting materials on Republican vice presidential candidate JD Vance. The materials were sent along to several news outlets but not published.

Meanwhile, Russia, which has a long history of disinformation campaigns, perfected in 2016, has certainly not been idle.

The Justice Department reported in July that it had disrupted two Russian propaganda campaigns, both largely aimed at justifying Russia’s invasion of Ukraine and disparaging Ukraine, Poland, and the European Union.

It was, however, the first time the United States had disrupted a “Russian-sponsored Generative AI-enhanced social media bot farm,” FBI Director Christopher Wray said in the statement announcing the discovery of 968 social media accounts on X. The U.S. operation was carried out in connection with cybersecurity agencies in Canada and the Netherlands.

As for those who had grown fond of the internet rantings of Sue Williamson of Gresham — well, sorry to break the news, but she never did exist, and now she’s gone. Hers was just one of those 968 accounts now voluntarily closed down by X.

Last month, Meta disclosed that, working on a tip from the FBI, it too had removed “inauthentic” pages and accounts on Facebook and Instagram, also aimed at disparaging Ukraine and its conduct of the war instigated by Russia.

Meta continues to insist — as recently as its latest quarterly “Adversarial Threat Report” issued last month — that it regulates not content but “coordinated inauthentic behavior” online.

“We view (coordinated inauthentic behavior) as coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation,” it notes in the report. “In each case, people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing. When we investigate and remove these operations, we focus on behavior, not content — no matter who’s behind them, what they post or whether they’re foreign or domestic.”

Russia remains a top violator of the policy and during the past quarter had 139 Facebook accounts and 20 Instagram accounts removed — many of them targeting Ukraine, its neighbors, or other countries in Europe.

Another 96 Facebook accounts originating in the United States and operating domestically were also dismantled by Facebook’s parent company, Meta. Most of the accounts were centered on “a fictitious political advocacy group — the Patriots Run Project” attempting to attract “real conservatives” in such key battleground states as Arizona, Michigan, Nevada, Ohio, Pennsylvania, Wisconsin, and North Carolina.

And while the accounts originated here, as the U.S. Director of National Intelligence warned two months ago, “Foreign actors continue to rely on witting and unwitting Americans to seed, promote, and add credibility to narratives that serve the foreign actors’ interests,” particularly through social media.

And surely it’s no coincidence that the Justice Department has recently begun to probe Americans who have worked here and abroad for Russian state media.

What used to be a straightforward process of the FBI and social media organizations finding common ground to protect the public from disinformation has been complicated by the recent legal case brought by two Republican attorneys general challenging how far the government could go to fight disinformation on the internet. The U.S. Supreme Court found in favor of the Biden administration but left unsettled where that First Amendment line should be drawn.

The government and social media companies — some brought reluctantly to the party — are slowly working to rebuild a relationship that is critical to the battle against disinformation.

Meanwhile, the most appealing target of all for belligerent foreign actors — the U.S. presidential election — grows closer.

And Meta, however self-serving, offered up a valuable — if obvious — piece of advice in its latest cybersecurity report:

“We encourage influential figures and the public at large to remain vigilant to avoid playing into the hands of deceptive operations attempting to manipulate public debate.”

In other words, don’t believe everything you read online. Disinformation is one of the prices we pay for living in a free and open society — but we don’t have to fall for it.

The Boston Globe