Will 2020 Go Down as “The Year of the Hactivists?”

Recently a YouTube video surfaced with Mark Zuckerberg making outrageous statements.  It was not real.  AI was used to make it appear that Mark was making those statements.  The same goes for a viral YouTube video claiming San Fran Nan Pelosi made silly statements: again an attack using new technology, Artificial Intelligence.  In 1985, I wrote my final term paper at the University of Redlands on the future of Artificial Intelligence (I have been a member of the World Future Society since the early ‘80s).  All those science fiction beliefs have come true.  Remember, the first flip cell phone was used by Capt. Kirk on “Star Trek”.

The purpose of this article is to make sure you check and double check videos and YouTube channels.  Just because something appears online does not mean it is anything more than high-tech fake news.  Do not be fooled and lose your credibility.

Will 2020 Go Down as “The Year of the Hactivists?”

Stephen Frank, California Political News and Views,  6/20/19  

Cybersecurity experts say the proliferation of technology is opening up a brave new theatre in political campaigns. AI software is advancing at breakneck speed, enabling online activists to manipulate audio and video convincingly, and making it harder than ever to discern what’s real and what’s not. Now Congress is getting serious about the threat.

Five months ago, a damning audio recording surfaced of senior executives at Ripple, a San Francisco-based technology provider that works with banks and other financial institutions, ostensibly discussing illicit company practices and potentially fraudulent activity. Within a matter of days, the explosive audio recording went viral, racking up more than 100,000 views on YouTube.

There was one problem: the recording was fake. The words uttered by the company’s executives were real enough, but they had been creatively edited and spliced by a scammer employing a form of criminal phone fraud known as vishing, also known as voice phishing.

High-tech weaponry has long been used by criminal gangs and foreign spies for run-of-the-mill shakedowns, IP theft, and other financially motivated crimes. But what happened to Ripple doesn’t appear to have been motivated by dollars and cents; instead, it was designed to cause embarrassment and inflict lasting reputational harm. The would-be extortionist, identified only by the ominous-sounding handle “CRyptoReckoning,” was eventually tracked down by a team of computer forensics experts, ultimately disclosing he had been hired for $60,000 by a Silicon Valley venture capital firm invested in a competitor of Ripple.

Technology experts say the growth in corporate “hactivism” portends chaos and disruption in the political arena. They warn that the next round of political elections – in California and across the country – will almost certainly be tainted by disinformation campaigns employing manipulated audio and video. All of it is being made possible by the proliferation of sophisticated artificial intelligence software.

“A year is an eternity in technology,” says one cybersecurity expert who is advising several political clients. “The artificial intelligence software needed to create altered audio and video is already broadly available for free. By this time next year, it’ll not only be far more sophisticated, but also much more accessible. There’s no question it’ll be weaponized in politics on a grand scale.”


AI-manipulated videos known as “deepfakes” have already been used to great effect to achieve political goals in other countries.

Indian journalist Rana Ayyub made international headlines last year after her face was digitally superimposed – convincingly – on a porn actress’ body performing myriad sex acts. She was ostensibly targeted in retaliation for criticizing the country’s ruling party for failing to adequately condemn violence against lower-caste groups.


“My reaction for the first two days was to just cry,” said Ayyub, who says the attack against her was clearly designed to humiliate her and discredit her as a journalist. “Screenshots of the video trickled on my phone every minute, on my WhatsApp, Twitter timeline, Facebook inbox. By the next day, it was on my father’s phone, my brother’s.”


High-profile deepfakes have been used to advance political objectives in the Central African nation of Gabon and in Malaysia, as well, propelling lawmakers on Capitol Hill into action.


Last Thursday, California Congressman Adam Schiff, who chairs the House Intelligence Committee, presided over the first-ever congressional hearing on Deepfakes and Artificial Intelligence, in an effort to help policymakers address the growing threat.


“Video and audio is so visceral. We tend to believe what our eyes and ears are telling us. And we also tend to believe and tend to share information that confirms our biases,” says Danielle Citron, a professor of law at the University of Maryland who testified at the hearing. “There’s no silver bullet. We need a combination of law, markets, and societal resilience to get through this.”

In the meantime, there’s little to suggest that candidates for political office are prepared for the Brave New World of political hactivism. An analysis conducted by Axios found that none of the two dozen Democrats running for President can point to concrete steps they’ve taken to harden their operations against the new breed of digital adversaries.

“We’ve met with a bunch of them,” said one cybersecurity consultant specializing in anti-disinformation campaigns. “We don’t feel like they are serious about investing the resources required to do anything about it.”

About Stephen Frank

Stephen Frank is the publisher and editor of California Political News and Views. He speaks all over California and appears as a guest on several radio shows each week. He has also served as a guest host on radio talk shows. He is a full-time political consultant.