New Law to Prevent Fake News Will Cause Harm

Recently passed legislation in California that targets online bots purports to address the fake news problem and the undue influence of advertising. It accomplishes none of that. The legislation will, however, allow government officials to target ordinary behavior by companies, candidates and political organizations that is not conventionally considered “bot” activity.

The new law, which goes into effect in 2019, makes it illegal to use a bot to communicate with anyone in California online without disclosing that the communication is automated. Though it applies only to bots used to influence people’s purchases or votes, the law’s definition of “bot” is severely overbroad.

The law defines “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” Yet in our highly automated world, this language could cover innumerable activities that the legislation’s authors may not have considered “bot” activity.

People who work in digital fields like social media, for instance, often use automated processes to complete simple, repetitive tasks that would otherwise take up valuable time. Many social media managers use a feature, native to website platforms and media-management tools, that automatically tweets new posts the moment they go live on a company’s website. Organizations strapped for time and resources find this feature particularly helpful. If those tweets are commercial or electoral in nature and do not state that they are bot-driven, it is possible that they would violate the law.

What about online store automated chats? These chats mimic the behaviors of customer support staff and exist to help guide users through purchases. Since customer service issues can often be solved without a single human interaction, these “bots” are far more cost-efficient than hiring additional employees. While many automated chat windows inform users they are not human so that users are not confused, some do not. Automated chats that do not disclose this information may violate California’s new law.

In the political realm, an increasing number of campaigns and committees have taken to texting supporters in much the same way they send emails. The goals of these texts vary from fundraising pushes to reminding people to vote, and the messages often include names like “Nancy Pelosi” or “Tom Cotton” to make them appear as though they originated from a big-name sender. While these texts are often approved by the person named, they are not written by the signed sender, and their distribution is automated. Must a text identify itself as bot-sent under these circumstances? Under the new law, the answer is likely yes.

While all of these methods can be used nefariously, they are simply tools that can be used to achieve innocent ends. One can send an automated email to sell fraudulent goods or to sell a legitimate product. An automated campaign tweet can attempt to rally support or spread false news. Because the law as written casts such a wide net, it may be used to capture people who use bots for benign purposes.

Moreover, this law would not target the heart of the issues it purportedly addresses — fake news and fraudulent advertising. Fake news requires a human hand in the generation process because the stories are designed for shock value, often in a way that ties in current events, and thus may not be covered by the law. The concern with advertising bots relates to an incident where Russian bots tricked marketers into buying millions of dollars of video ads that never ended up acquiring real views. Yet it is unclear whether the law would substantially impede this sort of activity. In this instance, the bots did not interact with users but merely mimicked web surfers and gave the videos “views.” The law does not apply to bots that do not communicate with real people.

At best, the law will sit and collect dust. At worst, it will sweep many online stores, political committees and candidates into the justice system abyss. In practice, as with many vague laws, officials will likely apply it only to disfavored companies and groups.

Even if the law ends up requiring accounts that do spread fake news and false advertisements to be labeled as “bots,” the law will also apply to many legitimate publications and companies. This means that both legitimate and illicit automated systems will all have the “bot” label. And if everyone online is a bot, then nobody is a bot.

The author is a digital media manager and a fellow at the R Street Institute, a free-market think tank in Washington, D.C.