In a fake internet, does Google do enough to fight disinformation?
Fake news. Internet bubbles. Bots. Spam. Click bait. Troll farms. Fake comments. Fake reviews. Fake likes. Fake analytics. Fake views. Fake clicks.
This is what much of the internet looks like these days.
Whatever happened to being honest and the good old fashioned truth?
"The truth is out there!" people used to say.
"The truth shall set you free!" people used to believe.
But as it turns out, the truth means little on the internet in 2019.
Consider this sobering study published by MIT researchers last year, which analyzed over 126,000 stories shared by roughly 3 million Twitter users over the platform's entire history.
It was the biggest study of fake news of its kind and what did it conclude?
Falsehoods almost always beat out the truth on Twitter.
Given a choice between a false story and a true story, people are more likely to tweet out the false one.
On average, a false story reaches 1,500 people six times faster than a true one.
By every common metric, falsehood consistently outperforms the truth online.
Or consider the rise of deepfakes. A few months back Buzzfeed produced a video of Obama giving a speech that never actually happened.
The implications of this technology are dark. What happens to the truth in a few years when doctored videos really become indistinguishable from actual footage and people can't tell which is which?
Bloomberg did a good video on this topic you can watch below.
Yes, things are getting weird on the internet. Look at this click farm in China: thousands of phones lined up in rows. You pay these operations for clicks, and in return you get fake views, fake likes and manufactured viral exposure. It doesn't matter that nobody actually read your article.
As NY Mag puts it:
"Fake people with fake cookies and fake social media accounts, fake moving their fake cursors, fake clicking on fake websites...where the only real things [are] the ads."
Google and disinformation
So in this age of a fake internet, what is Google doing to combat disinformation?
Google released a white paper addressing just this issue ahead of the Munich Security Conference in Europe last week.
In the paper, Google sought to quell governments' growing concern that the major tech giants are not doing enough to control what information gets shared on their platforms.
The paper provides a rundown of official Google policy for its main products, notably Google Search, Google News, YouTube and Google Ads.
We are going to focus on how Google combats disinformation for its advertising products.
1. Google disfavors copied or unoriginal content
Google disables ads on websites that offer little or no value, such as a site that consists primarily of news articles copied from other sources.
The company also refuses to work with websites that include excessive advertising and offer the user little real value.
In this vein, Google has steadily increased the number of websites it blocks for low-quality content: roughly 10,000 in 2016 and 12,000 in 2017.
Why you should care: As a webmaster, it can be tempting to boost traffic to your website by taking shortcuts on content. Republishing other people's work on a daily basis to get more traffic might work in the short run, but in the long run Google will punish you for it.
2. Google does not allow product misrepresentations
It's never a good strategy to misrepresent your product.
Red Bull found that out the hard way when it was forced to shell out $13 million in a class action lawsuit because the energy drink did not give its customers wings, as claimed in its ads.
Ditto for these brands.
Google does the same with its advertising. If your ad misrepresents the product or service being sold, then your ad risks being banned.
What counts as misrepresentation in a Google Ad?
1. Hiding important information: If you try to hide the full amount a user will pay for your product in your ad, then you could end up seeing your ad banned.
2. Promising products that don't exist: For example, if you sell protein powder "from $40" but then don't offer a $40 protein powder on your website.
3. Suggesting improbable results: get rich quick schemes, miracle medicines. These kind of products could be banned by Google. The company suggests offering a clear refund policy for any promised results.
4. Made up endorsements: Claiming that a celebrity endorsed your product when they didn't is not allowed.
5. Promotions that are not relevant to the landing page: If your ad bears no resemblance to the landing page it directs users to, it risks being disapproved.
If you violate these policies you run a number of risks, including:
- your ad or extension will be disapproved
- your account may be suspended for repeat violations
- your remarketing list may be disabled
- you may undergo a compliance review
Google made a number of changes to its Google Ads and AdSense policies in response to interference in the 2016 US elections. Since 2018, Google has not allowed ads that direct content about politics, social issues or matters of public concern to users in a country other than the advertiser's own, or that misrepresent or conceal the country of origin or other material details about the advertiser or organization.
Why should you care? Obviously it can be tempting when writing ads to boast about your product. And a little boasting is completely fine. The problem occurs when outrageous claims are made, and there are more than a few examples you can find of ads being banned. It's best to be honest about what your business offers.
3. Google bans ads on inappropriate content
Google does not allow monetization of shocking, dangerous or inappropriate content, especially content that exploits recent tragic events, such as a school shooting or natural disaster.
While this policy may not seem controversial, it has been criticized by a few YouTubers and content creators who make videos on political events and then see their content demonetized because the topic is not appropriate for ads.
Here is a response from a small YouTuber who creates content on political issues and has seen their videos demonetized on YouTube.
4. Google combats political influence operations
All the tech giants are now increasingly sensitive to being accused of having hosted fake accounts that ran fake news stories on their platforms. Google is no exception.
As such, the search engine giant claims to "engage independent cybersecurity experts and top security consultants to provide us with intelligence on these operations. Actors engaged in these types of influence operations violate our policies and we swiftly remove such content from our services and terminate these actors' accounts."
Google does not go into details about what this means, and in fact has been criticized by the European Union (along with Twitter and Facebook) that it has not lived up to its commitments.
5. Google protects election integrity
In response to controversies that arose during the 2016 presidential election, Google now requires additional verification for anyone wanting to purchase an election ad on Google in the United States.
This verification process requires the purchaser to prove they are a US citizen or lawful permanent resident.
In terms of the ad itself, Google says the ad copy must include a clear statement of who is paying for an election ad.
The company also released a transparency report which describes how much money was spent on election-related ads on its platform, as well as a searchable database of all election ads.
Google plans to extend these measures beyond the United States in 2019, including for European Parliamentary Elections and upcoming elections in India.
What does this look like in practice?
For example, ads like these are now required to include who the sponsor is.
6. Google uses enforcement teams to monitor ads
Google says it employs enforcement teams to ensure that content on its advertising platform is consistent with the company's policies. Their methods include machine learning, human review and other technological measures.
If certain ads are found to be in violation of the company's policies, the company responds by:
- blocking the ad from appearing or removing the ad from a publisher page
- disabling accounts in case of repeated violations.
7. Giving advertisers control
There are certain kinds of content that some advertisers do not want to be associated with.
As such, advertisers on Google have the ability to exclude whole categories of topics where they would not like their ads to be shown.
These could include categories like politics, fashion, fitness etc.
For example, say an advertiser sells wood products but does not want to be associated with deforestation. They could choose not to have their ads shown on YouTube videos like these, where trees are being cut down.
Publishers likewise may not want ads to be shown on their content from companies they do not believe in. As such, they have the ability to block these ads from being shown on their videos or websites.
Is Google doing enough to fight disinformation?
This is where things get tricky. On the one hand, Google has been criticized by the European Union for failing to live up to commitments the company made in October to help curb the spread of fake news.
Google at the time partnered with Facebook and Twitter to combat fake news in the run up to elections in the EU.
While Facebook has taken the brunt of criticism for the content it allows users to share on its platform, Google has been able to avoid the spotlight largely because people mostly associate Google with search, where there is not an opportunity to share content the way there is on Facebook.
However, when it comes to YouTube, things change drastically. One study has linked the rise in the number of flat earth conspiracy theorists directly to YouTube. The study found that people were largely uninterested in the theory until they watched several YouTube videos on the topic that changed their minds.
And we don't have to mention the number of conspiracy videos there are on YouTube related to 9/11, the moon landing and JFK.
In response to these criticisms, YouTube said that it changed its algorithm in January to stop conspiracy videos from being recommended to users.
But there are two problems with this approach. First, is this censorship? If a bunch of people want to believe that 9/11 was a hoax, why not let them watch videos on the topic?
And second, people who rely on YouTube as a source of income and have posted videos related to conspiracies in the past may find their videos downgraded, which could affect how much money they earn, as well as how much Google makes from them.
This could keep Google from pushing back too hard on misinformation, simply because it will mean less revenue for the search giant.
In the end, it goes back to who is to blame for all the fake news on the internet.
If people like sharing fake articles, how much can tech giants like Google really do?