Google's Project Owl: An Initiative Against Fake News

By: Rank Media

From presidential candidates to the everyday Joe, it seems like use of the phrase “fake news” has risen exponentially over the past year. Politicians from both major US political parties have been quick to dismiss disparaging reports and to levy the “fake news” label against opponents in order to discredit them. However, while usage of the term “fake news” only skyrocketed after Donald Trump assumed office (as exhibited below), social media platforms and search engines have been infected (for lack of a better term) with misleading content for years, with agenda-driven websites pumping out content designed to mislead their followers.

[Chart: “fake news” search trends. Note the correlation between the rise in searches for “fake news” and the number of times Donald Trump has called news organizations “fake news” on Twitter.]

While it may be difficult to verify the authenticity of content shared within social media circles and indexed by search engines, tech giants are doing their best to curb the spread of misleading or offensive content. Enter Google.

Google’s Project Owl

Due to the alarming increase in mendacious content posted online, Google has decided to follow in Facebook’s footsteps by implementing new features that bury fake news within search results. Google’s new initiatives include the following:

Improving Search Rankings

While Google admits that only a small percentage (0.25%) of daily search queries return offensive or deceitful content, it has clearly become an issue that needs to be addressed to improve the quality of search results. As a result, the search engine giant has decided to improve its methods for evaluating content and to implement algorithmic changes that adjust search results based on certain signals. With regards to evaluating content, Google has stated that it will use feedback from real-time experiments, together with updated guidelines for its human search quality evaluators, to gauge the quality of search results and understand whether any misleading or questionable content passes through its filters. These guidelines help evaluators assess content appropriately and flag any web pages that circumvent Google’s policies. On the technical side, the company has stated that adjustments have been made to increase the presence of accurate and authoritative content while demoting spurious and blatantly inaccurate content. Google is still in an experimental phase, so it is expected to take some trial and error to keep specious content from permeating search results.
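To make the idea of signal-based adjustment concrete, below is a deliberately simplified sketch in Python. Google does not publish its ranking signals or scoring, so the Result fields, the authoritativeness value, and the 0.4 weight are all invented for illustration; the point is only that blending a quality signal into the score lets low-quality pages sink even when they match a query well.

```python
# Hypothetical sketch only: Google does not disclose its ranking signals.
# This illustrates the general idea of demoting low-quality results by
# blending a relevance score with an invented "authoritativeness" signal.

from dataclasses import dataclass


@dataclass
class Result:
    url: str
    relevance: float          # 0.0 - 1.0, how well the page matches the query
    authoritativeness: float  # 0.0 - 1.0, invented trust/quality signal


def rerank(results, authority_weight=0.4):
    """Blend relevance with the quality signal so spurious pages sink."""
    def score(r):
        return (1 - authority_weight) * r.relevance + authority_weight * r.authoritativeness
    return sorted(results, key=score, reverse=True)


if __name__ == "__main__":
    candidates = [
        Result("https://example.com/clickbait-hoax", relevance=0.95, authoritativeness=0.10),
        Result("https://example.org/well-sourced-report", relevance=0.85, authoritativeness=0.90),
    ]
    for r in rerank(candidates):
        print(r.url)  # the well-sourced page now outranks the hoax
```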

Direct Feedback

In order to improve the quality of search results, Google has launched direct feedback tools that allow end users to flag content that appears within Google’s Autocomplete feature and Featured Snippets. These feedback tools will allow Google to tweak search results based on user feedback. While the Autocomplete feature is popular among end users, it has led to dubious phrases showing up that may not be fully accurate. Google will now allow users to submit feedback stating whether certain predictions were hateful, sexually explicit, violent and/or offensive, or just flat-out inaccurate. Typically, queries shown in the Autocomplete feature are based on recent trends and rising search terms, so this feedback will allow Google to adjust the predicted searches and remove anything that may be misleading or offensive. As for Featured Snippets, Google will also allow users to submit feedback on whether the content was helpful, offensive, vulgar, etc. As illustrated by Google’s interactive GIF below, the Featured Snippets feature extracts content from websites to best answer questions entered into the search engine.

[Image: Autocomplete feedback form]
[Image: Featured Snippets feedback form]
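As a rough illustration of how that kind of user feedback could feed back into what gets shown, here is a hypothetical Python sketch. The AutocompleteFeedback class, the flag categories, and the removal threshold are invented for this example and are not Google’s actual mechanism:

```python
# Hypothetical sketch: a minimal model of user feedback on Autocomplete
# predictions. Category names and the threshold are invented for
# illustration; they only mirror the kinds of flags described above.

from collections import Counter, defaultdict

FLAG_CATEGORIES = {"hateful", "sexually_explicit", "violent_or_offensive", "inaccurate"}


class AutocompleteFeedback:
    def __init__(self, removal_threshold=3):
        # prediction text -> Counter of flag categories it has received
        self.flags = defaultdict(Counter)
        self.removal_threshold = removal_threshold

    def report(self, prediction: str, category: str) -> None:
        """Record one user flag against a predicted query."""
        if category not in FLAG_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.flags[prediction][category] += 1

    def filter_predictions(self, predictions):
        """Drop predictions whose total flag count crosses the threshold."""
        return [p for p in predictions
                if sum(self.flags[p].values()) < self.removal_threshold]


feedback = AutocompleteFeedback()
for _ in range(3):
    feedback.report("politician x is a criminal", "inaccurate")

print(feedback.filter_predictions(
    ["politician x is a criminal", "politician x net worth"]))
# -> ['politician x net worth']
```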

Greater Transparency

While Google doesn’t want to give away its secret sauce regarding how its algorithm works (and why should it?), the company does understand the need for greater transparency around its search engine. As a result, Google has decided to update its content policies and help center in order to better educate users on how the Autocomplete feature works, as well as on the company’s approach to removing content that may be misleading, offensive, or deceitful. Google has also updated its “How Search Works” website to better educate the masses on how content is crawled, indexed, and then associated with specific search queries.
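As a toy illustration of that crawl, index, and query flow, here is a minimal inverted-index sketch in Python. Real web search involves vastly more (link analysis, freshness, hundreds of signals); this only shows how indexed pages end up associated with search terms:

```python
# Hypothetical sketch of the crawl -> index -> query pipeline, reduced
# to a toy inverted index. URLs and page text are made up.

from collections import defaultdict

# "Crawled" pages: URL -> page text (stand-ins for fetched documents)
crawled_pages = {
    "https://example.com/a": "project owl targets fake news in search",
    "https://example.com/b": "how autocomplete predictions are generated",
}

# Indexing: map each term to the set of URLs containing it
inverted_index = defaultdict(set)
for url, text in crawled_pages.items():
    for term in text.lower().split():
        inverted_index[term].add(url)


def search(query: str):
    """Return URLs containing every term in the query (boolean AND)."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = inverted_index[terms[0]].copy()
    for term in terms[1:]:
        results &= inverted_index[term]
    return results


print(search("fake news"))  # -> {'https://example.com/a'}
```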

Long Term Impact

While Google’s mission is based on providing the most relevant content possible, even the most powerful search engine is not impervious to manipulation of its results. These safeguards will help the search engine serve more accurate results on the first page and weed out fallacious content. With almost 15% of daily search queries being ones Google has never processed before, some specious or inaccurate information is bound to appear in search results. Still, this is an important step in the right direction for Google, which took some heat last year when Holocaust denial websites permeated search results. As we continue the shift to an all-digital age, the ability to filter out fake content will only become more important.