Since launching its Search engine in the late 1990s, Google has embraced the challenge of constantly tweaking its algorithms and updating its mechanisms to help users find the pertinent answers they seek as quickly as possible from an ever-expanding abundance of Web pages.

In its most recent endeavor, Google aims to address the high-profile and growing problem of “fake news”: the situation in which, as Ben Gomes, vice president of engineering for Google Search, describes it, “content on the Web has contributed to the spread of blatantly misleading, low-quality, offensive or downright false information.”

Not only were certain sites “increasingly producing content that reaffirms a particular worldview or opinion, regardless of actual facts,” but people also searched for those “rumors, urban myths, slurs or derogatory topics” in volumes large enough to influence the Search suggestions Google offered in offensive and even potentially dangerous ways, according to Danny Sullivan, a widely cited authority on search engine topics.

In response, Google is working to push high-quality content to the top of its Search engine results through a few recently unveiled structural changes, an update internally codenamed Project Owl.

First, the project improves the company’s evaluation methods and refines its ranking algorithms to put an emphasis on surfacing more authoritative content, according to Gomes. This should help prevent the Search engine from returning clearly misleading or offensive content for the small fraction of queries that produce it – such as questions about whether the Holocaust actually happened showing up in autocomplete predictions, or denial sites inundating search results. Google also adjusted its ranking signals and updated its Search Quality Rater Guidelines so its evaluators, who assess the quality of Google’s Search results, have clearer criteria for flagging low-quality webpages and for judging when suggestions should be removed. While Google’s human evaluators can’t change Search results directly, their reports “are supposed to help train the algorithms to better weed out the hateful or misleading stuff Google wants to downgrade,” according to an article on Fortune.com.

Google’s Project Owl also enhances users’ ability to provide direct feedback about the content appearing in Featured Snippets or as autocomplete predictions. The feedback forms readily available for both features now have clearly labeled categories, so users can directly flag content they find unhelpful, hateful, racist, offensive, vulgar, dangerous, misleading or inaccurate. Users can also leave comments or suggestions if they wish.

Finally, Google is trying to provide greater transparency about its products. According to Gomes, the company has been working to figure out why Search would occasionally return offensive or disturbing predictions – such as “Why are women dumb?,” “Did the Holocaust happen?,” and the like. As the company updates its content policies to help improve the situation, it wants to keep users informed about the changes. The new policy has been published to Google’s Help Center, and even more details can be found on the recently updated How Search Works site.

Jordan is the Manager of Content & Product Marketing at UpCity. With almost a decade of experience designing websites and writing copy, Jordan has helped countless brands find their voice, tell their story, and connect with real people.