Google reveals how reviews are scrutinised on Maps

Linda D. Garrow

Google has explained exactly how reviews are moderated on its Maps service in a detailed blog post, stressing that much of the “work to prevent inappropriate content is done behind the scenes.” The post explains what happens when a user posts a review for a business, such as a restaurant or a local shop, on Maps, and outlines the measures taken to ensure that fake or abusive reviews do not go up. In the past, Google has also explained how recommendations work on YouTube.

The post has been written by Ian Leader, Group Product Manager, User Generated Content at Google. “Once a policy is written, it’s turned into training material — both for our operators and machine learning algorithms — to help our teams catch policy-violating content and ultimately keep Google reviews helpful and authentic,” Leader wrote.

According to the company, the moment a review is written and posted, it is sent to the company’s “moderation system” to check for policy violations. Google relies on both machine-learning-based systems and human reviewers to handle the volume of reviews it receives.

The automated systems are “the first line of defense because they’re good at identifying patterns,” explains the blog post. These systems look for signals indicating that content is fake or fraudulent, and remove it even before it goes live. The signals include whether the content contains anything offensive or off-topic, and whether the Google account posting it has a history of suspicious behaviour.
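To make the idea of a signal-based first pass concrete, here is a loose sketch of such a check. The signal names, word list, and rules below are invented for illustration; Google's actual systems are machine-learning models, not simple rule lists like this.

```python
# Hypothetical sketch of a first-pass automated review check.
# The term list and boolean account flag are placeholders invented
# for illustration; real systems score many signals with ML models.

OFFENSIVE_TERMS = {"offensive_term_example"}  # placeholder word list

def first_pass_check(review_text: str, account_flagged: bool) -> str:
    """Return 'reject' if an obvious policy signal fires, else 'pass'."""
    words = set(review_text.lower().split())
    if words & OFFENSIVE_TERMS:
        return "reject"   # content contains a disallowed term
    if account_flagged:
        return "reject"   # account has a history of suspicious behaviour
    return "pass"         # review proceeds toward going live

print(first_pass_check("great food and friendly service", account_flagged=False))
```

A review that clears every signal would then be published within seconds, per the blog post, while a rejected one never goes live.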

They also look at the place the review is being posted about. Leader explains this is important because an “abundance of reviews over a short period of time” could indicate fake reviews being posted. Another scenario is if the place in question has received attention in the news or on social media, which could also encourage people to “leave fraudulent reviews.”

However, training machines also requires maintaining a delicate balance. An example given is the word “gay”, which is not allowed in Google reviews when used as a term of abuse. But Leader explains that if Google teaches its “machine learning models that it’s only used in hate speech, we might erroneously remove reviews that promote a gay business owner or an LGBTQ+ safe space.”

That’s why Google has “human operators” who “regularly run quality tests and complete additional training to remove bias from the machine learning models.”

If the systems find “no policy violations, then the review goes live within a matter of seconds.” However, Google claims that even after the review is live their systems “continue to analyse the contributed content and watch for questionable patterns.”

These “patterns can be anything from a group of people leaving reviews on the same cluster of Business Profiles to a business or place receiving an unusually high number of 1 or 5-star reviews over a short period of time,” according to the blog.
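One of the patterns described, an unusually high number of 1- or 5-star reviews arriving in a short window, can be sketched as a simple burst detector. The threshold and time window below are invented for illustration; Google does not publish its actual detection logic.

```python
from datetime import datetime, timedelta

# Hypothetical burst detector for extreme-rating reviews.
# WINDOW and THRESHOLD are illustrative values, not Google's.
WINDOW = timedelta(days=1)
THRESHOLD = 20

def suspicious_burst(reviews):
    """reviews: list of (timestamp, star_rating) tuples for one business.

    Flags the business if THRESHOLD or more 1- or 5-star reviews
    land within any WINDOW-sized span of time.
    """
    extremes = sorted(t for t, stars in reviews if stars in (1, 5))
    for i, start in enumerate(extremes):
        count = sum(1 for t in extremes[i:] if t - start <= WINDOW)
        if count >= THRESHOLD:
            return True
    return False

base = datetime(2024, 1, 1)
burst = [(base + timedelta(minutes=i), 5) for i in range(25)]   # 25 five-star reviews in 25 minutes
steady = [(base + timedelta(days=i), 5) for i in range(10)]     # one review per day
print(suspicious_burst(burst), suspicious_burst(steady))
```

A real system would presumably compare each business against its own historical baseline rather than a fixed threshold, but the sliding-window idea is the same.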

The team also “proactively works to identify potential abuse risks, which reduces the likelihood of successful abuse attacks.” One example is an upcoming event such as an election. The company then puts in place “elevated protections” for places associated with the event and other nearby businesses, and will “monitor these places and businesses until the risk of abuse has subsided.”
