Paul Haahr, a lead search ranking engineer at Google, said on Twitter that there haven't been many changes to Google's algorithms around duplicate content detection and clustering in the search results. He said, "we make improvements to our code over time, including duplicate detection." "But that's been mostly stable for years," he added.
.@ajkohn @methode Obviously, we make improvements to our code over time, including dup detection. But that's been mostly stable for years.
— Paul Haahr (@haahr) March 30, 2017
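For a sense of what "dup detection" can involve, here is a minimal sketch of one common technique, shingling plus Jaccard similarity, for flagging near-duplicate pages. This is purely illustrative; Google's real duplicate-detection pipeline is not public, and the k and threshold values below are made up.

```python
# Illustrative sketch only: a generic shingling + Jaccard-similarity check
# often used to flag near-duplicate documents. Google's actual system is
# not public; k and threshold here are hypothetical values.

def shingles(text: str, k: int = 5) -> set[str]:
    """Break text into overlapping k-word shingles (word n-grams)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: shared shingles over total distinct shingles."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def near_duplicates(doc_a: str, doc_b: str, threshold: float = 0.8) -> bool:
    # Pages whose shingle sets overlap heavily get clustered together,
    # and only one representative is shown in the results.
    return jaccard(shingles(doc_a), shingles(doc_b)) >= threshold
```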
We've covered many of the unconfirmed changes around clustering and duplicate detection within the Google search results going back to before 2007, as well as some confirmed ones, including in 2012 and possibly in 2014.
I have not seen people talk much about how Google filters and clusters those results in a while, which makes sense since Paul from Google said they really haven't changed much there in "years."
Gary Illyes from Google also happened to post a DYK explaining that it is rare to see more than two results from the same domain. But if you do, it is because the other results for that query score much lower. Here is Gary's tweet:
DYK usually when you see more than 2 results from the same site in the SERP, that’s because other results score much lower for the query? pic.twitter.com/Bz4DPaR8PC
— Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 30, 2017
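Gary's description maps to a simple diversity filter: cap each site at two results, letting a third (or more) through only when every competing result from other sites scores much lower. Here is a hedged sketch of that idea; the 2x "margin" multiplier and the scores are hypothetical illustrations, not anything Google has published.

```python
# Illustrative sketch of the behavior Gary describes: at most two results
# per site in the SERP, with an exception when rival sites score far lower.
# per_site_cap and margin are made-up parameters.

from collections import Counter

Result = tuple[str, str, float]  # (url, site, relevance score)

def diversify(ranked: list[Result], per_site_cap: int = 2,
              margin: float = 2.0) -> list[Result]:
    """ranked must be sorted by score, highest first."""
    shown: list[Result] = []
    count: Counter[str] = Counter()
    for i, (url, site, score) in enumerate(ranked):
        if count[site] < per_site_cap:
            count[site] += 1
            shown.append((url, site, score))
            continue
        # Best remaining candidate from any *other* site:
        rival = max((s for _, other, s in ranked[i + 1:] if other != site),
                    default=0.0)
        # Over-cap results appear only when rivals score much lower.
        if score > rival * margin:
            count[site] += 1
            shown.append((url, site, score))
    return shown

# Example: a.com earns a third slot because b.com scores far lower.
serp = [("a.com/1", "a.com", 9.0), ("a.com/2", "a.com", 8.5),
        ("a.com/3", "a.com", 8.0), ("b.com/1", "b.com", 3.0)]
print(diversify(serp))  # all four results survive the filter
```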