Search Engine Watch just alerted us that Google has updated its human evaluator guidelines. There is so much to talk about: 160 pages of juicy detail, and the first update since Google initially released the guidelines in 2013. Here are some highlights focusing on mobile search.

Human Rating Guidelines

Search ranking depends on more than just mathematical calculations.

Google’s ranking decisions have long involved human evaluators who assess the quality of search results to help determine whether Google’s strategies are helping people find good information. Evaluators rate the quality of pages and sites as high or low, not to determine individual site rankings, but to help Google fine-tune its ranking strategies so that the best information ranks highest and users can easily find what they want.

These updated guidelines were written to instruct evaluators on how to rate pages, sites, images and other content, so they give us good insight into what Google values.

Let’s hear it for quality valuation!

All of us altruistic types have to cheer for anyone in a place of authority who champions quality. In a long career in the technical end of state government web development, I was the artistic oddball who tried to inject high quality and user-friendliness into our deliverables, even though, sadly, high quality was never a stated goal in the binders full of documentation that define each multimillion dollar project. So go, Google, and reward people who are really trying to produce helpful, high quality information and make it easily accessible for people who need it.

Mobile Search

Mobile-friendly search results are the focus.

A good chunk of the guidelines in the “Understanding Mobile User Needs” section deals with how to determine the quality of search results from the perspective of a smartphone user. There are fine degrees of definition about what constitutes a good set of search results, depending on things like:

  • What people do on a smartphone, how they use it, and what their goals are when searching
  • The effect of physical location on search results
  • How to handle queries with multiple meanings, like “Apple”: the company? the fruit? a location or a person?
  • How current and relevant search results are
  • How useful and usable web search result blocks and special content blocks are

Evaluators rate effectiveness by whether needs are met.

Guidelines state:

“Needs Met” rating tasks ask you to focus on mobile user needs and think about how helpful and satisfying the result is for the mobile users.

Ratings are quite specific, on this scale (maybe more detail than most people want to know, but it does demonstrate a level of care that you have to appreciate):

  • Fully Meets
  • Highly Meets
  • Moderately Meets
  • Slightly Meets
  • Fails to Meet

See Google’s Human Evaluator Guidelines here.