This tool takes Google’s helpful content guidelines and poses them as questions for an LLM to answer for each of the given articles: “Does the content provide original information, reporting, research, or analysis?”, “Does the main heading or page title provide a descriptive, helpful summary of the content?”, and so on (twenty criteria).
- Evaluate which of the guidelines your articles satisfy
- Get an average score for all articles (up to 25)
- See the full data (each question per URL, with title and body text)
- Filter, sort, export, and reorder
Articles are crawled with advertools and sent to the OpenAI API, which responds with a simple True/False answer (no mumbo jumbo) per URL, per criterion.
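For anyone curious, here is roughly what that pipeline looks like in code. This is a minimal sketch, not the app’s actual source: the model name, prompt wording, truncation length, and the two sample criteria are all illustrative, and it assumes advertools’ standard crawl output columns (url, title, body_text) and the openai>=1.0 client.

```python
import advertools as adv
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITERIA = [
    "Does the content provide original information, reporting, research, or analysis?",
    "Does the main heading or page title provide a descriptive, helpful summary of the content?",
    # ... the remaining helpful-content questions
]

def audit_articles(urls):
    # Crawl the pages; advertools writes one JSON line per page
    adv.crawl(urls, "articles.jl", follow_links=False)
    pages = pd.read_json("articles.jl", lines=True)

    rows = []
    for _, page in pages.iterrows():
        for question in CRITERIA:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumption: any chat model would do
                messages=[
                    {"role": "system",
                     "content": "Answer with exactly one word: True or False."},
                    {"role": "user",
                     "content": f"{question}\n\nTitle: {page['title']}\n\n"
                                f"{page['body_text'][:4000]}"},  # truncate to stay within context limits
                ],
            )
            rows.append({
                "url": page["url"],
                "criterion": question,
                "answer": response.choices[0].message.content.strip() == "True",
            })
    return pd.DataFrame(rows)
```

A per-article score then falls out of a simple groupby, e.g. `audit_articles(urls).groupby("url")["answer"].sum()`.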
You can test the app here: Google Helpful Content Checker: AI-Powered Content Audit
(The page also includes a short demo video showing how to use the app.)
Any feedback, bugs, suggestions, please let me know.
Thanks!
Cool idea!
I tried it on a few articles I wrote (and got a pretty good score, hopefully!).
I wonder how “too optimistic” this could be; I often find ChatGPT is too nice about content (even more so if I mention that I wrote it) and doesn’t criticize it enough.
But I guess for SEO this is more than enough.
Do you provide more context for the question “Does the content provide substantial value when compared to other pages in search results?”
Thanks for sharing!
@spriteware Thanks!
Congrats on scoring high (according to ChatGPT, at least).
I think the fact that the questions are very specific, and that the answers must be boolean, reduces a lot of the bias. It also helps that there are many questions, several of which overlap in what they cover.
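Just to illustrate the point (this is a hypothetical helper, not the app’s actual code): if you only accept replies that are literally “True” or “False”, anything vague or flattering gets rejected rather than counted.

```python
from typing import Optional

def to_bool(raw: str) -> Optional[bool]:
    """Coerce a model reply to a strict boolean; None means 'invalid, retry or flag'."""
    answer = raw.strip().rstrip(".").lower()
    if answer == "true":
        return True
    if answer == "false":
        return False
    return None
```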
Great catch on the ambiguous question. I believe it asks the LLM to gauge how the article would compare to highly-ranking articles in a similar field. It could probably be improved further, though.
Thank you.