Methodology

How AnswerMonk measures AI search visibility across ChatGPT, Claude, Gemini, and Perplexity.

Overview

AnswerMonk runs structured prompt sets across four major AI engines and records which brands appear, their position, and the sources cited. Rankings are derived from appearance frequency — not paid placement or manual curation.

Prompt Design

Intent-based prompt sets

Each segment is tested with 18 intent-based queries spanning awareness, consideration, and decision stages. Prompts are written to reflect real buyer language — not keyword-stuffed test queries.
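As an illustrative sketch (not AnswerMonk's actual code), an 18-query prompt set split evenly across the three funnel stages might be structured like this; the `Prompt` type, the 6-per-stage split, and the template wording are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    stage: str   # "awareness", "consideration", or "decision"
    text: str    # phrased in real buyer language, not keyword-stuffed

def build_prompt_set(segment: str) -> list[Prompt]:
    # Hypothetical split: 6 prompts per stage, 18 total per segment.
    templates = {
        "awareness": [f"What is {segment} and how does it work? (variant {i})" for i in range(6)],
        "consideration": [f"Best {segment} options compared (variant {i})" for i in range(6)],
        "decision": [f"Which {segment} provider should I choose? (variant {i})" for i in range(6)],
    }
    return [Prompt(stage, t) for stage, texts in templates.items() for t in texts]

prompts = build_prompt_set("example segment")
```

In practice each variant would be a distinct hand-written query rather than a template, but the shape of the set is the same.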

Engine rotation

The same prompts are run across ChatGPT, Claude, Gemini, and Perplexity in the same cohort window. This captures engine-specific biases and surfaces brands that appear consistently vs. those that rely on a single engine.
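The rotation can be sketched as a loop over engines within one cohort window; `run_prompt` is a hypothetical helper (it stands in for whatever client actually queries each engine) that returns the brands mentioned in one response:

```python
ENGINES = ["chatgpt", "claude", "gemini", "perplexity"]

def run_cohort(prompts, run_prompt):
    # Map each brand to the set of engines it appeared on
    # when the same prompts were run across all four engines.
    coverage = {}
    for engine in ENGINES:
        for prompt in prompts:
            for brand in run_prompt(engine, prompt):
                coverage.setdefault(brand, set()).add(engine)
    return coverage

def consistent_brands(coverage, min_engines=3):
    # Brands appearing on several engines, vs. those relying on one.
    return {b for b, engines in coverage.items() if len(engines) >= min_engines}
```

The `min_engines=3` threshold is an assumption for illustration; the point is that engine-specific bias only becomes visible when identical prompts are run everywhere in the same window.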

Scoring

| Signal | Weight | What it captures |
| --- | --- | --- |
| Appearance rate | +2 | % of prompts the brand appeared in (primary signal) |
| Authority source present | +2 | A recognised authority domain cited the brand |
| Repeated appearance (3+ prompts) | +2 | Consistency across prompt variants |
| Structured entity signals | +1 | Brand has extractable metadata across engines |
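The weights in the table translate directly into a sum; a minimal sketch, assuming the four signals arrive as pre-computed booleans and counts (the parameter names are not from AnswerMonk):

```python
def evidence_score(appeared: bool, authority_cited: bool,
                   appearance_count: int, has_entity_signals: bool) -> int:
    score = 0
    if appeared:                 # Appearance rate: primary signal
        score += 2
    if authority_cited:          # Authority source present
        score += 2
    if appearance_count >= 3:    # Repeated appearance across 3+ prompts
        score += 2
    if has_entity_signals:       # Structured entity signals
        score += 1
    return score
```

A brand hitting every signal scores 7; a brand with a single appearance and nothing else scores 2.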

Publication threshold

A query page is eligible for publication only when evidence_score ≥ 3 AND brand_count ≥ 3. Pages that fail the gate are kept as drafts and may be published manually with a logged reason.
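The gate itself is a simple conjunction of the two thresholds stated above:

```python
def publishable(evidence_score: int, brand_count: int) -> bool:
    # Both conditions must hold for automatic publication;
    # pages that fail stay as drafts (manual override is logged).
    return evidence_score >= 3 and brand_count >= 3
```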

Authority domains

AnswerMonk recognises a curated set of authority domains — regulatory bodies, established review platforms, government health authorities, and major media — as high-signal citation sources. These include dha.gov.ae, g2.com, reddit.com, capterra.com, haad.ae, and others.
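Checking a citation against the curated list comes down to matching the URL's hostname against the domain set, including subdomains; a sketch using only the domains named above (the full curated set is larger):

```python
from urllib.parse import urlparse

# Subset of the curated authority list named in this methodology.
AUTHORITY_DOMAINS = {"dha.gov.ae", "g2.com", "reddit.com", "capterra.com", "haad.ae"}

def is_authority_citation(url: str) -> bool:
    # Match the exact domain or any subdomain of it (e.g. www.g2.com).
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in AUTHORITY_DOMAINS)
```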
