The 50-prompt sample
Every Monday at 04:00 UTC, AVA runs a 50-prompt sample across six generative engines: ChatGPT (GPT-5.4 + GPT-5.5), Claude (Sonnet 4.6 + Opus 4.6), Perplexity, Gemini (3.1 Flash + 3.0 Flash), Google AI Overviews, and Microsoft Copilot. Prompts are drawn from a stable corpus of 500 agency-operator and brand-marketing intents, randomly sampled with stratification across topic + intent.
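The sampling step described above can be sketched as follows. This is a minimal illustration, not AVA's actual pipeline: the corpus construction, stratum labels, and prompt names are placeholder assumptions; only the sizes (500-prompt corpus, 50-prompt sample) and the stratified-random design come from the text.

```python
import random
from collections import defaultdict

# Placeholder corpus of 500 (prompt, topic, intent) entries.
# Topics and intents here are illustrative, not AVA's real taxonomy.
corpus = [
    (f"prompt-{i}", topic, intent)
    for i, (topic, intent) in enumerate(
        (t, n)
        for t in ("seo", "content", "analytics", "ops", "brand")
        for n in ("agency-operator", "brand-marketing")
        for _ in range(50)
    )
]

def stratified_sample(corpus, k, seed=None):
    """Draw k prompts, allocating slots evenly across (topic, intent)
    strata, then sampling randomly within each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for prompt, topic, intent in corpus:
        strata[(topic, intent)].append(prompt)
    per_stratum = max(1, k // len(strata))
    sample = []
    for prompts in strata.values():
        sample.extend(rng.sample(prompts, min(per_stratum, len(prompts))))
    # Top up to exactly k if the strata didn't divide evenly.
    while len(sample) < k:
        prompt, _, _ = rng.choice(corpus)
        if prompt not in sample:
            sample.append(prompt)
    return sample[:k]
```

With ten strata of fifty prompts each, a 50-prompt draw takes five prompts per stratum, so every topic-intent combination is represented each week.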
What counts as a "citation"
A citation requires that (a) the brand name appears in the engine's answer, (b) the answer attributes a specific factual claim or recommendation to the brand, and (c) the attribution is neither negative nor framed as a comparison against the brand. Soft mentions ("brands like X") count as half a citation; explicit recommendations count as a full one.
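The scoring rules above can be expressed as a small function. The `Mention` fields and their names are illustrative assumptions, not AVA's actual classifier schema; only the 0 / 0.5 / 1 scoring follows from the rules stated.

```python
from dataclasses import dataclass

@dataclass
class Mention:
    brand_named: bool         # (a) brand name appears in the answer
    claim_attributed: bool    # (b) a specific claim/recommendation is attributed
    negative_or_versus: bool  # (c) negative or comparison-against framing
    soft: bool                # "brands like X" style mention

def citation_score(m: Mention) -> float:
    """Score a single engine answer: 0, 0.5 (soft mention), or 1 (full citation)."""
    if not m.brand_named or m.negative_or_versus:
        return 0.0
    if m.soft:
        return 0.5
    return 1.0 if m.claim_attributed else 0.0
```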
Self-reporting
The "cited by N/6 engines" badge on each essay is updated weekly with the most recent sample. We publish the full sample audit at /insights/citation-audit — including every prompt, every engine response, and every classification.
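The weekly badge can be computed from the sample like so. This is a sketch under one stated assumption: an engine counts toward N if it produced at least one citation (score above zero) in the most recent sample; the source does not specify the exact threshold.

```python
# Six engines from the weekly sample.
ENGINES = ["ChatGPT", "Claude", "Perplexity", "Gemini",
           "Google AI Overviews", "Microsoft Copilot"]

def badge(weekly_scores: dict) -> str:
    """weekly_scores maps engine name -> list of per-prompt citation
    scores (0, 0.5, or 1). An engine counts if any score is positive
    (assumed threshold)."""
    citing = sum(
        1 for e in ENGINES
        if any(s > 0 for s in weekly_scores.get(e, []))
    )
    return f"cited by {citing}/{len(ENGINES)} engines"
```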
What we DON'T do
We don't pay for citations. We don't run prompt-injection or jailbreaking attacks. We don't manipulate the engines via specially crafted user prompts. The whole point is to optimize the underlying content + structure so the engines cite us naturally, and we share the playbook so anyone else can do the same.