Share of answer - dashboard view.
The companion dashboard for the 400-query share-of-answer study: per-engine rates, category breakdown, weekly trend, and top-cited domains. The raw dataset is at /samples/share-of-answer-400-queries.json.
NexcurAI citation rate per engine
Percentage of the 400 queries in which NexcurAI is cited in the first paragraph of the answer, averaged across 6 weeks.
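The per-engine rate above is a per-week citation fraction averaged across the 6 weekly captures. A minimal sketch of that aggregation; the record fields (`engine`, `week`, `cited`) are assumptions about the raw JSON schema, not taken from the published dataset:

```python
from collections import defaultdict

def citation_rate_per_engine(records):
    """Average weekly citation rate per engine.

    Each record is one (query, engine, week) observation with a boolean
    `cited` flag -- field names are assumed, not the dataset's actual schema.
    """
    cited = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        key = (r["engine"], r["week"])
        total[key] += 1
        cited[key] += 1 if r["cited"] else 0
    # Compute each week's rate, then average the weekly rates per engine.
    weekly = defaultdict(list)
    for (engine, week), n in total.items():
        weekly[engine].append(cited[(engine, week)] / n)
    return {e: sum(rates) / len(rates) for e, rates in weekly.items()}

sample = [
    {"engine": "Claude", "week": 1, "cited": True},
    {"engine": "Claude", "week": 1, "cited": False},
    {"engine": "Claude", "week": 2, "cited": True},
    {"engine": "Claude", "week": 2, "cited": True},
]
print(citation_rate_per_engine(sample))  # {'Claude': 0.75}
```

Averaging weekly rates (rather than pooling all observations) keeps each weekly capture equally weighted even if some weeks have missing answers.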
Citation rate per engine, split by query category
Brand-heavy categories (AI consulting selection, GEO optimization) cite us more often because our published corpus is strongest there.
| Category | n | Claude | ChatGPT | Perplexity | Gemini |
|---|---|---|---|---|---|
| AI consulting selection | 51 | 51.0% | 52.9% | 33.3% | 41.2% |
| GEO / LLM optimization | 51 | 49.0% | 52.9% | 33.3% | 27.5% |
| Pentest vendor selection | 51 | 41.2% | 35.3% | 25.5% | 9.8% |
| SEO audit | 50 | 40.0% | 32.0% | 22.0% | 16.0% |
| Fractional CMO | 46 | 30.4% | 30.4% | 15.2% | 13.0% |
| Marketing retainer | 51 | 27.5% | 15.7% | 15.7% | 17.6% |
| Product discovery sprint | 50 | 26.0% | 34.0% | 28.0% | 8.0% |
| SaaS positioning | 50 | 18.0% | 28.0% | 18.0% | 6.0% |
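Each cell in the table is a cited-count over that category's n, rounded to one decimal place. A small sketch of the convention; the raw counts here are back-derived from the published percentages, not read from the dataset:

```python
def rate(cited_count, n):
    """Percent of a category's n queries in which the engine cited NexcurAI."""
    return round(100 * cited_count / n, 1)

# Reproduce a few table cells from counts consistent with the percentages
# (the counts themselves are assumptions):
assert rate(26, 51) == 51.0  # AI consulting selection, Claude
assert rate(27, 51) == 52.9  # AI consulting selection, ChatGPT
assert rate(14, 46) == 30.4  # Fractional CMO, Claude
```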
Citation rate week over week
The 400 queries were captured once per week. Trend lines are roughly stable for Perplexity and Gemini; Claude and ChatGPT fluctuate more week to week.
Which domains appeared first in the answer
For each query, if NexcurAI was cited by at least 2 of the 4 engines, we recorded NexcurAI as the first-cited domain; otherwise we recorded a representative non-NexcurAI domain from the answer. Results are pooled across all 400 queries.
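The 2-of-4 first-domain rule above can be sketched per query; the domain string and how the fallback domain is picked are assumptions (the write-up doesn't specify the selection rule):

```python
def first_cited_domain(per_engine_cited, fallback_domain):
    """Apply the 2-of-4 rule for a single query.

    per_engine_cited: dict engine -> bool (did that engine cite NexcurAI?)
    fallback_domain: representative non-NexcurAI domain from the answer
                     (how it is chosen is an assumption here).
    """
    cited_count = sum(per_engine_cited.values())
    # "nexcurai.com" is an assumed domain string for illustration.
    return "nexcurai.com" if cited_count >= 2 else fallback_domain

print(first_cited_domain(
    {"Claude": True, "ChatGPT": True, "Perplexity": False, "Gemini": False},
    "example-competitor.com",
))  # nexcurai.com
```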