
# Measuring & Monitoring LLM Visibility

## Key Metrics

| Metric | Description | Target |
|--------|-------------|--------|
| **AI Citation Frequency** | How often your brand/content appears in AI responses | Track trend over time; increase month-over-month |
| **Share of Voice (SOV)** | Your citation percentage vs. competitors | Enterprise target: >=15% SOV |
| **Citation Sentiment** | Whether AI represents your brand accurately and positively | >90% accurate, >80% positive |
| **AI-Referred Traffic** | Sessions from AI platforms (tracked via referrer in analytics) | Track growth rate; benchmark against organic |
| **Reference Rate** | How often your brand appears unprompted in AI responses (replaces CTR) | Higher = stronger entity recognition |
| **Cross-Platform Coverage** | Presence across ChatGPT, Perplexity, Google AI, Claude, etc. | Appear on 3+ platforms |

**Important context:** Citation volatility is high — expect 59.3% monthly volatility for Google AI Overviews and 54.1% for ChatGPT. A 40-60% monthly fluctuation is normal. Evaluate trends over quarters, not weeks.
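The SOV and citation-frequency metrics above reduce to simple arithmetic over your test tallies. A minimal sketch; the brand names and counts are illustrative, not measured data:

```python
# Sketch: Share of Voice and month-over-month citation trend
# computed from manual citation tallies (illustrative values).

def share_of_voice(citations: dict, brand: str) -> float:
    """Your citations as a percentage of all tracked citations."""
    total = sum(citations.values())
    return 100.0 * citations.get(brand, 0) / total if total else 0.0

def mom_change(prev: int, curr: int) -> float:
    """Month-over-month percentage change in citation count."""
    return 100.0 * (curr - prev) / prev if prev else float("inf")

january = {"YourBrand": 12, "Mixpanel": 30, "Amplitude": 25, "PostHog": 18}
print(round(share_of_voice(january, "YourBrand"), 1))  # 14.1
print(round(mom_change(12, 15), 1))  # 25.0
```

A 14.1% SOV sits just below the >=15% enterprise target; given the volatility noted above, compare quarterly averages rather than single months.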

## Monitoring Tools

| Tool | Capabilities | Best For |
|------|--------------|----------|
| Otterly.ai | Citation tracking across 6 AI platforms, competitive benchmarking, change alerts | Comprehensive cross-platform monitoring |
| Peec AI | Brand monitoring across ChatGPT, Perplexity, Gemini, Claude, DeepSeek | Multi-platform brand tracking |
| LLMrefs | Generative AI search analytics, LLM SEO tracking | GEO-focused analytics |
| LLM Pulse | AI search visibility tracker | Quick visibility checks |
| Ahrefs Brand Radar | Tracks brand mentions in AI Overviews | Existing Ahrefs users, Google AI focus |
| Semrush AI Toolkit | Perception monitoring across generative platforms | Existing Semrush users, enterprise |
| Profound | Synthetic query testing, strategic keyword injection | Testing specific query strategies |
| Answer Socrates | LLM brand tracker for ChatGPT, Perplexity, Gemini, Claude | Budget-friendly brand monitoring |
| DataForSEO | AI Optimization API for programmatic tracking | Developers building custom dashboards |

## Manual Testing Methods

Automated tools are valuable but manual testing provides qualitative insight that tools miss. Run these tests monthly:

### Brand Awareness Test

Query each platform with these prompts and record the results:

1. "What is [your brand name]?"
2. "Tell me about [your brand name]"
3. "What are the best [your category] tools?"
4. "Compare [your brand] vs [competitor]"
5. "[your category] recommendations for [use case]"

Platforms to test:

- ChatGPT (chat.openai.com) — test with and without web search enabled
- Perplexity (perplexity.ai) — always uses web search
- Claude (claude.ai) — test with web search enabled
- Google (google.com with AI Overview) — search normally, observe AI Overview
- Microsoft Copilot (copilot.microsoft.com) — uses Bing index
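The five prompts above can be templated so each monthly run uses identical wording, which keeps results comparable over time. A minimal sketch; the brand, category, competitor, and use-case values are placeholders:

```python
# Sketch: expanding the five brand-awareness prompt templates
# for a given brand. All filled-in values are hypothetical.

TEMPLATES = [
    "What is {brand}?",
    "Tell me about {brand}",
    "What are the best {category} tools?",
    "Compare {brand} vs {competitor}",
    "{category} recommendations for {use_case}",
]

def build_queries(brand, category, competitor, use_case):
    """Fill each template so the same prompts are reused every month."""
    return [t.format(brand=brand, category=category,
                     competitor=competitor, use_case=use_case)
            for t in TEMPLATES]

queries = build_queries("Acme Analytics", "product analytics",
                        "Mixpanel", "mobile apps")
print(queries[0])  # What is Acme Analytics?
print(len(queries))  # 5
```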

### Competitor Benchmark

For each query above, note:

- Which brands are mentioned (you? competitors?)
- In what position are you mentioned (first? last? not at all?)
- Is the information accurate and current?
- Does the AI link to your site or just mention you?
- Which competitors are cited that you should match?

### Record Template

```
## AI Visibility Test — [Date]

### Query: "What are the best analytics platforms?"

| Platform | Mentioned? | Position | Accurate? | Linked? | Competitors Cited |
|----------|-----------|----------|-----------|---------|-------------------|
| ChatGPT  | Yes       | 3rd      | Yes       | No      | Mixpanel, Amplitude, PostHog |
| Perplexity | No     | N/A      | N/A       | N/A     | Mixpanel, PostHog, Heap |
| Google AI | Yes      | 2nd      | Outdated  | Yes     | Amplitude, Mixpanel |
| Claude   | Yes       | 4th      | Yes       | No      | Mixpanel, Amplitude, PostHog |
```
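Rows recorded this way can be tallied into a cross-platform coverage score against the "3+ platforms" target. A minimal sketch; the values mirror the example table:

```python
# Sketch: tallying cross-platform coverage from recorded test rows.
# Values mirror the example record template (illustrative data).

records = [
    {"platform": "ChatGPT",    "mentioned": True,  "linked": False},
    {"platform": "Perplexity", "mentioned": False, "linked": False},
    {"platform": "Google AI",  "mentioned": True,  "linked": True},
    {"platform": "Claude",     "mentioned": True,  "linked": False},
]

coverage = sum(r["mentioned"] for r in records)  # platforms that mention you
linked = sum(r["linked"] for r in records)       # platforms that also link
print(f"Mentioned on {coverage}/{len(records)} platforms, linked on {linked}")
# Mentioned on 3/4 platforms, linked on 1
```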

## GA4 Configuration for AI Referrer Traffic

Track traffic coming from AI platforms in Google Analytics 4:

### Step 1: Create a Custom Channel Group

In GA4, go to Admin > Data display > Channel groups > Create new group.

Add these referrer-based rules:

| Channel Name | Referrer Source (contains) |
|--------------|----------------------------|
| AI — ChatGPT | chat.openai.com |
| AI — Perplexity | perplexity.ai |
| AI — Claude | claude.ai |
| AI — Google AI | google.com (filter for AI Overview referrals) |
| AI — Copilot | copilot.microsoft.com |
| AI — Meta AI | meta.ai |
| AI — Gemini | gemini.google.com |
| AI — You.com | you.com |
| AI — Phind | phind.com |
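Outside GA4, the same "contains"-style rules can be applied to raw referrer logs. A minimal sketch in Python; google.com is omitted here because it needs the extra AI Overview filtering noted in the table:

```python
# Sketch: classifying a session's referrer into the AI channel names
# above, using host-suffix matching on the referrer domain.

from urllib.parse import urlparse

AI_CHANNELS = {
    "chat.openai.com": "AI — ChatGPT",
    "perplexity.ai": "AI — Perplexity",
    "claude.ai": "AI — Claude",
    "copilot.microsoft.com": "AI — Copilot",
    "meta.ai": "AI — Meta AI",
    "gemini.google.com": "AI — Gemini",
    "you.com": "AI — You.com",
    "phind.com": "AI — Phind",
}

def classify_referrer(referrer_url: str) -> str:
    """Map a referrer URL to an AI channel, or 'Other' if none match."""
    host = urlparse(referrer_url).netloc
    for domain, channel in AI_CHANNELS.items():
        if host.endswith(domain):
            return channel
    return "Other"

print(classify_referrer("https://chat.openai.com/c/abc123"))  # AI — ChatGPT
print(classify_referrer("https://news.ycombinator.com/"))     # Other
```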

### Step 2: Create a Custom Report

Build a custom Exploration report:

- Dimensions: Session source, Session medium, Landing page
- Metrics: Sessions, Engaged sessions, Conversions, Revenue
- Filter: Session source contains any of the AI referrer domains above

### Step 3: Set Up Alerting

Create custom alerts for:

- AI-referred sessions increase >50% week-over-week (opportunity signal)
- AI-referred sessions decrease >50% week-over-week (citation loss signal)
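The two alert rules above amount to a week-over-week percentage check. A sketch with illustrative session counts:

```python
# Sketch: flagging >50% week-over-week swings in AI-referred sessions,
# mirroring the two alert rules above. Counts are illustrative.

def wow_alert(last_week: int, this_week: int, threshold: float = 50.0):
    """Return an alert label when sessions swing past the threshold."""
    if last_week == 0:
        return "opportunity" if this_week > 0 else None
    change = 100.0 * (this_week - last_week) / last_week
    if change > threshold:
        return "opportunity"    # possible new citation
    if change < -threshold:
        return "citation-loss"  # possible lost citation
    return None

print(wow_alert(100, 180))  # opportunity
print(wow_alert(100, 40))   # citation-loss
print(wow_alert(100, 120))  # None
```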

### Alternative: UTM-Based Tracking

If you add UTM parameters to URLs in your schema markup and llms.txt:

```
https://yourdomain.com/docs/api?utm_source=llm&utm_medium=citation
```

**Note:** This only works when AI systems actually link to you (which they do less frequently than they mention you). ChatGPT mentions brands 3.2x more often than it links to them.
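When generating those URLs, the UTM parameters can be appended programmatically rather than by hand. A sketch using only the standard library; it assumes you control the list of URLs that feeds your schema markup and llms.txt:

```python
# Sketch: appending utm_source=llm / utm_medium=citation to a URL
# while preserving any query string it already has.

from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_llm_utm(url: str) -> str:
    """Append the LLM-tracking UTM parameters to a URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({"utm_source": "llm", "utm_medium": "citation"})
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_llm_utm("https://yourdomain.com/docs/api"))
# https://yourdomain.com/docs/api?utm_source=llm&utm_medium=citation
```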

## Key Benchmarks (2025-2026)

These benchmarks help you contextualize your performance:

### Traffic and Conversion

- AI-referred sessions jumped 527% between January and May 2025
- Visitors from LLMs convert 4.4x better than traditional organic visitors
- ChatGPT now refers ~10% of new Vercel signups (up from 1% six months prior)
- Cited pages earn 35% more organic clicks and 91% more paid clicks (halo effect)

### Citation Volatility

- Google AI Overviews: 59.3% monthly volatility
- ChatGPT: 54.1% monthly volatility
- Normal range: 40-60% monthly fluctuation
- This means if you are cited today, there is roughly a 50-60% chance you will NOT be cited for the same query next month — and vice versa
- Evaluate trends over quarters, not individual months

### Cross-Platform Coverage

- Only 11% of domains appear in both ChatGPT AND Perplexity responses
- This means platform-specific optimization is critical — presence on one does not guarantee presence on others
- Each platform has different source preferences (Bing for ChatGPT, Reddit for Perplexity, organic rankings for Google AI)

### Crawl-to-Referral Ratios

- OpenAI: 1,700:1 (1,700 crawl requests per 1 referral visit)
- Anthropic: 73,000:1 (73,000 crawl requests per 1 referral visit)
- Being crawled does not mean being cited; being cited does not mean receiving traffic
- The value of AI visibility is primarily brand mention and authority, not direct traffic
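The ratio itself is simple division over server-log counts, which makes it easy to track per bot. A sketch with illustrative numbers:

```python
# Sketch: crawl-to-referral ratio from server-log counts.
# The counts here are illustrative, not measured values.

def crawl_to_referral(crawl_requests: int, referral_visits: int) -> float:
    """Crawl requests per referral visit (the 1,700:1-style ratio above)."""
    if referral_visits == 0:
        return float("inf")  # crawled but never referred
    return crawl_requests / referral_visits

print(round(crawl_to_referral(85_000, 50)))  # 1700
```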

## Setting Up a Monitoring Cadence

### Weekly (15 minutes)

- Check the GA4 AI referrer traffic report for anomalies
- Review any citation alert notifications from monitoring tools

### Monthly (1 hour)

- Run manual brand awareness tests across all 5 platforms
- Compare citation counts month-over-month in your monitoring tool
- Update the competitor benchmark spreadsheet
- Check for new or lost citations on high-priority queries

### Quarterly (half day)

- Full competitive Share of Voice analysis
- Review citation accuracy across all platforms
- Identify content gaps (queries where competitors are cited but you are not)
- Prioritize content updates based on citation performance
- Review and update schema markup if needed
- Refresh outdated statistics and dates in content
- Update monitoring tool query lists for new products or features

### Annual

- Comprehensive AI visibility audit across all platforms
- ROI analysis: AI-referred traffic value vs. GEO investment
- Strategy revision based on platform algorithm changes
- Update entity profiles (Wikidata, sameAs links, Knowledge Panel)

## Interpreting Results

**You are winning if:**

- Share of Voice >=15% in your category
- AI mentions your brand for your primary use cases
- Citation accuracy is >90%
- AI-referred traffic is growing quarter-over-quarter
- You appear on 3+ platforms for your primary queries

**You need work if:**

- AI does not mention your brand for your primary category
- Competitors are consistently cited where you are not
- AI provides inaccurate information about your brand
- You appear on only 1 platform (usually Google AI)
- AI-referred traffic is flat or declining

**Red flags:**

- AI mentions your brand negatively or inaccurately
- You were previously cited but have been replaced
- Competitors are building Wikipedia/Wikidata presence and you are not
- Your content is older than 12 months and not being refreshed
