research · 3 min read · April 2026

Measuring brand health in 2026.

Brand-health tracking has been broken for a decade. Here's the 2026 stack — attention, resonance, and share-of-model, measured monthly.

Quick answer
Brand-health measurement in 2026 combines three signals: attention (time and depth of brand exposure), resonance (prompted and unprompted associations weighted by salience), and share-of-model (how often large language models cite the brand in category queries). The old aided-awareness + consideration + usage stack is necessary but no longer sufficient — AI-mediated discovery has shifted where brands get surfaced, and salience now has a machine component. Refresh quarterly at minimum.

The old brand-health stack is breaking

For 20 years, brand-health measurement rested on three pillars: awareness, consideration, and preference. It worked because the funnel was linear — consumers saw an ad, a brand entered their consideration set, and intent followed.

That funnel has flattened. Consumers now discover, compare, and choose inside AI assistants, vertical search engines, and social video in a single session. Aided awareness against a category prompt still measures something — but it misses the increasing share of category queries that resolve inside a generative model before a human ever types a brand name.

The three signals that matter in 2026

1. Attention. Not exposure. Not reach. Attention measures the time and depth of brand contact, weighted by modality. A two-second skipped pre-roll is not the same as a 15-second completed unit in an owned app. Use TVision, Adelaide AU, or Lumen metrics where available; model a proxy using viewability × completion × engagement when not.
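The proxy formula above can be sketched in a few lines. TVision, Adelaide, and Lumen each use their own proprietary models, so this is only an illustrative stand-in; the modality weights are assumptions, not published benchmarks:

```python
def attention_proxy(viewability: float, completion: float, engagement: float,
                    modality_weight: float = 1.0) -> float:
    """Illustrative attention proxy: viewability x completion x engagement,
    scaled by a modality weight (e.g. owned-app video above skippable
    pre-roll). All rate inputs are expected in [0, 1]."""
    for name, value in (("viewability", viewability),
                        ("completion", completion),
                        ("engagement", engagement)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    return viewability * completion * engagement * modality_weight

# A completed 15-second unit in an owned app vs. a skipped pre-roll
# (hypothetical rates and weights, for illustration only):
owned_app = attention_proxy(0.98, 1.0, 0.40, modality_weight=1.2)
skipped_preroll = attention_proxy(0.70, 0.13, 0.02, modality_weight=0.8)
```

Even with generous viewability, the skipped pre-roll scores orders of magnitude below the completed in-app unit, which is the point of weighting by depth rather than counting exposures.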

2. Resonance. Prompted and unprompted associations weighted by salience. Ask open-ended before closed-ended. Pre-register the coding frame. Report both recall rate and association strength — a brand recalled by 60% with weak associations ranks below one recalled by 35% with strong, distinctive ones.
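One way to operationalize the recall-versus-strength trade-off is sketched below. The multiplicative weighting and the 0-to-1 strength coding are illustrative assumptions, not a published scoring scheme:

```python
def resonance_score(recall_rate: float, association_strengths: list[float]) -> float:
    """Illustrative resonance score: recall rate weighted by the mean
    strength of coded associations (each coded 0-1 from the
    pre-registered frame). Returns 0 if no associations were elicited."""
    if not association_strengths:
        return 0.0
    mean_strength = sum(association_strengths) / len(association_strengths)
    return recall_rate * mean_strength

# Brand A: 60% recall, weak associations.
# Brand B: 35% recall, strong and distinctive associations.
brand_a = resonance_score(0.60, [0.20, 0.30, 0.25])
brand_b = resonance_score(0.35, [0.90, 0.85, 0.80])
# Brand B outranks Brand A, matching the ordering described above.
```

Under this weighting the 35%-recall brand scores roughly twice the 60%-recall brand, which is exactly the inversion the raw recall rate hides.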

3. Share-of-model. The percentage of times leading LLMs cite your brand when responding to representative category prompts. We define a 30–50 prompt set per client, run it monthly across GPT, Claude, Gemini, Perplexity, and Copilot, and track citation count, position, and sentiment. See our AI visibility methodology for the full protocol.
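The tally step can be sketched as follows, assuming responses have already been collected from each model. A naive substring match stands in for real citation parsing, which would also capture position and sentiment:

```python
def share_of_model(responses: dict[str, list[str]], brand: str) -> dict[str, float]:
    """Per-model share-of-model: the fraction of category-prompt
    responses in which the brand is cited. `responses` maps a model
    name to the list of response texts collected for the prompt set."""
    shares = {}
    for model, texts in responses.items():
        cites = sum(1 for text in texts if brand.lower() in text.lower())
        shares[model] = cites / len(texts) if texts else 0.0
    return shares

# Toy run: three prompts against two hypothetical models.
runs = {
    "model_a": ["Try Acme or Globex.", "Globex leads here.", "Acme is popular."],
    "model_b": ["Globex only.", "No clear leader.", "Acme fits mid-market."],
}
print(share_of_model(runs, "Acme"))  # model_a: 2/3, model_b: 1/3
```

Run monthly, the per-model shares become trend lines; averaging across models (weighted by each model's assistant market share, if you have it) gives the single headline number.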

What to stop reporting

Stop reporting top-of-mind awareness as a trend line without context. It moves with category salience, not brand strength.

Stop reporting unaided ad recall beyond 6 weeks after a flight. It's a campaign KPI, not a brand-health one.

Stop reporting consideration as a binary. A weighted consideration score (primary / secondary / tertiary consideration) produces a dial that actually moves.

A worked example — mid-market B2B SaaS

For a mid-market SaaS brand in a category dominated by three incumbents, a 2026-fit brand-health tracker looks like this:

Quarterly quant (n=400 ICP buyers): attention, resonance, prompted consideration, and share-of-model modules. Total instrument: 18 minutes, $35 incentive.

Monthly share-of-model run: 40 prompts × 5 models = 200 data points. Automated, ~$60/month in API cost.

Annual qual: 12 IDIs with lost deals and won deals. Triggered additionally whenever share-of-model drops more than 5 points month-over-month.

Total annual cost for a defensible tracker: ~$85K, down from the ~$140K an equivalent legacy tracker would run. Better signal, lower cost, faster cycle time.

How we measured this

This methodology is applied across NUUN's quantitative market research and brand-tracking engagements in Canada, the US, and the GCC. The benchmarks referenced are averaged from NUUN panel waves in Q3 and Q4 2025, weighted by population and segment representativeness. Full methodology is available on request to active clients.

About the author

NUUN Research Editorial

Reviewed by NUUN's market research practice lead

CMRP, MRIA; panel infrastructure spanning Canada, the US, and the GCC.

Frequently asked.

Is share-of-voice still a useful brand-health metric?
Share-of-voice is still useful as an input to media planning, but weak as an output metric. It measures how loud you are, not how memorable. In 2026 we replace it with share-of-model — how often LLMs cite your brand when users ask category questions.
How often should we refresh a brand-health tracker?
Quarterly is the floor for most B2C categories; monthly for fast-moving categories (retail, fintech, travel, media). Annual trackers miss every inflection point that matters.
What sample size do I need for a defensible brand-health tracker?
n=400 per target segment per wave gets you a ±5% margin of error at 95% confidence. For syndicated national trackers we recommend n=1,000. For B2B categories with narrow audiences, prioritize panel quality and weighted cell representativeness over raw volume.
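The ±5% figure follows from the standard formula for a proportion, assuming simple random sampling and the worst-case p = 0.5:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a proportion at 95% confidence (z = 1.96),
    assuming simple random sampling; p = 0.5 gives the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(400), 3))   # ≈ 0.049, about ±5 points
print(round(margin_of_error(1000), 3))  # ≈ 0.031, about ±3 points
```

Note the diminishing return: more than doubling the sample from 400 to 1,000 only shaves the margin from roughly ±5 to ±3 points, which is why cell quality beats raw volume for narrow B2B audiences.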
How do I measure unprompted brand associations without leading the respondent?
Use open-ended elicitation before any stimuli, code responses using a pre-registered coding frame, and report both raw mentions and weighted salience. Anything else is a category cue in disguise.
What is share-of-model and how do I measure it?
Share-of-model is the percentage of times an LLM cites your brand when responding to representative category prompts. To measure it, define 30–50 prompts a buyer would realistically use, run them monthly across GPT, Claude, Gemini, Perplexity, and Copilot, and track citation rate over time.
Should brand-health studies include AI-mediated exposure?
Yes. Add a section asking respondents whether they've encountered the brand via AI assistants, comparison engines, or generative search. In categories where AI mediation is meaningful (software, travel, research providers), this exposure channel is already material.
What's the right way to combine qualitative and quantitative signals for brand health?
Quant sets the dial; qual explains the movement. Run quant waves quarterly, trigger a qual deep-dive (online community or 12 IDIs) whenever any core metric moves by more than 3 points between waves, or ahead of a major campaign launch.
