Goals, Not Clicks
Funnels are great at telling a story that isn’t the real one.
“Step 3 drop-off is 47%.”
Okay - but what were users actually trying to do? Did they intend to finish that flow, or were they just browsing? Did they get blocked by a bug, by the UX, or because it wasn’t the right task for them in the first place?
When the unit of analysis is a click, everything looks like a funnel problem.
When the unit of analysis is a goal, prioritization gets simple.
This write-up lays out a quiet shift that changes how growth work gets done: define real user goals, verify completion, then decide - sharpen the path or ship something new.
Start from intent (not events)
Before instrumenting anything, pin down who and why:
ICP / Industry / Plan: early-stage SaaS, SMB vs mid-market, free vs trial vs paid.
Role: admin, operator, manager, exec.
Use case: “Send first campaign,” “Invite the team,” “Export a board,” “Create an automation.”
Then translate that into a plain-language goal:
“Create and schedule the first campaign.”
“Invite two teammates and assign roles.”
“Connect a data source and sync once.”
If intent isn’t explicit, completion rates won’t mean much.
Define “done” like a contract
A goal without proof is just a wish. Give each goal a Done Definition:
Start signal: the first clear action that commits to the goal (e.g., “Clicked New Campaign”).
Completion signal: the irreducible proof (e.g., “Campaign scheduled” event with a valid audience).
Quality bar: lightweight guardrails (e.g., audience ≥ 1, no validation errors).
Timeout window: how long counts as the same attempt.
Now the metric isn’t “page views.” It’s Goal Completion.
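If it helps to see the contract as code, here's a minimal sketch in Python. Everything in it is illustrative - the event names ("campaign_new_clicked," "campaign_scheduled") and guardrail fields are made up for the example, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import Callable

@dataclass
class GoalDefinition:
    """One goal's Done Definition: start, proof, quality bar, timeout."""
    name: str                            # plain-language goal
    start_event: str                     # first action that commits to the goal
    completion_event: str                # irreducible proof of done
    quality_bar: Callable[[dict], bool]  # guardrails on the completion event
    timeout: timedelta                   # how long counts as the same attempt

# Hypothetical event names -- adapt to whatever your tracker emits.
first_campaign = GoalDefinition(
    name="Create and schedule the first campaign",
    start_event="campaign_new_clicked",
    completion_event="campaign_scheduled",
    quality_bar=lambda e: e.get("audience_size", 0) >= 1
                          and not e.get("validation_errors"),
    timeout=timedelta(hours=24),
)
```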
The five numbers that matter
For each goal, track:
Seen – users who encountered the goal entry point
Started – users who committed to it
Completed – users who hit the proof of done
Time to Goal – median time from start → done
Hesitation Rate – % of attempts with pause / loop / backtrack patterns
(Optionally add Assist Rate: cases needing help - chat, tooltip, doc - before completion.)
These five beat a dozen charts because they tell the whole arc:
intent → execution → confidence.
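As a sketch of how the five roll up from a raw event log, building on the GoalDefinition above - the "entry_seen" event name and the per-attempt hesitation flag are assumptions, with the flag computed upstream:

```python
from statistics import median

def goal_numbers(events, goal):
    """Roll a raw event log up into the five goal numbers.

    `events` is an iterable of dicts: {"user_id", "name", "ts", "props"},
    with "ts" a datetime. "entry_seen" and the per-attempt "hesitation"
    flag are assumed names, not a standard.
    """
    seen, started, completed = set(), set(), set()
    start_ts, durations = {}, []
    attempts = hesitant = 0

    for e in sorted(events, key=lambda e: e["ts"]):
        uid = e["user_id"]
        if e["name"] == "entry_seen":
            seen.add(uid)
        elif e["name"] == goal.start_event:
            started.add(uid)
            start_ts[uid] = e["ts"]
            attempts += 1
            hesitant += bool(e["props"].get("hesitation"))
        elif e["name"] == goal.completion_event and goal.quality_bar(e["props"]):
            if uid in start_ts and e["ts"] - start_ts[uid] <= goal.timeout:
                completed.add(uid)
                durations.append(e["ts"] - start_ts[uid])

    return {
        "seen": len(seen),
        "started": len(started),
        "completed": len(completed),
        "time_to_goal": median(durations) if durations else None,
        "hesitation_rate": hesitant / attempts if attempts else 0.0,
    }
```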
A simple decision tree for roadmap calls
Once the numbers are on the board, decisions get boring - in a good way.
A. High intent, low completion
Likely cause: bug or UX/process friction.
What to check: hesitation spikes on specific steps, repeat errors, back-and-forth loops.
Action: fix the moment (microcopy, defaults, step order), not the whole module. Re-measure in 48 hours.
B. Low intent, high completion when started
Likely cause: positioning / discovery problem.
What to check: who sees the entry point, source campaign, feature findability.
Action: move, rename, or pre-qualify the entry; teach benefits earlier; target the right cohort.
C. One power user, everyone else idle
Likely cause: org-level adoption gap.
What to check: invites, role assignments, teammate comparisons.
Action: targeted enablement for named users on named steps. Don’t run a 50-person training.
D. High completion, slow Time to Goal
Likely cause: cognitive load.
What to check: steps with long dwell, unnecessary fields, decision bottlenecks.
Action: remove fields, prefill defaults, apply progressive disclosure. Shave minutes, not pixels.
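The branches reduce to a small lookup over the five numbers. Here's a sketch with placeholder thresholds - the cutoffs are starting points to calibrate per goal, not benchmarks - and Branch C is omitted because it needs per-account teammate comparisons rather than goal totals:

```python
def diagnose(n, baseline_time):
    """Map one goal's five numbers to the A/B/D branches above.

    All thresholds are illustrative placeholders; tune per goal.
    """
    start_rate = n["started"] / n["seen"] if n["seen"] else 0.0
    completion = n["completed"] / n["started"] if n["started"] else 0.0
    slow = (n["time_to_goal"] is not None
            and n["time_to_goal"] > 2 * baseline_time)

    if start_rate >= 0.4 and completion < 0.5:
        return "A: high intent, low completion - fix the moment"
    if start_rate < 0.2 and completion >= 0.7:
        return "B: low intent, high completion - positioning/discovery"
    if completion >= 0.7 and slow:
        return "D: high completion, slow - cut cognitive load"
    return "no clear branch - review manually"
```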
Weekly “goals review”
Thirty minutes is enough if the inputs are clean.
Prep (15 min)
Pick 1–3 goals that drive money (activation, upgrade, expansion).
Sort by: high Seen, low Completed, or spiking Hesitation.
Draft a cause hypothesis: bug / UX / knowledge / discovery.
Meeting (15 min)
For each goal: show intent, completion, time, hesitation.
Agree on the smallest change that could move the number in 48 hours.
Assign owner, deadline, and the single metric that will confirm it worked.
Everything else goes to the parking lot. The point is momentum.
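Picking which goals make the agenda can itself be mechanical. A toy ranking over the rolled-up numbers - the weighting is an assumption, so tune it to taste:

```python
def weekly_agenda(all_goal_numbers, top_n=3):
    """Rank goals for the weekly review: high Seen, low Completed,
    or a hesitation spike vs. last week."""
    def priority(g):
        completion = g["completed"] / g["started"] if g["started"] else 0.0
        spike = g["hesitation_rate"] - g.get("hesitation_last_week", 0.0)
        # Arbitrary blend: exposure times failure, plus a spike bonus.
        return g["seen"] * (1 - completion) + 100 * max(spike, 0.0)
    return sorted(all_goal_numbers, key=priority, reverse=True)[:top_n]
```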
Instrumentation that won’t eat your week
You don’t need a NASA stack to start. A minimal, durable setup:
Events: goal_started, goal_completed, assist_shown, with step_name and error_name attached as properties.
Context: role, plan, source campaign, account size.
Tags / segments: ICP, industry, use case.
Replay (optional): only to sample the stuck steps, not for doom-scrolling.
Save the query as “Goal: {name} - Weekly.” Rinse, repeat.
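For the shape of the payload, a stand-in tracking helper - hypothetical, since every analytics SDK has its own track call; treat this as the contract, not the API:

```python
import time

def track(name, user_id, **props):
    """Emit one analytics event; swap the print for your SDK's call."""
    event = {"name": name, "user_id": user_id, "ts": time.time(), **props}
    print(event)  # stand-in for your analytics client

# Context rides along on every event:
track("goal_started", user_id="u_42",
      goal="first_campaign", role="admin", plan="trial",
      source_campaign="spring_launch", account_size=12,
      icp="early-stage SaaS", use_case="Send first campaign")
```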
What to look for in behavior (the tells)
Hesitation clusters: pause → hover → backtrack on the same control (“Validate,” “Confirm,” “Post”).
Deviations from the golden path: unnecessary detours, step skipping, tab ping-pong.
Workarounds: export → spreadsheet → re-import to “make it work.”
Form thrash: repeated field edits, validation loops, error-copy rereads.
Outlier time: steps where one cohort is 2–3× slower than peers.
Org imbalance: one “hero” user vs. idle teammates.
These tells separate “needs a tooltip” from “needs a redesign” from “needs a fix.”
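To make "hesitation cluster" concrete, here's a toy detector over one session's step trace - the pause threshold and the backtrack rule are assumptions, not tuned values:

```python
def hesitation_tells(steps, pause_s=8):
    """Flag pause-then-backtrack patterns in one session's step trace.

    `steps` is a time-ordered list of (step_name, ts_seconds).
    The 8-second pause threshold is an assumed starting point.
    """
    tells, seen_order = [], []
    for i, (step, ts) in enumerate(steps):
        if i and ts - steps[i - 1][1] > pause_s:
            tells.append(f"long pause before {step!r}")
        if step in seen_order[:-1]:  # returned to an earlier step
            tells.append(f"backtrack to {step!r}")
        seen_order.append(step)
    return tells

# hesitation_tells([("audience", 0), ("schedule", 3),
#                   ("audience", 20), ("confirm", 26)])
# -> ["long pause before 'audience'", "backtrack to 'audience'"]
```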
Two quick examples
Upgrade intent, stalled: lots of pricing views + team invites, few plan changes.
Read: high why, broken how.
Likely fix: make upgrade the natural next step in-flow; clarify limits; offer a safe preview of paid features.
First automation, never activated: many starts, long time, low completion.
Read: motivation exists, confidence doesn’t.
Likely fix: template first; prefilled sample; show a dry-run result; rename scary steps; add “undo.”
In both cases, the question isn’t “what page is worst?” It’s “what goal is failing, for whom, and why?”
Where Autoplay fits (and what we learned building it)
We built Autoplay around this exact loop. Instead of highlighting clicks, it detects intent, hesitation, and deviation with UI awareness, then maps them to goals: who tried, who finished, and what got in the way. The value isn’t another dashboard; it’s faster answers to the only question that matters for growth:
Do users achieve what they came to do - and if not, is the win fixing the path or adding capability?
Use whatever stack you have to run the loop. Autoplay just compresses the time between “we think” and “we know.”
The quiet north star
Teams love big north-star metrics. Here’s a smaller one that moves them all: Time to Confidence - the time from first intent to “I can do this unaided.”
Shorten that, and activation rises, upgrades stick, expansions feel obvious.
Miss it, and the roadmap fills with features that look good in a deck and gather dust in the product.
Name the goal. Define “done.” Watch the tells. Fix the moment.
When the unit is a goal, growth becomes a series of easy decisions.