
01.

Designing a comparison feature for decision speed


Users comparing cars on Spinny were making a decision the product couldn't see. They bounced between listings, returned to the same cars repeatedly, managed their own tabs. The data had the signal. The product had nothing built for it.

Role
Lead Designer
Team
1 PM · 3 Eng · 1 Analyst
Platform
iOS · Android · PWA
Timeline
6 Weeks · 2024

02.

Problem


The A→B→A pattern.
A user doing comparison work the product wouldn't help with.

Behavioural analysis across 238,000 users revealed a pattern: users would visit car A, switch to car B, return to car A, go back to car B. We called it A→B→A. It wasn't indecision; it was a user actively trying to decide without the right tool.

“Users were already comparing. The product just wasn't helping them do it.”

Drop-offs spiked at this exact point. The problem wasn't that users couldn't choose; it was that the product was forcing them to hold two cars in memory simultaneously, across multiple sessions, without any support.
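To make the signal concrete, here is a minimal sketch of how an A→B→A oscillation could be detected from a session's ordered PDP views. The event shape, function name, and threshold are illustrative assumptions, not Spinny's actual analytics schema.

```typescript
// Illustrative sketch: detecting an A→B→A oscillation from a session's
// ordered list of PDP views. Event shape and threshold are assumptions.
interface PdpView {
  carId: string;
  timestamp: number; // ms since epoch
}

// Returns true once the user has returned to a previously-seen car
// (arriving from a different car) at least `minOscillations` times.
function hasComparisonIntent(views: PdpView[], minOscillations = 1): boolean {
  let oscillations = 0;
  const seen = new Set<string>();
  let previous: string | null = null;

  for (const view of views) {
    // A return to a car already seen, coming from a different car,
    // counts as one A→B→A oscillation.
    if (previous !== null && view.carId !== previous && seen.has(view.carId)) {
      oscillations++;
    }
    seen.add(view.carId);
    previous = view.carId;
  }
  return oscillations >= minOscillations;
}

// Example: A → B → A fires the signal.
hasComparisonIntent([
  { carId: "A", timestamp: 0 },
  { carId: "B", timestamp: 60_000 },
  { carId: "A", timestamp: 120_000 },
]); // true
```

A check like this can serve double duty: run over historical sessions it sizes the comparer segment; run live it becomes the kind of behavioural trigger described later.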

The Browser · The Comparer

BEHAVIOUR
The Browser: Scrolling listings. Saving favourites. Not ready to decide.
The Comparer: Visiting the same 2–3 cars repeatedly. Bouncing between PDPs.

GOAL
The Browser: Find cars worth considering.
The Comparer: Pick between cars already in their shortlist.

PRODUCT OFFERS
The Browser: Listings, filters, search; serves this user well.
The Comparer: The same listings again. No comparison tool.

DROP-OFF RISK
The Browser: Low, still discovering.
The Comparer: High; cognitive overload leads to abandonment.

Two user types, one product; the comparer was entirely underserved.

03.

The data


I analysed conversion across the 2.3M PDP user base to understand how comparison behaviour correlated with downstream purchase. The signal was unambiguous, and the gap between comparing and non-comparing users widened at every stage of the funnel.

1 in 6

PDP users ever compare
318,287 comparing users out of 2.3M. Most users browse without ever comparing; the behaviour is undiscovered.
11×

Higher delivery conversion
Comparing users deliver at 1.74% vs 0.15% for non-comparing. Same funnel, same period, entirely different outcomes.
59%

Engaged segment reach delivery
Users who revisited 3+ cars deeply showed the highest purchase follow-through of any behavioural segment.
Conversion at each funnel stage, comparing vs non-comparing:

Test Drive Scheduled: non-comparing 2.04% · comparing 12.5% (6.1× higher)
Test Drive Completed: non-comparing 0.77% · comparing 6.73% (8.7× higher)
Token Paid: non-comparing 0.31% · comparing 3.56% (11.5× higher)
Delivery: non-comparing 0.15% · comparing 1.74% (11.6× higher)

Conversion rates across 2.3M PDP users. Comparing users (n≈238K) vs non-comparing users. A/B validated post-launch.

The gap widens at every stage. By delivery, comparing users convert at 11.6× the rate of non-comparing users. Comparison isn't just a feature request; it's the strongest behavioural predictor of purchase in the dataset.

04.

Design question


The design question
How might we surface the next step for a comparing user, without interrupting the browsing they're already doing?
The shift: from showing inventory to supporting the decision already in progress. The trigger had to be behavioural: no explicit Compare CTA, no artificial adoption bias.

The HMW moved us from describing a problem to framing a product opportunity. The user wasn't broken; they were doing real decision-making work. We needed to give them a better tool for it. The key constraint: no explicit ‘Compare’ CTA. Adding a visible button would introduce artificial adoption bias. The trigger had to be behavioural.

05.

Working within the real world


Four constraints shaped what was possible before exploration began.

Performance budget
A significant portion of users were on mid-to-low-end Android devices. The comparison view had to load fast and avoid layout shifts; CLS was a hard constraint, not a nice-to-have.
Team ownership
The PDP was owned by a separate team. The comparison entry point had to work without touching PDP code; the tray had to be fully self-contained and contextually injected.
Data normalisation gaps
Not all car attributes were in a consistent format across the inventory. The comparison view had to degrade gracefully; missing specs couldn't break the layout or the decision flow.
Six-week delivery window
Decisions had to be grounded in existing behavioural data. There was no time for new research cycles; every design choice had to be defensible from the data already in the system.

06.

Two paths. One right answer.


I explored two structurally distinct approaches. The goal wasn't the most ambitious solution; it was the one that solved the problem without creating new ones. Both were prototyped and reviewed against the same brief: reduce cognitive load at the moment of decision.

Option 1 · Inline comparison strip

Option 1 reads as data, not decision. Two independent bars give accurate numbers but no relative signal — the user still has to do the mental subtraction. Under time pressure or on a mid-range device, that's exactly where comparison falls apart.

Option 2 · Dedicated comparison surface (live version)

Option 2 · Shipped. A single axis makes one car the implicit reference point. The delta reads instantly: no arithmetic, no toggling, no memory. The user's job shifts from “calculate which is better” to “confirm what I already sense.” That shift is the design.

The inline strip checked every functional box but failed on scalability and performance. The dedicated comparison surface, launched from a contextual tray that appears only after the user shows comparison intent, preserved browsing continuity and delivered a focused decision experience. Option 2 was selected for all use cases.

07.

Final Designs


The comparison feature surfaces at the exact moment a user shows comparison intent — after visiting multiple PDPs. It doesn't interrupt browsing. It doesn't ask the user to change behaviour. It offers a better tool at the moment they need it, and disappears otherwise. Three screens. One job: close the decision.

Feature overview
Car comparison, bento grid overview

Three screens with a clear hierarchy of jobs: select, compare, decide. Notice that none of these screens introduce unfamiliar UI patterns — the card language is the same as listings, the navigation chrome doesn't change. The only new idea is the difference indicator. Familiar enough to trust, specific enough to act on.

Entry points
Comparison feature entry points

The tray doesn't announce itself. It surfaces when the behaviour says the user is ready — only after the A→B→A pattern fires. There's no CTA to ignore, no modal to dismiss. Timing is the design here. The same interaction, surfaced ten seconds earlier, would have felt like an interruption. Here it feels like the product read your mind.

Spec comparison card
Spec comparison card with difference indicators

The hardest decision here wasn't the layout — it was what not to show. Grouping by category (comfort, performance, safety) mirrors how buyers think, not how the database stores data. The difference indicator removes the arithmetic entirely. The AI summary at the top collapses the full table into one sentence for the user who's already decided and just needs confirmation. Hierarchy doing the work that scrolling used to do.
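To make the difference indicator and the graceful-degradation constraint concrete, here is a simplified sketch of the kind of logic that could sit behind the spec comparison card. The spec shape, category names, and higher-is-better assumption are illustrative, not the production data model.

```typescript
// Illustrative sketch of the difference-indicator logic: higher-is-better
// numeric specs, grouped the way buyers think (comfort, performance, safety),
// with missing values degrading to "incomparable" instead of breaking layout.
type SpecValue = number | null; // null = missing / not normalised for this car

interface SpecRow {
  label: string;            // e.g. "Boot space (L)"
  category: "comfort" | "performance" | "safety";
  carA: SpecValue;
  carB: SpecValue;
}

type Verdict = "A" | "B" | "tie" | "incomparable";

function compareSpec(row: SpecRow): Verdict {
  // Graceful degradation: a missing spec never produces a false winner.
  if (row.carA === null || row.carB === null) return "incomparable";
  if (row.carA === row.carB) return "tie";
  return row.carA > row.carB ? "A" : "B";
}

// Group rows by category so the card renders sections in buyer order,
// not database order.
function groupByCategory(rows: SpecRow[]): Map<SpecRow["category"], SpecRow[]> {
  const groups = new Map<SpecRow["category"], SpecRow[]>();
  for (const row of rows) {
    const bucket = groups.get(row.category) ?? [];
    bucket.push(row);
    groups.set(row.category, bucket);
  }
  return groups;
}

// Example: a missing value renders as "incomparable" rather than a broken row.
compareSpec({ label: "Airbags", category: "safety", carA: 6, carB: null }); // "incomparable"
```

The design choice the sketch encodes: a missing spec yields “incomparable” rather than a default value, so normalisation gaps never manufacture a false winner.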

08.

Outcome


By giving users a structured way to compare, we shortened the decision cycle and improved downstream conversion at every funnel stage. Across a base of 2.3M PDP users, even small percentage gains compound into significant absolute impact.

+5.3%

User-to-Delivery (U2D)
The most downstream measure. Comparing users reaching delivery increased significantly post-launch, confirmed via randomised A/B test.
+3.8%

User-to-Test Drive (U2T)
Earlier in the funnel but harder to move. The comparison view surfacing direct booking drove this metric up independently.
↓ A→B→A

Oscillation reduced
Users stopped bouncing between PDPs. The comparison view gave them a structured place to do that work, measurably reducing session oscillation.
−12.4%

Time-to-Token
Users moved from comparison to token payment faster after the feature launched.
−18.3%

Time-to-Visit
Time from first PDP view to test drive visit dropped; fewer sessions were needed to reach commitment.
Post-launch · A/B validated · Spinny 2024 · 2.3M PDP users

09.

What this taught me


The most important discovery happened in the data before any design work. Users were already comparing. They were doing it manually, at significant cognitive cost. The product just wasn't acknowledging it.

That reframe changed everything. We weren't building a new feature from scratch; we were building the tool users were already trying to use. That's a fundamentally different brief. One where you already know the behaviour works. You just need to make it less effortful.

Working with a clear behavioural signal, the A→B→A pattern, disciplined the design process. Every decision came back to one question: does this make comparison faster and less cognitively demanding? If not, it didn't ship.

Users weren't confused about which car to buy.
They just needed the product to stop making it harder.
