Capital One · Oct 2023 – Jun 2024

Design & Research Lead · PM Pilots

Performance Platform

The performance management system at Capital One was broken in a specific way — leaders and associates both had low trust in 360 feedback results. Before we built anything, we needed to understand why. This is the story of a pilot that became the foundation for an enterprise platform.

Hero — 360 feedback form UI showing competency-based rating interface

Full-bleed — HMW slide: How do we design a 360° feedback experience that ensures high-quality, actionable insights?

The Problem

Low trust in a system that was supposed to help people grow

360 feedback was poorly connected to the broader performance flow. Feedback templates varied wildly across teams. Responses skewed positive — not because everyone was performing exceptionally, but because the system gave people no reason to be specific or honest.

People leaders lacked confidence in the feedback they received. Associates didn't know how it would be used. The result was a process that consumed time and produced noise.

My job: improve the quality and actionability of 360 feedback during the performance cycle — and provide validated, evidence-based insights that could de-risk and define the foundation for a product roadmap.

Experience map — showing where 360 feedback broke down across the performance cycle

The experience map made the gaps visible in a way that was hard to argue with — feedback wasn't designed around how leaders actually used it.

01

Building on three foundation principles

Rather than jump to solutions, we used research to define the principles the system had to be built on. These shaped every design decision that followed.

Quant & qual data together. Standardized, competency-based ratings paired with required qualitative comments. Ratings without context were too easy to dismiss.

Psychological safety through anonymity. Complete anonymity consistently produced more candid, constructive responses. Without it, people optimized for relationships, not honesty.

Comparative context to reduce bias. A "compared to peers" scale reduced subjective ratings and gave calibration conversations something concrete to anchor to.

Foundation principles slide — Quant & Qual data, Psychological Safety, Comparative Context

02

Connecting feedback to calibration

We partnered with PwC to build the feedback system on these foundations — grounding every question in Capital One's competency framework and making the entire process anonymous by design.

The key decision: making 360 feedback a first-class input in calibration, not an afterthought. We redesigned the calibration one-pager to surface feedback directly alongside performance data. Peer comparison graphs showed ratings relative to the cohort. Written feedback was structured to surface strengths and development opportunities side by side — something managers could actually reference mid-conversation.

Feedback form — competency-based ratings, required qualitative comments, fully anonymous


Calibration one-pager — 360 feedback as first-class input with peer comparison graph and written feedback

Calibration one-pager — feedback as a first-class input, not an afterthought

03

Measuring what mattered

We measured impact by triangulating three data sources: raw system data, live calibration observations, and milestone surveys. We tracked clarity, consistency, quality, and actionability throughout the pilot — not just at the end.

This wasn't a post-launch audit. It was how we built the case for the next phase.

Measurement framework — data triangulation across system data, live observations, and milestone surveys

Data triangulation — measuring clarity, consistency, quality, and actionability throughout the pilot

Full-bleed — impact stats: 65% clarity & consistency, 58% quality, 52% actionability

Outcome

The pilot made the case

The results were strong enough to convince our partners to use 360 feedback as the foundation for the new enterprise performance platform, PATH.

  • 65% improvement in clarity & consistency of feedback received
  • 58% improvement in feedback quality — anonymity made a measurable difference
  • 52% improvement in actionability — feedback was more actively used during live calibrations
  • 58% of pilot associates reported having clarity on their development opportunities
  • 73% increase in how actively feedback was leveraged in the overall performance management process

What I learned

Growth as a designer

This was my first major lead effort, and it shaped how I think about product design and strategy.

Cross-functional alignment, built early, creates shared ownership that carries a project through. And measurement isn't a post-launch activity. It's how you earn the next phase.

Next time I'd manage scope more intentionally. We took on too many changes at once. Being more deliberate about thin-slicing and sequencing big bets would help maximize impact.

Growth slide — strategic foundations, cross-functional leadership, evidence-driven design