KYC Onboarding for the Clients a Private Bank Can't Afford to Lose

11 screens designed
3 user flows
5 core features
Speculative PWM project

01. The Problem

A private bank's promise is bespoke, white-glove service. This is a speculative redesign exploring what happens when a private wealth onboarding experience fails to live up to that promise.
KYC is the process every new private wealth client goes through before their account can be opened. Identity verification. Source of funds. AML checks. It is legally required, and it takes weeks. Goldman Sachs cannot skip it or simplify it by removing the checks that matter.
That's the tension. The first real experience a new Goldman Sachs client has with the bank isn't a welcome dinner or a portfolio review. It's a document request portal. And for clients who were personally referred in by someone who vouched for the bank, the experience of that portal is the first test of whether the brand promise is real.
Right now, it fails that test. During those weeks, three people are involved in every case: the client, the advisor who brought them in, and the compliance officer reviewing their documents. None of them have the same view of what's happening.

The Client

The client submits documents and waits. If something gets rejected, they get a generic error message. No reason. No next step. No timeline. Every day the process drags on is a day their account isn't open and their assets aren't working. For a private equity partner who was personally referred into Goldman Sachs, being treated like a form submission isn't just frustrating. It's a signal about what this relationship is going to look like.

The Advisor

The advisor brought the client in. Their reputation is tied to how this goes. When the client calls to ask what's happening, the advisor has to say "let me check on that." They have no more visibility than the client does. Instead of managing the relationship, they're stuck playing telephone between the client and compliance. Every hour they spend chasing case updates is an hour they're not spending on the work they're actually paid to do.

The Compliance Officer

The compliance officer is managing a high-volume queue of flagged cases. Most come with an AI risk score. In the workflow I designed against, flagged cases surface a high-level designation — "High Risk" — with no breakdown, no confidence level, no specific issue. To make a responsible decision, the reviewer has to manually dig through the underlying documents. For every flagged case. Every time. The AI was supposed to make the job faster. Instead it handed the reviewer a conclusion and left them to do the reasoning themselves.

The Structural Problem

The compliance system was designed for compliance, not for the people around it.
In the experience I designed against, status changes happen inside the system, invisible to the client and advisor. The AI makes determinations it does not explain. Document rejections go out without context. Every handoff between the three parties is a potential failure point, and most of those failures happen silently.
The cost is not just a delayed account opening. It is wasted time for all three people. Advisors spend hours chasing case status instead of managing relationships. Reviewers re-read documents the AI already processed because they cannot act on a score without a reason. Clients wait while their assets sit idle in a process nobody can clearly explain. Fixing the coordination layer does not just improve usability. It gives all three people their time back.
The stakes are high. A private wealth client is not a $10/month subscription. They represent millions in AUM, years of advisory fees, and a referral network of peers acquired the same way they were. A bad onboarding experience does not just lose a client. It creates a story, and HNW clients tell stories to exactly the people the bank wants to win next.
That is what makes this harder than a typical UX problem: you cannot simply remove the friction. The friction exists because the risk is real. If the process is too easy, bad actors get through. If it is too opaque and difficult, the relationship breaks before it starts.
The job is not making KYC easy. It is making it feel right for a private wealth client while keeping every check that matters intact.

02. Research and Discovery

I looked at the five major KYC platforms in the market: Persona, Jumio's KYX Portal, Alloy, Onfido/Entrust, and Stripe Identity.
They're technically capable. That's not the problem.

What the Market Does Well

Persona: Best client-side flow. Progressive disclosure, clean consent screen, automatic risk routing. Low-risk clients through in under 10 minutes.

Jumio: Strongest compliance backend. Risk scores with reason codes, 800+ detection rules, entity relationship graph.

Alloy: Purpose-built for manual review queues. Configurable routing, clear case management. Does one thing well.

Where Everyone Falls Short

I walked through all five platforms from both sides: as a compliance officer working the queue and as a client going through onboarding. The same gap showed up every time.
Every platform is designed for one person.
Persona optimizes the client experience. Jumio optimizes the compliance review. Alloy optimizes case routing. None of them are designed for the coordination problem between all three. The moment the client needs to know what's happening, the advisor needs to answer their client, and the compliance officer needs to explain a decision: that's where every platform falls apart.

AI Explainability

No platform explains AI confidence scores in plain language to the reviewer. So when the AI flags something, the compliance officer can't act on it without digging through the underlying documents manually. The AI does the work and then makes the human redo it.

Advisor Visibility

No platform gives advisors real-time case visibility without a separate login. So when a client calls, the advisor responsible for that relationship has to go find out. Every time that happens, trust erodes.

Rejection Transparency

No platform gives clients an actionable reason when something gets rejected. Just "unable to verify." The client doesn't know what to fix or whether there's even a problem with them specifically.

Risk-Adaptive Onboarding

No platform adapts the onboarding flow to the client's risk profile before they start. A low-risk referral and a high-risk complex case go through the same front door.
The gap isn't in verification technology. It's in what happens between the three people working the same case.
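To make the risk-adaptive idea concrete, here is a minimal sketch of flow routing by pre-screening risk tier. The tier names and step names are hypothetical placeholders; a real implementation would take both from the bank's AML policy and its KYC vendor, not from this sketch.

```python
# Hypothetical tiers and step names, for illustration only. The actual
# tiers and required checks would be defined by the bank's AML policy.
STEPS_BY_TIER = {
    "low": ["identity", "source_of_funds"],
    "medium": ["identity", "source_of_funds", "enhanced_due_diligence"],
    "high": ["identity", "source_of_funds", "enhanced_due_diligence",
             "manual_review"],
}

def onboarding_steps(risk_tier: str) -> list[str]:
    """Return the checklist a client sees, based on a pre-screening
    risk tier. Unknown tiers fall back to the fullest flow, so a
    routing failure never produces a weaker check than intended."""
    return STEPS_BY_TIER.get(risk_tier, STEPS_BY_TIER["high"])
```

The point of the fallback is the same fail-safe logic the case study argues for: when the system is unsure, the client gets more scrutiny, never less.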

03. User Personas

Every KYC case has three people in it. Most tools only design for one.

Meet Robert: The HNW Client

Robert Chen is a 52-year-old private equity partner being onboarded at Goldman Sachs PWM. He was referred in. This isn't a cold application. He was invited.
He submitted everything three days ago. He hasn't heard anything back. He doesn't know if there's a problem, if something was rejected, or if this is just how long Goldman Sachs takes. His advisor can't tell him either.
Robert doesn't need speed. He needs to know where he stands. His breaking point isn't a slow process. It's a process that doesn't respect his time enough to tell him what's happening.
Design Implications: A status portal with real progress tracking, plain-language rejection messages, and a direct line to his advisor. No ambiguity about where things stand or what to do next.

Meet Michael: The Wealth Advisor

Michael Torres brought Robert in. His reputation depends on this onboarding going smoothly.
Michael's problem is structural: he's accountable for an outcome he has no visibility into. When Robert calls, Michael opens the same portal and sees the same information Robert has. He has to say "let me check on that" like he's a third party. That's not a UX problem. That's a relationship problem.
Design Implications: A real-time case dashboard across his full roster. Not a notification he has to chase. A place where he can see status, take action, and message compliance directly without leaving the tool.

Meet Jessica: The Compliance Officer

Jessica Park manages a queue of 40+ cases. Most come through with an AI risk score. When the AI flags something, all she gets is "High Risk."
To make a responsible decision on a flagged case, she has to open the source documents herself and read through them manually. Every override she approves needs to be auditable. She's regularly making calls she can't fully explain, because the system that flagged the case won't tell her why.
Design Implications: An AI breakdown screen that shows exactly what flagged, what the finding was, and how confident the model was. Structured for audit. Written in plain language.

The Design Challenge: One Case, Three Perspectives

Robert, Michael, and Jessica are all working on the same case. None of them have the same view of it.
The design challenge wasn't building three separate tools. It was making all three people feel like they're inside the same process.

04. Design Strategy

The research pointed to one core problem: multi-party coordination failure. The fix wasn't more features. It was getting information to the right person at the right time in the right format.
I defined five features around the three specific failure points I kept finding.

Design Decision 1: Give the Client Real Status, Not Just Progress

The Problem:
Robert has no visibility into what's happening with his case. Generic progress bars don't tell him if there's a problem.
What I Built:
A status portal with a milestone tracker tied to actual case stages, document-by-document submission status, inline rejection reasons with specific next steps, and a direct contact link to his advisor.
Why:
The anxiety isn't about the process being slow. It's about not knowing if the process is working. Showing Robert exactly where he is, and exactly what to do if something needs attention, removes the ambiguity.
Trade-off:
More information could mean more anxiety. I scoped to status and action, not raw case data. Robert doesn't need to see the AI risk score. He needs to know what's needed from him.

Design Decision 2: Give the Advisor a Real Dashboard, Not a Notification

The Problem:
Michael finds out about case problems when his client calls him. He has no proactive visibility.
What I Built:
A case dashboard across his full roster. Tabs for All, Action Required, In Progress, Completed. Case status, risk tier, assigned compliance officer, and time in onboarding at a glance. One click into any client gets him full case detail and direct action buttons.
Why:
A notification tells Michael something happened. A dashboard tells Michael what's happening across all his clients, right now, before anyone has to call anyone.
Trade-off:
Surface area. This is a meaningful amount of UI. I kept it scoped to what Michael actually needs to act: status, risk, time in queue, and three action buttons (message client, request documents, contact compliance).

Design Decision 3: Explain the AI, Don't Just Report It

The Problem:
Jessica gets a risk score, not a reason. Without knowing what triggered the flag, she either approves cases she shouldn't or digs through the documents manually, every time.
What I Built:
An AI Risk Breakdown screen that shows every check run, the result and finding for each, and a confidence signal. Structured as a table. Written in plain language. Tied directly to the approve / request documents / escalate decision flow.
Why:
The override has to be auditable. If Jessica can't explain the decision, she can't make it responsibly. The breakdown screen makes the AI's reasoning legible, so her decision is informed rather than blind.
Trade-off:
I'm showing Jessica more complexity, not less. The risk is information overload. The table structure and the plain-language confidence summary are doing the work of keeping it scannable, but this is the screen I'd want to test with actual compliance officers before committing to the layout.
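The breakdown screen implies a per-check data shape behind it. Here is one possible sketch, assuming a vendor exposes per-check results; the field names and the rendering format are my assumptions, not a real vendor schema.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    # Hypothetical fields; the real schema depends on the KYC vendor.
    name: str          # e.g. "Sanctions screening"
    result: str        # "pass" or "flag"
    finding: str       # plain-language explanation of what was found
    confidence: float  # model confidence, 0.0 to 1.0

def render_breakdown(checks: list[CheckResult]) -> list[str]:
    """Format each check as one plain-language, audit-friendly row,
    so the reviewer sees what ran, what it found, and how sure it was."""
    rows = []
    for c in checks:
        pct = f"{c.confidence:.0%}"
        rows.append(f"{c.name}: {c.result.upper()} ({pct} confidence) | {c.finding}")
    return rows
```

One row per check, with the finding in the same line as the result, mirrors the design intent: the conclusion and the reasoning travel together, so an override can cite both.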

What I Cut

Biometric verification UI, document OCR screens, MFA flows, CRM integration.
These are real implementation needs. They're also vendor and engineering layers that sit above the coordination problem. Designing those first would have been designing the wrong thing.

05. The Solution

Three flows. Eleven screens. One case, three perspectives.

Flow 1: Client Onboarding Portal

Screen 4: Document Status + Rejection Recovery

Every document Robert submitted, with its current status. For any rejection, the reason is shown inline with a direct "Upload now" link in the same row.
He doesn't have to go find the problem. The problem comes to him.

Screen 5: Onboarding Status Tracker

Diamond milestone tracker at the top. Below: an action-needed callout if something's waiting on him, a document summary with color-coded status badges, and "Need Help? Message your advisor" pinned to the bottom.
He knows where he stands. He knows what to do next. He knows who to call.

Flow 2: Advisor Case Dashboard

Screen 8: Client Roster

Michael's full client list. Tabs for All / Action Required / In Progress / Completed. Columns for client name, risk tier, status, assigned compliance officer, time in onboarding, and next step.
One view, every client. No digging.

Screen 9: Client Detail

Left column: case metadata. Risk tier, time in queue, assigned compliance officer, current status. Below that: three action buttons. Send Document Request. Message Client. Contact Compliance.
Right column: the full case timeline, document checklist, and compliance notes.
Michael can take action without opening another system.

Screen 11: Compliance Thread

Full-width messaging thread between Michael and Jessica, with a thin case header at the top. Their conversation lives in the case record. "Messages are retained as part of the case record" so both parties know this is documented, not informal.

Flow 3: Compliance Review Queue

Screen 13: Review Queue

Jessica's full case list. Tabs for All / Full Manual Review / Expedited Review / Auto-Approved. Columns: client name, advisor, risk score, status, time in queue.
The tabs reflect actual routing logic. Full Manual Review needs her attention. Expedited means a deadline. Auto-Approved is logged and auditable without her action.

Screen 14: Case Detail

At the top: a three-panel AI summary card (AI Screening Summary, Issues, AI Confidence) with a "View Full Breakdown" link. Below: the case timeline and document checklist with color-coded status badges.
She can see at a glance what's been submitted, reviewed, and flagged.

06. Reflection

This started as a compliance problem. It's actually an information problem.
The tools exist to verify identity, score risk, and route cases. What doesn't exist is a design layer that makes that information legible to everyone who needs it. Robert doesn't know what's happening with his case. Michael can't answer his client's questions. Jessica can't explain what the AI flagged. None of that is a technology gap. It's a design gap.
The five features I built don't introduce new data. They surface existing data to the right person, in the right format, at the right time.

What I'd Tackle Next

The AI Explanation Layer is the highest-stakes screen in this system. I designed how the results are displayed. The harder problem is the logic behind confidence calibration: what makes a result "Medium" vs. "Inconclusive," and how those thresholds get set and adjusted over time. That conversation with engineering and compliance leadership is where the real design work lives.
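The calibration question can be made concrete with a threshold mapping like the one below. The cutoff values here are illustrative placeholders; the real numbers are exactly the conversation with engineering and compliance leadership described above, and they would be recalibrated against actual review outcomes.

```python
# Illustrative cutoffs only. Where "Medium" ends and "Inconclusive"
# begins is a policy decision, set and adjusted with compliance
# leadership, not a constant anyone should hardcode from this sketch.
THRESHOLDS = [
    (0.90, "High confidence"),
    (0.70, "Medium confidence"),
    (0.50, "Low confidence"),
]

def confidence_label(score: float) -> str:
    """Map a raw model confidence score to the plain-language label
    shown to the reviewer. Anything below the lowest cutoff is
    reported as inconclusive rather than given a misleading label."""
    for cutoff, label in THRESHOLDS:
        if score >= cutoff:
            return label
    return "Inconclusive"
```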
I'd also want to test the check table on Screen 15 with actual compliance officers before committing to it. I designed it to be scannable. Whether that column structure maps to how a compliance officer actually reads and processes risk is something I need to watch in practice, not assume from the outside.
And the client flow language on Screen 1 needs real HNW client testing. "Enhanced Due Diligence" is deliberately plain, but how much upfront disclosure is the right amount, and how to frame it without making the process feel more intimidating than it needs to be, is something I wouldn't finalize without talking to actual clients.

Disclaimer: Speculative project, not affiliated with or commissioned by Goldman Sachs.
