
Cedar Rose — Product Design

Product design ownership across Cedar Rose’s risk intelligence ecosystem — from building a real-time monitoring dashboard (0→1) to cross-product workflow design across internal, QA, client, and report surfaces.

Role
Product Designer (Enterprise / B2B SaaS)
Duration
Ongoing
Scope
Multi-product Ecosystem

At a Glance

  • Products: CRiS Backoffice (Internal) • CRiS Intel (Client-facing) • QA Platform • Credit Report Output
  • Primary users: Business researchers • QA reviewers • Clients/analysts
  • My role: Product Designer (Enterprise / B2B SaaS) — cross-product ownership
  • Focus: workflows, IA, reusable patterns, states/edge cases, developer handoff, and scalability decisions

Ecosystem Context

Cedar Rose’s CRiS ecosystem connects internal production workflows to client consumption:

  • CRiS Backoffice (internal researchers): screening entities, due diligence, and generating structured risk profiles
  • QA Platform: reviewing report quality, completeness, and researcher KPIs before delivery
  • CRiS Intel (client-facing): portfolio monitoring, alert triage, dashboards, and report access
  • Credit Report output: a sellable product output reflecting consolidated and validated data

Design goal: Ensure the same data semantics flow end-to-end so “what researchers capture” becomes “what clients see,” and QA can validate quality before release.


What I Owned

Across the CRiS product suite, I focused on:

1) Enterprise workflow design

  • Designed structured data capture and review workflows from research → QA → client delivery
  • Defined statuses, ownership, and clear action paths to reduce handoff friction

2) Cross-product consistency & patterns

  • Standardized tables, filters, and forms (sorting, pagination, bulk actions, validations)
  • Defined interaction states (loading/empty/error/permission) across products
  • Ensured consistent field definitions and semantics for shared concepts (risk levels, alert categories)

3) QA governance and quality workflows

  • Designed workflows to ensure researchers’ outputs are checked before final delivery
  • Supported KPI tracking concepts and review signals that map back to client-facing semantics

4) Competitive research & market positioning

  • Conducted competitor UX research to identify best practices, gaps, and differentiation opportunities
  • Delivered recommendations to support product positioning and prioritization

Spotlight: “Call / Interview Details” (Cross-product Workflow)

Why this flow matters

To improve traceability and quality of investigations, we introduced a structured “Call / Interview Details” flow that standardizes what information is captured, how “obtained data” is tracked, and how validated data appears consistently across products.

Where it appears (4 products)

  1. CRiS Backoffice (internal): researchers add call/interview details and captured data fields
  2. QA Platform: reviewers assess completeness and quality before the report can be finalized
  3. CRiS Intel (client-facing): consolidated validated information becomes visible to clients
  4. Credit Report output: generated report reflects the same structured dataset

What I owned

  • Defined the field structure and grouping for call/interview data
  • Designed the multi-entry UI pattern for adding several records efficiently
  • Mapped how consolidated data should surface consistently in client-facing views and report outputs
  • Specified states + edge cases (draft vs saved, missing info, QA rejection, permissions, partial data)

Interactive Prototype

Embedded prototype (built in an alternate UI kit) used to validate the workflow, states, and cross-product data flow. The final implementation aligns with the Cedar Rose design system.


Key Product Decision: Batch Saving for Scalability

The problem

Researchers often add multiple rows or entries in one session. Saving each row immediately generates unnecessary requests and increases operational load, especially when users edit the same rows several times.

The approach

We designed a draft → batch save flow:

  • Multiple entries are kept in a draft state while the user is working
  • On Save, all entries are submitted as one batch transaction
  • If a single saved row is edited later, we update only that row (targeted update)

Why it’s scalable

  • Reduces request volume and backend load
  • Supports future automation where external sources may add data that requires auditing/validation
  • Prevents repeating validation work per row and keeps the system efficient
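As a rough illustration of the draft → batch save model described above, the client-side logic could look like the following TypeScript sketch. All names here are hypothetical and for illustration only, not Cedar Rose's actual implementation:

```typescript
// Illustrative sketch of the draft → batch save model (names are assumptions).
type EntryStatus = "draft" | "saved";

interface CallEntry {
  id: string;
  status: EntryStatus;
  notes: string;
}

// While the user works, entries accumulate locally as drafts (no requests yet).
function addDraft(entries: CallEntry[], id: string, notes: string): CallEntry[] {
  return [...entries, { id, status: "draft", notes }];
}

// On Save, all drafts are collected into a single batch payload
// instead of sending one request per row.
function buildBatchPayload(entries: CallEntry[]): CallEntry[] {
  return entries.filter((e) => e.status === "draft");
}

// Editing an already-saved row later produces a targeted single-row update.
function buildTargetedUpdate(
  entries: CallEntry[],
  id: string
): CallEntry | undefined {
  return entries.find((e) => e.id === id && e.status === "saved");
}
```

The key design choice is that request volume scales with save actions, not with rows entered, while later edits stay cheap because they touch only the affected row.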

System Thinking

States & edge cases

  • Draft vs saved data handling
  • Loading/empty/error states for table + forms
  • Permission behavior (who can add/edit vs view)
  • QA rejection loops and required fixes before finalization
  • Partial data handling and safe fallbacks
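The draft/saved/QA-rejection lifecycle above can be sketched as a small state machine. This is a minimal TypeScript illustration under assumed state and event names, not the shipped logic:

```typescript
// Hypothetical model of the record lifecycle the design had to cover.
type RecordState =
  | { kind: "draft" }
  | { kind: "saved" }
  | { kind: "qaRejected"; requiredFixes: string[] }
  | { kind: "finalized" };

type RecordEvent = "save" | "reject" | "fix" | "approve";

// QA rejection loops the record back to draft so required fixes
// happen before finalization; finalized records are immutable.
function nextState(state: RecordState, event: RecordEvent): RecordState {
  switch (state.kind) {
    case "draft":
      return event === "save" ? { kind: "saved" } : state;
    case "saved":
      if (event === "reject") return { kind: "qaRejected", requiredFixes: [] };
      if (event === "approve") return { kind: "finalized" };
      return state;
    case "qaRejected":
      return event === "fix" ? { kind: "draft" } : state;
    case "finalized":
      return state; // no further transitions after delivery
  }
}
```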

Shared patterns reused across products

  • Tables: sorting, inline filters, pagination, bulk actions
  • Forms: validation, conditional fields, error prevention
  • Notifications: status changes, QA flags, and workflow alerts

Outcomes

  • Improved cross-product consistency (same field semantics across internal, QA, client, and report outputs)
  • Reduced user friction with structured multi-entry capture + clear draft/save model
  • Reduced unnecessary request volume via batch save behavior (scalable foundation)
  • Strengthened QA governance by ensuring data quality checks occur before final report completion

Project Overview

Cedar Rose provides risk intelligence to help organizations monitor companies and portfolios for changes in risk, compliance exposure, and emerging red flags. This project delivered a new Portfolio Risk Dashboard designed for everyday monitoring and rapid investigation — combining high-level portfolio health with drill-down clarity for individual companies.


Product Goals

  • Make portfolio risk changes visible at a glance (trend + risers)
  • Improve alert triage with clear severity and categorization
  • Consolidate key portfolio signals into one view to minimize navigation and context loss
  • Ensure patterns scale for large datasets (100+ entities) with predictable filtering and table behavior

Research & Analysis

Discovery Inputs

  • Stakeholder sessions to understand triage and investigation workflows
  • Review of existing reporting outputs and alert definitions
  • Audit of current navigation paths to identify the highest-friction steps

Competitive Research

Reviewed leading risk intelligence platforms to validate best practices for trend monitoring, alert triage, and portfolio exposure views, and to identify opportunities to differentiate through clarity and speed of investigation.

What I extracted and applied:

  • Trend & change detection patterns (portfolio risk over time)
  • Alert clarity patterns (severity, categorization, assignable actions)
  • Exposure & segmentation (sector/region breakdown to support portfolio decisions)

Key Design Decisions & Tradeoffs

1. Dashboard-first triage (KPIs + alerts prioritized)

Decision: Place portfolio health KPIs and alert signals at the top to enable instant triage.

Tradeoff: Less space for deep analysis above the fold.

Why: Users start sessions asking “what changed?” and “what’s urgent?”

2. Risk Trend + Top Risers for early signal detection

Decision: Add trend visualization and a “Top Risers” view to highlight increasing risk.

Tradeoff: Requires careful aggregation logic and handling partial data.

Why: Helps users spot shifts without opening each company record.

3. Single table as the investigation source of truth

Decision: Consolidate company metrics (risk score, alerts, exposure, sector, trend) into one powerful table with filters.

Tradeoff: Density management needed (progressive disclosure, column priorities).

Why: Prevents context switching and reduces navigation.

4. Severity-based alert system (clear differentiation)

Decision: Introduce distinct alert categories with severity styling and triage actions.

Tradeoff: Requires alignment with compliance expectations and consistent semantics.

Why: Reduces missed critical alerts and improves response speed.

5. Reusable enterprise patterns (scalable across dashboards)

Decision: Use consistent table/filter/state patterns to support future dashboards and features.

Tradeoff: Less UI freedom per screen.

Why: Improves learnability and reduces future design/dev rework.


Solution Overview

The dashboard brings together the portfolio’s key signals into one real-time view:

  • Portfolio KPIs: instant health snapshot (risk levels, critical alerts, exposure)
  • Risk Trend: portfolio behavior over time to spot shifts early
  • Sector Exposure: segmentation to understand concentration risk
  • Top Risers: companies with rising risk scores that require attention
  • Geographic Risk View: regional distribution for global portfolios
  • Company Table (core): consolidated investigation view with filtering and sorting
  • Alert Center: severity categories, triage actions, and quick filtering

Interactive Prototype


System Thinking

States & Edge Cases

  • Loading: skeletons for KPIs and charts while data loads
  • Empty: clear messaging when portfolios have no alerts or filters return no results
  • Error: graceful fallback when risk engine is unavailable
  • Partial data: charts and widgets degrade safely when data points are missing
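One way to express how a widget resolves its display state from incoming data is the small resolver below. This is a hypothetical TypeScript sketch of the precedence order (loading → error → empty → partial → ready), not the production code:

```typescript
// Illustrative widget display states (names are assumptions).
type WidgetState = "loading" | "empty" | "error" | "partial" | "ready";

interface WidgetData<T> {
  loading: boolean;        // data still being fetched → show skeleton
  error: boolean;          // risk engine unavailable → graceful fallback
  points: (T | null)[];    // null marks a missing data point
}

function resolveWidgetState<T>(data: WidgetData<T>): WidgetState {
  if (data.loading) return "loading";
  if (data.error) return "error";
  if (data.points.length === 0) return "empty";            // no alerts / no results
  if (data.points.some((p) => p === null)) return "partial"; // degrade safely
  return "ready";
}
```

Encoding the precedence in one function keeps every KPI, chart, and table widget degrading in the same predictable order.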

Shared Patterns

  • Table patterns: sortable columns, inline filters, bulk actions, export
  • Validation rules: consistent date range and filter validation across widgets
  • Permission behavior: read-only views for some roles; edit/assign actions for authorized users
  • Notification semantics: alert badges and severity logic mirror triage categories across the platform

Validation & Iteration

  • Iterated with stakeholders on layout priorities (triage → investigation flow)
  • Refined alert severity and labeling to reduce ambiguity
  • Tuned table density and filtering to keep high signal without overwhelming users
  • QA review with engineering to validate feasibility of states, performance constraints, and edge cases

Impact

  • Consolidated portfolio monitoring into one real-time investigation surface
  • Improved alert clarity through severity-based categorization
  • Reduced context switching by bringing the most used signals into a single view
  • Established reusable patterns (tables, filters, states) for future risk dashboards
  • Significantly reduced navigation steps by consolidating 3 views into 1

Lessons Learned

  • Hierarchy wins: KPIs first improves time-to-decision
  • Severity semantics matter: consistent alert visuals reduce cognitive load
  • Consolidation beats fragmentation: one investigation surface is more valuable than multiple separate views
  • System patterns pay off: shared components/states reduce rework and improve quality

Tools: Figma, FigJam, Competitive Research, Stakeholder Sessions

Platform: Web (Desktop + Tablet)

Timeline: 3 months (Discovery: 3 weeks • Design: 6 weeks • Iteration: 3 weeks)