
Cedar Rose — Product Design
Product design ownership across Cedar Rose’s risk intelligence ecosystem: from building a real-time monitoring dashboard (0→1) to designing cross-product workflows that span internal, QA, client, and report surfaces.
At a Glance
- Products: CRiS Backoffice (Internal) • CRiS Intel (Client-facing) • QA Platform • Credit Report Output
- Primary users: Business Researchers • QA reviewers • Clients/Analysts
- My role: Product Designer (Enterprise / B2B SaaS) — cross-product ownership
- Focus: workflows, IA, reusable patterns, states/edge cases, developer handoff, and scalability decisions
Ecosystem Context
Cedar Rose’s CRiS ecosystem connects internal production workflows to client consumption:
- CRiS Backoffice (internal researchers): screening entities, due diligence, and generating structured risk profiles
- QA Platform: reviewing report quality, completeness, and researcher KPIs before delivery
- CRiS Intel (client-facing): portfolio monitoring, alert triage, dashboards, and report access
- Credit Report output: a sellable product output reflecting consolidated and validated data
Design goal: Ensure the same data semantics flow end-to-end so “what researchers capture” becomes “what clients see,” and QA can validate quality before release.
What I Owned
Across the CRiS product suite, I focused on:
1) Enterprise workflow design
- Designed structured data capture and review workflows from research → QA → client delivery
- Defined statuses, ownership, and clear action paths to reduce handoff friction
2) Cross-product consistency & patterns
- Standardized tables, filters, and forms (sorting, pagination, bulk actions, validations)
- Defined interaction states (loading/empty/error/permission) across products
- Ensured consistent field definitions and semantics for shared concepts (risk levels, alert categories)
3) QA governance and quality workflows
- Designed workflows to ensure researchers’ outputs are checked before final delivery
- Supported KPI tracking concepts and review signals that map back to client-facing semantics
4) Competitive research & market positioning
- Conducted competitor UX research to identify best practices, gaps, and differentiation opportunities
- Delivered recommendations to support product positioning and prioritization
Spotlight: “Call / Interview Details” (Cross-product Workflow)
Why this flow matters
To improve traceability and quality of investigations, we introduced a structured “Call / Interview Details” flow that standardizes what information is captured, how “obtained data” is tracked, and how validated data appears consistently across products.
Where it appears (4 products)
- CRiS Backoffice (internal): researchers add call/interview details and captured data fields
- QA Platform: reviewers assess completeness and quality before the report can be finalized
- CRiS Intel (client-facing): consolidated validated information becomes visible to clients
- Credit Report output: generated report reflects the same structured dataset
What I owned
- Defined the field structure and grouping for call/interview data
- Designed the multi-entry UI pattern for adding several records efficiently
- Mapped how consolidated data should surface consistently in client-facing views and report outputs
- Specified states + edge cases (draft vs saved, missing info, QA rejection, permissions, partial data)
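The record lifecycle above can be sketched as a small data model. This is a minimal illustration only; the field names, statuses, and the `isClientVisible` helper are hypothetical and do not reflect Cedar Rose’s actual schema.

```typescript
// Hypothetical shape for a single call/interview record.
// All names are illustrative, not Cedar Rose's actual schema.
type RecordStatus = "draft" | "saved" | "qa_rejected" | "validated";

interface CallInterviewRecord {
  id: string;
  status: RecordStatus;
  contactName: string;
  contactRole: string;
  callDate: string; // ISO date string
  // Captured fields; null marks data that was requested but not obtained.
  obtainedData: Record<string, string | null>;
  qaComment?: string; // present only after a QA rejection
}

// A record surfaces in CRiS Intel and the Credit Report
// only once QA has validated it.
function isClientVisible(r: CallInterviewRecord): boolean {
  return r.status === "validated";
}
```

Keeping one status field on the record is what lets the same object drive all four surfaces: Backoffice edits drafts, QA flips the status, and client views simply filter on it.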
Interactive Prototype
An embedded prototype (built with an alternate UI kit) was used to validate the workflow, states, and cross-product data flow. The final implementation aligns with the Cedar Rose design system.
Key Product Decision: Batch Saving for Scalability
The problem
Researchers often add multiple rows/entries in a single session. Saving each row immediately generates unnecessary requests and increases operational load, especially when users edit the same data multiple times.
The approach
We designed a draft → batch save flow:
- Multiple entries are kept in a draft state while the user is working
- On Save, all entries are submitted as one batch transaction
- If a single saved row is edited later, we update only that row (targeted update)
Why it’s scalable
- Reduces request volume and backend load
- Supports future automation where external sources may add data that requires auditing/validation
- Prevents repeating validation work per row and keeps the system efficient
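The draft → batch save behaviour can be sketched as a client-side buffer. This is an assumed implementation, not the shipped one: `DraftBuffer` and its methods are hypothetical names used to show how unsaved drafts collapse into one batch while later edits become targeted single-row updates.

```typescript
// Sketch of the draft -> batch save model. Illustrative only;
// the real system's API and naming may differ.
interface Entry {
  id: string;
  data: Record<string, string>;
  dirty: boolean; // has unsynced changes
  saved: boolean; // has been persisted at least once
}

class DraftBuffer {
  private entries: Entry[] = [];

  // New rows stay in draft state while the user works.
  add(id: string, data: Record<string, string>): void {
    this.entries.push({ id, data, dirty: true, saved: false });
  }

  edit(id: string, data: Record<string, string>): void {
    const e = this.entries.find((x) => x.id === id);
    if (e) {
      e.data = data;
      e.dirty = true;
    }
  }

  // On Save: unsaved drafts go out as a single batch request;
  // edits to already-saved rows become targeted per-row updates.
  flush(): { batch: Entry[]; targeted: Entry[] } {
    const batch = this.entries.filter((e) => !e.saved && e.dirty);
    const targeted = this.entries.filter((e) => e.saved && e.dirty);
    for (const e of this.entries) {
      e.saved = true;
      e.dirty = false;
    }
    return { batch, targeted };
  }
}
```

The split in `flush` is the scalability lever: N new rows cost one request instead of N, while a later single-row edit never re-submits the whole set.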
System Thinking
States & edge cases
- Draft vs saved data handling
- Loading/empty/error states for table + forms
- Permission behavior (who can add/edit vs view)
- QA rejection loops and required fixes before finalization
- Partial data handling and safe fallbacks
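The QA rejection loop and finalization gate described above amount to a small state machine. A minimal sketch, with assumed state names (not the production values):

```typescript
// Illustrative lifecycle for a report/record moving through QA.
// State names are assumptions for this sketch.
type State = "draft" | "submitted" | "qa_rejected" | "finalized";

const transitions: Record<State, State[]> = {
  draft: ["submitted"],
  submitted: ["finalized", "qa_rejected"], // QA approves or rejects
  qa_rejected: ["submitted"],              // researcher fixes, resubmits
  finalized: [],                           // terminal: visible to clients
};

function canTransition(from: State, to: State): boolean {
  return transitions[from].includes(to);
}
```

Encoding the loop explicitly is what guarantees the governance rule: nothing reaches `finalized` without passing through QA, and a rejection always routes back through the researcher.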
Shared patterns reused across products
- Tables: sorting, inline filters, pagination, bulk actions
- Forms: validation, conditional fields, error prevention
- Notifications: status changes, QA flags, and workflow alerts
Outcomes
- Improved cross-product consistency (same field semantics across internal, QA, client, and report outputs)
- Reduced user friction with structured multi-entry capture + clear draft/save model
- Reduced unnecessary request volume via batch save behavior (scalable foundation)
- Strengthened QA governance by ensuring data quality checks occur before final report completion