Research Phase
Correspondence & Cluster represents a sophisticated suite of AI-powered data visualization tools that transform how analysts identify patterns and relationships in complex datasets. This case study explores my leadership of the redesign process, balancing technical machine learning capabilities with human-centered design principles to create a more intuitive path to insights. The project demanded deep understanding of both the underlying AI algorithms and the mental models of users working with abstract statistical concepts—a challenge uniquely suited to my expertise in AI UX design.
Project Overview
As this was an existing product with established users, I needed to conduct thorough research to understand how the product functioned, identify opportunities for improvement, and define clear problem statements.
Initial Discovery
My research phase began with over 15 hours of stakeholder interviews spanning the entire project ecosystem—from developers and designers to leadership and account managers. I complemented this with 8 in-depth user interviews, observing how they interacted with the existing system and documenting their frustrations and workarounds.
To ensure analytical rigor, I conducted a heuristic evaluation against established UX principles, performed competitive analysis of similar AI visualization tools, and analyzed usage data to identify where users were struggling or abandoning the process altogether.
The Problem Statement
After synthesizing research findings and collaborating with data scientists, we defined our primary problem statement:
"Users need to be able to create unique groupings of people and chart those relationships against their brand in a way that makes complex data-driven insights accessible."
As Correspondence and Cluster are two complementary AI features, I'll focus first on the initial component of our goal: "Users need to be able to create unique groupings of people." This is where Cluster comes in.
Understanding AI Technology
Cluster Analysis
Cluster Analysis is a machine learning technique that identifies patterns in data and groups similar items together based on their characteristics. As an unsupervised learning approach, it helps users discover natural groupings without predefined classifications.


♡ Group 1: Organized by shape (less optimal)
♡ Group 2: Organized by color (less optimal)
♡ Group 3: Shows the maximum variety with minimal groupings (optimal)
The cluster algorithm evaluates multiple potential groupings and presents the one that maximizes between-group differences while minimizing within-group variations.
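To make the mechanics concrete, here is a minimal sketch of that idea using scikit-learn's KMeans on synthetic "consumer" attributes. The library, data, and parameters are illustrative stand-ins, not the product's actual clustering engine.

```python
# A toy illustration of cluster analysis: group synthetic "consumers" by two
# attributes and inspect how the algorithm minimizes within-group variation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical consumer attributes, e.g. price sensitivity and outdoor affinity
consumers = np.vstack([
    rng.normal(loc=[0.2, 0.8], scale=0.05, size=(50, 2)),   # budget outdoor fans
    rng.normal(loc=[0.8, 0.7], scale=0.05, size=(50, 2)),   # premium outdoor fans
    rng.normal(loc=[0.5, 0.2], scale=0.05, size=(50, 2)),   # indoor-leaning shoppers
])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(consumers)

# inertia_ is the within-cluster sum of squares: the quantity the algorithm
# minimizes so that members of each group stay as similar as possible.
print("Cluster sizes:", np.bincount(model.labels_))
print("Within-group variation (inertia):", round(model.inertia_, 3))
print("Group centers:\n", model.cluster_centers_.round(2))
```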
When applied to consumer data, this might look like:

This addresses the first part of our challenge, creating unique groupings of people. For the second part—charting relationships against brands—we needed Correspondence Analysis.
Correspondence Analysis
Correspondence Analysis is a dimensionality reduction technique that visualizes relationships between categorical variables in a low-dimensional space, typically 2D or 3D.

If we place two distinct items (like pearls) in this space:
♡ Pearl 1: White, luminous, common
♡ Pearl 2: Black/onyx, less shiny, rare, expensive
The AI algorithm places them at opposite sides of the space due to their contrasting attributes. This same principle applies to the consumer groups identified by Cluster:

By adding brands to this visualization (Patagonia, REI, Apple, Microsoft), users can see how different consumer segments relate to their brands and competitors.

In real-world applications with hundreds of data points, the visualization forms more of a "galaxy" shape, with each point positioned relative to all others based on their unique characteristics.
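For readers curious about the mechanics, correspondence analysis can be computed from a contingency table of segments versus brands via a singular value decomposition of the standardized residuals. The sketch below uses a small hypothetical table; the segment names and counts are invented for illustration, and only the brand names come from the example above.

```python
# Minimal correspondence analysis: project consumer segments and brands into
# a shared 2D space from a (hypothetical) contingency table of affinities.
import numpy as np

segments = ["Outdoor Enthusiasts", "Tech Professionals", "Value Shoppers"]
brands = ["Patagonia", "REI", "Apple", "Microsoft"]

# Hypothetical counts, e.g. respondents in each segment who favor each brand
N = np.array([
    [90, 80, 20, 10],
    [15, 10, 95, 70],
    [30, 45, 25, 40],
], dtype=float)

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row (segment) masses
c = P.sum(axis=0)                    # column (brand) masses

# Standardized residuals: how far each cell deviates from independence
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates: similar profiles land close together in this space
row_coords = (U * sigma) / np.sqrt(r)[:, None]     # segments
col_coords = (Vt.T * sigma) / np.sqrt(c)[:, None]  # brands

for name, (x, y) in zip(segments, row_coords[:, :2]):
    print(f"{name:20s} -> ({x:+.2f}, {y:+.2f})")
for name, (x, y) in zip(brands, col_coords[:, :2]):
    print(f"{name:20s} -> ({x:+.2f}, {y:+.2f})")
```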
Defining Phase
User Persona: "Alex" - The Insights Analyst

After interviewing current users, I developed a persona to guide our design decisions:
♡ Role: Market Research Manager
♡ Goals: Identify meaningful consumer segments, map brand positions in the market
♡ Pain points: Overwhelmed by complex UI, uncertain about data requirements, difficulty interpreting visualizations
Identifying Key UX Issues
Through user observations and heuristic evaluation, I uncovered several critical issues hampering the user experience.
Cognitive Overload emerged as the primary concern, with users confronting 13 distinct UI sections before even scrolling. This violated Miller's Law, overwhelming users' working memory capacity and creating significant mental friction. The control panel exacerbated this by mixing tools for both Cluster and Correspondence features, forcing users to sift through irrelevant options.
The information hierarchy worked against users' goals, with critical actions buried among lesser options while the least important elements (like segment comparison) commanded the strongest visual weight. This ran counter to the Von Restorff effect, which suggests that the most important elements should be made visually distinctive to guide attention.
Workflow inefficiencies created unnecessary barriers to insight. The system required 6 data points even when users only needed Cluster analysis (which required just 3), forcing them to run both analyses simultaneously and wasting computational resources. The interaction model further limited users by only allowing vertical label movement, restricting their ability to organize their visual space.
Perhaps most critically for an AI visualization tool, users faced significant interpretation challenges with the correspondence view. Confusion about axis meaning, unclear performance models, and difficulty understanding relationships between data points meant that even when the AI generated valuable insights, they often went unrecognized or misinterpreted.
Addressing User Flows
After auditing the entire user journey, I identified three critical flows that needed significant improvement to unlock the full potential of the AI visualization capabilities.
For the Data Input Flow (Composer), I removed artificial constraints by reducing the minimum requirement from 6 to 3 data points when users only needed Cluster analysis. This seemingly small change dramatically improved the onboarding experience, particularly for new analysts exploring the tool's capabilities. I implemented contextual help that appeared only when needed, preserving a clean interface for power users while providing guidance for newcomers. The addition of in-line validation created immediate feedback loops, helping users understand what the AI needed to generate meaningful visualizations.
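The validation rule itself is simple. Below is a hypothetical sketch of the kind of in-line check involved, using the 3- and 6-point minimums from this project; the function name and message wording are illustrative, not production code.

```python
# Hypothetical in-line validation for the Composer: Cluster-only runs need 3
# data points, while adding Correspondence raises the minimum to 6.
MIN_POINTS = {"cluster": 3, "correspondence": 6}

def validate_data_points(num_points: int, analyses: set[str]) -> list[str]:
    """Return user-facing messages describing what is still missing."""
    messages = []
    for analysis in analyses:
        required = MIN_POINTS[analysis]
        if num_points < required:
            messages.append(
                f"{analysis.title()} needs at least {required} data points "
                f"(you have {num_points})."
            )
    return messages

print(validate_data_points(4, {"cluster"}))                    # [] -- ready to run
print(validate_data_points(4, {"cluster", "correspondence"}))  # asks for more data
```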

The Cluster Analysis Flow received a complete structural overhaul, separating it into two distinct cognitive stages: Solution Selection and Solution Breakdown. This aligned with how analysts actually work—first deciding on the appropriate clustering solution, then diving into its composition. By showing only relevant tools for each step, the interface respected users' cognitive load while maintaining access to powerful functionality. The improved visual representation of groups made the AI's clustering decisions more transparent and comprehensible.
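In code terms, the two stages map roughly onto "score candidate solutions" and "inspect the chosen one." The sketch below illustrates that split with scikit-learn and synthetic data; it is a conceptual stand-in, not the product's implementation.

```python
# Stage 1 (Solution Selection): compare candidate clustering solutions.
# Stage 2 (Solution Breakdown): inspect the composition of the chosen one.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
data = rng.normal(size=(300, 4))          # stand-in for consumer attribute vectors

# --- Solution Selection: score a handful of candidate solutions -------------
candidates = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    candidates[k] = silhouette_score(data, labels)

best_k = max(candidates, key=candidates.get)
print("Silhouette by solution:", {k: round(s, 3) for k, s in candidates.items()})
print("Selected solution:", best_k, "clusters")

# --- Solution Breakdown: how the chosen solution is composed ----------------
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(data)
for cluster_id, size in zip(*np.unique(labels, return_counts=True)):
    share = size / len(data)
    print(f"Group {cluster_id}: {size} people ({share:.0%} of the audience)")
```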

For the Correspondence Flow, I enhanced the 2D visualization with more intelligent labeling that conveyed relationships more clearly. The model performance metrics—previously hidden in separate panels—were integrated directly into the visualization, creating immediate understanding of data quality. The control panel was simplified with focused, contextual options that appeared when relevant, reducing the learning curve while maintaining analytical power.
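One way to fold model performance directly into the chart is to express each axis's share of explained inertia in its label. The snippet below shows that idea with placeholder singular values; the product's actual metrics and wording may differ.

```python
# Surface a correspondence model's quality in the chart itself by labeling each
# axis with the share of total inertia (variance) it explains.
import numpy as np

singular_values = np.array([0.52, 0.31, 0.08])   # placeholder values from a fitted model
inertia = singular_values ** 2
explained = inertia / inertia.sum()

x_label = f"Dimension 1 ({explained[0]:.0%} of inertia)"
y_label = f"Dimension 2 ({explained[1]:.0%} of inertia)"
quality = f"Map quality: {explained[:2].sum():.0%} of relationships shown"

print(x_label)   # "Dimension 1 (73% of inertia)"
print(y_label)
print(quality)
```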

Design Process
Iterative Design Approach
I employed a systematic design process centered around continuous validation and refinement. Beginning with low-fidelity wireframes tested with five core users, I focused on establishing the fundamental structure before adding complexity. This early testing revealed critical insights about mental models that informed subsequent iterations.
For key decision points in the interface, I implemented A/B testing and iterative design reviews to evaluate competing approaches to interactions and information architecture. This data-driven approach allowed us to move beyond subjective preferences to measurable performance improvements.
Mid-fidelity prototypes served as the bridge between concept and implementation, providing enough detail for data scientists to evaluate analytical integrity without getting lost in visual details. These collaborative sessions with technical specialists proved invaluable for ensuring the redesigned interface accurately represented the AI's capabilities and limitations.
The high-fidelity designs underwent rigorous validation with both users and stakeholders, confirming that the interface balanced analytical power with intuitive usability—the central challenge of effective AI UX design.
Composer Redesign

The initial redesign addressed several key issues:
♡ Progressive disclosure of requirements for new users
♡ Contextual guidance triggered only when needed
♡ Clear error messaging with actionable solutions
♡ Visual indicators of minimum requirements

By providing just-in-time information, we empowered users without overwhelming them, reducing their anxiety about "doing it wrong."
Cluster Redesign

For the Cluster interface, I focused on:
♡ Visual representation of how groups relate to the whole
♡ Separation of concerns between solution selection and analysis
♡ Information hierarchy that guides users through complex decisions
♡ Streamlined interactions for comparing and selecting solutions

After multiple iterations and user tests, the final design provided a clear visual breakdown of groups while maintaining analytical depth—striking the balance between simplicity and power.
Correspondence Designs

The Correspondence visualization received significant improvements:
♡ Intelligent label placement based on quadrant position
♡ Integrated performance model within the visualization itself
♡ Enhanced chart settings with clearer controls and defaults
♡ Multi-directional label movement for better customization

Although we explored a true 3D visualization that users loved in testing, timeline and technical constraints led us to enhance the 2D visualization instead—focusing on label clarity and improved control placement.
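The quadrant-based label placement mentioned above boils down to a simple rule: offset each label away from the origin so text never crowds the center of the map. A hypothetical sketch of that rule:

```python
# Hypothetical quadrant-based label placement: anchor each label on the side
# of its point that faces away from the origin, keeping the crowded center clear.
def label_placement(x: float, y: float, offset: float = 0.02):
    """Return (label_x, label_y, horizontal_alignment) for a point at (x, y)."""
    h_align = "left" if x >= 0 else "right"       # text grows away from the y-axis
    dx = offset if x >= 0 else -offset
    dy = offset if y >= 0 else -offset            # nudge above or below the point
    return x + dx, y + dy, h_align

# A point in the upper-right quadrant gets its label up and to the right:
print(label_placement(0.4, 0.7))    # roughly (0.42, 0.72, 'left')
# A point in the lower-left quadrant gets its label down and to the left:
print(label_placement(-0.3, -0.5))  # roughly (-0.32, -0.52, 'right')
```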
Implementation & Delivery
Cross-Functional Collaboration
As Product Owner, I led implementation through orchestrated cross-functional collaboration. My approach centered on granular feature definition—creating over 300 meticulously defined tickets that served as the project's DNA. These decomposed complex capabilities into implementable units while preserving the overall vision.
Weekly sprint demos became crucial alignment touchpoints, where stakeholders could see progress, provide feedback, and ensure we remained on track. These sessions weren't simply about demonstrating functionality; they became forums for collaborative problem-solving when technical challenges emerged.
Daily scrums with developers and data scientists facilitated tight feedback loops between design and implementation teams. I established clear, testable acceptance criteria for each feature, ensuring that technical implementation aligned with user needs. This disciplined approach to product management enabled the team to tackle a complex redesign with minimal rework and maximal alignment.
Technical Constraints & Solutions
The implementation phase revealed several technical challenges that required creative solutions at the intersection of design and engineering. Real-time correspondence mapping presented a particular challenge for interactive exploration. I collaborated with developers and data scientists to optimize the computational approach, finding the balance between analytical precision and performance. Together, we developed a solution that provided immediate feedback using a simplified algorithm, then refined the results with more sophisticated processing in the background.
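The general pattern, an inexpensive approximation rendered immediately with the heavier computation refining it in the background, looks roughly like the sketch below. TruncatedSVD, the thread pool, and the data here are illustrative stand-ins, not the production pipeline.

```python
# Two-stage pattern: return a fast approximate projection for immediate
# feedback, then refine with the full computation in the background.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from sklearn.decomposition import TruncatedSVD

def quick_projection(matrix: np.ndarray) -> np.ndarray:
    """Cheap randomized 2D projection shown to the user right away."""
    return TruncatedSVD(n_components=2, random_state=0).fit_transform(matrix)

def refined_projection(matrix: np.ndarray) -> np.ndarray:
    """Exact SVD-based projection computed while the user explores the preview."""
    u, s, _ = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :2] * s[:2]

data = np.random.default_rng(0).normal(size=(500, 40))

preview = quick_projection(data)            # render this immediately
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(refined_projection, data)
    # ... the UI stays interactive with `preview` here ...
    final = future.result()                 # swap in when ready

print("Preview shape:", preview.shape, "| refined shape:", final.shape)
```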
Browser compatibility constraints threatened to undermine our visualization ambitions, particularly for users in enterprise environments with legacy systems. Rather than compromising the experience for all users, we used accessible design approaches that ensured the tool worked across environments while still delivering cutting-edge capabilities where supported.

Launch & Impact
After six months of research, design, and development, we successfully launched the new version of Correspondence and Cluster.
Quantitative Results
The redesign produced remarkable performance improvements across key metrics. Feature usage surged by 45% within just three months of launch, indicating user adoption and satisfaction. Support tickets related to these features dropped by 75%, freeing up considerable customer success resources while indicating improved usability and self-service capabilities.
Task completion rates saw a dramatic 41% improvement, transforming what had been a source of user frustration into a productive workflow. New user adoption of advanced analytics features increased by 28%, suggesting that the redesigned interface made sophisticated AI analysis more approachable. Perhaps most tellingly, average session duration with the tool increased by 25%—users were not only able to complete tasks but were engaging more deeply with the analytical capabilities.
Qualitative Feedback
♡ "I can finally understand what the data is telling me without having to be a statistics expert."
♡ "This is the best version of Correspondnce and Cluster I have seen in my 15 years working here"
♡ "The new clustering visualization makes it much easier to explain findings to my clients."
♡ "What used to take me three hours now takes about 45 minutes."
Review of the Project
Technical Insights
This project reinforced that data-driven UX decisions are absolutely essential when designing AI/ML interfaces. The complexity of these systems means that intuition alone is insufficient; we needed empirical validation for design choices. I discovered that algorithm transparency directly correlates with user trust—when users understood how the AI formed its clusters, they were more likely to act on the insights. The progressive complexity approach we implemented proved particularly effective, allowing novice users to engage with basic functionality while providing experts with advanced capabilities through contextual disclosure.
Process Insights
Early involvement of data scientists in the design process proved transformative for this project. Their expertise helped identify visualization limitations before we invested in unimplementable designs, and their insights into algorithm behavior informed how we communicated system capabilities to users. Regular user testing with actual datasets caught numerous issues we wouldn't have anticipated through abstract exercises, particularly around performance with diverse data types. Perhaps most critically, I learned that balancing technical accuracy with usability required careful, deliberate tradeoffs—decisions that needed to involve both technical and design stakeholders.
Personal Growth
This project substantially deepened my understanding of machine learning visualization techniques, particularly how different dimensional reduction approaches impact user comprehension. I significantly improved my ability to translate complex statistical concepts into intuitive interfaces without oversimplifying the underlying mechanisms. The cross-functional nature of the work enhanced my leadership skills, particularly in facilitating productive collaboration between technical specialists from different domains who often approached problems with different priorities and vocabulary.
Future Roadmap
Based on post-launch user feedback and market trends, I developed a roadmap for future enhancements that would further differentiate the product. The true 3D visualization that users responded so positively to during testing remains a priority, with implementation planned to deliver this capability without compromising performance.
Automated insight generation using natural language processing represents the next frontier for our AI capabilities—moving beyond visualization to proactively identify patterns and anomalies that might otherwise go unnoticed. This would transform the tool from a visualization system to a true analytical partner. If you would like to see how, see Ask CAIT.