Usability Testing
QUALITATIVE RESEARCH | USER RESEARCH
With a background in psychology and human factors, I often take on research responsibilities - sometimes independently, sometimes in partnership with a dedicated UX researcher. That work has included everything from SUS scoring and concept testing to running feedback loops and helping teams align when direction isn’t clear. Here’s a look at some of the research work I’ve led or contributed to across complex internal tools.
Concept Testing
Context: Across several projects, my teams have needed to test and align on experience structure - especially when there were competing perspectives on how workflows should be organized. In one case, we explored a program-first versus people-first model for manager-driven reward and promotion flows. Another effort focused on shaping a new decision-making experience for recommending annual rewards.
My role: I helped design and run early concept testing using low-fidelity A/B mocks and flow comparisons. We scoped lightweight studies and structured feedback sessions to identify friction points and clarify user expectations. In some cases, I took the lead on testing when research resources were limited - helping run participants, synthesize insights, and bring stakeholders back into alignment.
Impact: The feedback helped us land on a hybrid structure that balanced business needs with manager usability. For newer workflows, early testing surfaced structural issues before high-fidelity work began. Several of the concepts later scored well in UX evaluations (3.7+ across usability, clarity, and confidence), and the process helped unblock teams that had been stuck in directional gridlock.
System Usability Scale (SUS) Scoring
Context: As part of a broader redesign of a manager-facing HR tool, we needed to evaluate whether a new information architecture, task structure, and justification flow improved usability.
My role: I helped create and run a System Usability Scale (SUS) survey at a time when no dedicated researcher was available. Using before-and-after walkthrough videos of the experience, I analyzed the quantitative scores to identify key pain points and areas for improvement.
Impact: After refinements, SUS scores rose by 54.5 points (Before = 24.5, After = 79.0) - a marked improvement that confirmed the changes strengthened both usability and user confidence.
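For context on how these numbers are produced: the standard SUS instrument asks ten alternating positively and negatively worded statements, each rated 1-5, and normalizes the answers to a 0-100 score. Here is a minimal sketch of that standard scoring formula (the respondent ratings below are hypothetical, not data from this study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 ratings.

    Odd-numbered items are positively worded (higher is better);
    even-numbered items are negatively worded (lower is better).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten ratings on a 1-5 scale")
    raw = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # positive vs. negative items
        for i, r in enumerate(responses)
    )
    return raw * 2.5  # scale the 0-40 raw sum up to 0-100

# Hypothetical respondent (not real study data)
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))  # 92.5
```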
Heuristic Evaluation
Context: Across multiple efforts, there were points in the design process where research support wasn’t immediately available - but decisions still needed to move forward. To reduce risk and maintain quality, we needed a way to quickly identify major usability issues and inform early iteration.
My role: I ran informal heuristic evaluations to catch friction points in early design concepts and flows. Using UX scorecard categories like clarity, confidence, and navigation as a framework, I reviewed in-progress work for common usability pitfalls and documented areas for improvement. These evaluations were often used to unblock teams and help guide design discussions ahead of formal testing or validation.
Impact: Several key issues were caught and resolved before reaching engineering handoff or research review, saving time and reducing churn. In one case, the findings helped shape the structure of a newly proposed workflow, which later tested well in user studies and met target score thresholds across all UX categories.
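As a purely illustrative sketch of the threshold check described above (the ratings, the 3.7 target, and the category set are hypothetical stand-ins, not the actual scorecard data), comparing per-category averages against a target looks roughly like this:

```python
from statistics import mean

# Hypothetical 1-5 scorecard ratings gathered across review sessions
ratings = {
    "clarity":    [4, 4, 3, 5],
    "confidence": [4, 3, 4, 4],
    "navigation": [5, 4, 4, 3],
}
TARGET = 3.7  # assumed threshold, echoing the 3.7+ bar mentioned earlier

for category, scores in ratings.items():
    avg = mean(scores)
    verdict = "meets target" if avg >= TARGET else "needs iteration"
    print(f"{category}: {avg:.2f} ({verdict})")
```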
UX Tenets & Traps
Context: Throughout my time at Microsoft, I often worked with PMs, engineers, and business partners who needed help assessing the UX of early concepts. When traditional research processes weren’t possible, I used the Tenets & Traps cards to quickly evaluate experiences and facilitate grounded design discussions.
My role: I regularly led or supported Tenets & Traps sessions - walking through flows with partners and identifying design “traps” that might block user understanding, trust, or task flow. These sessions became especially useful for aligning on next steps when work was exploratory, misaligned, or lacking clear UX support. For many partners in the HR space, it also served as an introduction to UX thinking.
Impact: The approach helped teams course-correct early and raise the overall UX bar without requiring formal studies. In multiple efforts (including rewards workflows, pre-onboarding, and AVA), these reviews helped influence design decisions and even preempt larger issues before they reached engineering.