Understanding AWS Users' Primary JTBD When Using CloudWatch Synthetics' Console

Role
As the first and only UXR embedded in one of AWS's product teams, I led the end-to-end research process and delivered results in 4 weeks. The main stakeholders included the Product Lead, Senior PM, Principal Designer, and Senior Engineering Manager.

Problem
The team disagreed on the design direction of a new feature they were building, which required high engineering effort and would impact all of the product's users. Therefore, the Product Lead requested evaluative research to help the team align on the design solution. 

Approach
I identified the root cause of the disagreement through stakeholder interviews and saw the need to broaden the research scope by adding generative research questions. I conducted contextual interviews to learn about users' JTBD, followed by concept testing to evaluate prototypes.

Impact
User research revealed that the new feature wouldn't address the actual user need, and I identified a simpler solution that would solve the real user problem with less engineering effort. After I delivered the report and facilitated a discussion of the research insights, the team decided to deprioritize the new feature and aligned on a simpler one that would benefit both users and the business.

PRODUCT INTRODUCTION

Amazon Web Services (AWS) is the leading vendor in the cloud infrastructure services market. In Q1 2022, it held 33% of the market, followed by Microsoft Azure at 21% and Google Cloud at 8%.

CloudWatch Synthetics is one of AWS's monitoring products. It allows users to create scripted tests that simulate consumers' actions on a website, so teams can test their sites automatically and fix issues before real consumers encounter them.
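
For readers unfamiliar with the product, the sketch below shows what such a test (a "canary") can look like. It is a minimal example modeled on AWS's public heartbeat blueprint, not the team's actual code; the URL and step names are placeholders.

```typescript
// Minimal heartbeat canary, sketched from AWS's public Synthetics blueprint.
// 'Synthetics' and 'SyntheticsLogger' are modules provided by the canary
// runtime; the URL below is a placeholder endpoint.
const synthetics = require('Synthetics');
const log = require('SyntheticsLogger');

const checkHomepage = async () => {
    const url = 'https://example.com'; // placeholder endpoint to monitor

    // Get a Puppeteer page managed by the Synthetics runtime.
    const page = await synthetics.getPage();
    const response = await page.goto(url, {
        waitUntil: 'domcontentloaded',
        timeout: 30000,
    });

    // A thrown error marks the canary run as failed in CloudWatch.
    if (!response || response.status() !== 200) {
        throw new Error(`Failed to load ${url}`);
    }

    // Screenshots and logs become part of each run's result record.
    await synthetics.takeScreenshot('homepage', 'loaded');
    log.info(`Loaded page with title: ${await page.title()}`);
};

exports.handler = async () => {
    return await checkHomepage();
};
```

The pass/fail status, screenshots, and logs that each run produces are the test results displayed in the console views discussed below.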


PROJECT BACKGROUND & CHALLENGE

Synthetics' console offered 2 views for displaying all the test results, but both were fairly complicated (see images below). The team was therefore planning to build a 3rd view to improve users' experience with the console.

Console View 1

Console View 2

However, when the designer shared the prototype of the new feature with leadership, 2 different design directions emerged and the team couldn’t align. 

Since this was a high-profile feature, the Senior PM asked me to conduct evaluative research on the prototype. Upon receiving the request, I knew I had to unpack the problem to understand whether that was the best approach.

TIMELINE

  • As my internship was ending, I had 4 weeks to complete the end-to-end research process, from scoping, recruiting, execution, and analysis to presentation.

  • I want to highlight that analysis and sharing overlapped with research execution: my initial scoping work had shown me how little alignment there was on the team, so I wanted to make sure the team would buy into the research findings through early and frequent sharing.

PLANNING & SCOPING

Getting the context

I started with secondary research because I didn't want to repeat what had already been done.

Afterward, I conducted stakeholder interviews with the product team to determine exactly what we should research and with whom. The key stakeholders included the Principal Designer, the PM, the SDM, and the Senior PM who had requested the research.

Changing the research scope

  • Initially, I was only asked to conduct evaluative research on 2 prototypes of the new feature to resolve disagreements about the design direction.

  • As I investigated the cause of the disagreement, I found that the different design directions were actually driven by 2 conflicting assumptions about users' JTBD, neither of which was supported by research.

Therefore, I realized it was important for the team to step back and align on which problem to solve by first understanding users' JTBD when using Synthetics' console.

  • After considering the risks and timeline, I decided to change the research scope to include both a generative research question and an evaluative one.


METHOD SELECTION & EXECUTION

To answer both the generative and evaluative research questions, I combined a contextual interview and concept testing in a single 1-hour session. Below is the rationale for selecting each method:

  • Contextual interview

    • Goal: to understand participants’ current JTBD through observation

    • How & Why: I observed how participants actually used the console. My participants were domain experts in the tools they used, so observing their behavior allowed me to capture details they couldn't recall, couldn't articulate, or weren't even aware of, giving me a more in-depth understanding of their experiences.

  • Concept testing

    • Goal: to understand if the 2 designs could solve any problems and gather feedback for improvement

    • How & Why: I showed participants the prototype and asked them to think aloud. I also asked clarifying questions to connect their comments back to the use cases they had mentioned earlier, to prevent them from speaking in the abstract.

As my stakeholders wanted a broad spectrum of understanding from existing users, I spoke with 13 participants across a mixture of engineering roles, who were either external users or internal Amazon employees.

ANALYSIS & STAKEHOLDER BUY-IN

Research findings contradicted the team's assumptions

  1. Contrary to what my stakeholders believed, participants didn't find the new feature beneficial for either of the 2 JTBDs the team had made assumptions about.

  2. Furthermore, the whole team had missed a more important user problem, one that designing for either assumption alone wouldn't solve.

Getting buy-in from stakeholders

As the findings contradicted what many stakeholders believed, I wanted to make sure they would buy into the research and wouldn't feel taken by surprise.

My approaches included inviting stakeholders to attend sessions, debriefing together, sharing a summary after each session on Slack, and showing video snippets so they could see the evidence for themselves.

PRESENTATION

Because I had kept my stakeholders involved throughout the study, everyone was already on board with the findings by the final presentation. I then facilitated a discussion in which the team resolved their disagreements about the design solution and focused on which JTBD to prioritize as part of the product strategy.

IMPACT

Strategic Impact

  • The team decided to re-prioritize which features to build after learning about users' primary JTBD.

Product Impact

  • Research findings provided direction for the team to redesign Synthetics' existing console.

  • The team learned how the new feature could add value for a 3rd JTBD and how to improve the prototype.

Team Impact

  • Reduced disagreement and helped the team make decisions based on research data.

  • Introduced "observation" as a research approach to the team and demonstrated its impact.

REFLECTION

  • I learned the importance of tailoring my sharing approach to get buy-in for research findings. For instance, one stakeholder didn't have time to attend any of the sessions and doubted the research findings when I shared written summaries in Slack. So I put together the most relevant video snippets and shared them with him; after watching those curated snippets, he changed his perspective.

  • I learned how to be more agile and get the most out of the same number of participants by incorporating both generative and evaluative components in the same study when appropriate.
