Evaluating the Discoverability of
Recommended by Humans on HBO Max
UX Research Intern @ HBO Max
Nov 2020 (3 weeks, part-time)
Executive Summary
This study assessed the discoverability, understandability, and effectiveness of the Recommended by Humans feature component on the HBO Max homepage. This component was meant to direct viewers to a new content recommendation and to introduce the new human-powered recommendation engine behind it.
Role: UX Researcher
Timeline: 3 Weeks, part-time (Nov 2020)
Stakeholders: Designers and Product Managers working on Recommended by Humans
Methods: Interviews, Cognitive Walkthrough, A/B Testing
Tools: UserTesting, Miro
Business Problem
Don't know what to watch? In 2020, HBO Max introduced "Recommended by Humans," a human-powered tool that set the platform apart from its algorithm-driven competitors.
This study focused on the Recommended by Humans feature component, which sits on the homepage and was meant to direct viewers to a new content recommendation while introducing the new recommendation engine.
But with its limited Beta rollout, the product team had questions about how easily users could actually discover and understand this feature component on the HBO Max homepage.
Product Managers and Designers wondered whether the "Recommended by Humans" feature component on the homepage was making a strong enough impression. Was it prominent enough to catch users’ attention, or was it getting lost in the homepage layout? Did it successfully drive engagement, or were viewers simply overlooking it? And most importantly, did users grasp the value of this human-curated recommendation tool?
Research Objectives
Thus, at a high level, the product team wanted to learn more about the feature component's discoverability, understandability, & effectiveness. With multiple ways to design the overall component, which combination of elements would be most effective?
To unpack this further, I outlined specific research goals and questions to formulate what to tackle in interviews.
Research Goal #1
Evaluate the discoverability of this feature component on the homepage.
Research Goal #2
Evaluate the understandability of this feature component’s purpose and promotion.
Research Goal #3
Assess which individual design variations would be the most effective way to display (UI), promote (content meaning), and provide access (function) to featured HBO Max content.
Approach
To start, I collaborated with a designer to build two prototypes to test with users. This helped answer the second and third research goals, as we wanted to know how well design variations of individual parts (ex: title, image) conveyed their intended purpose.
By creating two distinct prototypes based on all the individual design variations, we were able to evaluate how different design choices impacted discoverability and understandability. For instance, Prototype A featured a celebrity-based recommendation with a standard title and details, while Prototype B used an influencer-based recommendation with a quote-style title and an image of the show.
Gathering Stakeholder Hypotheses
To align stakeholders and ensure easy insight delivery, I started off by collecting their initial hypotheses on how well the feature component would perform with users.
Before kicking off the study, I facilitated a session where stakeholders aligned on hypotheses by color-coding statements red, yellow, or green. This was beneficial for a couple of reasons:
It created a shared understanding of our goals and of what needed validation or refutation.
It revealed which hypotheses were highly contentious versus those we already agreed on, letting me prioritize key areas of focus and anticipate where deeper discussions might be needed.
🤔 Q&A: How did gathering hypotheses also help in delivering research insights?
It turned out that leveraging a traffic-light denotation system also ensured a quick turnaround of top-line insights following study completion. Immediately after the last interview, I recolored the hypotheses as a top-line for my stakeholders, building anticipation for my research read-out. PS: It worked!
Study Design
Over a week, I conducted 60-minute interviews with 8 HBO Max subscribers on UserTesting.com. These sessions were structured in two back-to-back parts:
Pre-study: Recruitment
Using UserTesting.com, I recruited 8 HBO Max subscribers from various regions across the United States, representing a mix of ages, viewing habits, and streaming preferences.
Part 1: Blind Cognitive Walkthrough
Participants were randomly assigned Prototype A (homepage with celebrity rec) or B (homepage with influencer rec) to view
Using screen-share, participants scrolled through the homepage prototype, narrating their thoughts
We then dove into the Recommended by Humans feature component specifically, probing for impressions, understandability, and points of confusion
Part 2: Build-Your-Own Activity
The second half of this study let participants provide feedback in a fun and engaging way: they put on their “designer” hat and tinkered with the design variations of the prototypes themselves.
Activity Setup: I created a Miro board that contained all the design variations of the Recommended by Humans feature. These included options for images, titles, CTAs, and other components we wanted to test!
Choice Selection: Participants then built their preferred feature component by dragging and dropping different variations onto an empty component template. For each element, they could choose only the one variation they felt would be most appealing.
Explaining Choices: As participants built their idealized component, I asked them to explain their reasoning for each selection. For example, I asked why they preferred one type of image over another or why a specific title style resonated with them.
🤔 Q&A: Why was this a blind study? What does that mean?
This study was conducted as a blind study to minimize bias, given that participants may come with pre-existing notions of HBO Max. To accomplish this, the prototypes were stripped of the HBO Max logo and other branding elements, so participants were unlikely to realize they were evaluating HBO Max or speaking with an HBO Max researcher (me!).
Insights & Impacts
Insights provided design guidance to Product Designers, Content Designers, and Product Managers on which components to leverage in order to optimize for discoverability and understandability.
Here's a sample insight from the report I presented during my final week. Please reach out for more!
🤔 Sample Insight
Seeing an image of the recommender in the component increases trust in the authenticity of a recommendation, strengthening the feature's value proposition.
Design variations of the feature component that included an image of the recommender were received more positively than those without. An image of the recommender makes the recommendation feel more personal and authentic, reinforcing the purpose of Recommended by Humans as a human-powered recommendation system.
What do participants say?
“The picture makes it feel a little bit more personal, whereas without the picture, it's more like the platform is recommending it. I know it says it's by humans, but I still feel like it’s the computer recommending it whereas with a picture of Evelyn [the recommender], it's a little bit more real.” — Participant
So, what does this mean for you?
Research shows that there can be value in opting for design variations that feature a picture of the human recommending the content, so that "Recommended by Humans" can indeed feel recommended by humans.
💥 Research Impact from this Insight
Although my ability to directly measure the long-term impact of this research was limited by the nature of internship work, the findings did influence design stakeholders to opt for the design variations that include the recommender's image.
Learnings
I loved being able to research a B2C product for the first time! Here are some key takeaways.
Constant communication with stakeholders is key when turnarounds are tight.
As this was a 3-week (part-time) study, I stayed in constant communication with stakeholders, in person or async, so that I could quickly build out the study. It was also key to keep them engaged through strategies such as delivering a quick top-line summary immediately after the last interview to keep them hooked.
What's crucial to efficiency is the willingness to ask for help.
Being on a small team (there were 4 UXRs total) allowed me to be very autonomous and take full ownership of all parts of the research process. It was key to be disciplined, ask for a helping hand when needed, and not be afraid to seek advice from others on the research team in order to move things along.
Many Thanks
Many thanks to the UXR team — Susan, Ayo, Amanda, and Michael — for giving me the chance to work here and for supporting me as I tackled research on my own! Special thanks to Susan and Ayo for their 1:1 mentorship and insight.