A Shopping Platform Combatting Fake Product Reviews
My focus at Masse, a seed-stage, high-growth startup, was to demonstrate positive signs of stickiness to our investors by improving the user experience. In six months, I ran five experiments to validate our hypotheses.
Nuances of Stickiness
Masse's vision was to become the internet's largest source of truth for product recommendations; its mission, to be the go-to place to find reliable and trustworthy product reviews. The company had already amassed a community of 40,000 users sharing their experiences through its mobile app. I joined to study user behaviors and improve their experience.
“So how exactly does the app work? Upon creating a profile, Masse users may ask for a recommendation. A current example includes, “I’m looking for an old school coffee machine, drip, not Nespresso. Any ideas?” Fellow Masse-mates can then respond with any item within the app’s catalogue of objects (there’s a full inventory from Jet, Glossier, Maisonette, Sephora) or go beyond and cull from the entire world wide web, accompanied with a personal review/recommendation. In this case, a Mr. Coffee Maker fit the bill. And afterwards, as Brockhoff explains, “it’s human nature to want to help out a user, a friend.” To put it in Instagram terms, it’s the like for like philosophy; those who have had their recommendations fulfilled are motivated to pay it forward.”
Ramzi, Lilah. “Introducing Masse, A New Shopping Platform Combatting Fake Reviews.” Vogue, 13 Nov. 2018.
Making recommendations proved addictive, with minimal incentive beyond community validation and the opportunity to build social capital. The easier it was to answer a user's question with a recommendation, the stronger the community became. That addictive behavior was a testament to user engagement; to design for stickiness, however, we dove into NPS (Net Promoter Score) as a critical retention metric that potential investors would recognize.
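For context on the metric itself: NPS is derived from the standard 0-to-10 "how likely are you to recommend us?" survey question, as the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6). A minimal sketch of the arithmetic, illustrative only and not Masse's analytics code:

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey responses:
    % promoters (9-10) minus % detractors (0-6); passives (7-8) count
    toward the total but neither bucket."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 10]))  # 3 promoters, 1 detractor of 6 -> 33.3
```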
I facilitated a workshop to get us closer to a problem statement we could design for. We hypothesized that if we delivered value every time users opened the app, they would keep Masse in mind, and that they would enjoy answering questions if acknowledged for their contributions.
Hypothesis 1: If we delivered value every time the user opened the app, they would keep Masse in mind
New User Drop-Off
Finding the right questions was overwhelming, and 75% of new users dropped off within one week. "It's something about the layout of this app where it feels like everything is very crowded. For example, everything is close together. So it kind of looks very similar overall," remarked a user tester. The product was not delivering on its promise for the average user, and we needed to reduce the mental calories spent navigating the content.
By redesigning the Q+A module, we addressed the visual hierarchy of the entire app, making it easier to scan and delivering immediate value to every user upon opening the app. We aimed to improve the one-week drop-off metric by making thread browsing, the platform's number one use case, more efficient.
The thread module lacked a clear scanning pattern and appeared visually cluttered. It presented an array of information: the question, hashtags, responses, product suggestions, and images in different sizes. To tackle the complexity, I arranged the content according to user requirements and streamlined the design by eliminating unnecessary details, achieving a more uniform look for fonts and images. The design was fine-tuned to follow a direct F-shaped scanning pattern, highlighting essential information with a minimal set of differentiating visual elements.

Figure 1.1. Before the redesign, the module is separated into three elements: the question, the answer, and a CTA button

Figure 1.2. Extraneous content was removed to create a unified module focusing on the question and product imagery. The introduction of Stats helped users decide whether to tap into the thread.
Recommendation Fallout
The Recommendation flow was a gauntlet for users, with only a 57% completion rate. Nearly 30% of users who came to answer a question abandoned it before reaching the second step of the process. Testing showed that users were motivated to answer and would try multiple times to complete the task. The fix was a low-effort, high-payoff enhancement easily achieved with best practices.
The layout placed a lot of responsibility on the user to figure out how to recommend a product. It assumed they would search for a product first; however, qualitative data told us their mental model was the opposite: they wanted to write their recommendation first and then select a product. Strategically, we needed the user to choose a product first, so as an interim solution I added visual signposts to make the two-step process clear. This solution proved successful, as testing results showed positive engagement.
Furthermore, our studies concluded that super users would likely recommend the same product more than once. We added a grid of recently recommended products to make writing a review more accessible. That empowered them to answer quickly, skipping the first two problematic steps in the flow and avoiding a 42% chance of fallout.
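To make the fallout figures concrete, the sketch below walks the drop-off arithmetic over a hypothetical funnel whose step names and counts were chosen to reproduce the reported rates (29.6% fallout at step one, roughly 42% across the first two steps, 57% overall completion). It is an illustration, not Masse's analytics data:

```python
# Hypothetical funnel counts chosen to match the reported rates.
funnel = [
    ("opened flow",      1000),
    ("selected product",  704),  # ~29.6% fallout at step one
    ("wrote review",      580),  # ~42% cumulative fallout after two steps
    ("submitted",         570),  # ~57% overall completion
]

start = funnel[0][1]
for (_, prev), (name, n) in zip(funnel, funnel[1:]):
    step_fallout = 100 * (1 - n / prev)   # drop-off at this step alone
    cumulative   = 100 * (1 - n / start)  # drop-off measured from the start
    print(f"{name}: {step_fallout:.1f}% step fallout, {cumulative:.1f}% cumulative")
```

Skipping the first two steps via the recently recommended grid bypasses exactly the stages where about 42 of every 100 attempted answers were being lost.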

Figure 2.1. Recommendation fallout detailing where user drop-off was occurring

Figure 2.2. The first step of the Recommendation flow where there was a 29.6% fallout rate

Figure 2.3. The first step of the Recommendation flow after usability enhancements
Figure 2.4. A prototype of the redesigned Recommendation Flow
Raising the Net Promoter Score
Quantitative research indicated that NPS scores increased when users reached the product detail page. I gleaned insights from user testing to determine what we could do to facilitate an intention to purchase. Product imagery was, second only to the question asked, the most influential feature in enticing click-throughs from the Discover tab. Our testing observation framework gave us a high-level view of which parts were meaningful, and where.
Intent to purchase emerged from the Recommendation thread, where users looked for social validation in particular. The stats surfaced the social reactions users relied on when deciding to buy a product: the number of 'Agrees' associated with a review was far more convincing than the 'Thank yous,' and the number of times a review was saved mattered as much as its comments. Price was another factor we considered in the Recommendation thread.
We optimized the Product Detail page for making a purchase. I explored layout variations that would bring the most attention to the call-to-action button and price, and found that predicted attention heatmaps helped validate designs before user testing. The final design combined the best results.
Figure 3.1. User testing results determined the priority of factors when redesigning to raise NPS. The first graph indicates the issues on each page along the pathway to the product detail (top left). The second graph shows the problems associated with the four user intentions: Browse, Purchase, Validation, and Answer (top right). Lastly, the most frequently reported issues are displayed in priority order (bottom).

Figure 3.2. The Discover tab

Figure 3.3. A Recommendation thread

Figure 3.4. A Product Detail page

Figure 3.5. Option A of the product detail page with the predicted attention heatmap

Figure 3.6. Option B of the product detail page with the predicted attention heatmap
Creating Channels
Personalization meant surfacing relevant content on Masse, and categorizing the Recommendations was the first step to simplifying discovery. For example, if your focus were solely on electronics, you could explore an electronics-oriented channel directly rather than wading through a multitude of unrelated content. This approach addressed several fundamental user needs:
1. Finding relevant recommendations
2. Discovering new products
3. Emphasizing the expertise of individuals (being associated with like-minded users)
The hashtags were written for our target personas, which helped group the 86k individual recommendations into searchable collections. The hashtags then became content for our paid ads. Users could download the app to find these collections, fulfilling the brand's promise.
Hamburger navigation was added to accommodate the requirement to browse channels; we had explored keeping the navigation in the tab bar, which caused much user confusion. We learned that our first iteration of the hamburger menu didn't match the user's mental model; our second iteration was a vast improvement. Channels were not as high a priority for our users as we had thought, so we moved them one level deeper in the information architecture.
The introduction of channels also encouraged users to search. They would type a query like 'lipsticks,' but their search wasn't targeted at a particular product like 'Glossier lipstick.' A more thorough analysis of search behaviors revealed that users were seeking specific attributes of a product, such as 'red lipsticks.' Armed with this understanding, we deduced that adopting an e-commerce-style search approach would provide the greatest assistance.
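To make "e-commerce-style search" concrete, here is a minimal sketch that matches query terms against attribute, category, and product-name facets, mirroring the keyword breakdown in Figure 4.6. The catalog entries, facet fields, and ranking rule are hypothetical illustrations, not Masse's actual implementation:

```python
# Hypothetical catalog entries; Masse's real index and ranking differed.
CATALOG = [
    {"name": "Glossier Generation G", "category": "lipsticks",
     "attributes": {"red", "matte"}},
    {"name": "Mr. Coffee Maker", "category": "coffee machines",
     "attributes": {"drip"}},
]

def search(query: str) -> list[str]:
    """Match query terms against attribute, category, and name facets,
    then rank results by how many facets they hit."""
    terms = set(query.lower().split())
    scored = []
    for item in CATALOG:
        facets = (item["attributes"]
                  | set(item["category"].split())
                  | set(item["name"].lower().split()))
        hits = len(terms & facets)
        if hits:
            scored.append((hits, item["name"]))
    return [name for hits, name in sorted(scored, reverse=True)]

print(search("red lipsticks"))  # -> ['Glossier Generation G']
```

Ranking by the count of matched facets is the simplest possible rule; a production system would weight attribute, category, and product-name matches differently.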

Figure 4.1. An Instagram post of the Millennial Mom collection

Figure 4.2. Finding the Millennial Mom collection on the Discover tab

Figure 4.3. The Millennial Mom Channel in app

Figure 4.4. Iteration 1 of the hamburger nav

Figure 4.5. Iteration 2 with improvements to match the user's mental model
Figure 4.6. Search queries broken down by terminology. The top keyword types were attributes and products, followed by categories

Figure 4.7. Step one of the Search flow

Figure 4.8. Search flow typeahead

Figure 4.9. Search results
Hypothesis 2: Users would enjoy answering questions if they felt acknowledged for their contributions
Leaderboard
The Leaderboard provided acknowledgment and influence in real time, bringing our most engaged users into the spotlight. Our super users spent a great deal of effort answering questions. The leaderboard added a dimension of competition, and we saw immediate results: new super users emerged from the crowd. To capitalize on their motivation, we created share assets specifically for Instagram (where most of our organic acquisition growth came from).
A welcome side effect was increased traffic to the profile page, where you could gauge a person's expertise. Since the leaderboard was gaining traction as an area of interest, we ideated further on recognition and rewards, aligning on a second-quarter goal to activate the community with a monthly recap of activity.

As much as I would like to say it’s to help people, it’s really that I think MASSE speaks to the fact that everyone wants to be an influencer. But you don’t want to ACTUALLY promote on your Instagram because that’s embarrassing…using MASSE isn’t ratting someone out like a review; it’s not cringe-y like Quora. – Ally, 28
Super users are the top 10% of users who stimulate user-generated activity in the app.

Figure 5.1. Leaderboard featuring Masse rankings of All Time

Figure 5.2. A shareable monthly recap of user activity

Figure 5.3. Sharing a product recommendation outside of the app
Figure 5.4. User testing results for the prototype release of the leaderboard, where users felt motivated to earn a ranking for the day
Masse Closing Stats
In six months, our user base grew by 150%, from 40,000 to roughly 100k users. Organic growth rose by 9% and word of mouth by 24%. Monthly active users were up 36%, and weekly active users 2%. March 2020 revealed steady retention despite the increase in users. With a few more months, we could have measured retention performance accurately; however, the economic conditions caused by the COVID-19 pandemic closed the business prematurely. Masse remains a great learning experience.