Research @ Leap: adapting formal practices to a smaller company
Though my official title at Leap was UI/UX designer, I was brought on to introduce user research into the product development pipeline. Research at Leap was especially fun because we were building something customers used every day, so they were really passionate about getting involved in improving the product and telling us what they needed. :)
Once we had an established research backlog I shifted more toward design duties, but I pioneered many of Leap’s research processes. Prior to this, features were released with very little research to gauge their potential success.
I adapted these processes from ones I learned during my time in the University of Maryland’s graduate program, and I learned a lot about what does and doesn’t work for a company of Leap’s size. A lot of the structure I had as a student wasn’t feasible to establish without a specialized user research team and a larger budget.
Primary learnings:
- Research is only useful if it is recorded:
- At a small company, everyone participates in research in some way.
- There should be as few barriers as possible to collecting research without interrupting primary job duties.
- Participant researchers should not be afraid of accidentally breaking whatever “system” is in place.
- Research is only useful if it is read by others:
- Consider the audience for all research and what they care about.
- Eliminate as many barriers as possible to reading research. People will very rarely open a research-only space; research should live where the other work already is: Slack integrations, already-used tools, etc.
- Very few stakeholders need to see the raw data.
These learnings might seem obvious, but it’s challenging to establish foolproof, adaptable research systems that work for everyone on a team!
The Research Repository: the challenge of democratizing research
The research repository was a pet project of mine to collect and organize all of our research in one place. The “grand ideal” of a research repository is a well-tagged, categorized body of research and insights that stakeholders can easily search, filter, and synthesize. We only ended up having time to maintain a much simpler form.
The research repository goals:
- Surface research findings to everyone, for anyone to use in the organization.
- Empower anyone to participate in and contribute to research.
- Reduce instances of accidentally redoing research.
Requirements:
- Within budget (better if something we’re already paying for)
- Integrate with other tools; automatic intake from Salesforce, forms, surveys, etc.
Initial solution: The weird, clunky Jira machine.
We were already using Jira, so I adapted this process [link] to create a research repository and categorize our research. Jira was something we were using every day at the time, so research wouldn’t be tucked away somewhere separate, and everyone would already have access to it.
I created a structured template to mass-load interview notes, with drop-downs, embedded instructions, and ways to categorize data by who said it.
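The template itself isn’t reproduced here, but the mass-load step boiled down to turning a spreadsheet of interview notes into tagged Jira issues. A minimal sketch of what that could look like against Jira’s standard REST API is below; the site URL, project key, column names, and label scheme are illustrative assumptions rather than our actual setup.

```python
# Hypothetical sketch of the "mass load" step: read interview notes from a
# spreadsheet and create one Jira issue per note via the Jira REST API.
# Site URL, project key, columns, and labels are assumptions for illustration.
import pandas as pd
import requests
from requests.auth import HTTPBasicAuth

JIRA_URL = "https://your-site.atlassian.net"
AUTH = HTTPBasicAuth("researcher@example.com", "api-token")

# Assumed spreadsheet columns: Participant, Persona, Category, Note
notes = pd.read_excel("interview_notes.xlsx")

for _, row in notes.iterrows():
    issue = {
        "fields": {
            "project": {"key": "RES"},          # assumed research project key
            "issuetype": {"name": "Task"},
            "summary": f"{row['Participant']}: {str(row['Note'])[:80]}",
            "description": str(row["Note"]),
            # Persona and category become labels so issues can be filtered later
            # (Jira labels can't contain spaces, hence the replace).
            "labels": [
                str(row["Persona"]).replace(" ", "-"),
                str(row["Category"]).replace(" ", "-"),
            ],
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=issue, auth=AUTH)
    resp.raise_for_status()
```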
Advantages:
- Worked well with Confluence; we could make embedded visualizations and filters to add context
- Could be infinitely cataloged and filterable
- Could connect directly to engineering tickets and Salesforce and create a chain of documentation
Disadvantages:
- Was not made to be a research repository, so we had to do some weird workarounds.
- Required “power user” knowledge to synthesize and sort data effectively.
- Ended up being clunky and tedious to mass-load data using Excel.
Results:
Despite demoing it and templating everything, this wasn’t adoptable into our process! It ended up being too out-of-the-way to take notes in Excel and load them in, and too much added effort on top of the interview process for others to want to adopt it. I didn’t have time to maintain the research repository for everyone else either, as I had other job duties.
Categorizing research by persona and tagging individual notes to surface larger insights, while useful, was more effort than we could sustain at our team size. We got quicker insights by asking our internal CS team about general trends they saw with customers.
What DID end up working?
We ended up just creating an organized Confluence space: rather than loading in individual notes, researchers would load in all of their notes at once using a roughly structured template.
We lost some of the organization and tagging prowess that Jira had, but roughly recorded research is far better than uncollected research. In a small organization, we could still just ask each other about research we’d collected.
People took to Confluence’s friendly document format much more readily than Jira’s Excel-based intake; it was a direct copy-paste from the notes they were already taking.
Confluence practices implemented to control the chaos:
- Note-taking templates and sources of truth for each project.
- Tagging the entire interview notes document appropriately, then having “collection” pages for research.
- Utilizing the “excerpt” function to surface key findings from an interview when browsing.
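We filled these pages in by hand from the templates, but as a rough illustration of the same structure (a tagged notes page with a key-findings excerpt), here is a hedged sketch against Confluence Cloud’s REST API; the space key, labels, and page content are assumptions.

```python
# Sketch only: creating a tagged interview-notes page with a key-findings
# excerpt via Confluence Cloud's REST API. In practice we did this by hand
# from a template; space key, labels, and content here are assumptions.
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://your-site.atlassian.net/wiki"
AUTH = HTTPBasicAuth("researcher@example.com", "api-token")

# The excerpt macro is what surfaces key findings when the page is pulled
# into a "collection" page or previewed while browsing.
body = """
<ac:structured-macro ac:name="excerpt">
  <ac:rich-text-body>
    <p>Key findings: participant relies on manual workarounds for scheduling.</p>
  </ac:rich-text-body>
</ac:structured-macro>
<h2>Full notes</h2>
<p>Verbatim interview notes pasted in from the note-taking template.</p>
"""

page = {
    "type": "page",
    "title": "Interview notes - Scheduling - Participant 12",
    "space": {"key": "RESEARCH"},  # assumed space key
    "body": {"storage": {"value": body, "representation": "storage"}},
}
resp = requests.post(f"{BASE}/rest/api/content", json=page, auth=AUTH)
resp.raise_for_status()
page_id = resp.json()["id"]

# Tag the whole document (project, persona) so collection pages can filter on it.
labels = [
    {"prefix": "global", "name": "scheduling"},
    {"prefix": "global", "name": "office-manager"},
]
requests.post(f"{BASE}/rest/api/content/{page_id}/label", json=labels, auth=AUTH).raise_for_status()
```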
We learned that, though a research repository is the grand ideal for organizing research, its “ideal form” wasn’t making our jobs easier or the research more widespread at our company size. “Good enough” organization is good enough to get by.
User Interview Practices
- Creating user lists
- Splitting by demographic (a short segmentation sketch follows this list):
- Role in company
- Experience with app
- Size of company
- Plan
- Proficiency
- Location
- Identify core problem: If we’re not sure what the core problem is, then we need to ask the customer for clarification.
- Form Design Hypotheses based on prior research: we might have an idea of how to solve the problem already; we can make a lo-fi mock to show users.
- Identify gaps: What aren’t we sure about in this hypothesis?
- Day-to-day experience: getting to know what this person’s role in the company is, and where Leap fits into that day. We get a lot of persona data here.
- Contextual inquiry: Current experience with the specific feature, usually while screen sharing-- how do they do the task? Workarounds? Pain points?
- Test hypotheses:
- Lo-fi mock
- Usability: do they do the thing as intended? Is there a version that's clearer than another version?
- A/B testing
- Questions:
- Is this useful?
- Is this something they'd pay more for / that could justify a higher subscription tier?
- Measure value: would they use it? Does it address the problem?
- Things they would change, things they like
- Other gripes that are top of mind with the software in general: what should we work on in the future?
- Snowball: are there others who are good to interview?
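As a concrete example of the “splitting by demographic” step at the top of this list, here’s a hypothetical sketch of cutting a customer export into recruitment pools; the column names, values, and thresholds are assumptions.

```python
# Minimal sketch: splitting a customer export into recruitment pools by the
# demographics listed above. Column names and thresholds are assumptions.
import pandas as pd

users = pd.read_csv("customer_export.csv")  # e.g. pulled from Salesforce/CS data

pools = {
    "owners_on_top_plan": users[(users["role"] == "Owner") & (users["plan"] == "Pro")],
    "new_admins": users[(users["role"] == "Admin") & (users["months_with_app"] < 3)],
    "large_companies": users[users["company_size"] >= 50],
}

for name, pool in pools.items():
    print(name, len(pool), "candidates")
```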
Synthesis
- Discovery / covering new ground: affinity diagram synthesis, user models, etc.
- Speed / validation of hypothesis: main takeaways in Confluence
User Models
Why not Personas?
Personas are more marketing-focused, and we’re SaaS: our users are usually handed the software to work with, rather than needing to be sold on it. We need to make it functional and easy for them to do their jobs.
Personas can also ascribe biases to users, and our users are nuanced.
For programmers, we can build empathy through other means: direct quotes from user research, data, etc.
We created an internal-use version for our designers, and a wider educational version with the main takeaways to include in onboarding.
Pendo Data Analysis: a powerful user research ally
We used Pendo to report on product usage.
This was important for answering questions like:
- What UI elements are repetitive or confusing and could be deprecated?
- What areas of the site are most used that we should prioritize perfecting?
- How many users are affected by an issue that is specific to a browser or system?
- Incredibly useful for survey distribution: we got more than 10% of our monthly user base to participate in an opinion poll.
Even if a user only submits one answer before closing the poll, Pendo still collects it. Our most important, simple multiple-choice questions could get around 200 of ~1,500 monthly users (roughly 13%) to participate.
We were also able to save work on one project by noting that the pain points raised in a survey had since been fixed, so the project could proceed without sunk cost.
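Pendo’s dashboards answered most of these questions directly, but the same analysis works on exported event and poll data. A small sketch with assumed CSV column names and values:

```python
# Rough sketch: answering the questions above from usage and poll data
# exported from Pendo as CSV. Column names and values are assumptions.
import pandas as pd

events = pd.read_csv("pendo_feature_events.csv")  # visitor_id, feature, browser
polls = pd.read_csv("pendo_poll_responses.csv")   # visitor_id, answer

# Which UI elements are barely used and might be candidates for deprecation?
usage = events.groupby("feature")["visitor_id"].nunique().sort_values()
print(usage.head(10))

# How many users are affected by a browser-specific issue?
ie_users = events.loc[events["browser"] == "Internet Explorer", "visitor_id"].nunique()
print("Users on the affected browser:", ie_users)

# Survey participation rate, e.g. ~200 respondents of ~1,500 monthly users ≈ 13%.
monthly_users = events["visitor_id"].nunique()
respondents = polls["visitor_id"].nunique()
print(f"Poll participation: {respondents / monthly_users:.0%}")
```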
User recruitment practices
Though our tools for user recruitment were limited (and primarily limited to our customer base), we had a relatively easy time with recruitment since users were passionate about helping fix the product.
Methods:
- Mass emailing BCC (least effective, but fast, wide net)
- CS recommendation + introductions (most effective)
- Pendo recruitment (widest net, fastest, but usually resulted in less involved participants)
Pendo ended up being the most effective channel for recruiting at scale; we got 600 beta signups within a week and were able to segment them by demographic.
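As an illustration of that segmentation step (field names and the per-segment quota are assumptions), the signup list could be cut into a demographically balanced invite list:

```python
# Sketch: picking a balanced set of beta participants from the Pendo signups.
# Column names and the per-segment quota of 10 are assumptions.
import pandas as pd

signups = pd.read_csv("beta_signups.csv")  # email, role, plan, company_size

# Take up to 10 signups from each role/plan combination so no segment dominates.
sample = (
    signups.groupby(["role", "plan"], group_keys=False)
    .apply(lambda g: g.sample(min(len(g), 10), random_state=0))
)
sample.to_csv("beta_invite_list.csv", index=False)
```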
Learnings:
- Personalization has a significantly higher success rate than cold mass emailing, despite taking a while longer.
- Recruit where your users are; people get so much email spam.
- Reduce barriers to signing up as much as possible.
Alpha & Beta Testing Processes
I headed our first alpha & beta tests and several more after that, coordinating recruitment, screening, communication, user instructions, and data collection. From this, I produced reusable structures for a source of truth, recruitment, email communications, and other deliverables.
Learnings:
- “Setup” meetings are necessary if the user has to do anything extra: there was very little follow-through when users had to complete extra tasks to set up the beta. Unless the beta is automatic, users will often not download anything extra. These meetings also helped collect initial impressions.
- It needs to be easy to give feedback: unless users are interrupted by a bug, they will rarely stop mid-task to report something that merely bothers them. “Closing” interviews and surveys were helpful for collecting feedback and final impressions left over from the beta period.