3 Simple Steps to Getting Feedback for Your App -- Fast
Here's how you can screen, recruit and reach people efficiently to get input on your new app or service.
This story originally appeared on KISSmetrics
When talking to companies about developing new features or products, I've observed a common concern: how to get valid qualitative data and feedback from users.
Many of you are worried that when you and your company personnel "get out of the building" to look for people and get their thoughts on your app or service, you may not find the right person or target market. Thus, you'll waste a lot of time looking for anything useful to drive your ideas.
Companies generally use surveys as a cheap and effective way to get feedback from users. It's difficult (or resource-intensive) to get people to come in for a formal user research study or for you to travel to do ethnographic field studies.
However, data from a survey is only as good as the survey itself and the people who participate. Getting feedback from friends and family is better than nothing, but you'll likely see a difference in insights from someone in the target market/field/business that you're trying to reach.
If you're reading this, you probably don't have time or resources, but you do need data fast. So let's create a way for you to get the data to come to you. Remote research and feedback collection is a fast and cheap system that constantly feeds you the answers you need. I've developed a quick solution that allows you to screen, recruit, and reach people fast.
Step 1: Create a Screener to Find the People You Want
As soon as you have some inkling of what you want to know more about, start writing a screener to define and select the type of people you want to talk to. A screener is a set of questions that acts like a recruiting questionnaire and mini-survey at the same time. I like to keep my screeners short, 3-5 questions max, so that I don't discourage people from giving me feedback.
After conducting dozens of user research projects across product, engineering, marketing, support, and sales, I've found that I get more useful insights in less time this way than by reading through irrelevant comments. Remember, this is about BOTH the speed at which you receive this data AND the quality of the data you receive. There's no point in getting back a bunch of useless data fast.
Define your criteria
List some characteristics of the target users you want to get information from. These can be things you know off the top of your head, like "active app users." For example, when my team wanted to validate whether a feature should be moved up the priority list, I translated that into screening only users who had touched that feature within the previous 30 days. They were the most relevant, highly active users to gather insights and feedback from.
If the pool was too small, I would expand the criteria to include a larger date range of activity or broader characteristics. I don't recommend broadening your scope too far, though, because you'll start interacting with people who don't have much to say about what you're trying to discover.
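The filter-then-broaden logic above can be sketched in a few lines of code. This is only an illustration of the idea, not KISSmetrics' actual tooling: the user records, field names, and the 30-to-60-day fallback are all hypothetical, and I'm assuming activity is tracked as a last-used timestamp per feature.

```python
from datetime import datetime, timedelta

# Hypothetical user records: each maps features to a last-used timestamp.
USERS = [
    {"email": "a@example.com", "last_used": {"revenue_report": datetime.now() - timedelta(days=10)}},
    {"email": "b@example.com", "last_used": {"revenue_report": datetime.now() - timedelta(days=45)}},
    {"email": "c@example.com", "last_used": {"funnel_report": datetime.now() - timedelta(days=5)}},
]

def screen_pool(users, feature, days=30, minimum=50):
    """Return users who touched `feature` within `days`; broaden the
    date range once (to 60 days) if the pool comes back too small."""
    cutoff = datetime.now() - timedelta(days=days)
    pool = [u for u in users
            if feature in u["last_used"] and u["last_used"][feature] >= cutoff]
    if len(pool) < minimum and days < 60:
        return screen_pool(users, feature, days=60, minimum=minimum)
    return pool
```

With a minimum pool size of 2, the 30-day window above only finds one matching user, so the function automatically widens to 60 days and returns two.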
Write the screener
Once you have an idea of who you want to target, write specific questions that differentiate or screen out unrelated cases. For example, rather than asking KISSmetrics users whether they find a specific report useful, I ask:
- What report do you use to find revenue data?
- What other applications or ways do you use to analyze revenue?
- Do you consider revenue a primary metric within your analytics?
I recommend creating the screener as a Google Form so you can collect responses in a spreadsheet automatically. I've included an example screener I've used before, which you can copy and adapt for your own research efforts.
My screener skips asking people for their names and other demographic information because I want to make it easy for them to give me feedback. That information is less relevant to me than what insight they can provide me. I can always look up their names in their account data if I need them.
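Once Google Form responses start landing in the spreadsheet, a short script can separate qualified respondents from the rest. A minimal sketch, assuming a CSV export whose column headers match the screener questions above (the sample rows and the "yes on the revenue question" rule are hypothetical):

```python
import csv
import io

# Stand-in for a CSV export of the screener's response spreadsheet.
RESPONSES = io.StringIO("""\
Timestamp,What report do you use to find revenue data?,Do you consider revenue a primary metric within your analytics?
2014-01-02,Revenue Report,Yes
2014-01-02,Spreadsheet export,No
2014-01-03,Revenue Report,Yes
""")

def qualified(rows):
    """Keep respondents who treat revenue as a primary metric."""
    question = "Do you consider revenue a primary metric within your analytics?"
    return [r for r in rows if r[question].strip().lower() == "yes"]

matches = qualified(csv.DictReader(RESPONSES))
```

The same filter works unchanged on a real export: swap the `io.StringIO` stand-in for `open("responses.csv")`.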
Step 2: Set Up a System to Pull People to Your Screener
Hiten Shah inspired me with a method for pulling people to you as a system of early-stage validation, but I realized that this idea could be adapted to create an ever-growing treasure trove of research and feedback data loops.
DIY Fancy Method
One way I love getting people to interact with my screener is to give them a micro-survey or modal within the app. It gets their attention, but it's relatively unobtrusive because they can decide whether to deal with it or dismiss it. Not everyone wants to give you their input, so having an option to dismiss it is important if you don't want to annoy your users.
Qualaroo works very well here, specifically because you can expand/minimize and build in a flow of calls-to-action based on the answers.
I like this method because I know not only that these are active users, but also that the micro-survey or modal is displayed only on the parts of the app relevant to what I want to validate. Users who fill out your screener this way are: 1) contextually aware of what you're trying to ask, 2) more likely to answer your screener, and 3) a really good fit for gathering qualitative data.
This system is easy to repeat once you've set it up because you can switch out your screener and run a different "ask" in your micro-survey or modal. It keeps a steady stream of people coming in who are either primed for a follow-up interview or on a list for further surveying.
DIY Direct Method
If you don't have the luxury of using a micro-survey tool like Qualaroo or setting up a quick modal, just email customers who meet certain criteria. My previous example mentioned contacting only active users, which I further translated into people who had used a specific feature within the previous 30 days. You could do this by having an engineer pull data from the database, or, in our case, by using the KISSmetrics People Search to query the exact people we need.
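For the "have an engineer pull data from the database" route, the query is usually a one-liner. Here's a hedged sketch using an in-memory SQLite database; the `feature_events` table, its columns, and the sample dates are invented for illustration, and a production schema would differ:

```python
import sqlite3

# In-memory stand-in for a production events table (schema is hypothetical).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE feature_events (email TEXT, feature TEXT, used_at TEXT)")
db.executemany(
    "INSERT INTO feature_events VALUES (?, ?, ?)",
    [("a@example.com", "revenue_report", "2014-01-20"),
     ("b@example.com", "revenue_report", "2013-11-01"),   # outside the 30-day window
     ("c@example.com", "funnel_report",  "2014-01-22")],  # different feature
)

# Emails of everyone who touched the feature in the 30 days before a fixed
# reference date -- ready to load into a dedicated MailChimp/Pardot list.
emails = [row[0] for row in db.execute(
    """SELECT DISTINCT email FROM feature_events
       WHERE feature = 'revenue_report'
         AND used_at >= date('2014-02-01', '-30 days')"""
)]
```

In practice you'd replace the fixed reference date with `date('now', '-30 days')` and point the connection at your real database.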
I like to set up a specific email campaign in MailChimp or Pardot to make sure I separate my general marketing email list from my data-gathering list. Over time, some people will want to opt out of user testing and feedback but not your marketing newsletter.
I like this method less because it is not as automatic as the first method. You have to actively set up different lists of people and send out email campaigns. But it works. It works pretty fast if your email system is good at cloning and repeating campaigns, too. It's likely you already have some sort of email service provider in place so you'll just be able to piggyback off of what you're already paying for.
Using both of these methods, I've been able to recruit 7 people for phone/Skype/Google Hangout interviews and receive over 100 screener responses within 24 hours in order to draft a more targeted survey. That's more than enough for me to dig into for validation and research purposes in such a short time.
If you're super scrappy or bootstrapped, just do this all in Gmail instead of an email service provider. This method will cost you $0 but more of your time.
Step 3: Schedule Follow-up Interviews with People or Start Creating Surveys
When you have piles of people coming into your lap, it's time to get data out of them. Pick people from your screener who are relevant to talk to individually. Or put a subset of them in a targeted survey email blast.
Before you start interacting with your users, you'll need to figure out the right research method for the job.
Surveys are great for:
- Tracking sentiment over time - i.e., an NPS survey every quarter, or tracking customer happiness/satisfaction before and after a feature launch
- Quantifying how many users are running into trouble with a specific problem - i.e., "Have you ever run into X while trying to do Y?"
- Measuring attitudes or customer understanding of concepts or tasks - i.e., "Do you know which report to create when you want to analyze X?"
That being said, I'd recommend NOT using surveys for:
- Usability questions - It's better to view people through a screensharing session in a user study to identify usability problems.
- Understanding user behavior and habits - People may not fully realize what they are doing or may not be able to accurately communicate it. Viewing analytics data or having a screenshare session of what they do when they want to find something tells you a lot more than having them try to convey it in a survey.
- Gathering feature requests or ranking priorities via a stack-ranked list of what people want or what they want done first - You're going to get a pretty mixed list from everyone. Prioritizing it by number of customer responses, highest-paying customer, what's easiest to do technically, or what incurs the least technical debt becomes an extra, confusing step for you and your team to handle. I prefer looking at analytics logs for data about which features are or aren't driving business growth.
Find What Works for You
After having your first set of conversations or first set of survey results come through, you'll have a good idea of what you could have done better and what to change going forward.
What's best, though, is that you'll have a system for constantly pulling in people to give you feedback, nearly for free. You could go the fancy route and use Qualaroo/SurveyMonkey/MailChimp/Pardot, but everything I've described can be done for free using Google Drive/Docs and Google Hangouts/Skype.
By providing a highly contextual environment for both yourself and your users, you'll be able to gather relevant and useful feedback in a matter of days. Using this system, I've never had to ask myself, "Who should I talk to when I want to get some ideas about X?" I have a gold mine of data and contacts that I can simply reach out to. And because of the highly contextual environment I've introduced, the rate of people responding back to me has been incredibly high.
Creating a screener for your idea, screening people based on behaviors related to it, and asking only those who are interacting with a particular feature creates a win-win situation where you get the best data possible. Users happily provide feedback and data within context. And all of this happens very fast.