With just a handful of examples and minimal instruction, the GPT-3 language model behind Viable learns to generate summaries from customer feedback datasets.
Like any other model, it needs something to learn from. At Viable, we have spent many hours training our GPT-3 model to surface useful answers from qualitative customer support data. Along the way, we picked up a few tricks for asking better questions.
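To give a rough sense of how few-shot prompting works in general (this is an illustrative sketch, not Viable's actual prompt or data), a prompt can interleave a couple of worked feedback-to-summary examples before the new feedback the model should summarize:

```python
# Hypothetical few-shot prompt builder for feedback summarization.
# The instruction text and example feedback below are invented for
# illustration; they are not Viable's internal prompt.

def build_few_shot_prompt(examples, new_feedback):
    """Interleave (feedback, summary) pairs, then append the new feedback."""
    parts = ["Summarize the key themes in the customer feedback."]
    for feedback, summary in examples:
        parts.append(f"Feedback: {feedback}\nSummary: {summary}")
    # Leave the final summary blank for the model to complete.
    parts.append(f"Feedback: {new_feedback}\nSummary:")
    return "\n\n".join(parts)

examples = [
    ("App crashes when I upload a photo.",
     "Users report crashes during photo upload."),
    ("I can't find the export button.",
     "Users struggle to locate the export feature."),
]
prompt = build_few_shot_prompt(examples, "Checkout takes forever to load.")
print(prompt)
```

The model then continues the text after the final "Summary:", imitating the pattern set by the examples.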
For more inspiration, check out our guide on what questions to ask in Viable.
Start with a broad question such as:
What’s a common complaint from users?
The response will reveal specific areas that you can further probe. For example, the answer to the above question might be:
Users have trouble setting up or signing into their accounts. They are also having trouble completing checkout.
Let’s say you already knew about issues with the setup and sign-up process, which your team is fixing in the next release. But perhaps you didn’t know users were also having trouble with checkout. You can investigate the checkout completion issue by asking a more specific question:
What do users find frustrating about the checkout process?
This would then provide a more insightful and detailed answer such as:
Customers are frustrated with the checkout flow because it takes too long to load. They also want a way to edit their address in checkout, and they'd like to be able to save multiple payment methods.
Starting broad about what might be challenges (or opportunities) will lead you to specific areas to dig into for deeper insights.
Additional examples of specific questions across a variety of products might include:
What features would users like to see for keyboard shortcuts?
What do our customers find frustrating about calendars?
How can we improve the alerting function?
Why are deposits challenging for our users?
Sample question flow
Question: How can we improve our email product?
Answer: We can improve our email product by making it easier to use, adding more features, and improving the product’s reliability.
Follow up question: What features do customers want us to add to our email product?
Answer: Customers want the ability to download attachments, unlimited access to calendars, and more options for shortcuts.
Follow up question No. 2: How can we improve attachments?
Answer: We can improve attachments by building a one-click link to files and supporting more formats.
As you ask questions about specific topics, the model will pull from the relevant data points to answer your questions so long as there's available customer feedback data.
GPT-3 is best at answering questions that start with What, Why or How. For example:
What is confusing about our onboarding?
What do customers find frustrating about checkout?
How can we improve keyboard shortcuts?
How can we make the calendar feature better?
What is difficult about adding attachments to messages?
These types of questions will generate responses that are more accurate, consistent, and specific than comparison questions such as which is better, x or y?
Avoid yes-or-no questions. Much like in user research, asking yes-or-no questions won’t yield much insight in Viable. The model will respond, but it won’t give you much more than a yes or a no. Examples of yes-or-no questions include:
Do customers like our tutorials?
Do our customers like our support?
Do customers cancel their subscriptions?
Do customers use our knowledge base?
Avoid using the question box as a search bar. Treating the question box like a search bar is less likely to yield good answers. The model is smart enough to interpret single-word terms and will do its best to respond; however, it crafts better answers when you enter full questions, since that’s how the model was trained.
Don't stay too high level in your questions. We’ve seen that asking questions that are too broad and never digging deeper into specific areas of your product won’t generate much insight. The model is good at identifying specific topics in the customer feedback and surfacing those so you might as well take advantage of it.
What do our customers like most about the mobile app?
differs only slightly from:
What do people like most about the mobile app?
The model will likely provide a more useful answer for the first variation of that question than for the second. By using ‘customers’ or ‘our customers’ (or ‘our users’), you nudge GPT-3 to focus on your specific dataset rather than pull from its broader knowledge about how the world works (you may have heard that GPT-3 was trained on a large corpus of publicly available text!).
The best way to learn what questions work best is to try a few different questions or variations of the same question. You’ll notice that even asking the same exact question multiple times will yield slightly different results. The same core concepts will be included in the answer summaries but the exact language will vary. That’s because the model is not deterministic.
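That variation comes from sampled decoding: at each step, language models typically sample from a probability distribution over possible next tokens rather than always picking the single most likely one. The sketch below illustrates the general idea of temperature sampling (it is a generic illustration, not Viable's or OpenAI's actual decoding code):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample an index from logits; temperature near 0 approaches argmax."""
    if temperature <= 1e-6:
        # Deterministic: always pick the highest-scoring option.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the softmax probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
# With temperature 0, the result never varies:
assert sample_with_temperature(logits, 0.0) == 0
```

With a temperature above zero, repeated calls can return different indices, which is why repeated runs of the same question phrase the same core concepts slightly differently.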
So go ahead and ask away!
Last Updated: 01/07/21