With just a handful of examples and minimal instruction, the GPT-3 language model behind Viable predicts the text it should generate next.
Like any other model, it needs something to learn from. At Viable, we have spent many hours training our GPT-3 model to produce useful answers from qualitative customer support data. Along the way, we learned a few tricks for getting the most useful answers.
Start with a broad question such as:
What’s a common complaint from users?
The response will reveal specific areas that you can probe further. For example, the answer to the question above might be:
“Users have trouble setting up or signing into their accounts. They are also having trouble completing checkout.”
Let’s say you already know there were issues with your product’s setup and sign-in process, and that your team is addressing them in the next release. But perhaps you didn’t know users were also having trouble with checkout. You might want to investigate the checkout completion issue further by asking a more specific question:
What do users find frustrating about the checkout process?
This would then provide a more insightful and detailed answer such as:
“Customers are frustrated with the checkout flow because it takes too long to load. They also want a way to edit their address in checkout, and they'd like to be able to save multiple payment methods.”
Starting broad about what might be challenges (or opportunities) will lead you to specific areas to dig into for deeper insights.
Additional examples of specific questions across a variety of products might include:
What features would users like to see for keyboard shortcuts?
What do our customers find frustrating about calendars?
How can we improve the alerting function?
Why are deposits challenging for our users?
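Viable handles all of this behind the scenes, but if you were driving GPT-3 yourself, the broad-to-specific flow might look like the sketch below. This is a hypothetical illustration — the `build_prompt` helper, the prompt layout, and the sample feedback lines are all made up, not Viable’s actual implementation.

```python
# Hypothetical sketch of a broad-to-specific question flow against GPT-3.
# The function name, prompt layout, and feedback excerpts are illustrative
# assumptions, not Viable's internals.

def build_prompt(feedback_excerpts, question):
    """Combine raw feedback lines and a question into one completion prompt."""
    context = "\n".join(f"- {line}" for line in feedback_excerpts)
    return (
        "Summarize the customer feedback below to answer the question.\n\n"
        f"Feedback:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

feedback = ["Checkout page is slow to load", "Can't edit my address in checkout"]

# Start broad, then follow up on whatever the first answer surfaces.
broad = build_prompt(feedback, "What's a common complaint from users?")
specific = build_prompt(
    feedback, "What do users find frustrating about the checkout process?"
)
```

The same feedback is sent both times; only the question changes, which is what steers the model from a high-level summary toward a specific pain point.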
GPT-3 is best at answering questions that start with What, Why, or How. For example:
What is confusing about our onboarding?
What do customers find frustrating?
How can we improve keyboard shortcuts?
How can we make the calendar feature better?
What is difficult about adding attachments to messages?
These types of questions will generate responses that are more accurate, consistent, and specific than comparison questions such as “which is better, x or y?”
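If you wanted to sanity-check questions before asking them, a pre-flight filter could encode this rule of thumb. This is a minimal sketch, not part of Viable’s product — the function name and the list of starters are simply the heuristic from this article turned into code.

```python
# Minimal sketch (not part of Viable's product): check that a question uses
# one of the interrogatives GPT-3 tends to answer best, per the guidance above.

GOOD_STARTERS = ("what", "why", "how")

def is_well_formed(question: str) -> bool:
    """Return True if the question starts with What, Why, or How."""
    return question.strip().lower().startswith(GOOD_STARTERS)

ok = is_well_formed("What is confusing about our onboarding?")   # True
comparison = is_well_formed("Which is better, x or y?")          # False
```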
Word choice matters, too. The question:
What do our customers like most about the mobile app?
varies only slightly from:
What do people like most about the mobile app?
The model will likely provide a more useful answer to the first variation than the second. By using ‘customers’ and ‘our customers’ (or ‘our users’), GPT-3 will focus on your specific dataset rather than pulling from its broader knowledge about how the world works (you may have heard that GPT-3 was trained on a large set of publicly available data!).
The best way to learn which questions work best is to experiment: try a few different questions, or variations of the same one. You’ll notice that even asking the exact same question multiple times will yield slightly different results. The same core concepts will appear in the answer summaries, but the exact language will vary. That’s because the model is not deterministic.
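The non-determinism comes from how language models pick each word: they sample from a probability distribution over candidate tokens rather than always taking the single most likely one. The toy sketch below illustrates the idea with made-up numbers; the `sample_token` helper and its logits are illustrative, not how Viable or GPT-3 is actually implemented.

```python
# Illustrative sketch of why repeated questions give slightly different answers:
# the next token is sampled from a probability distribution, not chosen
# deterministically. Toy logits below are made up for the example.
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token; temperature 0 degenerates to the argmax (deterministic)."""
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature: higher temperature flattens the distribution,
    # making less likely words appear more often.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # numerical fallback

logits = {"slow": 2.0, "confusing": 1.5, "broken": 0.5}
rng = random.Random()

# At temperature 0 the answer is identical every time; at temperature 0.8
# the wording varies from run to run while staying within the same concepts.
greedy = sample_token(logits, 0, rng)
sampled = [sample_token(logits, 0.8, rng) for _ in range(5)]
```

This is why the same question yields the same core concepts in slightly different language each time you ask it.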
So go ahead and ask away!
Last Updated: 01/07/21