In the last VUXcellence post, we talked about how to track whether your user is having a bad experience, and how to know when to slip them some extra help. Our focus there was the user's aggregate experience and the frustration they build up over the course of many interactions. One thing we didn't cover, though, is the set of techniques available to us to turn around an individual error case or misfired intent. There's one skill in particular that has taken what I think is a wonderful approach to solving this problem...
Nobody can argue that the Alexa platform has grown by leaps and bounds over the last two years. Many of the problems we faced as voice designers are gone or mitigated, and we have a wealth of tools at our disposal to address the issues that remain. That's definitely a good thing, but it leads to a couple of new pitfalls. The first is that we now have much more "rope to hang ourselves with", so to speak: there are a ton of failure modes that simply didn't exist when we were building CompliBot and InsultiBot in 2015. At the same time, all of these new features have raised the bar for what users expect out of a baseline Alexa experience, meaning the onus is on skill builders to solve increasingly complex problems. What I want to talk about today is one of those problems: how do you know if your user is having a bad experience, and what can you do about it?
Welcome to a new column at 3PO-Labs, which we're calling (for now) VUXcellence. The idea behind this series is to shine a light on interesting voice UX techniques that developers are using to make their voice interfaces more intuitive, efficient, or natural. First up, we're gonna take a look at TsaTsaTzu's massive RPG "Six Swords".
We're 3PO-Labs. We build things for fun and profit. Right now we're super bullish on the rise of voice interfaces, and we hope to get you onboard.