In voice user interfaces, we often operate under the assumption that the dialog will happen in turns. This doesn't quite match how real-world conversation works, though, and so VUI has a notion called "barge-in" to describe the case where the user interrupts the interface's output. Barge-in is a potentially powerful feature, but it also has consequences that can be difficult to work with. In this article, we explore one side effect further.
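To make the turn-taking assumption concrete, here's a minimal sketch of what a barge-in-aware prompt loop might look like. This is purely illustrative - `play_tts`, `stop_tts`, and `detect_speech` are hypothetical stand-ins for whatever speech stack is actually in play:

```python
import threading

def speak_with_barge_in(prompt, play_tts, stop_tts, detect_speech):
    """Play a TTS prompt, but cut it short if the user starts talking.

    The three callables are placeholders for a real speech stack:
      play_tts(text)  - blocks until playback finishes or is stopped
      stop_tts()      - halts playback immediately
      detect_speech() - blocks until speech is heard, returns the utterance
    """
    barged_in = threading.Event()
    heard = []

    def listen():
        # Listen while the prompt plays; if the user talks over us,
        # stop the output and remember what they said.
        utterance = detect_speech()
        if utterance is not None:
            heard.append(utterance)
            barged_in.set()
            stop_tts()

    threading.Thread(target=listen, daemon=True).start()
    play_tts(prompt)  # returns early if stop_tts() was called

    # Once barge-in is possible, we can no longer assume the user
    # heard the whole prompt - one of the tricky consequences.
    return heard[0] if barged_in.is_set() else None
```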
Towards the end of last year, I implemented a feature in a few of my skills that was meant to chip away at one small corner of one of the biggest problems in the VUI space: conveying application context. It was an idea I had been toying with for quite a while, but a few factors made the time right to implement it, and I'm glad to say it's been hugely successful! Read on to hear about what I built, and why...
It was around late November of 2015 when my original collaborator on CompliBot and InsultiBot and I started preparing our initial submission for certification. Last week, I submitted these same two skills again with a fun new feature I've been messing around with privately for a while. As you might've guessed from the fact that I'm writing about it, it did not go well...
The last couple of weeks have been a bit strange in the Alexa development community. My previous post talked about the Alexa Accelerator, which could be seen as a positive indicator of the state of the platform/space. There's a flipside to having a vibrant platform, though: it inevitably attracts people who see its popularity and its position in the zeitgeist as a vector for shortcutting to some end goal. And indeed, we've been seeing our fair share of that lately as well, and we'll show a few examples of this sort of behavior...
Last week I had the pleasure of attending Demo Night for the Techstars Alexa Accelerator 2019 cohort. I wanted to share some thoughts on what I saw, not so much in the sense of whose pitch was most on point or which groups I'd expect to be successful, but rather to look at trends that arose (or maybe even more conspicuously - didn't arise) during this year's class.
In the last VUXcellence post, we talked about how to track whether or not your user is having a bad experience, and a way to know when to slip them some extra help. Our context there was the user's aggregate experience and the frustration they've built up over the course of many interactions. One thing we didn't talk about, though, is the sort of techniques available to us to turn around an individual error case or misfired intent. There's one skill in particular that has taken what I think is a wonderful approach to solving this problem...
Nobody can argue against the fact that the Alexa platform has grown by leaps and bounds over the last two years. Many of the problems we faced as voice designers are gone or mitigated, and we have a million tools at our disposal to address the issues that remain. That's definitely a good thing, but it leads to a couple of new pitfalls. The first is that we now have so much more "rope to hang ourselves with", so to speak. There are a ton of failure modes that simply didn't exist when we were building CompliBot and InsultiBot in 2015. At the same time, all of these new features have raised the bar for what users expect out of a baseline Alexa experience, meaning the onus is on skill builders to solve increasingly complex problems. What I want to talk about today is one of these problems - how do you know if your user is having a bad experience, and what can you do about it?
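To give a flavor of what that kind of tracking can look like, here's a minimal sketch of a session-level frustration counter. To be clear, this isn't the approach the post lands on - the threshold, the attribute name, and the canned help text are all hypothetical:

```python
FRUSTRATION_THRESHOLD = 2  # hypothetical: misfires tolerated before extra help

def handle_request(intent_name, session_attributes, handlers):
    """Dispatch an intent while tracking consecutive misfires.

    session_attributes is the per-session dict that most voice
    platforms (Alexa included) let you round-trip between requests;
    handlers maps intent names to zero-argument callables.
    """
    if intent_name in handlers:
        # A successful match resets the counter, so only consecutive
        # failures escalate toward the extra help.
        session_attributes["consecutive_errors"] = 0
        return handlers[intent_name]()

    # A misfire (fallback / unhandled intent) bumps the counter.
    errors = session_attributes.get("consecutive_errors", 0) + 1
    session_attributes["consecutive_errors"] = errors

    if errors >= FRUSTRATION_THRESHOLD:
        # The user is probably struggling; slip them some extra help.
        return ("Sorry, I'm still not getting it. You can say things like "
                "'give me a compliment'. What would you like to do?")

    return "Sorry, I didn't catch that. What would you like to do?"
```

The key design choice here is resetting the counter on every successful intent, so a single stray misrecognition doesn't push a happy user into the remedial flow.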
So, last week Amazon and Microsoft announced a major new feature - the ability to control your Xbox from Alexa. While the two companies had certainly been moving closer of late (see: Cortana x Alexa cross-functionality), the announcement was a big surprise, and a welcome one at that. Unfortunately, along with that feature came a host of new issues for a lot of folks using the platform. Understanding the regressions and their implications touches quite a few interesting areas, and I do my best here to distill each of them.