In voice user interfaces, we often operate under the assumption that the dialog will happen in turns. This doesn't exactly track real-world conversation, though, so VUI has a notion called "barge-in" to describe the case where the user interrupts the interface's output. Barge-in can be a powerful feature, but it also has consequences that can be difficult to work with. In this article, we explore one of those side effects further.
Towards the end of last year, I implemented a feature in a few of my skills that was meant to chip away at one small corner of one of the biggest problems in the VUI space: conveying application context. It was an idea I had been toying with for quite a while, but a few factors made the time right to implement it, and I'm glad to say it's been hugely successful! Read on to hear about what I built, and why...
There's no arguing that the Alexa platform has grown by leaps and bounds over the last two years. Many of the problems we faced as voice designers are gone or mitigated, and we have a million tools at our disposal to address the issues that remain. That's definitely a good thing, but it leads to a couple of new pitfalls. The first is that we now have so much more "rope to hang ourselves with", so to speak. There are a ton of failure modes that simply didn't exist when we were building CompliBot and InsultiBot in 2015. At the same time, all of these new features have raised the bar for what users expect out of a baseline Alexa experience, meaning the onus is on skill builders to solve increasingly complex problems. What I want to talk about today is one of these problems: how do you know if your user is having a bad experience, and what can you do about it?
After what seems like an unconscionably long wait, the Alexa team announced earlier this week that they were finally giving us a way to look up the timezone of a given user. The feature was detailed in a blog post, with a new page documenting the "Settings API" (which is where the timezone lookup lives) going up simultaneously. This may seem like a pretty straightforward change, but between the history and the implementation, there's actually a fair bit to unpack here. So let's dive in...
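To make the new capability a bit more concrete, here's a minimal sketch of what a timezone lookup against the Settings API might look like. Treat it as an illustration under assumptions rather than a definitive implementation: it assumes the /v2/devices/{deviceId}/settings/System.timeZone resource and the apiEndpoint, apiAccessToken, and device.deviceId values that arrive in a skill request's context.System block, so verify the exact path and response shape against the official documentation before relying on it.

```typescript
// Sketch only: look up a user's time zone via the Alexa Settings API.
// Assumes the /v2/devices/{deviceId}/settings/System.timeZone resource and
// that the skill request's context.System carries apiEndpoint, apiAccessToken,
// and device.deviceId (all of which should be checked against the docs).

interface SystemContext {
  apiEndpoint: string;     // e.g. "https://api.amazonalexa.com"
  apiAccessToken: string;  // short-lived token included in each request
  device: { deviceId: string };
}

async function getUserTimeZone(system: SystemContext): Promise<string> {
  const url =
    `${system.apiEndpoint}/v2/devices/${system.device.deviceId}/settings/System.timeZone`;

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${system.apiAccessToken}` },
  });

  if (!response.ok) {
    // A non-2xx status usually means a missing/expired token or an
    // unsupported device; handle it however your skill needs to.
    throw new Error(`Settings API returned ${response.status}`);
  }

  // The body is a JSON-encoded string such as "America/Los_Angeles".
  return response.json();
}
```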
We wanted to take just a moment to talk about a new Alexa feature, our implementation of it in AstroBot (our space API aggregator skill), and some thoughts on how best to take advantage of the new capability.
As mentioned in our previous post, we've had the opportunity to play around with the new Alexa push notifications feature for some time now. The exact implementation details on the Alexa Skills Kit side are still not public (and therefore not something we can talk about until the public beta goes live), but there are enough consumer-facing pieces we CAN talk about that should be of interest to folks starting to think about their push notification use cases.