In the last VUXcellence post, we talked about how to track whether your user is having a bad experience, and how to know when to slip them some extra help. Our context there was the user's aggregate experience and the frustration they'd built up over the course of many interactions. One thing we didn't talk about, though, is the sort of techniques available to us for turning around an individual error case or misfired intent. There's one skill in particular that has taken what I think is a wonderful approach to solving this problem...
There's no arguing that the Alexa platform has grown by leaps and bounds over the last two years. Many of the problems we faced as voice designers are gone or mitigated, and we have a wealth of tools at our disposal to address the issues that remain. That's definitely a good thing, but it leads to a couple of new pitfalls. The first is that we now have a lot more "rope to hang ourselves with", so to speak - there are a ton of failure modes that simply didn't exist when we were building CompliBot and InsultiBot in 2015. At the same time, all of these new features have raised the bar for what users expect from a baseline Alexa experience, meaning the onus is on skill builders to solve increasingly complex problems. What I want to talk about today is one of those problems: how do you know if your user is having a bad experience, and what can you do about it?
Last week, Amazon and Microsoft announced a major new feature: the ability to control your Xbox from Alexa. While the two companies had certainly been moving closer of late (see: Cortana x Alexa cross-functionality), the announcement was a big surprise, and a welcome one at that. Unfortunately, along with that feature came a host of new issues for a lot of folks using the platform. Understanding the regressions and their implications touches quite a few interesting areas, and I do my best here to distill each of them.
After what seems like an unconscionably long wait, the Alexa team announced earlier this week that they were finally giving us a way to look up a given user's timezone. The feature was detailed in a blog post, with a new page documenting the "Settings API" (which houses the timezone lookup) going up simultaneously. This may seem like a pretty straightforward change, but between the history and the implementation, there's actually a fair bit to unpack here. So let's dive in...
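For the curious, the lookup itself is just an authenticated GET against the skill's API endpoint, using values that arrive in every request's context.System block. Here's a minimal sketch in Python; the helper name and the use of a raw request-envelope dict (rather than an SDK wrapper) are our own choices, not anything prescribed by the docs:

```python
import requests

def get_user_timezone(request_envelope):
    """Fetch the device's timezone via the Alexa Settings API.

    Pulls the device ID, API endpoint, and access token out of the
    standard request envelope's context.System block.
    """
    system = request_envelope["context"]["System"]
    device_id = system["device"]["deviceId"]
    api_endpoint = system["apiEndpoint"]  # e.g. "https://api.amazonalexa.com"
    token = system["apiAccessToken"]

    resp = requests.get(
        f"{api_endpoint}/v2/devices/{device_id}/settings/System.timeZone",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()  # a bare JSON string, e.g. "America/Los_Angeles"
```

From there it's a one-liner to localize timestamps with your datetime library of choice.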
Almost two years ago, we sat down to put together all of our thoughts about testing and testability for the fledgling Alexa platform. Recent events have had us linking to that article a few times, so we decided it was time for a bit of a retrospective on the topic, and to present our view of where things stand today.
It's a little out of the ordinary for the content we post here, but one thing we at 3PO-Labs (and specifically Eric) find ourselves doing with some frequency is advocating on behalf of other developers who end up in situations they don't know how to resolve. Often this happens during the certification step, where a first rejection can seem like an insurmountable obstacle, especially for folks less familiar than we are with how things work behind the scenes. We recently took some time to argue on behalf of a few of those folks, and I wanted to share what that looks like a bit more broadly...
We wanted to take a moment to talk about a new Alexa feature, our implementation of it in AstroBot (our space API aggregator skill), and some thoughts on how best to take advantage of the new capability.
As mentioned in our previous post, we've had the opportunity to play around with the new Alexa push notifications feature for some time now. While the exact implementation details on the Alexa Skills Kit side are still not public (and therefore not something we can talk about until the public beta goes live), there are enough consumer-facing pieces we CAN talk about that should be of interest to folks starting to think about their push notification use cases.
We're 3PO-Labs. We build things for fun and profit. Right now we're super bullish on the rise of voice interfaces, and we hope to get you on board.