The last couple of weeks have been a bit strange in the Alexa development community. My previous post talked about the Alexa Accelerator, which could be seen as a positive indicator of the state of the platform/space. There's a flipside to having a vibrant platform, though, which is that it inevitably attracts people who see its popularity and its place in the zeitgeist as a shortcut to some end goal. And indeed, we've been seeing our fair share of that lately as well, so we'll walk through a few examples of this sort of behavior...
Last week I had the pleasure of attending Demo Night for the Techstars Alexa Accelerator 2019 cohort. I wanted to share some thoughts on what I saw, not so much in the sense of whose pitch was most on point or which groups I'd expect to be successful, but rather to look at the trends that arose (or, maybe even more conspicuously, didn't arise) during this year's class.
In the last VUXcellence post, we talked about how to track whether or not your user is having a bad experience, and a way to know when to slip them some extra help. Our context there was the user's aggregate experience and the frustration they've been building up over the course of many interactions. One thing we didn't talk about, though, is the sort of techniques available to us to turn around an individual error case or misfired intent. There's one skill in particular that has taken what I think is a wonderful approach to solving this problem...
Nobody can deny that the Alexa platform has grown by leaps and bounds over the last two years. Many of the problems we faced as voice designers are gone or mitigated, and we have a million tools at our disposal to address the issues that remain. That's definitely a good thing, but it leads to a couple of new pitfalls. The first is that we now have so much more "rope to hang ourselves with", so to speak. There are a ton of failure modes that simply didn't exist when we were building CompliBot and InsultiBot in 2015. At the same time, all of these new features have raised the bar on what users expect out of a baseline Alexa experience, meaning the onus is on skill builders to solve increasingly complex problems. What I want to talk about today is one of these problems: how do you know if your user is having a bad experience, and what can you do about it?
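One lightweight way to approach the detection half of that question is to keep a misfire counter in the skill's session attributes and proactively offer help once the user has missed a few times in a row. This is a sketch of the general idea only; the names and threshold below are our own assumptions, not part of any Alexa SDK.

```python
# Hypothetical sketch: track consecutive misfires (fallback intents,
# reprompts, etc.) in session attributes and decide when to offer help.
# The attribute name and threshold are illustrative assumptions.

HELP_THRESHOLD = 2  # offer help after this many consecutive misfires


def record_misfire(session_attributes):
    """Increment the misfire counter; return True if we should offer help."""
    count = session_attributes.get("misfire_count", 0) + 1
    session_attributes["misfire_count"] = count
    return count >= HELP_THRESHOLD


def record_success(session_attributes):
    """Reset the counter once the user gets back on track."""
    session_attributes["misfire_count"] = 0
```

In a real skill you'd call `record_misfire` from your fallback/error handlers and `record_success` from any handler that completes normally, then branch your reprompt copy on the returned flag.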
So, last week Amazon and Microsoft announced a major new feature - the ability to control your Xbox from Alexa. While the two companies had certainly been moving closer of late (see: Cortana x Alexa cross-functionality), the announcement was a big surprise, and a welcome one at that. Unfortunately, along with that feature came a host of new issues for a lot of folks using the platform. Understanding the regressions and their implications touches quite a few interesting areas, and I do my best here to distill each of them.
After what seems like an unconscionably long wait, the Alexa team announced earlier this week that they were finally giving us a way to look up the timezone of a given user. The feature was detailed in a blog post, with a new page documenting the "Settings API" (which contains the timezone feature) going up simultaneously. This may seem like a pretty straightforward change, but between the history and the implementation, there's actually a fair bit to unpack here. So let's dive in...
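For the impatient, the shape of the lookup is simple: the request envelope already carries the API endpoint, an access token, and the device ID, and the timezone lives at the `System.timeZone` setting. The endpoint path and envelope fields below match Amazon's documented Settings API; the helper names are our own, and this is a sketch rather than production code.

```python
# Sketch of a System.timeZone lookup via the Alexa Settings API.
# Helper names are ours; URL path and envelope fields are per Amazon's docs.
import json
import urllib.request


def timezone_request(request_envelope):
    """Build the URL and headers for a timezone lookup from the
    skill's request envelope (a dict parsed from the request JSON)."""
    system = request_envelope["context"]["System"]
    device_id = system["device"]["deviceId"]
    url = (f"{system['apiEndpoint']}/v2/devices/"
           f"{device_id}/settings/System.timeZone")
    headers = {"Authorization": f"Bearer {system['apiAccessToken']}"}
    return url, headers


def get_timezone(request_envelope):
    """Fetch the user's timezone, e.g. 'America/Los_Angeles'."""
    url, headers = timezone_request(request_envelope)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # body is a JSON-encoded string
```

Note that the token is per-request and short-lived, so the lookup has to happen during request handling rather than being cached long-term.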
Almost two years ago we sat down to put together all of our thoughts about testing and testability for the fledgling Alexa platform. In light of recent events causing us to link out to that article a few times, we decided it may be time to do a bit of a retrospective on the topic, and present our view of where things stand today.
It's a little out of the ordinary for the content we post here, but one of the things that we at 3PO-Labs (and specifically Eric) find ourselves doing with some frequency is advocating on behalf of other developers who end up in situations they don't know how to resolve. Often this happens during the certification step, where a first rejection can seem like an insurmountable obstacle, especially for folks who are less familiar with how things work behind the scenes than we are. We recently took some time to argue on behalf of a few folks, and I wanted to share what that looks like a bit more broadly...
We're 3PO-Labs. We build things for fun and profit. Right now we're super bullish on the rise of voice interfaces, and we hope to get you onboard.