The feature essentially allows you to send the Alexa equivalent of a "loading icon" for long-running requests, providing text or audio that informs the user that you're working on preparing the information for them, rather than having awkward dead air while waiting for a response. It's a fairly small feature, and one that is pretty easy to implement, but it's one of those things that is a clear value-add in terms of the customer experience, even if it's not as flashy as other recent announcements (like push notifications or releases to new countries).
- Look up the ISS coordinates from open-notify.
- Given those coordinates, look up the name of the place via the Google Maps API.
- If that response is null, assume the coordinate is over a body of water, and do a lookup via GeoNames.
- Depending on THAT response, do some post-processing and make educated guesses.
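The decision logic at the end of that chain can be sketched as a pure function. This is an illustrative Python sketch, not the skill's actual code; `maps_result` and `geonames_result` stand in for the (possibly null) results of the two HTTP lookups:

```python
def describe_location(maps_result, geonames_result):
    """Decide what to tell the user, given the results of the two
    geocoding lookups (either may be None).

    maps_result:     place name from the Google Maps reverse-geocode
                     lookup, or None if the coordinate wasn't over land.
    geonames_result: body-of-water name from the GeoNames lookup, or
                     None if that lookup also came up empty.
    """
    if maps_result:
        return f"The ISS is currently over {maps_result}."
    if geonames_result:
        # No land hit, so assume the coordinate is over water.
        return f"The ISS is currently over the {geonames_result}."
    # Both lookups failed; make an educated guess rather than error out.
    return "The ISS is currently over open ocean."
```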
With this new API, we now have the option to provide a "wait for it..." type message, which is a much better experience for the user than sitting on dead air, not knowing whether something is going to happen or not. And because we could inform the user about what was going on, we were able to extend our internal timeout much closer to the 8 second maximum, meaning we are now much less likely to return a response with no answer. Everybody wins!
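Mechanically, a progressive response is a `VoicePlayer.Speak` directive POSTed to the Alexa directives endpoint, using the `requestId` and API access token from the incoming request. A minimal Python sketch (the payload shape follows the Progressive Response API; the sending helper and its parameter names are illustrative):

```python
import json
import urllib.request

def build_progressive_response(request_id, speech_text):
    """Build the directive payload for the Progressive Response API."""
    return {
        "header": {"requestId": request_id},
        "directive": {"type": "VoicePlayer.Speak", "speech": speech_text},
    }

def send_progressive_response(api_endpoint, api_access_token, request_id, speech_text):
    """POST the directive to {apiEndpoint}/v1/directives.

    Treat this as fire-and-forget: the skill's main work should start
    immediately rather than waiting on this call.
    """
    payload = build_progressive_response(request_id, speech_text)
    req = urllib.request.Request(
        url=f"{api_endpoint}/v1/directives",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
```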
- Vary your progressive responses! This is not really surprising, as it's a good idea to have a variety of responses for a lot of the minutiae of your skill (goodbye messages, confirmations, etc.). It keeps the conversation feeling fresher - there's an uncanny-valley type effect that happens when your skill always says the exact same thing in the exact same way for a given action. Additionally, there's a caveat in the Progressive Response API: if you send the same content multiple times within a given request (you can send up to 5 progressive responses), the duplicates will be discarded. In our case, we built a little random "progressive response generator" utility that we'll use across all future skills.
- Short content. You want your progressive response messages to be short, because they are blocking! If it takes Alexa 2 seconds to read out your progressive response, but your main thread returns after 1 second, you're actually making the user wait an extra second (plus some buffer) to hear the thing they requested. With that in mind, you definitely want to keep your quips succinct.
- This is not a cache replacement. There's a second place in AstroBot where we use progressive responses, and that's in looking up upcoming launches, which is something we can actually cache pretty well. We look up that information no more than once every 15 minutes. The ideal is to quickly return our content, so we have an if-block that checks our cache TTL. If it's expired, we send a progressive response, but if it's still good we return the response straightaway with no progressive response (which would block the response, as noted in #2).
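Our generator utility is internal, but a minimal version of the idea might look like this Python sketch: pick randomly, but never repeat the previous pick, so back-to-back duplicates never get discarded:

```python
import random

class ProgressiveResponseGenerator:
    """Pick a varied "wait for it" message, never repeating the last
    pick, since duplicate progressive responses within a request are
    discarded."""

    def __init__(self, messages):
        if len(messages) < 2:
            raise ValueError("need at least two messages to vary them")
        self._messages = list(messages)
        self._last = None

    def next(self):
        # Exclude the previous pick so consecutive calls always differ.
        choices = [m for m in self._messages if m != self._last]
        self._last = random.choice(choices)
        return self._last
```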
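The cache check reduces to a small branch. A Python sketch of the idea (function and cache-key names are hypothetical; `fetch` and `send_progressive` stand in for the slow network lookup and the progressive-response call):

```python
CACHE_TTL_SECONDS = 15 * 60  # refresh launch data at most every 15 minutes

def handle_launch_lookup(cache, fetch, send_progressive, now):
    """Return cached launch data when it's fresh; only send a (blocking)
    progressive response on the slow path that actually re-fetches."""
    entry = cache.get("upcoming_launches")
    if entry and now - entry["fetched_at"] < CACHE_TTL_SECONDS:
        # Cache still good: respond straightaway. A progressive response
        # here would only delay the real answer.
        return entry["data"]
    # Cache expired: we're about to do slow network work, so a
    # "wait for it" message is worth the interruption.
    send_progressive()
    data = fetch()
    cache["upcoming_launches"] = {"fetched_at": now, "data": data}
    return data
```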
Shortcomings and Future Expansion of the Feature
- Right now, it's only for custom skills, but it seems like there's no reason it shouldn't be applicable to smart home or flash briefing skills (although flash briefing skills do some sort of mysterious pre-flight or cache-based thing that we've never been able to get an Alexa developer to actually describe to us).
- Another thing to understand is that this doesn't make it so you can take longer to provide your response from your main thread. It could, though, in the future. Amazon could easily say "You may have a 12 second timeout, as long as you provide progressive responses to keep the user engaged".
- And finally, there's no option for a visual progressive response for screen devices like the Show. It's sort of a funny coincidence that the perfect analogy we have to describe the feature - the spinning wheel - is something you can't actually implement in a literal way.