There was some drama about notifying devs before updates, etc., but the post-mortem takeaway was that our skills needed to be less strict about how they deserialized JSON payloads. Cool, whatever.
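The "less strict" fix amounts to tolerant parsing: pull out only the fields the skill actually uses and ignore anything unfamiliar, so a new key in Amazon's payload can't break request handling. A minimal sketch in Python (the exact field names here loosely follow the Alexa request format, and the helper name is ours):

```python
import json

def parse_alexa_request(raw_body):
    """Leniently extract only the fields the skill actually needs.

    Unknown or newly added keys in the payload are simply ignored,
    so a schema change on Amazon's side won't break the skill.
    """
    payload = json.loads(raw_body)
    request = payload.get("request", {})
    return {
        # Fall back to safe defaults instead of raising KeyError
        "type": request.get("type", "UnknownRequest"),
        "intent": request.get("intent", {}).get("name"),
        "session_id": payload.get("session", {}).get("sessionId"),
    }

# A payload carrying an extra, never-before-seen field still parses fine
body = json.dumps({
    "request": {
        "type": "IntentRequest",
        "intent": {"name": "RollDice"},
        "someNewField": {"added": "by Amazon"},
    },
    "session": {"sessionId": "abc-123"},
})
print(parse_alexa_request(body)["intent"])  # RollDice
```

The opposite approach, validating the whole payload against a rigid schema that rejects unknown keys, is exactly what breaks the moment the platform adds a field.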
Flash forward a few months to May: we had a third live skill (DiceBot) and a fourth one in the works. One day while working on our (still-to-be-released) fourth skill, we noticed it had stopped working randomly. After another few hours of troubleshooting, we realized it was the same thing: Amazon had rolled the change back out to production (again, without notification). CompliBot and InsultiBot were working fine with the previous fix, and our in-development skill wasn't live yet, but DiceBot WAS live, and it couldn't respond to any requests from Alexa. This was a problem...
At the same time, though, both David and Eric had significant life events happening, and we hadn't done any promotion of DiceBot yet, so it was easy to let the fix start to slide. And the thing about a slippery slope is that it's hard to stop slipping.
So, a couple of days went by and we were both suuuuper-busy and didn't get the fix in. Then days turned into weeks, and weeks turned into months, until mid-August rolled around and we realized we probably needed to handle it.
The fix went in, builds went up to production, and things were back to running properly. All in all, we took about three months of consecutive downtime on the skill. That triggered some questions, though, and this is what makes the whole episode interesting to us: if our skill had been utterly broken and unusable for 75% of its lifetime, why had nobody noticed?
- Discoverability is still broken. We've discussed this ad nauseam, and it's a fairly well-trodden and clear problem in the development community, but this serves as a pretty clear experimental use case. The fact that we were not piling up negative reviews means that people were clearly not discovering our skill. The way the current system is built, having no reviews means you will never be seen by the users who could give you reviews. It's a catch-22, and it's what leads to shady practices like self-reviewing.
- Recertification is not happening. Amazon has been adamant in its assertion that skills will be revisited by the cert team on a regular basis to make sure they are still in compliance with the rules. The idea is to keep people from using the bait-and-switch approach: building a skill to match the spec, then changing it to do whatever they want after going live. Had there been even the most minimal recertification test pass, our skill would've failed miserably and been pulled from the store. The takeaway here is that our anecdote implies there are no repercussions for what your skill does after getting through cert. The ramifications of this are especially meaningful given the recent release of the long-form audio feature, and its capacity for IP infringement.
The result for normal developers like us is a crapshoot where luck and/or deception seem to be the only ways to get ahead, and where a single imperfect review can bury you forever. And the most frustrating part of all? There are a lot of really simple solutions the dev community has come up with that could be implemented immediately, but cry as we might, we can't seem to get any traction.