The end of an era for literals
If we've already lost you, a good analogy might be to think of it like Cards Against Humanity (or Apples to Apples, for you less-than-horrible people). The sample utterances can be thought of as the black template cards, which have a gap for you to fill in.
Now, this was all well and good for a while, but developers started to realize there were times when exchanging freedom for more accurate matching was preferable. Sometimes, you just wanted to pick from something formatted in a very specific way, or from an enumerated list of values. Amazon saw this and responded with two features - built-in slots (like dates), and custom slots (where a developer could define a set of values, and Alexa would prioritize matching to those values if possible).
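To make the distinction concrete, here's roughly what the two styles looked like in a skill's interaction model (the intent and slot names here are made up for illustration). With AMAZON.LITERAL, you embedded example phrases directly in the sample utterances; with a custom slot, you referenced the slot by name and defined its value list separately in the developer console:

```
Intent schema (old-style JSON):
{
  "intents": [
    { "intent": "StoryIntent",
      "slots": [ { "name": "Story", "type": "AMAZON.LITERAL" } ] }
  ]
}

Sample utterances with AMAZON.LITERAL (example phrases inline):
StoryIntent tell me a story about {a brave princess|Story}
StoryIntent tell me a story about {the haunted lighthouse|Story}

The same utterance with a custom slot type (values maintained as a separate list):
StoryIntent tell me a story about {Story}
```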
Cool. Everybody was happy at this point. We had the ability to do some utterances with custom slots, and some with literals. We could even mix them together in the same utterance, if we wanted!
A couple months went by, and Amazon dropped a "deprecated" notice on the literal slot type, saying they'd basically prefer you use the other slot types instead. This happens all the time in software, and deprecated features live on for ages, so nobody thought much of it until recently.
Important: English (US) skills using the AMAZON.LITERAL slot type should be updated to use custom slots. Starting November 30, 2016, any English (US) skill using AMAZON.LITERAL will no longer pass certification.
English (UK) and German skills do not support AMAZON.LITERAL and cannot use the AMAZON.LITERAL slot type.
So no more literals to fill the gaps that custom and built-in slots cannot. Coming back to the CAH example, this would be kind of like the makers of the game announcing, "Alright, from now on, you're only allowed to answer black cards with white cards from the corresponding expansion," or, maybe more accurately, "You can use cards from any expansion to answer a black card, but only for your own use. You're not allowed to share the hilarious mad-lib with anyone else, unless you follow our rules about constraining your range of possible answers."
The effects of this are many-fold.
(Quick aside, speaking of CAH expansions - if you like the game, they have some really good mini-expansions with proceeds going to great causes. These are totally referral links, but even if you remove our affiliate id, you should check them out.)
Two quick caveats
We don't know exactly why Amazon is doing this (the announcement happened silently and unceremoniously without explanation), as they are frustratingly secretive about the inner workings and roadmap of Alexa. This approach is exceedingly problematic for developers, but that's a whole other blog post for the future. The point here, though, is that all of our arguments are based on what information we do have, which is naturally incomplete.
There are two specific possibilities that come to mind that might invalidate the arguments below:
- Amazon may be planning to provide feature parity before deprecating AMAZON.LITERAL, but due to their secrecy we just haven't heard about the new feature that'll fill the gap yet.
- They understand the degree to which they are weakening the platform, but literals are so much more resource intensive or technologically difficult to maintain that they are choosing the cheaper route.
A major pain for developers
- So long, dictation. Now, Alexa is not a dictation platform. That's not what it was made for, and it's less effective at it than at mapping to more constrained models. That said, it was still pretty damn good at it, and a lot of people have made (and are continuing to make) some really interesting skills along those lines, where they took the 80% matching that Alexa would give, and did some post-processing of their own to clean things up for their specific use cases. We have one skill that is close-to-complete that takes this exact approach. We're now forced into a position of likely having to abandon it, after putting 100+ hours of effort into it.
- This also signals the fall of the fallback intent. To be perfectly honest, this was never a good solution to begin with, but because Alexa lacks very important information that developers need (when an utterance misses or misfires, what was the user actually saying?), this was the only way to gather that context. Now we're back to flying blind.
- Forcing dynamic data into static slots. There is no way to update a slot's values on the fly. There's not even a way to do it programmatically (without using something like Selenium to drive the Alexa Developer Console UI). This means that if you need new values for your slot, you have to rebuild the entire model, and have it sent through certification again. Drawing once again from the Cards Against Humanity analogy, this would be the equivalent of needing to buy a complete new copy of the game with the thousands of original cards, every time you wanted to add a new 30 card expansion to the game. You can't just plug the new slot values in and be done. It should be noted that this is not an abstract concern in the least. One of the most commonly recurring questions we hear in the dev community is about people who have a dynamic product database that they want a slot to match, so a user can ask for information about individual products.
- The recertification nightmare. One really scary thing about this change is what's going to happen with recertification. Recertification passes happen randomly (or on a cadence close enough to random that we haven't discerned it yet), and if you fail, you're given 72 hours to fix your skill or else it gets pulled. That means there are a ton of live skills right now at risk of getting a notification on a Friday afternoon that they are no longer in compliance - and if they don't fix it by Monday, too bad. And this isn't going to happen all at once on November 30th (let's call it "L-Day" from now on); rather, it'll hit skill owners individually and without warning.
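The literal-plus-post-processing pattern from the dictation bullet above can be sketched in a few lines. This is an illustration only, not code from any of our skills - the vocabulary and cutoff threshold are invented, and we're using Python's standard-library difflib to snap a raw transcription onto the nearest known value:

```python
from difflib import get_close_matches

# Hypothetical vocabulary of values the skill actually cares about.
KNOWN_TITLES = ["millennium falcon", "mos eisley", "obi-wan kenobi"]

def clean_literal(raw, vocabulary=KNOWN_TITLES, cutoff=0.6):
    """Snap a raw AMAZON.LITERAL transcription onto the closest known value.

    Returns the best match, or None if nothing is close enough.
    """
    raw = raw.strip().lower()
    matches = get_close_matches(raw, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# A near-miss transcription still resolves to the intended title;
# gibberish falls through so the skill can reprompt.
print(clean_literal("millenium falcon"))
print(clean_literal("xyzzy"))
```

The cutoff is the knob: lower it and you tolerate sloppier transcriptions at the cost of more false matches.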
As mentioned, this is a red flag for one of the skills we have in development, but it's not the only one of our projects that depended on the literal. We've been getting some really great feedback on our article from the other day about the Wookieepedia skill we experimented with. The thing is, for reasons #1 and #3, there's absolutely no way we could've done what we did without AMAZON.LITERAL.
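For what it's worth, the closest workaround to the dynamic-data problem from the list above is only semi-automated: a script can regenerate a custom slot's value list from your database, but a human still has to paste it into the developer console and send the skill back through certification. A minimal sketch (the product records and the flat one-value-per-line output format are assumptions for illustration):

```python
# Hypothetical product records pulled from a live database.
products = [
    {"sku": "A1", "name": "Echo Dot"},
    {"sku": "A2", "name": "Fire TV Stick"},
    {"sku": "A3", "name": "Echo Dot"},  # duplicate spoken form
]

def slot_value_list(records):
    """Build a newline-separated custom slot value list: one spoken
    form per line, lowercased and deduplicated, order preserved."""
    seen = []
    for rec in records:
        name = rec["name"].strip().lower()
        if name not in seen:
            seen.append(name)
    return "\n".join(seen)

print(slot_value_list(products))
```

Every time the database changes, you rerun this, re-upload the list by hand, and wait out certification again - which is exactly the pain point.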
A black mark on the product
First, this is a clear and unconditional concession to the competing platforms. Now, credit where credit is due - Amazon were first to market by a long shot and essentially invented this product space; their choice to open up the platform to everyone for free from the start was a master stroke, and the evangelists and developer marketing teams have done an extraordinary job getting people on to their platform.
But you can't rest on your laurels when the barbarian hordes are bearing down. SiriKit and Cortana are limited in who can use them and how, but they're perfectly happy to give developers the data necessary to do post-processing. Third-party tools like Mycroft are trying to compete in this space, and let's not forget that Google's big announcement next week is gonna tell us what Google Home is all about.
Beyond all this, there is a cornucopia of strong natural language platforms popping up to sit behind voice services. Microsoft has LUIS, Google has its Cloud Natural Language API, IBM has Watson, Wolfram made available the system behind Alpha, and Stanford CoreNLP has been chugging along, solid as ever.
By suppressing these more advanced use cases, Amazon is openly conceding that their platform is no longer an appropriate sandbox for pushing the state of the art in NLP. As mentioned above, this may very well be a calculated business decision - they may have decided that the two core competencies that are best for their bottom line are skills tied to preexisting products, and quickie skills built by the masses.
Which leads to my second point, which is that this sends a really poor message to a broad swath of Alexa's developer base. There has been a growing sense of disenfranchisement among many of Alexa's most ardent supporters of late (this topic probably deserves its own post too...). To say that we can no longer get certification for the projects we're working on if they use AMAZON.LITERAL is tantamount to saying that our projects are not worthy of being seen by the Alexa user community. It implies that the quality of our work is inferior to that of the flood of tutorial-based skills continuing to be released. And it outright tells us that our ideas are untenable if they don't fit a small set of patterns.
It might be a bit melodramatic, but the implications herein, combined with the way the change was announced (or not announced, as it were), make it feel like a slap in the face to those of us who have been the biggest champions of the platform.
The good news is that it's not too late for Amazon to grant the venerable literal a well-earned pardon.