What is the Techstars Alexa Accelerator?
This year's event
Looking across the spectrum for ideas
I suspect a lot of that just has to do with the nature of managing a portfolio of companies. If you're the bigwig picking who to invest in, you probably want to have your fingers in as many pies as you can. But I think it's also a testament to Amazon's biaxial approach to building out Alexa's surface area - getting as many instances of Alexa out there with their "Alexa Everywhere" initiative, and constantly expanding the scope of what it means to be part of Alexa by introducing new interfaces (Gadgets, Display Devices, etc.), capabilities, and ways of interacting with the platform. If Amazon were planning on doubling down on what has already been built, you'd expect to see a much more homogeneous portfolio, but I don't think anybody is betting on Amazon resting on its laurels right now. And when you think about it, with the success of the Echo Show and its camera-toting lineage, is it really that weird that the Alexa Fund might want to be involved with a company using machine learning to describe motion in a video (nflux)? Microsoft showed years ago with Kinect how great a voice + gesture interface could be...
Voice-first tenets as a sort of connective tissue
Some of these were super straightforward. Anycart, for example, spent a good chunk of their booth time showing attendees two things: a web interface explaining how the product was better than the various food delivery and meal prep services on the market today, and a browser plugin showing how easily you could customize and expand their base content. And while these things were cool and their use case was plausible, on the surface there was nothing even voice-adjacent to what they were showing. When I inquired along those lines, their founder's eyes lit up with the look of an interviewee who has been studying interview questions and just got asked one he prepped for. He immediately came back with: "If we've made it easy to order all of the meal ingredients you need with one click, then obviously we've also made it easy to order with one voice command." And he was absolutely right - the classic Alexa 101 "good use case vs. bad use case" wisdom places Anycart's ordering use case - one or two necessary interactions to trigger a meaningful outcome (assuming they can get all of the upfront stuff to work) - squarely in the "good fit" pile.
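That "one utterance, one meaningful outcome" pattern is what Alexa folks call a one-shot intent. A minimal sketch of what the handler side might look like - the intent name `OrderRecipeIntent` and slot name `recipe` are hypothetical, and this builds the raw Alexa response JSON by hand rather than using the ASK SDK:

```python
# Hypothetical one-shot ordering handler for an Alexa skill.
# "OrderRecipeIntent" and the "recipe" slot are invented for illustration;
# the response shape follows Alexa's standard JSON interface.

def handle_order_recipe(request):
    """Handle e.g. 'Alexa, ask Anycart to order everything for lasagna'."""
    slots = request["request"]["intent"]["slots"]
    recipe = slots["recipe"]["value"]  # e.g. "lasagna"

    # A real skill would call the ordering backend here; the point is
    # that a single utterance carries everything needed to finish.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Okay, I've added the ingredients for {recipe} to your cart.",
            },
            "shouldEndSession": True,
        },
    }
```

The whole interaction is one turn: slot in, confirmation out, session closed - which is exactly why this use case lands in the "good fit" pile.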
Another one of the "should I put a voice interface in front of it?" use cases that you'll hear VUI experts talk about is the case of an extremely broad, potentially non-linear interface. Think of it this way - when you start an Alexa session, there are literally tens of thousands of actions you could trigger depending on what you say after that wake word. Now imagine a GUI trying to accomplish the same thing - providing tens of thousands of different pieces of functionality in a single step. You end up with something like a desktop set to maximum resolution with the smallest possible icons filling up the monitor, probably overlapping, and completely unintelligible. But in voice, you can easily build as many parallel functional paths as you'd like (assuming you can get the voice model to work, of course). And indeed that was one of the arguments that Ejenta was making - that there are so many different healthcare logical paths, so many different channels of communication, so many types of unique and dissimilar paperwork to fill out - that despite all of the difficulties of working with the healthcare system and staying compliant with healthcare laws, the fit with a voice interface (and ergo the potential benefit to their users) provides enough of an incentive to pursue it.
The final connection I made came while diving into Yourika's very lofty plan to "revolutionize learning". Of all the groups, their pitch seemed the least connected to VUI or the Alexa platform, and indeed when I asked them this same question, they didn't really have an answer at first. A couple of other engineering-minded folks joined me at the table, and we grilled them with technical questions for a good 15 minutes before we happened upon something that unexpectedly answered the question for me. They were showing what a sample interaction might look like on something like a tablet, and when they dove into "how do we figure out exactly what the user meant?" the interaction looked exactly like a mock-up of a standard multi-turn dialog with slot elicitation that you might see in a low-fi prototype of an Alexa skill. In retrospect, it isn't especially surprising - their input system was always going to need to be something like a conversation with the user. I'm not sure that really has much effect on the bigger problem of how they actually do the AI bit to revolutionize learning, but it certainly makes sense from an accelerator fit perspective.
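For anyone who hasn't built one, that multi-turn slot-elicitation pattern looks roughly like this in Alexa's response JSON: if a required slot is still empty, the skill keeps the session open and returns a `Dialog.ElicitSlot` directive so Alexa re-prompts for it. A sketch with invented intent and slot names (`TutorIntent`, `topic`), again hand-building the JSON rather than using the SDK:

```python
# Hypothetical multi-turn handler: elicit the missing "topic" slot,
# then proceed once it's filled. "TutorIntent" and "topic" are made up
# for illustration; Dialog.ElicitSlot is a real Alexa directive type.

def handle_turn(request):
    slots = request["request"]["intent"]["slots"]
    topic = slots.get("topic", {}).get("value")

    if topic is None:
        # Slot not filled yet: keep the session open and re-prompt.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": "Sure - which topic would you like to work on?",
                },
                "directives": [
                    {"type": "Dialog.ElicitSlot", "slotToElicit": "topic"}
                ],
                "shouldEndSession": False,
            },
        }

    # All slots filled: hand off to the actual tutoring logic.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Great, let's start on {topic}.",
            },
            "shouldEndSession": True,
        },
    }
```

Swap the speech strings for on-screen prompts and you have more or less the tablet mock-up Yourika was showing - which is why it read so instantly as an Alexa-shaped interaction.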
Whether it's multi-turn dialogs, flat voice interface hierarchies, or single-touch solutions to problems, any of these topics would've felt right at home in an Alexa webinar or Dev Days event, and it was a lot of fun sleuthing out these synergies.
What didn't we see?
This sort of follows a trend the accelerator has taken in the past two years - as far as we can tell they aren't really looking for people to make the next big game on the Alexa platform, or to build another product using the SDK Amazon has put out there. Instead, they're looking further afield for new technologies or things that have the potential to open up entirely new lines of business.
Still, it surprises me that Amazon has built these two massive platform-level products (ASK and AVS) and yet their focus with the accelerator completely skips over them. The closest thing we got this year was probably VoiceHero, playing the role (like Jargon and PulseLabs before it) of voice app enabler. But at a time when skill reward payouts seem to be dwindling, and Amazon is pushing developers to finally make the leap into building successful lines of business on the skill store via ISP, the fact that Amazon is not using this avenue to directly invest in any of these companies is notable.² I'll definitely be watching next year's cohort to see if they buck that trend.