Thoughts on the Alexa Accelerator's Demo Night

10/20/2019

Last week I had the pleasure of attending Demo Night for the Techstars Alexa Accelerator 2019 cohort. I wanted to share some thoughts on what I saw, not so much in the sense of whose pitch was most on point or which groups I'd expect to be successful, but rather to look at trends that arose (or, maybe even more conspicuously, didn't arise) during this year's class.


What is the Techstars Alexa Accelerator?

The Techstars Alexa Accelerator is a three-month program that takes place each summer here in Seattle at the University of Washington's Startup Hall. Techstars brings in a group of 8 budding companies working in voice or an adjacent space (ML, AI, etc.) and provides an environment where they can focus on creating their initial product, with access to all sorts of subject matter experts and mentors. Each team also receives a small investment from Amazon and Techstars. This is the third year of the program, which has produced a wide variety of graduates working on all sorts of neat products (including a couple of companies in the space - Jargon and PulseLabs - doing platform-level work that I think is really important).
Dave Isbitski, Amazon's Chief Evangelist for Alexa and Echo, delivered a solid keynote to kick off the event.

This year's event

This year's pitch night was held at the Amazon Events Center¹, and we were lucky to get a keynote from Dave Isbitski, Amazon's Chief Evangelist for Alexa and Echo, before jumping into the presentations. I'm not going to spend a lot of words recapping the event - you'd be much better off checking out Taylor Soper's writeup on GeekWire for that. Instead, I'll just drop the companies' one-liners here and then dive into my analysis.
Anycart
Shop meals in 1 click. Get groceries in 1 hour.

Ejenta
Remote patient care at scale.

EX-IQ
Powering enterprise productivity in the information age.

Midgame
A voice-enabled AI companion for gamers.

nflux
The world's most intelligent video analysis platform.

Togethar
AI that uses customer feedback to directly improve your product.

VoiceHero
An analytics platform to help businesses optimize and adapt their voice experience.

Yourika
The future of learning is now.
So, what were some of the recurring themes? Well, in talking to the various teams (I had a chance to chat with all of them other than EX-IQ), here's what I found:

Looking across the spectrum for ideas

Probably the most obvious takeaway from this year's cohort is how wide a net the program cast in choosing whom to admit. The companies ran the gamut on a number of axes: from very specific problems solved (Anycart) to wide-open technologies (Yourika); from products that will help the Alexa platform today (VoiceHero) to those whose possibilities for Alexa itself are much more speculative (nflux); and across consumer verticals, be it shopping with Anycart, gaming with Midgame, or healthcare with Ejenta.

I suspect a lot of that just has to do with the nature of managing a portfolio of companies. If you're the bigwig picking whom to invest in, you probably want to have your fingers in as many pies as you can. But I think it's also a testament to Amazon's biaxial approach to building out Alexa's surface area - getting as many instances of Alexa out there with their "Alexa Everywhere" initiative, and steadily expanding the scope of what it means to be part of Alexa by introducing new interfaces (Gadgets, display devices, etc.), capabilities, and ways of interacting with the platform. If Amazon were planning on doubling down on what has already been built, you'd expect to see a much more homogeneous portfolio, but I don't think anybody is putting money on Amazon resting on its laurels right now. And when you think about it, with the success of the Echo Show and its camera-toting lineage, is it really that weird that the Alexa Fund might want to be involved with a company using machine learning to describe motion in a video (nflux)? Microsoft showed years ago with Kinect how great a voice + gesture interface could be...

Voice-first tenets as a sort of connective tissue

One of the questions I asked several of the founders, after hearing their elevator pitch or seeing their demo, was "what is the connection to voice or to Alexa that made you believe your product was a good fit for the Alexa Accelerator?" That ended up leading to some really interesting conversations, many of which came full circle to the lessons that the Alexa developer evangelists preach daily.

Some of these were super straightforward. Anycart, for example, spent a good chunk of the time at their booth showing attendees a web interface to explain how the product was better than the various food delivery and meal prep services on the market today, plus a browser plugin to show how you could easily manage customization and expansion of their base content. And while these things were cool and their use case was plausible, on the surface there was nothing even voice-adjacent to what they were showing. When I inquired along those lines, the founder's eyes lit up in what I recognized as the face of an interviewee who has been studying interview questions and just got asked one he prepped for. He immediately came back with, "If we've made it easy to order all of the meal ingredients you need with one click, then obviously we've also made it easy to order with one voice command." And he was absolutely right - the classic Alexa 101 "good use case vs. bad use case" wisdom places Anycart's ordering flow - one or two necessary interactions to trigger a meaningful outcome (assuming they can get all of the upfront stuff to work) - squarely in the "good fit" pile.
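To make that concrete, here's a minimal sketch of what that one-shot pattern looks like with the Python ASK SDK. To be clear, this is my illustration, not Anycart's actual implementation - the OrderMealIntent name, the meal slot, and the ordering logic are all hypothetical.

```python
# A minimal one-shot handler sketch (Python ASK SDK). Intent and slot names
# are hypothetical; the actual cart integration is elided.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class OrderMealHandler(AbstractRequestHandler):
    """Handles e.g. 'Alexa, ask Anycart to order everything for taco night.'"""

    def can_handle(self, handler_input):
        return is_intent_name("OrderMealIntent")(handler_input)

    def handle(self, handler_input):
        # The single utterance carries everything we need: one slot, one action.
        slots = handler_input.request_envelope.request.intent.slots
        meal = slots["meal"].value  # e.g. "taco night"
        # ...place the pre-built cart order for the named meal here...
        speech = f"Done. Your ingredients for {meal} are on the way."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(OrderMealHandler())
handler = sb.lambda_handler()  # entry point for AWS Lambda
```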

Another one of the "should I put a voice interface in front of it?" use cases that you'll hear VUI experts talk about is the case of an extremely broad, potentially non-linear interface. Think of it this way: when you start an Alexa session, there are literally tens of thousands of things you could do by saying different things after that wake word. Now imagine a GUI trying to accomplish the same thing - providing tens of thousands of different pieces of functionality in a single step. You'd end up with something like a desktop set to maximum resolution with the smallest possible icons filling up the monitor, probably overlapping, and completely unintelligible. But in voice, you can easily build as many parallel functional paths as you'd like (assuming you can get the voice model to work, of course). And indeed that was one of the arguments Ejenta was making - that there are so many distinct logical paths in healthcare, so many different channels of communication, so many types of unique and dissimilar paperwork to fill out, that despite all of the difficulties of working with the healthcare system and staying compliant with healthcare laws, the fit with a voice interface (and ergo the potential benefit to their users) provides enough of an incentive to pursue it.
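You can see the flat-hierarchy contrast in code, too. In a skill built with the Python ASK SDK, every registered intent handler is a sibling - each one is a single utterance away from the wake word, with no menu tree to descend. The intent names below are hypothetical stand-ins for the kinds of healthcare paths Ejenta described, not their real product.

```python
# Sketch of a "flat" voice hierarchy (Python ASK SDK): every path is reachable
# in one step from session start. Intent names are invented for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


def make_handler(intent_name, speech):
    """Build a trivial handler for one top-level functional path."""
    class PathHandler(AbstractRequestHandler):
        def can_handle(self, handler_input):
            return is_intent_name(intent_name)(handler_input)

        def handle(self, handler_input):
            return handler_input.response_builder.speak(speech).response
    return PathHandler()


sb = SkillBuilder()
# All siblings - no nesting, no "main menu". Add as many as the voice model
# can reliably distinguish.
for intent_name, speech in [
    ("CheckVitalsIntent", "Your latest readings look normal."),
    ("MessageCareTeamIntent", "Okay, I've passed that along to your nurse."),
    ("RefillPrescriptionIntent", "Your refill request has been submitted."),
]:
    sb.add_request_handler(make_handler(intent_name, speech))
```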

The final connection I made came while diving into Yourika's very lofty plan to "revolutionize learning". Of all the groups, their pitch seemed the least connected to VUI or the Alexa platform, and indeed, when I asked them this same question, they didn't really have an answer at first. A couple of other engineering-minded folks joined me at the table, and we grilled the team with technical questions for a good 15 minutes before we happened upon something that unexpectedly answered the question for me. They were showing what a sample interaction might look like on something like a tablet, and when they dove into "how do we figure out exactly what the user meant?", the interaction looked exactly like a mock-up of a standard multi-turn dialog with slot elicitation that you might see in a lo-fi prototype of an Alexa skill. In retrospect, it isn't especially surprising - their input system was always going to need to be something like a conversation with the user. I'm not sure that really helps with the bigger problem - how they actually do the AI bit to revolutionize learning - but it certainly makes sense from an accelerator-fit perspective.
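For anyone who hasn't built one, here's roughly what that pattern looks like on the Alexa side - a handler that keeps eliciting a slot until it knows what the user meant. Again, a hedged sketch with made-up intent and slot names, not Yourika's actual system.

```python
# Multi-turn slot elicitation sketch (Python ASK SDK). "AskTutorIntent" and
# the "topic" slot are hypothetical names for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model.dialog import ElicitSlotDirective


class AskTutorHandler(AbstractRequestHandler):
    """Keeps the dialog open until we know exactly what the user meant."""

    def can_handle(self, handler_input):
        return is_intent_name("AskTutorIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        if not slots["topic"].value:
            # Slot still empty: re-prompt and hand the turn back to the user.
            return (handler_input.response_builder
                    .speak("Which topic is your question about?")
                    .add_directive(ElicitSlotDirective(slot_to_elicit="topic"))
                    .response)
        return (handler_input.response_builder
                .speak(f"Okay, let's dig into {slots['topic'].value}.")
                .response)


sb = SkillBuilder()
sb.add_request_handler(AskTutorHandler())
```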

Whether it's multi-turn dialogs, flat voice interface hierarchies, or single-touch solutions to problems, any of these topics would've felt right at home in an Alexa webinar or Dev Days event, and it was a lot of fun sleuthing out these synergies.


What didn't we see?

Alexa Skills or AVS implementations.

This sort of follows the trend the accelerator has set over the past two years - as far as we can tell, they aren't really looking for people to make the next big game on the Alexa platform, or to build another product using the SDK Amazon has put out there. Instead, they're looking further afield for new technologies or things that have the potential to open up entirely new lines of business.

Still, it surprises me that Amazon has built these two massive platform-level products (ASK and AVS) and yet the accelerator's focus completely skips over them. The closest thing we got this year was probably VoiceHero, playing the role (like Jargon and PulseLabs before it) of voice app enabler. But at a time when skill reward payouts seem to be dwindling, and Amazon is pushing developers to finally make the leap into building successful lines of business on the skill store via ISP (in-skill purchasing), the fact that Amazon is not using this avenue to directly invest in any of these companies is notable.² I'll definitely be watching next year's cohort to see if they buck that trend.
All in all, it was a great event, and I'm super happy that I had a chance to attend and talk to all of these companies' founders in person. I'm excited to see where the graduates go from here, and I've got a few that I'm definitely gonna keep a close eye on. If you get the chance to attend next year's event, make sure to take it! If you had a chance to attend, or have interacted with any of these companies thus far, I'd love to hear your thoughts to see if they match up with mine. And for those of you who weren't able to attend, I'll keep watching for the pitch videos to get dropped and will link them here for everyone to see.
¹: Coincidentally, this was the same venue at which Amazon held its very first big developer-focused event, which I recapped way back in 2016. It's worth revisiting just to note how far the platform has come in a few short years.
²: Admittedly, I have absolutely no insight into who applied but didn't get into the accelerator this year - for all we know, maybe zero skill or AVS builders applied, or maybe the ones that did apply just didn't pass muster. But if that's the case, then wouldn't that also say something about the health of ASK/AVS?
Comments
MariadelMar Gonzalez
10/23/2019 01:12:37 pm

Great post! Thanks so much for including us (PL) in this! Also, thank YOU for being such an amazing Alexa Champion.

Jo Jaquinta
10/23/2019 01:53:53 pm

Along the lines of "I'll definitely be watching next year's cohort", an idea for another blog post: a follow-up on what happened to the ideas/proposals from last year's event.

Eric Olson
10/28/2019 10:56:08 pm

Hmmm, that's a solid idea. I'm finding, though, that it takes me a longass time to write blog posts these days, and I feel like that one would be super research heavy.


