Towards the end of last year, I implemented a feature in a few of my skills that was meant to chip away at one small corner of one of the biggest problems in the VUI space: conveying application context. It was an idea I had been toying with for quite a while, but a few factors made the time right to implement it, and I'm glad to say it's been hugely successful! Read on to hear about what I built, and why...
It was around late November of 2015 that my original collaborator on CompliBot and InsultiBot and I started preparing our initial submission for certification. Last week, I submitted those same two skills again, with a fun new feature I've been messing around with privately for a while. As you might've guessed from the fact that I'm writing about it, it did not go well...
We wanted to take a moment to talk about a new Alexa feature, our implementation of it in AstroBot (our space-API aggregator skill), and some thoughts on how best to take advantage of the new capability.
So, some exciting news from Amazon today - the first batch of skills with push notifications enabled from the private beta has finally been released. Unfortunately, in their official post Amazon called out several skills as exemplars of the new feature, but failed to mention that we at 3PO-Labs were also in the beta, and also have a live skill with notifications: AstroBot. In fact, due to a clerical error with certification a couple of months ago, we were actually the very first third-party skill to go live with notifications. We've been incognito ever since, but we're relieved to finally be able to talk about what we've done...
We're super excited to introduce to you our newest Alexa Skill, AstroBot. AstroBot is an aggregator of a few space-related APIs, originally inspired by the simple-yet-brilliant howmanypeopleareinspacerightnow.com. Details, attributions, etc., after the break...
Here at 3PO Labs, a common topic of conversation is that of "semantic vs. syntactic" language for voice assistants, specifically Alexa. The crux of the discussion is "How do you get a voice assistant to do what you mean, rather than what you say?" This question was one of the driving forces behind our creation of DiceBot, as we explain in more detail (and sketch below) after the break...
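To make the distinction concrete, here's a minimal sketch of what mapping syntax to semantics can look like: two utterances with very different surface forms normalize to the same underlying request. This is a hypothetical illustration, not DiceBot's actual code; the parse_roll helper and its word list are assumptions made for the example.

```python
import re

# Hypothetical sketch only -- not DiceBot's actual implementation.
def parse_roll(utterance: str):
    """Map syntactically different utterances to one semantic dice spec.

    Returns a (count, sides) tuple, e.g. (2, 6) for "2d6", or None
    if the utterance doesn't look like a dice roll.
    """
    text = utterance.lower().strip()

    # Syntactic form 1: dice notation, e.g. "roll 2d6"
    m = re.search(r"(\d+)\s*d\s*(\d+)", text)
    if m:
        return int(m.group(1)), int(m.group(2))

    # Syntactic form 2: natural language, e.g. "roll two six sided dice"
    words = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
             "six": 6, "eight": 8, "ten": 10, "twelve": 12, "twenty": 20}
    m = re.search(r"(\w+)\s+(\w+)[- ]sided", text)
    if m and m.group(1) in words and m.group(2) in words:
        return words[m.group(1)], words[m.group(2)]

    return None

# Both utterances mean the same thing, so both yield (2, 6):
assert parse_roll("roll 2d6") == (2, 6)
assert parse_roll("roll two six sided dice") == (2, 6)
```

The point is that the skill's logic keys off the normalized (count, sides) pair, so any phrasing that means "roll two six-sided dice" lands on the same code path.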
There's been a lot of talk in the Alexa dev community lately about all of the tutorial- or template-based skills that are flooding the market (and, of course, the related discoverability concerns). All these "build a skill in under an hour" walkthroughs are great for bringing new devs into the fold, but they got us thinking about what it really takes for an experienced Alexa developer to build something well. The question we came to was: could one of us build a skill from nothing to submission in just one day? To answer it, I decided to try, cataloging the journey all the while. Read on for more...
*As presented by CompliBot and InsultiBot

We've been heads-down working on DERP's Next Big Thing™ (almost there!), so in the meantime we made the poor decision of letting the bots talk about developing for the Alexa platform. All that after the break...