Many of these aspects are still prevalent, but naturally, as the install base of Alexa-compatible devices grew and more people jumped into development, we started to see a more aggressive, competitive undertone in the skill store. In the right circumstances this can be great for a platform: a modicum of competition can help the ecosystem produce better skills for its users.
Unfortunately, along with the upsides of a marketplace comes the shady underbelly - the grey area where people are willing to do anything and everything not explicitly forbidden in order to make their product "win". This can show up in all sorts of ways in the skill store - invocation squatting, naming a skill so it appears at the front of the list, and so on - but what I want to talk about today is astroturfing.
What is "Astroturfing"?
A Precedent Set
A post went up on the development forums about the topic, and it didn't take long for us to discover (thanks, LinkedIn!) that all three reviewers were in fact employees of the company that had just released the skill. To make matters worse, all three reviews were completely unhelpful, saying nothing more than "Great work!".
The response was angry and immediate, with other members of the community posting 1-star reviews to balance out the astroturfed ones. It sparked a great discussion on the Alexa forums about proper review etiquette, and at least among a subset of developers there was a consensus that this sort of behavior was inappropriate.
This specific instance did have a happy ending, however: the proprietor of the skill got involved in the conversation, and both sides mutually agreed to remove the astroturfed reviews as well as the counter-reviews. For a brief moment, the Alexa community was granted a reprieve from being spoiled by the internet, but it was clear to everyone that the illusory wall surrounding the platform couldn't hold out much longer.
Wherein I call out those who have crossed the line...
Amazon... wait... what?
And so many more...
So why does this even matter?
- The Alexa ecosystem, and especially the skill store, is still in its infancy. One of the strangest phenomena so far has been the spottiness of review volume - some skills are approaching the 100-review mark, whereas others sit unreviewed for long periods. There doesn't yet seem to be a critical mass of reviews to drown out biased ones.
- Along those lines, the skill store's sorting mechanisms are rudimentary. While Amazon did finally add a "best rated" sorting option within the last week, that algorithm looks only at star rating and doesn't consider the number of reviews. As a result, all of the skills called out in this article regularly show up on the first page of sorted-by-rating results, despite having few to no reviews from actual users of their systems. This is especially damaging because it buries skills that are actually really good: something like TsaTsaTzu's Starlanes, widely regarded as a top-tier skill, now sits several pages deep.
- This ecosystem is still fairly untainted - I don't think we've yet reached the point where we have to concede that the rabid anonymous masses of the internet will eventually ruin it. What is standard behavior on other platforms doesn't have to become standard behavior here.
At this early stage, we face something similar to the prisoner's dilemma. If we, as developers on this platform, could all agree to take the high road, everyone would be better off. Barring that agreement, however, each of us is tempted to make a bad choice just to maintain parity with the other skills in the store. At this point I have no solutions, just the quandary.
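To see why the prisoner's dilemma analogy fits, here's a toy sketch of the incentive structure. The payoff numbers are entirely hypothetical, chosen only to show the shape of the problem, not taken from any real skill-store data:

```python
# Hypothetical "visibility" payoffs for two skill developers, each of whom
# can abstain from astroturfing or do it. Keys are (my_choice, their_choice);
# values are (my_payoff, their_payoff). The numbers are illustrative only.
PAYOFFS = {
    ("abstain",   "abstain"):   (3, 3),  # honest rankings, healthy store
    ("abstain",   "astroturf"): (0, 5),  # I get buried; they jump the list
    ("astroturf", "abstain"):   (5, 0),  # I jump the list unfairly
    ("astroturf", "astroturf"): (1, 1),  # reviews become meaningless for all
}

def best_response(their_choice):
    """Return the choice that maximizes my payoff, given theirs."""
    return max(("abstain", "astroturf"),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

# Whatever the other developer does, astroturfing is the individually
# "rational" move - even though mutual abstention (3, 3) beats mutual
# astroturfing (1, 1) for everyone involved.
print(best_response("abstain"))    # -> astroturf
print(best_response("astroturf"))  # -> astroturf
```

That's the quandary in miniature: defection dominates for each individual, so without a shared agreement the store drifts toward the worst collective outcome.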