Monday, November 04, 2013

Amazon, TV Pilots, and the Wisdom of Crowds

On Saturday, the Wall Street Journal reported on Amazon's attempt to develop original television programming (similar to Netflix's entry into this market). Here's an excerpt from the article that describes how Amazon's efforts differ from the way the large television networks launch new shows:

A group of 14 "pilot" episodes had been posted on the company's website a month earlier, where they were viewed by more than one million people. After monitoring viewing patterns and comments on the site, Amazon produced about 20 pages of data detailing, among other things, how much a pilot was viewed, how many users gave it a 5-star rating and how many shared it with friends. Those findings helped the executives pick the first five pilots—winnowed down from an original pool of thousands of show ideas—that would be turned into series. The first will debut this month: "Alpha House," a political comedy about four politicians who live together, written by Doonesbury comic strip creator Garry Trudeau.

Amazon is taking advantage of the wisdom of crowds. Traditional networks use focus groups to test out ideas for new shows. Aren't thousands of reviews better than the responses from a small number of people in focus groups? After all, focus groups can be very problematic, and not simply because of the small sample size. Well... it depends. The wisdom of crowds works when the responses from a large, diverse audience are INDEPENDENT of one another. In other words, it works when no individual's response influences anyone else's. Does independence hold when it comes to online reviews, or does herd behavior take place?
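To make the independence condition concrete, here is a small Python simulation (my own sketch, not anything from the article). It compares a crowd whose members rate a pilot independently with a crowd whose members sometimes copy the majority-so-far; the true quality, the crowd size, and the copying weight are all made-up parameters for illustration.

```python
import random

random.seed(42)

TRUE_QUALITY = 0.6   # assumed fraction of viewers who genuinely like the pilot
N_RATERS = 1000

def independent_ratings():
    """Every rater votes purely on private opinion -- the wisdom-of-crowds case."""
    return [1 if random.random() < TRUE_QUALITY else 0 for _ in range(N_RATERS)]

def influenced_ratings(copy_weight=0.5):
    """Each rater sometimes copies the majority-so-far instead of voting privately."""
    votes = []
    for _ in range(N_RATERS):
        private = 1 if random.random() < TRUE_QUALITY else 0
        if votes and random.random() < copy_weight:
            # Herd behavior: side with the current majority.
            vote = 1 if sum(votes) / len(votes) >= 0.5 else 0
        else:
            vote = private
        votes.append(vote)
    return votes

ind = independent_ratings()
inf = influenced_ratings()
print(f"true quality:      {TRUE_QUALITY:.2f}")
print(f"independent crowd: {sum(ind) / len(ind):.2f}")  # reliably close to 0.60
print(f"influenced crowd:  {sum(inf) / len(inf):.2f}")  # can cascade toward 0 or 1
```

The independent crowd's average lands reliably near the true quality, while the influenced crowd can cascade toward an extreme that depends mostly on the first few votes. That is exactly the failure mode the independence condition is meant to rule out.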

Tim Harford of the Financial Times asked this question in a recent column. He cites an interesting study that I blogged about earlier this fall. According to the Wall Street Journal, researchers Muchnik, Aral, and Taylor found that "positive online ratings can be strongly influenced by favorable ratings that have come before." They discovered that initial positive ratings did indeed create herd behavior: ratings that followed were more likely to be positive because of the influence of those early evaluations. According to Harford, "Positive comments tended to attract birds of a feather – a comment sent into the online world with a single positive vote attached was 30 per cent more likely to end up with at least 10 more positive votes than negatives." Interestingly, initial negative ratings did not lead to similar herd behavior. People who liked a product often chimed in to "counterbalance" an early, unusually negative review.
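To see why that asymmetry produces the pattern Harford describes, here is a toy simulation in the same spirit (my own sketch; the model and every parameter are invented for illustration, and none of this is the actual Muchnik, Aral, and Taylor methodology). It assumes an early upvote nudges later raters toward voting positively, while an early downvote gets "counterbalanced" and adds no downward pull, and then counts how often a comment ends up with a net score of at least +10.

```python
import random

random.seed(7)

def net_score(first_vote, n_followers=50, base_up=0.55, herd_boost=0.3):
    """Toy model of the asymmetry: an early upvote raises the chance that each
    later rater votes positively, while an early downvote is 'counterbalanced'
    by fans and leaves the baseline unchanged. All parameters are invented."""
    p_up = base_up + (herd_boost if first_vote == 1 else 0.0)
    ups = sum(1 for _ in range(n_followers) if random.random() < p_up)
    return ups - (n_followers - ups)   # upvotes minus downvotes

pos_runs = [net_score(+1) for _ in range(5000)]
neg_runs = [net_score(-1) for _ in range(5000)]

def frac_at_least_10(runs):
    return sum(s >= 10 for s in runs) / len(runs)

print(f"P(net score >= +10 | early upvote):   {frac_at_least_10(pos_runs):.2f}")
print(f"P(net score >= +10 | early downvote): {frac_at_least_10(neg_runs):.2f}")
```

Even this crude model reproduces the qualitative finding: comments seeded with a single positive vote are far more likely to snowball past +10, while early negatives get washed out by the baseline.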
