January means resolutions, heavy gym/dating/holiday marketing campaigns, and endless “7 social media trends for 2016” type blog and news posts.
But does anyone stop to consider whether last year’s predictions came true? Are these types of posts really worth reading (or writing)? And how original are they – if you’ve read one, have you read them all?
We decided to do what we do, and apply some quantitative and qualitative analytical rigour to answering these questions.
We analysed social media trend and predictions posts from over a hundred media outlets, agencies, consultancies, research houses, and social software providers, to see just how much insight there really was. This included the top UK social, creative, PR and media shops, plus selected others. We looked at their 2015 and their 2016 posts, and assessed them for:
- Originality: how different were their forecasts from everyone else's?
- Accuracy: how many of their 2015 forecasts came true?
Not everyone produces this type of content (at least not publicly): we found about a third of them chancing their crystal balls.
The results were striking.
- Originality: Two thirds of 2016 predictions were repeated across more than 70% of the authors. That's even more homogeneous than last year, when closer to 50% of predictions were shared across 70% of authors.
- Accuracy: Around 60% of 2015 predictions can be said to have ‘come true’. However, many of these were evolutionary, multi-year phenomena (e.g. mobile social usage), so can be regarded as pretty safe bets. Predictions of things which could actually occur solely within the 12-month timeframe (e.g. a service being acquired or rocketing in popularity) saw c.30% accuracy.
- Are there notable differences in the ‘good’ predictors? Those who were more accurate in 2015 didn’t share many traits. The only discernible themes are that they typically make fewer predictions, and they tend to publish on their own sites rather than on third-party ones.
- Wisdom in crowds? The frequency with which a prediction is repeated within the cohort does somewhat improve its likelihood of coming true, but this correlates with the multi-year, evolutionary type of prediction, which is a) ‘easier’ and b) supported by some preceding evidence. They’re safe bets to make, essentially.
Interesting asides from trawling all of this future gazing:
- In 2015, predicting significant user growth across platforms was notably popular, yet it has dried up in 2016. This is amusing, considering it was one of the more accurate predictions made, while many less accurate ones are being repeated for 2016!
- Predicting the growth of content was the most universally popular theme both last year and this. It is worth reflecting on whether that popularity has as much to do with the prediction serving the interests of a good number of the predictors as with anything else.
- The most notable ‘wrong’ prediction, yet one being heavily repeated for 2016, is the advent of mass virtual reality adoption. Given that the providers of these platforms, such as Oculus Rift, don’t yet envisage it being mass, this is puzzling.
- Based on the above data, the most effective use of your trend-reading time is to stop after two posts on the topic: you are more than 90% likely to have read two thirds of all the predictions being made. Getting a good chance of covering more than four-fifths of the predictions requires much more reading, since the more novel predictions are concentrated among a relatively small proportion of authors. Plus, if a post has few predictions, read it (probably good advice for life).
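The diminishing returns above can be sketched with a toy simulation. All the parameters here are illustrative assumptions, not the study's data: a pool of 30 predictions, two thirds of them "common" (each appearing in a given post with probability 0.7, echoing the ">70% of authors" overlap figure) and one third "novel" and rarely shared.

```python
import random

# Illustrative sketch only: assumed parameters, not the study's actual data.
random.seed(1)

N_PREDICTIONS = 30
COMMON = set(range(20))        # two thirds of the pool, widely repeated
NOVEL = set(range(20, 30))     # the remaining, rarely-shared third

def random_post():
    """One author's post: most of the common picks, the odd novel one."""
    picks = {p for p in COMMON if random.random() < 0.7}
    picks |= {p for p in NOVEL if random.random() < 0.1}
    return picks

# Average share of the whole prediction pool covered by reading two posts.
trials = 5_000
total = sum(len(random_post() | random_post()) / N_PREDICTIONS
            for _ in range(trials))
mean_coverage = total / trials
print(f"Mean share of all predictions covered by two posts: {mean_coverage:.2f}")
```

Under these assumed numbers, two posts already cover roughly two thirds of the pool; what remains is dominated by the rarely-shared novel predictions, which is why pushing past four-fifths coverage takes so much more reading.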
Worth noting is that the traditional consultancy businesses, which have increasingly moved into digital marketing provision in recent years, don’t produce these types of forecasts. Perhaps reflecting their background, they contribute predictions only within the wrapper of heavy-duty research.
Conclusion: Trend predictions for social media are only moderately likely to come true, despite the high degree of overlap between forecasters. Highly original forecasts don’t significantly reduce the accuracy, when judged like-for-like with predictions which come true within the same time frame.
The best thing that could happen would be a) for predictors to start including their success stats from the previous year, b) for them to assess their own originality, and c) for everyone to move towards only predicting multi-year phenomena, where forecasters appear both to share some degree of consensus and to achieve some degree of accuracy. Which is what the rest of the business world has known for some time, it appears…
Note 1: We specifically haven’t named good/bad performers, because the point here is about the content trope as a whole, not specific parties within it.
Note 2: The sample is large enough to draw some meaningful conclusions, but not large enough for statistical significance.
Note 3: There is some inherent subjectivity in deciding whether something ‘came true’ last year, as well as in how we’ve clustered the predictions.