I can’t stand reading business books. They’ve become an inherent contradiction: most extol bold new ways of creating personal or business efficiency while somehow justifying 270 pages of my valuable time to explain them. Bugger!
At first pass, James Surowiecki’s “The Wisdom of Crowds” falls right into this category. The book is split into two parts – the interesting part and the part that makes the book thick enough to charge $14.95 for. The best business books, though, are the ones with psyche-burning memes; those annoying simplicities that stick in your head like the catchy chorus of an 80’s pop song, constantly bubbling into your consciousness long after you’ve put the book down. “The Wisdom of Crowds” is Wham!-like in its ability to do this.
The basic premise of the book is (shockingly) that crowds of people can actually be pretty smart. The unintuitive part of the argument is that in certain situations the crowd is actually smarter than any individual expert in the group. To some extent this is an interesting counterargument to one of my favorite thought pieces, Jaron Lanier’s Digital Maoism. The classic example given is the gumballs-in-a-jar game, where a group of people is asked to guess the number of gumballs in a jar. While the individual estimates vary widely, the average of everyone’s guesses systematically ends up approximating the actual number of gumballs more accurately than the best individual guesser does. It’s a bit of a trite example, as I’m not familiar with many gumball-counting experts - but let’s not move too quickly here, we have 269 pages to go.
A more impactful example is the long-term performance of fund managers versus market indexes (fund managers rarely outperform the market over a long period of time). Another is the Iowa Electronic Markets, where the “crowd” of people buying and selling political outcome futures has historically been quite accurate. “The Wisdom of Crowds” is littered with other examples covering politics, economics, gambling, social experiments, and beyond.
Surowiecki’s justification for this phenomenon is that in most groups of people you have a distribution of localized or limited knowledge (which can also be regarded as a bias). By aggregating localized knowledge and bias, the outliers tend to cancel each other out and the resulting aggregate wisdom approximates reality. This is a sort of crowdsourcing version of dropping the high and low scores in the Olympics to approximate the real value of any 15-year-old Chinese gymnast’s floor routine. Another way to think about the effect of localized phenomena is that sometimes localized (or biased) information is actually the most influential on the greater outcome. Take, for example, Palin’s VP nomination. Some people will have a bias that her political earmarking will have a large effect on people’s opinion of her, while others will dismiss it as a triviality. The results will show that one of these answers is correct. If you expand this to include the 100 other accusations that have come up and will come up, somewhere there is a permutation of specifics that will affect the outcome.
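If you want to see just how little machinery this aggregation takes, here’s a minimal sketch in Python. The jar count and the guesses below are invented purely for illustration:

```python
# A minimal sketch of the aggregation idea: average the crowd's guesses,
# optionally dropping the extremes (the Olympic-scoring version), and compare
# against the single best individual guess. All numbers are invented.

def crowd_estimate(guesses, trim=0):
    """Average the guesses after dropping `trim` values off each end."""
    ordered = sorted(guesses)
    kept = ordered[trim:len(ordered) - trim] if trim else ordered
    return sum(kept) / len(kept)

if __name__ == "__main__":
    actual = 850  # hypothetical number of gumballs in the jar
    guesses = [400, 600, 650, 700, 800, 900, 950, 1000, 1200, 1400]

    crowd_avg = crowd_estimate(guesses)
    trimmed_avg = crowd_estimate(guesses, trim=1)
    best_individual = min(guesses, key=lambda g: abs(g - actual))

    print(f"crowd average:   {crowd_avg:.0f}  (off by {abs(crowd_avg - actual):.0f})")
    print(f"trimmed average: {trimmed_avg:.0f}  (off by {abs(trimmed_avg - actual):.0f})")
    print(f"best individual: {best_individual}  (off by {abs(best_individual - actual)})")
```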
Surowiecki further argues that in many situations if you add
in even a meager financial incentive to be right and allow individuals to
refine their guesses over time, you tend to get an even closer approximation of
the actual results (bookies evening out bets on both sides of a football game
would be a good example of this). In general this tends to work well in situations where new information and developments over time will affect the actual results (e.g. new information such as the weather on the day the football game will be played, or the outcome of the 2008 presidential election).
Okay, so enough about the basics – you get it. I just saved
you $14.95 so don’t say this blog never did anything for you.
CrowdMatch.com
So I was sitting around trying to think of all the
situations where the wisdom of crowds could outperform the experts. And then it
came to me as I saw a commercial for Free Communication Weekend on eHarmony: dating!
I have been on very few blind dates in my life (with paltry
success). I think the reason is that I don’t trust any one individual in my “dating
recommender group” to understand what ends up being a good match for me. Usually
an individual’s recommendation is based on what they want for me (or their
friend) and not necessarily what I will enjoy. So it dawned on me, why not
throw this decision out to the crowd in some kind of crowdsourcing fashion?
Here’s an experiment you can all run. Get 15 of your friends,
family members and perhaps even exes (yikes) to sign up for eHarmony’s next
free communication weekend. Have them all peruse the dating selections in your local
area. Each person in your “crowd” would pick who they think is the best match
based on their own biases about you. Then run a few rounds of elimination in which each person must again pick who they think is your best match, but from an increasingly limited pool of choices. Finally you’d whittle it down to the one (or few) people who should theoretically be your best match. Why not outsource your dating choices to the crowd, since they should supposedly be smarter than you (the self-anointed expert of the crowd)?
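For the curious, here’s a rough Python sketch of what those elimination rounds might look like. The profile names, the friends’ rankings, and the keep-half-each-round rule are all made up for illustration:

```python
# A rough sketch of the elimination-round experiment: every friend votes for
# their favorite profile still in the pool each round, and the least popular
# profiles drop out until one match remains. Names and rankings are made up.
from collections import Counter

def elimination_rounds(candidates, voters, keep_fraction=0.5, final_count=1):
    """Each voter picks one favorite per round; keep the top vote-getters."""
    pool = list(candidates)
    while len(pool) > final_count:
        votes = Counter(voter(pool) for voter in voters)
        keep = max(final_count, int(len(pool) * keep_fraction))
        pool = [name for name, _ in votes.most_common(keep)]
    return pool

if __name__ == "__main__":
    profiles = ["Alex", "Bailey", "Casey", "Drew", "Emery", "Finley"]
    # Each friend's bias is modeled as a personal ranking of the profiles;
    # they always vote for their highest-ranked profile left in the pool.
    rankings = [
        ["Casey", "Alex", "Drew", "Bailey", "Emery", "Finley"],
        ["Drew", "Casey", "Emery", "Alex", "Finley", "Bailey"],
        ["Casey", "Drew", "Bailey", "Emery", "Alex", "Finley"],
        ["Emery", "Casey", "Drew", "Finley", "Bailey", "Alex"],
        ["Drew", "Emery", "Casey", "Alex", "Bailey", "Finley"],
    ]
    voters = [lambda pool, r=r: next(p for p in r if p in pool) for r in rankings]
    print(elimination_rounds(profiles, voters))  # -> ['Casey'] with these rankings
```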
The reason this might work is based on Surowiecki’s argument
for local bias. Some people in your recommender crowd are going to be more
sensitive to a physical match for you (assuming they know what you like), some
are going to focus on hobbies you have in common, and some may focus on beliefs
or career choices. Through the aggregation of these biases a true pastiche of
what might make you happy may emerge. Come to think of it, this would be a
really interesting application to build on Facebook. People propose friends from their network that they think you should date, and then your friends effectively narrow down the field to the one (or few) people you should go out with. When CrowdMatch.com turns profitable, please just remember you heard it here first!
But Seriously...
While I believe that the results of CrowdMatch.com would have some legs (all puns intended), it dawned on me that there was likely a more valuable business problem waiting for a crowd solution. I think I have found the perfect problem. It’s a situation many of us have found ourselves in countless times: one where traditionally the (supposed) expert is listened to over the crowd’s opinion, and where the results have historically been very poor.
Sales forecasting in startups should absolutely be determined by the
wisdom of the crowd.
Let’s review why:
1) Sales forecasting in startups is generally mediocre at best (so it’s a problem that needs a different solution than the status quo)
2) The forecast is usually compiled by one person who is supposedly the expert (the VP of Sales)
3) Every person in the sales ecosystem has only a limited amount of information during the quarter (only certain people have directly interacted with prospects, salespeople are regionally focused, there may be limited knowledge of the true deliverability of a product in the quarter, etc.)
4) The expert simply cannot know all the localized information that impacts the outcome
5) New information enters the sales ecosystem constantly throughout the quarter
6) Every person in the sales ecosystem has a bias based on personal experience, style, compensation plan, or management approach
For a long time now I have had the pleasure of working with
Wendy Lea. Among many other things, Wendy
ingrained in me a fundamental belief that sales is not art, it’s a science.
When you find yourself in a pipeline debrief and you notice you’re starting to hear more stories than data, you know you’re dealing with a VP of Sales who either has no good news, has little definitive information, or doesn’t fundamentally believe that sales is a science rather than an art. As a side note, Wendy and I have also worked with Seth Levine in a number of board situations, and he wrote a fantastic piece-cum-plea for rationality on his blog about this exact subject. The point of this preamble is that I have come to believe strongly that
sales forecasting should be a rational and predictable exercise. So why is it
so damn hard to get right? I’ll attempt to argue Surowiecki-style that it’s
because we have learned (poorly) to trust the expert and not the crowd.
As background, I have always been close to sales but not directly responsible for it. That has allowed me, somewhat unimposingly, to have a lot of private conversations with people in the sales ecosystem about their own predictions. Will a certain deal close or not? What will the eventual size of a specific deal be? Will we make the number? Usually their predictions are based on their personal experiences in the sales process, either working directly with revenue sources (prospects, existing customers in the pipeline that you plan on upselling) or their own interactions with the sales team (if they are not directly a quota-carrying salesperson). In hindsight, somewhere in the cacophony of conversations there have almost always been pretty accurate answers to my questions. I’ll also note that rarely is it the VP of Sales who, no matter how much he or she talks to the team, has the closest prediction of the final quarterly number during the first couple of months of the quarter.
The last point is particularly interesting because it shows a weakness in the expert-driven forecasting model. Depending on the dynamic between the VP of Sales and your sales team, answers to various questions are going to vary widely depending on who is asking the question (a peer like me or a manager like the VP of Sales). I believe this is a trickle-down-of-bias phenomenon. CEOs are inclined to stay optimistic to the board, and thus the VP of Sales is inclined to stay optimistic to the CEO (often even in the face of abundant information to the contrary). And thus salespeople are inclined to stay optimistic to the VP of Sales. When peers talk about the quarter, they predict without this implicit bias.
The Wisdom of Sales Crowds
So this takes us back to the wisdom of crowds. Why not leverage the crowd to get the best possible forecast of the quarter as it moves along? Theoretically the collective perspective of the crowd would almost always be the best guess at any point in time. That doesn’t mean the collective guess of the crowd would be correct on the first day of the quarter, but it is likely that it would be the most accurate possible prediction given all the information available at the time. As the quarter progresses, this guesstimate should, in theory, track the final number far more closely than anything the VP of Sales could individually give. How many board members have been told by a VP of Sales that the quarter will be X only two weeks before quarter end, only to see the final number come in dramatically off X (unfortunately, almost always lower)?
What’s fantastically counterintuitive to the classic expert-driven model, per Surowiecki’s analysis, is that the more diversity you have in the crowd of estimators, the better. Why not let sales engineers participate in the prediction? What about professional services? What about other executives who are exposed to prospects during sales discovery, or product managers who might have the most current customer conversations at their disposal? This diversity, while it might seem to allow uneducated opinions into the mixture, tends to lead to more accurate results. To compound the problems of a crowd limited to just direct salespeople, remember that sales folks are often driven by a desire to hear that there might be a sale. Someone like a sales engineer who might not be comped directly on making the sale might be more inclined to hear the few “not gonna happen this quarter” signals thrown out by the prospect. The diversity should balance out these situations: an overly optimistic salesperson might be counterbalanced by an overly pessimistic sales engineer.
What’s interesting about this is that, other than informally, no one I know does this, or even does anything like it! To some extent this is the result of hierarchical structures in an organization. In the same way the VP of Engineering is looked to for the definitive reality of a release schedule, the VP of Sales is looked to for the definitive reality of the sales forecast. Come to think of it, the wisdom of crowds could be applied equally well to project deadlines as to sales (engineering, professional services implementation, etc.). One unique oddity of sales, though, is that the VP of Sales, in my experience, has a more inherent need to manage and control expectations (even if they are dead wrong in the end). Perhaps it’s the quarterly pressure of predictions (engineering teams don’t get lambasted if they deliver a product on October 4th instead of September 28th), or perhaps it’s just bad old-school behavior that we’ve never learned to question. While crowd predictions might be broadly applicable, for now let’s just focus on solving all the problems of the sales world.
Sales and Marketplace
So how would you implement this? While poor man’s approaches might work (an email survey here, an email survey there), Surowiecki provides a much more elegant approach. Remember that predictions are sometimes more accurate if there is an incentive to be right. A simple way to engage the crowd would be to set up a sales forecasting prediction market just like the Iowa Electronic Markets. This market would allow anyone in the sales ecosystem to buy and sell shares in “forecasting assets” tied to quarterly sales outcomes. Consider a marketplace with the following “assets” that could be bought and sold freely between market participants (for real money):
1) Final results of the quarter, in increments (say $750,000–$1M is an “asset”)
2) Whether a certain deal will close (e.g. “ECOLab will close this quarter” is the asset)
3) The size of a deal (e.g. “ECOLab will be greater than 250K”)
4) Whether the quarterly goal will be met (e.g. “Quarterly target of 2M will be met”)
In this forecasting market all the participants can
constantly buy and sell shares of these assets amongst each other based on
their local information. This would provide an incentive for the participants
because they could make real money in the transactions. You could also set up
the market to pay out to the most accurate participant (e.g. the person holding
the most shares of the right final answer would get a $1000 bonus).
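To make the mechanics concrete, here’s a minimal Python sketch of such an internal market: binary assets, anonymous traders swapping shares at whatever price they agree on, a visible price history, and a bonus paid to whoever ends up holding the most shares of the asset that proves correct. The asset names, trader handles, prices, and bonus amount are all hypothetical:

```python
# A minimal sketch of the internal forecasting market: binary assets, traders
# swapping shares at whatever price they agree on, a visible price history, and
# a bonus for whoever holds the most shares of the asset that proves correct.
# Asset names, trader handles, prices, and the bonus are all hypothetical.
from collections import defaultdict

class ForecastMarket:
    def __init__(self, assets):
        self.holdings = {a: defaultdict(int) for a in assets}   # asset -> trader -> shares
        self.price_history = {a: [] for a in assets}             # asset -> [(price, shares)]

    def trade(self, asset, buyer, seller, shares, price):
        """Record a trade of `shares` shares at `price` dollars per share."""
        self.holdings[asset][buyer] += shares
        self.holdings[asset][seller] -= shares
        self.price_history[asset].append((price, shares))

    def last_price(self, asset):
        """The crowd's current view: the most recently traded price."""
        history = self.price_history[asset]
        return history[-1][0] if history else None

    def settle(self, winning_asset, bonus=1000):
        """Pay the bonus to whoever holds the most shares of the correct asset."""
        positions = self.holdings[winning_asset]
        winner = max(positions, key=positions.get)
        return winner, bonus

if __name__ == "__main__":
    market = ForecastMarket(["Quarterly target of 2M will be met",
                             "ECOLab will close this quarter"])
    market.trade("ECOLab will close this quarter", "trader_7", "trader_3", 50, 0.70)
    market.trade("ECOLab will close this quarter", "trader_2", "trader_7", 20, 0.55)
    print(market.last_price("ECOLab will close this quarter"))  # 0.55
    print(market.settle("ECOLab will close this quarter"))      # ('trader_7', 1000)
```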
What would be interesting about this forecasting market is that the current and historical share prices of each asset could be made available to anyone: the CEO, the VP of Sales, board members, engineers, etc. As the quarter progressed you’d see these share prices shift as new information came into the business. One could imagine you’d see a lot of volatility in the share prices when dramatic new information came in as well. Imagine what you’d see in the forecasting market after a disastrous customer demo: a plunge in share values for the “We will make the quarter” asset. Or a run-up in share value when a bluebird deal came into the pipeline. It would also be interesting to note when trading volume was high on an asset, which might imply an opinion disparity between buyers and sellers around a forecasted sales outcome. Another interesting scenario would be high transaction volume on polarized asset classes in the market (a lot of people buying “The Quarterly Number Will Be High” assets and a lot of people buying “The Quarterly Number Will Be Low” assets).
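Building on the hypothetical ForecastMarket sketch above, the signals described here (the crowd’s implied probability, sudden price swings, trading volume) fall straight out of the recorded trades:

```python
# A small follow-on to the hypothetical ForecastMarket above: the signals this
# paragraph describes, derived purely from the trades recorded on one asset.
def asset_signals(market, asset):
    """Latest implied probability, largest price swing, and total volume."""
    history = market.price_history[asset]  # [(price, shares), ...]
    prices = [price for price, _ in history]
    return {
        "implied_probability": prices[-1] if prices else None,
        "largest_swing": max((abs(b - a) for a, b in zip(prices, prices[1:])), default=0.0),
        "total_volume": sum(shares for _, shares in history),
    }
```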
Part of what would make this forecasting market work well would be the anonymity of the buyers and sellers (as it is, with some exceptions, in the general stock market). This anonymity would prevent peer influence (e.g. if you knew all the salespeople were buying high-revenue assets, you might be inclined to buy some as well). It would also solve the practical human problem of salespeople not wanting to publicly disagree with the VP of Sales, whose inclination might be to manicure the forecasting information given to the CEO or the board until better information became available. This market would inevitably drive transparency into the sales forecasting process, which would also require all participants to curtail capricious actions (e.g. if a board member sees the high-revenue assets in the market tank, they need to resist getting on the phone and rattling the CEO’s cage).
The goal here would be to transcend the emotional nature of forecasting volatility (the “art”) in favor of better information for all the participants (the “science”). If, for example, a specific transaction asset dips (e.g. “Company XYZ will close this quarter”), the VP of Sales would have a leading indicator to help figure out what was going on – a leading indicator they likely would not get from talking to just the account manager for that prospect.
Putting the Force in Salesforce.com
The more I have thought about this, the more I believe that this is a fundamentally better approach to sales forecasting. I’d love for
someone to build a forecasting marketplace application for Salesforce.com. I’d
love to see (as a starting point) all Salesforce user accounts for a company be
able to buy and sell shares on the market anonymously. Perhaps in the beginning
all the forecasting marketplace “assets” for the quarter are automatically created
from SFdC data (you can see which opportunities in SFdC are scheduled to close
this quarter and what the predicted aggregate quarterly number would be). Or
perhaps the VP of Sales can set up the assets each quarter. Eventually you’d like to see anyone participating in the market be able to “float” an asset they want a prediction on. To some extent this approaches the generality of
some of the new prediction market players like Predictify.
The key here is to structure it as an incented marketplace and not just an
opinion poll (which much of Predictify is).
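As a hedged sketch of the auto-creation idea (and not a real Salesforce integration), the snippet below simply assumes that this quarter’s opportunities have already been pulled out of SFdC as records carrying the standard Name, Amount, and CloseDate fields; the deals, amounts, and quarterly target are invented:

```python
# A hedged sketch of auto-creating quarterly market "assets" from opportunity
# data. The `opportunities` list stands in for whatever a Salesforce query would
# return; the field names (Name, Amount, CloseDate) mirror standard Opportunity
# fields, but the records and thresholds here are entirely hypothetical.
from datetime import date

def quarter_bounds(today):
    """Return the first day of the quarter containing `today` and of the next quarter."""
    start_month = 3 * ((today.month - 1) // 3) + 1
    start = date(today.year, start_month, 1)
    end = (date(today.year + 1, 1, 1) if start_month == 10
           else date(today.year, start_month + 3, 1))
    return start, end

def build_assets(opportunities, quarterly_target, today=None):
    """Turn this quarter's pipeline into tradable yes/no forecasting assets."""
    start, end = quarter_bounds(today or date.today())
    in_quarter = [o for o in opportunities if start <= o["CloseDate"] < end]
    assets = [f"Quarterly target of {quarterly_target:,.0f} will be met"]
    for opp in in_quarter:
        assets.append(f"{opp['Name']} will close this quarter")
        assets.append(f"{opp['Name']} will be greater than {opp['Amount']:,.0f}")
    return assets

if __name__ == "__main__":
    pipeline = [
        {"Name": "ECOLab", "Amount": 250_000, "CloseDate": date(2008, 11, 15)},
        {"Name": "Acme Corp", "Amount": 400_000, "CloseDate": date(2008, 12, 20)},
    ]
    for asset in build_assets(pipeline, 2_000_000, today=date(2008, 10, 1)):
        print(asset)
```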
Either way, this approach would drive a level of transparency into sales forecasting that is sorely lacking. It’s not that most startup VPs of Sales are bad; it’s just that the expert model doesn’t work with such a broad diversity of localized information. And hey, if nothing else, you’ll finally have a way to pacify anyone who complains that only salespeople make money off of sales.