Indeed, this era of rapid technological advances and marked disruption often brings problems that are poorly structured or involve non-routine decisions, as well as challenges with no real precedents or plagued by conflicting facts or inadequate information. Also confounding this “quagmire of ambiguity” are motivational factors, including the perceived importance of the decision or its potential impact on the decision-maker.
For some time now, tech firms, social media platforms and marketing companies have positioned “big data” as the silver bullet of problem-solving. Big data is to contemporary decision-making what “training” was to HR functions in the 1980s. Make no mistake, we agree that more data is always more useful than less. But this assumes that data is synonymous with information or insight – and that can often be a tenuous proposition. Data must be interpretable in order to be actionable, and by extension to be useful, and by further extension to be profitable.
Having enormous volumes of ambiguous data is arguably no better than having nothing at all. In fact, having data can be considerably worse if it leads to the wrong inferences and, hence, ineffective decisions. We are referring to a common trap that has been known to statisticians for decades: large samples of data can make random or fluke occurrences appear meaningful. Dr. Paul Meehl – a famous Professor of Psychology at the University of Minnesota who ranked among the top 100 most cited psychologists of the 20th century – put it another way when he cautioned that “everything correlates to some extent with everything else”. Therefore, overblown or even illusory trends are lurking in all samples of big data.
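Meehl’s warning is easy to demonstrate. The sketch below is an illustrative simulation of our own (not taken from the research discussed here): it generates 50 “metrics” that are nothing but random noise, scans every pairwise combination, and still turns up pairs that appear meaningfully correlated purely by chance:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
# 50 "metrics", each with 100 observations of pure noise --
# by construction, no real relationship exists anywhere.
series = [[rng.gauss(0.0, 1.0) for _ in range(100)] for _ in range(50)]

# Scan all 1,225 pairs for the strongest apparent relationship.
strongest = max(abs(pearson(series[i], series[j]))
                for i in range(len(series))
                for j in range(i + 1, len(series)))
print(f"strongest correlation among pure noise: {strongest:.2f}")
```

Even though no genuine relationship exists in this data, the best-looking pair typically shows a correlation of roughly 0.3 or more, and the more metrics and comparisons a dataset offers, the stronger such illusory trends become.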
Although big data might not be the silver bullet for decision-making in today’s “quagmire of ambiguity,” new research using computer modelling is validating a simple “hack” voiced in interviews with top leaders in the hospitality industry. These leaders have independently touted the utility of a “Personal Board of Advisors (PBA)” throughout their careers and current problem-solving. The PBAs created by these leaders tended to share the same general characteristics: they were carefully selected to be (1) small in number, (2) composed of individuals with markedly diverse skill sets – e.g., some members were sports coaches and economics professors, and almost always the PBA included the leader’s spouse – and (3) independent from the organization, to guard against groupthink, internal politics, and the blind spots that come when everyone is watching the same proverbial ball. Moreover, this impartiality enabled advisors to be completely candid, honest, and focused only on the leader’s best interests. Having access to such a resource also protected leaders from the inherent loneliness, heaviness, and isolation that comes with leadership roles.
That said, computer modelling by renowned psychometrician and computer scientist Dr. Rense Lange revealed some other critical nuances when leaders put their PBAs to work. Here we are interested in dynamic decision-making, where the intent is to keep gathering information until one of two solutions can confidently be rejected. For the purpose of the computer simulations, this approach was applied to a situation in which an executive needs to gather additional information from colleagues or staff in order to reject one of two clearly different alternatives, i.e., the choice does not resemble a 50-50 coin flip. Rather, the scenario is one where a reasonable amount of statistical risk is acceptable, but the time available to consult staff members is limited. Accordingly, Lange modelled the case where two outcomes (called “no” vs. “yes”, or “low” vs. “high”) are thought to occur with probabilities of 20% vs. 80% – and this situation was compared to the more extreme case where these outcomes occur with 10% vs. 90% likelihood, as well as the less extreme case of 40% vs. 60%.
It was built into the modelling that additional consultations would be sought until there was 90% certainty, i.e., the consultation process stops when it is clear that the outcome is either “low” (its chance of occurring is 20% or less) or “high” (its chance of occurring is 80% or higher), with additional information being sought otherwise. Throughout, it was also assumed that an erroneous final decision (i.e., labelling “low” as “high”, or vice versa) is allowed to occur in at most 10% of the cases.
These simulations showed that, in practice, a leader needs to consult at least three (3) advisors but rarely more than seven (7) to reach the most reasoned decision. Contrary to what is assumed in this environment of big data, more data is not necessarily better. Moreover, the advisors in the computer models were consulted in a sequential, cumulative way, rather than the leader taking a poll or allowing the advisors to deliberate collectively akin to a trial jury. That is, leaders should consult each advisor one-on-one, in sequence, without any advisor being influenced by feedback from previous advisors.
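The sequential consultation process described above can be sketched as a simple Bayesian stopping rule. The following is a minimal illustration under our own assumptions (a 50-50 prior and independent advisor signals), not Dr. Lange’s actual model: after each advisor’s one-on-one answer, the leader updates the probability that the true outcome rate is “high” (80%) rather than “low” (20%), and stops consulting once either hypothesis reaches 90% certainty:

```python
import random

def consult_advisors(p_true, p_low=0.2, p_high=0.8,
                     certainty=0.9, rng=random):
    """Consult advisors one-on-one until the posterior for one
    hypothesis ("low" vs. "high") reaches the certainty threshold.
    Returns (number of consultations, decided "high"?)."""
    post_high = 0.5                       # assumed 50-50 prior
    n = 0
    while max(post_high, 1.0 - post_high) < certainty:
        n += 1
        says_yes = rng.random() < p_true  # advisor's independent signal
        # Bayes update: weigh this answer under each hypothesis
        like_high = p_high if says_yes else 1.0 - p_high
        like_low = p_low if says_yes else 1.0 - p_low
        num = post_high * like_high
        post_high = num / (num + (1.0 - post_high) * like_low)
    return n, post_high >= certainty

# Simulate many decisions where the true outcome rate really is "high"
rng = random.Random(1)
runs = [consult_advisors(0.8, rng=rng) for _ in range(10_000)]
avg = sum(n for n, _ in runs) / len(runs)
errors = sum(1 for _, high in runs if not high) / len(runs)
print(f"average consultations: {avg:.1f}, error rate: {errors:.1%}")
```

With these settings the average number of consultations lands in the low single digits and the error rate stays under the 10% allowance, echoing the published finding that a handful of sequential, independent consultations is usually enough; the exact figures here depend on our simplifying assumptions and will not reproduce Lange’s results precisely.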
In short, there is both an art and science to gaining and leveraging advice… and it is a tactic and skill that helps to define great leadership.