Management Mayhem or Management Science?

We’re continually bombarded by information and advice in many parts of our lives. Management consulting epitomizes this, for better or worse. Some of this barrage is certainly of significant value, some is faulty but benign, and some is misguided with detrimental consequences. Given that the advice that’s being sought is generally outside the expertise of the seeker, how can one tell the difference and make an intelligent choice? References from those whom you trust, and who have had experience with the adviser, can be very helpful. But what else can you invoke?

Much is made of the importance of evidence-based research and science in making decisions in various domains. And this certainly holds as well in seeking organizational, management and leadership advice. There are many aspects of knowledge in these areas for which there is good predictive theory, not in the pejorative sense of ‘just a theory’ (e.g., your gut explanation of why your dog doesn’t like certain people), but in the scientific sense of a rigorous explanation (e.g., Einstein’s theory of relativity is not ‘just a theory’). So what exactly comprises a scientific approach and how can it be used as a measure to make a more informed decision when seeking consulting advice?

There are a number of steps that unfold in the course of acquiring scientific knowledge. The sequence generally goes something like this:

Observations: we observe something happen (e.g., when we release an object we’re holding in our hand, it falls to the ground).

Empirical generalizations: based on a variety of repeated, consistent observations we reach some general conclusions (e.g., an object released from constraint and support will fall directly to the ground, absent other intervening forces such as wind or magnetism to provide the missing support).

Theory development: we develop a rigorous narrative that explains the empirical generalizations and enables us to make predictions (e.g., general relativity explains that things consistently fall to the ground because large masses such as the earth curve the ‘spacetime continuum’ and objects move along these curves, an effect we call gravity; we can predict the existence of black holes and gravitational waves).

Generation and testing of hypotheses: we develop propositions and design associated experiments or qualitative studies to test things predicted by the theory (e.g., light from distant stars will bend as it passes a large mass such as the sun; this deflection can be measured from earth during a solar eclipse, when the moon blocks the sun’s glare, and is uniquely explained by general relativity; data can be collected and measurements made to provide evidence confirming the existence of black holes and gravitational waves).

And this becomes a virtuous circle: new observations and information are gathered via the experiments or qualitative studies; the theory is provisionally confirmed or rejected; the original generalizations are refined and enhanced; the theory is adjusted or recast in light of the new observations and the modified or new generalizations; new hypotheses are generated and new experiments and studies designed to test them; these in turn produce new observations, and so on.

This knowledge acquisition sequence applies as much to organizational, management and leadership knowledge as it does to physics, chemistry, and biology.

So if you’re trying to assess the validity and value of what a consultant is proposing, put it to the test: has the knowledge being described as addressing your needs been rigorously developed via the scientific method outlined above? Or is the offering based on empirical generalizations gleaned from assorted experience (or sometimes from thin air), with no rigorous and robust theory and no quantitative or qualitative studies producing validating empirical results (i.e., no evidence-based research)? I suspect you will frequently find the latter much more common than the former: the first two steps in the sequence have happened – generalizations based on some limited observations and experience – but not the two critical steps of rigorous theory and robust testing, nor any virtuous circle. Hence the fad of the week, the flavour of the month, and the guru of the day: unsubstantiated ‘knowledge’ cobbled together and sold as modern-day snake oil, shown to be flawed, wanting, and often detrimental during execution or, worse, after the consultant’s departure, leaving mayhem in their wake.

Not everything lends itself to scientific method. But where it can be (and has been) applied, it warrants your serious attention and scrupulous review in screening consulting offers and validating the ensuing application of knowledge proffered.
