Experimentation has been on my mind a lot lately. I am coming to believe that moving compensation (and all HR) programs and practices forward, at the ground level where most of us live and work, will happen through evidence-based learning and innovation, through tinkering and tailoring, and through raising the bar on our understanding of fundamental data analysis. With this in mind, I was happy to bump into the HBR article Step-by-Step Guide to Smart Business Experiments (registration required for full article) written by professors Eric T. Anderson of Northwestern's Kellogg School and Duncan Simester of MIT's Sloan School of Management. And happier still to borrow shamelessly from their material to create my own set of rules for compensation and HR experimentation, presented below.
Rule 1: Clarify the Question, Define the Concept
Any experiment begins, of course, by getting as clear as we can about what we are seeking to learn and the kind of evidence that will provide us with an answer. In academia, as Anderson and Simester note, researchers typically change one variable at a time so that they can know exactly what caused an outcome. While ideal, this approach may not always be possible or practical in business. For this reason, the authors advocate instead a proof-of-concept approach where you "change as many variables in whatever combination you believe is most likely to get the result you want."
Rule 2: Set it Up Like a Scientist
A business experiment requires three things: a treatment group (where the compensation or HR action in question is "applied"), a control group (a comparable group where no action is taken) and a feedback mechanism (which allows you to observe how those in each group respond). Feedback can come via data or metrics that measure the impact of the treatment; this could include things like voluntary turnover, engagement survey results, productivity or operating statistics. The feedback might also be gathered through more targeted survey efforts, interviews or focus groups.
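To make the setup concrete, here is a minimal sketch in Python (using pandas), assuming a simple table with a "group" column and a "voluntary_turnover" flag for each person. The column names and numbers are purely illustrative and are not drawn from the article.

import pandas as pd

# Illustrative data only: "group" marks treatment vs. control,
# "voluntary_turnover" is 1 if the person left voluntarily, 0 otherwise.
data = pd.DataFrame({
    "group": ["treatment"] * 4 + ["control"] * 4,
    "voluntary_turnover": [0, 0, 1, 0, 1, 0, 1, 1],
})

# The simplest feedback signal: compare the average turnover rate in each group.
turnover_by_group = data.groupby("group")["voluntary_turnover"].mean()
print(turnover_by_group)
print("Estimated effect of the treatment:",
      turnover_by_group["treatment"] - turnover_by_group["control"])

In practice you would, of course, want far more people in each group and a check on whether any difference is larger than random noise, but the structure (treatment, control, feedback) stays the same.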
Rule 3: Don't Miss the Natural Experiments
The article quotes Norwegian economist Trygve Haavelmo, who won the 1989 Nobel prize, observing that there are two types of experiments: “those we should like to make” and “the stream of experiments that nature is steadily turning out from her own enormous laboratory, and which we merely watch as passive observers.” The point here is that we (as busy, overextended professionals) should learn to recognize and take advantage of the low-hanging fruit: experiments that are already happening in our organizations, or conditions that readily lend themselves to easy experimentation. Keep an eye out for treatment and control groups that may already exist, or that are being created by factors outside your control.
Rule 4: Measure as Much as You Can
The more you measure, the more you may potentially learn. Slicing your data by different variables turns one experiment into many, while examining only aggregate data may cause you to miss things. You might look at your results by, for example, differences in tenure, performance, HQ versus field, income level and even demographics like age, gender, etc.
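If your data lives somewhere you can load into Python, slicing it this way takes only a few lines. The sketch below extends the earlier illustrative table with hypothetical "tenure_band" and "location" columns and compares treatment versus control within each slice; again, every column name and value is made up for illustration.

import pandas as pd

# Illustrative data only, with hypothetical segment columns added.
data = pd.DataFrame({
    "group": ["treatment", "treatment", "control", "control"] * 2,
    "voluntary_turnover": [0, 1, 1, 0, 0, 0, 1, 1],
    "tenure_band": ["<2 yrs", "2-5 yrs", "<2 yrs", "2-5 yrs"] * 2,
    "location": ["HQ"] * 4 + ["Field"] * 4,
})

# One experiment becomes many: compare treatment vs. control within each segment.
for segment in ["tenure_band", "location"]:
    sliced = data.groupby([segment, "group"])["voluntary_turnover"].mean().unstack()
    print("Turnover rate by", segment)
    print(sliced)

One caution: the more slices you examine, the more likely you are to find a "difference" that is really just noise, so treat segment-level findings as leads to investigate rather than conclusions.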
Rule 5: Keep Your Eye on the Goal
Some companies, Anderson and Simester note, mistakenly believe that the only useful experiments are the successful ones. The goal is not to conduct perfect experiments; the goal is not even (really) to vindicate your preferred policy direction. The goal is to learn and to position ourselves for better business decisions. The goal -- at the end of the day -- is to bring your leadership team data and evidence, rather than hopes and beliefs, about your compensation and HR recommendations.
What can you add to these rules based on your experience?
Image: Creative Commons photo "Mad Scientist Doggie" courtesy of www.petsadviser.com