The problem is that most of these claims are empty promises, based on old results or validated by poorly constructed tests. Case in point: many vendors claim that a positive change from a pre-test to a post-test after using their solutions is evidence of effectiveness. What they fail to recognize is that without a proper control group (a group that takes the same tests but does not participate in the learning experience), it is almost impossible to draw definitive conclusions from pre- and post-testing alone.
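To see why a control group matters, consider a minimal simulation (all numbers here are hypothetical, purely for illustration): if simply retaking a test produces a practice gain, an untrained control group also improves, and only the difference between the two groups' gains reflects the training itself.

```python
import random

random.seed(42)

def simulate_scores(n, treatment_effect):
    """Simulate pre/post test scores for n people. Everyone gains a few
    points just from retaking the test (a hypothetical practice effect);
    only the treated group also receives the real training effect."""
    pairs = []
    for _ in range(n):
        pre = random.gauss(70, 5)
        practice_gain = random.gauss(5, 2)  # retest/practice effect alone
        post = pre + practice_gain + treatment_effect
        pairs.append((pre, post))
    return pairs

def mean_gain(pairs):
    """Average post-minus-pre improvement across the group."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

treated = simulate_scores(200, treatment_effect=3.0)
control = simulate_scores(200, treatment_effect=0.0)

print(f"Treated gain:  {mean_gain(treated):.1f}")   # roughly 8 points
print(f"Control gain:  {mean_gain(control):.1f}")   # roughly 5 points
# Only the difference between the groups isolates the training effect:
print(f"Isolated effect: {mean_gain(treated) - mean_gain(control):.1f}")
```

Looking at the treated group alone, a vendor could trumpet an eight-point gain; the control group reveals that most of it would have happened anyway.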
We can’t necessarily blame our industry brethren for this oversight, given that few of them have been schooled in experimental research and design. Therefore, the notion of control groups is likely new to them. What’s more, perhaps I’m merely expressing my latent frustration over having to spend more than four years in graduate school learning the ins and outs of experimental design and statistics.
The fact of the matter is, though, that talent management firms and training suppliers shouldn’t make claims about stellar results unless proper controls for all the relevant factors are in place. Often this requires spending extensive time, resources, and effort on data collection. Sadly, such time and resources aren’t always available.
Again, let’s consider the control group idea. Quite frankly, it can become overly complicated to “control” all the potentially intervening—and interfering—variables when doing real-world research. Let us not forget that working with real people is not like controlled experiments with rats in a laboratory. Nevertheless, this isn’t a suitable excuse for either not conducting any validation research or relying on poorly constructed experimentation.
Don’t agree with me? Let’s dig even deeper into vendor validity claims for talent selection inventories. While it may be true that statistically significant correlations can be achieved and reported (with very small coefficients and very large test populations), these correlations typically account for a minuscule amount of the variability in the prediction process.
For instance, even a statistically significant .25 correlation between an assessment tool and some measure of on-the-job performance accounts for just 6 percent of the variance in that performance (variance explained is the square of the correlation: .25 squared is .0625, or about 6 percent), leaving the other 94 percent to all the other contributing factors. Consequently, suggesting that a specific inventory is an effective predictor of job performance is like saying that two people looking for love, who match up extremely well on a personality inventory, are going to get married. It is certainly possible, but the chances are very slim based on this one measure.
Instead, I believe that every buyer should act like they are from the “show me” state of Missouri. They need to demand that vendors show them exactly how a learning or performance solution (whether a selection instrument, a training program, or a developmental inventory) does what it says it does. More importantly, the vendor needs to have plenty of validated data to back up those claims.
Conversely, it behooves talent management firms to make sure the claims they make about their products and services are actually validated. In fact, if proper validation research has not been conducted, it is better to admit that and let the client purchase on face value than to make false or uneducated claims of effectiveness.
To be sure, knowingly making false claims isn’t a widespread practice in the talent management industry; more often, questionable claims simply rest on inadequate data. The problem occurs often enough, however, that talent firms should be aware of the issue, if for no other reason than to help clients clearly understand the difference between valid and invalid claims when evaluating a potential purchase. After all, it takes only one rotten apple to spoil the entire barrel. And, as guardians of our industry, it is our responsibility to ensure that bogus claims do not go unchallenged.
All of this leads to a few questions: If you are a leader of a talent management firm, have you thoroughly validated your effectiveness claims? Are there experiments that should include control groups to demonstrate effectiveness more clearly? And would your claims hold up in court if you were sued for false representation?