
Recently there has been increased publicity around “fake” scientific studies. This has two major implications for our work. First, it may deepen existing skepticism about the value of applying scientific evidence to D&I efforts. Second, well-meaning D&I consultants and trainers may not know how to distinguish real studies from fake evidence, or high-quality studies from low-quality ones.

The push for more diversity and inclusion in the workplace has many organizations evaluating their policies and instituting more robust inclusion training programs. This shows that organizations recognize the benefits of a diverse workforce and are willing to take steps to make it a reality. However, as the pool of D&I consultants expands at a breakneck pace, so does the risk of relying on partners who base their methods on sketchy science.

Sometimes consultants are influenced by fake science: fabricated scientific “facts” disseminated across the internet. Such findings may also originate from scientific-sounding journals of low or uneven quality. Not all scientific findings are created equal. Today’s media landscape has a ravenous appetite for content that fits into short, easy-to-consume headlines and talking points, which can lead the media to publicize weak evidence or oversimplify strong evidence.

Without a strong academic background, the layperson often has difficulty distinguishing legitimate, peer-reviewed journals with rigorous criteria for publication from publications with academic-sounding names but no real credibility in the scientific community.

How do true scientific experts distinguish fake from real, weak from strong evidence?

Not all journals are created equal

We take a close look at each journal’s editorial board, reviewers, and standards and criteria for publication. Most fake studies appear in sources that are, to an expert, simply not credible. If the author has to pay to be published, there is a very good chance it is not a high-quality journal.

Not all studies are equally valid

The study design, sample (the people included), measures, and methods matter. A lot. That is why scientists spend so long in school. We evaluate the quality of each study when considering whether and how to apply its findings.

Replication matters

Findings from one study that have not been replicated (that is, confirmed by other high-quality studies) should get less weight, if any.

Depth of understanding matters

One of the most common ways to measure implicit (unconscious) bias is an individual’s score on the Implicit Association Test (IAT). This test, from Project Implicit, measures the strength of associations between concepts and evaluations or stereotypes. Interventions that claim to change implicit bias are not credible unless bias is measured again weeks and months after the intervention. However, people without this depth of understanding might prematurely conclude they have found something that works.

Complexity matters

As another example, an intervention that raises someone’s awareness of their racial bias can also raise their interracial anxiety. Since interracial anxiety can have even worse effects on interracial interactions than implicit bias, the intervention may do harm. Naïve trainers, or those with only a simple, acontextual understanding of unconscious bias, will probably overlook this possibility. The harm usually goes undetected, because the trainer may be long gone by the time the negative effects appear.

How to evaluate an inclusion training partner

People with true expertise will not be heavily influenced by fake science. To determine whether a particular partner can bring effective results to your organization, you must be willing to undertake a thoughtful vetting process and gain a clear understanding of the legitimacy of the science behind their methods. Even if other organizations, including very well-respected ones, use a company, conduct your own evaluation.

Here are some guidelines to get started:

  • Inspect credentials. Education and training matter. Research the programs in which a consultant earned their credentials. Reading up is fine, but do they have the deep training needed to understand scientific evidence from the diversity sciences? How have they added to the accepted body of evidence?
  • Learn about their process. During the interview stage, ask the consultant where they obtain their information. How do they distinguish between high-quality and low-quality information? If they say, “We only use the strongest scientific evidence”, ask for specifics about how they evaluate that evidence. Ask how they deal with conflicting evidence. Avoid consultants with no clear process for evaluating source material.
  • Determine how they stay current. The science of diversity and inclusion is always evolving. The principles, tactics, and techniques are not the same today as they were twenty years ago, or even just a few years ago. Ask if and how the consultant maintains a connection to emerging science.
  • Avoid absolute perspectives. Active, reputable scientists are willing to question their outcomes and revise their findings as they continue to learn. Take note if a consultant clings tightly to a convenient finding from ten years prior and tries to establish it as an immutable basis for their work.
  • Ask for metrics. Ask how they evaluate whether their approach works or not. For example, what are their specific training objectives? How do they find out if training is meeting its objectives?

Careful deliberation, fact-checking, and detailed inquiry can help you identify the D&I partner whose methods and perspective have been demonstrated to effectively increase workplace inclusion.

Contact us today for an evaluation of your Diversity & Inclusion policies.