Saturday, February 2, 2019

The Explicit Data for Implicit Biases



In 1998, a team of psychologists found that if you showed subjects images of people of different races while pairing them with positive or negative words, most people were faster to associate black faces with negative words and white faces with positive words. This was done via a clever method called the Implicit Association Test (IAT), which you can take here. This was huge, as it seemed to be a window into people’s unconscious minds: subjects could see exactly which types of people they held unconscious biases towards. Pretty cool! The thinking was that these unconscious biases would manifest as explicit biases in one way or another, and that if you were made aware of your implicit bias, you could make an effort to reduce any explicit biases that might bubble up.

The research became very popular, with hundreds of follow-up studies looking for similar effects across various groups of people. Once the general public caught wind of this, an entire cottage industry popped up, with so-called implicit bias experts promising companies that they could diagnose and dismantle their employees’ implicit biases… for a small fee, of course.

There are four main premises that the IAT is based on:
  • The IAT is a reliable psychological tool, which shows that…
  • People often have unconscious biases towards certain groups of people
  • These unconscious biases can be used to predict explicitly biased behavior
  • Being made aware of these unconscious biases can help mitigate explicitly biased behavior 
Unfortunately, it turns out that every one of the above premises is incorrect, which I will show by citing numerous studies that contradict each claim. The research I will be referring to is not a pile of small, one-off studies that I found by combing through the data in an attempt to be a grumpy contrarian. Instead, these are often very large meta-studies that look at the trends across multiple research papers. And while there is always debate over complex scientific topics, these results are not controversial at all among researchers who study implicit biases.

Premise #1: IAT is a reliable psychological tool

I once had a psychology professor tell me that “psychology is a soft science, but it’s also the hardest science.” She meant that because psychology studies complex human beings, it is also the most difficult science to do well. If you kick 10 different soccer balls, you will get similar results, but if you kick 10 different people, your results may vary!

Because humans make things so complicated, researchers have to be careful with their tests, ensuring that their studies aren’t so vague as to elicit wildly different responses from a participant who takes the test multiple times. This idea is called repeatability, or test-retest reliability. If you take some sort of psychological test 10 times and get the same response 9 out of 10 times, researchers can be confident they have tapped into some sort of psychological reality in your mind. But if you get wildly different results each time you take the test, something is wrong, or at the very least, the test is not reliable.

You can read about repeatability and the scores that are used here and here. The basic breakdown is that on a scale of 0 to 1, any score below 0.5 is considered unacceptable. The IAT has a score somewhere between 0.44 and 0.5. On its best day, the IAT sits right at the edge of unacceptable, which is far too unreliable to produce any sort of clear picture of what is supposedly going on in a person’s mind.
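To make the repeatability idea concrete, here is a minimal sketch of how a test-retest reliability coefficient is computed. The scores below are invented for illustration (real IAT scoring involves more steps, such as computing a so-called D score), but the reliability check itself is just a correlation between two sittings of the same test:

```python
# Minimal sketch of a test-retest reliability check.
# All scores below are invented for illustration only.
import numpy as np

# Each participant's score on a first and a second sitting of the same test.
first_sitting = np.array([0.52, 0.31, 0.78, 0.12, 0.64, 0.45, 0.23, 0.91, 0.38, 0.57])
second_sitting = np.array([0.35, 0.60, 0.44, 0.29, 0.71, 0.18, 0.55, 0.62, 0.49, 0.33])

# Test-retest reliability is the Pearson correlation between the two
# sittings: 1.0 means perfectly stable scores, while values below ~0.5
# mean the test gives a substantially different answer each time.
r = np.corrcoef(first_sitting, second_sitting)[0, 1]
print(f"test-retest reliability: r = {r:.2f}")
```

If that printed value hovers around 0.5 or below, as it reportedly does for the IAT, the test is telling you nearly as much about noise as about the person taking it.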

This low score (high variation in test results) is mostly attributed to people getting better at the task as they take the test multiple times. Either way, the fact that the IAT’s repeatability coefficient is so low makes it incredibly unlikely that the IAT is telling us anything meaningful or useful about an individual’s mental processes.

Premise #2: People often have unconscious biases towards certain groups of people 

People absolutely have biases towards different groups of people—this is not in question. The IAT, however, claims to be able to tap into hidden, unconscious biases that we are not aware of. Yet when subjects were asked to predict the results of their IAT tests, their predictions were quite accurate! How could they accurately predict what their unconscious biases are if those biases are unconscious? This calls into question the “implicit” part of implicit bias.

A 2006 study concluded that while people may not be aware of the origin of their biases, “there is no evidence that people lack conscious awareness of indirectly assessed attitudes.” 

Likewise, a 2014 study reported that “the research findings cast doubt on the belief that attitudes or evaluations measured by the IAT necessarily reflect unconscious attitudes.” 

Another 2014 study found that “there is compelling evidence that people are consciously aware of their implicit evaluations.”

The fact that the IAT cannot discover unconscious biases is a problem for the IAT, but it does not mean people lack biases they may not be aware of. People absolutely do; it is just that the IAT is not a reliable method for discovering them.

Premise #3: These unconscious biases can predict explicitly biased behavior 

This is a fairly intuitive and very reasonable premise. Thoughts and actions are closely related, so it would make sense that if you had a bias towards a certain group of people, your behavior might reflect that, even subtly. However, this premise claims that there is a relationship between the unconscious biases discovered by the IAT and biased behavior. The evidence for this is not good.

You can find studies here and there that show predictive power between implicit and explicit biases. However, since the “replication crisis” in psychology started, small studies with small effects are no longer good enough. We need to look at large-scale studies, or meta-studies, to see what the larger trends are. In this case, several meta-studies have shown that there is essentially zero correlation between implicit biases and real-life behavior or attitudes. Meaning, if the IAT shows you are biased towards a certain group of people, that result tells us essentially nothing about how you will actually treat people of that group.
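To see why a near-zero correlation makes prediction hopeless, recall that the share of variance in behavior a measure can explain is the square of the correlation coefficient. Taking an illustrative value of r = 0.15 (a hypothetical figure for the sake of arithmetic, not a number drawn from any one study cited here):

$$ \text{variance explained} = r^2 = (0.15)^2 = 0.0225 \approx 2\% $$

In other words, even a measure with a small but real correlation would leave roughly 98% of the variation in behavior unexplained, and a correlation of essentially zero leaves essentially all of it unexplained.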

A 2008 study noted (among other things) that “The implicit association test (IAT) is the most widely used measure of implicit attitudes, and strong claims have been made about its ability to reveal high rates of unconscious racism. Empirical evidence does not support these claims.”

A 2013 study looked at data from an earlier meta-study, concluding “across diverse methods of coding and analyzing the data, IAT scores are not good predictors of ethnic or racial discrimination.”

Another 2013 meta-study found that “The IAT provides little insight into who will discriminate against whom, and provides no more insight than explicit measures of bias.”

A 2016 meta-study found that “there is also little evidence that the IAT can meaningfully predict discrimination, and we thus strongly caution against any practical applications of the IAT that rest on this assumption.” They continue, “the overall effect of discrimination in the literature is virtually zero. There are only a handful studies that in isolation demonstrate clear levels of discrimination, and even fewer do so without having methodological problems that may plausibly have produced the result. Accordingly, there appears to be a very small amount of variance that can reliably be predicted from the IAT.”

The above studies are damning enough as it is. But even the authors of the original IAT study conceded in a 2014 paper that “IAT measures have two properties that render it problematic to use them to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability, and small-to-moderate predictive validity effect sizes.”

The third premise, in my opinion, is the most important of all. If there is no relationship between supposedly implicit biases and explicit ones, the test is all but useless with regards to its stated purpose.

Premise #4: Being made aware of these unconscious biases can help mitigate explicitly biased behavior 

Since the third premise has failed, the fourth one also fails, as the idea that we can change explicit biases by learning about our implicit biases assumes there is a causal link—which we have seen there is not. However, there is research that looks specifically at the fourth premise, so I think it is important to cover it as well. 

A 2015 meta-study looking at 492 studies with over 87,000 participants found that “changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior.”

And with that, all four premises behind the IAT have failed, as none of them are supported by the data.

Conclusion 

So now what? Probably nothing. This data is not new, is not a secret, and definitely is not sexy. No one is against evolution because they are interested in the debate over the levels of selection, or over when amphibians began transitioning into reptiles. People who are in denial about evolution are worried about the moral and religious implications.

Similarly, I doubt that many non-psychologists who are interested in the IAT are actually interested in the research. They are interested in eliminating racism, sexism, etc., which is a good thing to be working toward! However, if they have attached too much moral or ideological weight to the IAT, they might deny the evidence above, just like creationists with evolution. Likewise, people who make their living by running anti-bias training programs will never admit that the concepts they base much of their work on are not backed up by the data. To quote Upton Sinclair, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

The IAT is often treated as a magic bullet to uncover unconscious biases and help eliminate them. Science is a wonderful tool—but it can also be a cruel mistress. If you are going to use the findings of science, you have to be willing to change your mind if the evidence later points in another direction.* The IAT’s inability to expose unconscious biases and reduce explicit ones doesn’t mean people aren’t biased or bigoted—it just means we have to find other methods and tools to help combat those biases. In order to reach that goal, we need to be honest with ourselves, admit when we are wrong, and utilize methods that actually work.

*The same goes for me and these arguments. I am not an expert, and research could come out tomorrow that completely contradicts me. Double check everything, do your own research and come to your own conclusions!

Saturday, February 11, 2017

Convincing People You Are Right (And They Are Wrong) - Part Four

We have gone over a few methods for determining whether your views are true, and discussed how to break down and analyze your views. In this post, we are going to briefly cover a variety of points to keep in mind when considering whether your viewpoints are true; the points will be somewhat all over the board. After that, we can start trying to convince people they are wrong.

----
When is the last time you were wrong? Not with something small like mixing up what time you were meeting your friends for dinner, but with something that you had put a fair amount of thought into? Something you had defended and strongly believed in? When did you last change your mind on something you had a strong opinion on, and thought people who disagreed with you didn’t know what they were talking about?

For most people, I suspect coming up with an example isn’t quite so easy. This could be for several reasons. One, you’ve never made a mistake and have never had to change your mind (unlikely). Two, you are too stubborn to admit when you have made a mistake (possible). Three, you made a mistake, changed your mind, and then forgot about it (probable).

To help battle this (probable) selective memory, I find it helpful to make a list of all the large things I have been wrong about, as a reminder that being absolutely certain about something in no way means I am right. Unfortunately, certainty and reality are not as related as we would like to think. Keeping a list helps with the selective memory, and hopefully will serve as a reminder to take a breath and ease up—because you might be wrong.1

----
What is something that is extremely important to you? Perhaps you identify as part of a certain religion, or political party, or some other group. If you were to fill in the blank, “I am a _______”, what would you say? Now ask yourself: what is more important, advancing the cause of this group, or advancing truth? If you are a good member of your group, you will probably think “my group does have the truth… I wouldn’t be part of it if that wasn’t the case.” Fair. But pause for a second. You’ve just decided that your group determines the truth. What about when it’s wrong? If you equate truth with what your group says, how would you ever know if the group was in error? No group’s identity, beliefs, or values ever stay the same—the conservatives of today are not the same as the conservatives of a few decades ago. Heck, liberals of the past used to advocate for eugenics. Aligning yourself with a group and deciding that it has things right is very dangerous, as it puts you in a position where being part of the group is more important than what is true.

To battle this, you have to make a commitment to what is true—and not to a group you currently identify with. Truth has to be more important to you than your religious beliefs, your political viewpoints, etc. You might be thinking, “Zak, I’m intelligent enough to see when something my group identifies with isn’t true.” Maybe… but research suggests otherwise. The psychologist Geoffrey Cohen wanted to investigate the link between an individual’s opinion and their group identity, so he rounded up two groups of volunteers who described themselves as either strong conservatives or strong liberals. Cohen asked the members of each group to review and give their opinion on a welfare reform policy. The members of the conservative group were given a policy proposed by a Republican leader, and the members of the liberal group were given a policy proposed by a Democratic leader. No surprise: the conservatives all agreed with the Republican policy, and the liberals all agreed with the Democratic policy.

Cohen then swapped the policies and asked each member of both groups to review them. Again, to no one’s surprise, the members of both groups found that the policy proposed by the other party was terrible. The policies were described as immoral, impractical, unrealistic, etc. Finally, Cohen asked the members of both groups whether they had reached their conclusions about the policies because of the details of the policy itself, or because of the group the policy was associated with. Everyone scoffed at the suggestion that their opinion of the policies had anything to do with the group associated with it—their conclusions were based solely on the details of the policy. Both groups agreed, however, that the OTHER group would definitely fall for such group-think silliness.

Well, surprise surprise! The policy that the conservatives had first reviewed and agreed with was NOT from a Republican—but from a Democrat. Likewise, the liberals had actually reviewed and approved of a policy from a Republican. Cohen had tricked everyone into doing what they all claimed they would never do—support a policy simply because of the group they believed the policy was associated with.

Your social identity is a HUGE source of bias. And the more you associate with groups that confirm your beliefs, the more difficult it will be for you to change your mind if you come across disconfirming evidence. The stronger you associate with the group, the more likely you are to dismiss evidence, rationalize it away, or just ignore it. 

There are two ways to try to prevent being misled in such a way. First, don’t identify with a group—just be you. As soon as you decide you are part of a group, you will want to defend that group, and defending the group, rather than defending what is true, is a huge misstep.2 Rather than being “Zak the liberal atheist”, it’s better to just be Zak, and note that atheism reflects certain views I have, and that my political views tend to fall on the liberal side of things. Like with the Sam Harris quote, don’t join a tribe. That brings us to the second step: if you do find yourself part of a tribe (group), such as being a registered Democrat, remind yourself of aspects of that group that you don’t identify with (or even actively disagree with). This will help you keep a skeptical sense about you.

----
Next up is the concept of Illusory Superiority, which is the tendency for people to overestimate nearly everything about themselves. Compared to others, people tend to view themselves as healthier, better looking, better drivers, more popular, having happier relationships, etc. One study found that 90% of professors polled rated themselves as above-average teachers. Of that 90%, 65% rated themselves in the top 25% for teaching ability. Since only a quarter of teachers can actually be in the top 25%, while roughly 59% of everyone polled (65% of the 90%) placed themselves there, something is clearly askew!3

To make matters even worse, people also tend to view themselves as being more rational and less prone to blind spots than others! In a study with over 600 participants, only one person stated that she felt she was more prone to biases than average. Some people thought they were just as biased as everyone else, but a whopping 85% claimed that they were less prone to bias than everyone else. You might think “I am calm, I don’t freak out, I listen to points being made and don’t fall for rhetoric. I am absolutely more critical and less biased than your average person.” Maybe you are. But it’s more likely that you’re falling for the Bias Blind-Spot and just don’t know it. Simply put, “I am not biased” is exactly what someone blind to their biases would say.

To add one more layer to this, when people differ in their opinions, everyone starts accusing the other party of being biased, while viewing themselves as the rational deliberator. “You believe in global warming!? That’s just because you listen to libtards and shill scientists. Go watch Fox News, check out these Breitbart articles and get off the fake news train.” On the other hand, “You think vaccines are GOOD!? Of course the scientists who are in Big Pharma’s pocket would say that. You should listen to what Natural News and the Food Babe say—they do their own research, and aren’t corrupted by the system.”

Everyone is biased—EVERYONE. It’s just a part of being human. But with a little practice, and knowledge of how blind spots work, we can take a step in the right direction of being a little less biased than before.

----
To conclude this section, remember: keep a list of things you’ve changed your mind about, to remind yourself that you’re not as omniscient as you’d like to think. Reject the desire to identify as part of a social group—especially groups with strong ideological foundations (political, social, religious, etc.)—just be YOU. Lastly, always keep in mind how biased you are about everything!

----
1. The two things I have been most certain about (bet-my-life-on-it certain) turned out to be wrong. Both instances involved women I thought I would eventually marry. It’s no coincidence that I was so certain, seeing that I was heavily emotionally invested in both relationships. The more emotion involved, the easier it is to be convinced of something, even if you shouldn’t be.
Of course, one might argue that relationships are a whole different ball game, seeing as they are based primarily on emotion. That’s fair. We talk about love in absolute terms, and I can attest that having those feelings makes you completely believe that you will be together forever. Several years ago, the musician Katie Melua changed the lyrics to one of her songs to reflect more scientific accuracy. The results were quite amusing, simply because we never hear people speak about love in the language of science.

Don’t take this to mean that you should be emotionless when it comes to decision making. Contrary to what was once thought, some emotion can be very helpful in making decisions. Sociopaths, who don’t let emotion factor into their reasoning, are unable to make ethical decisions. Likewise, people with damage to the areas of the brain that regulate emotion struggle to make even the simplest of decisions, such as “when should I schedule my dentist appointment: Wednesday or Thursday?” They tend to get bogged down in the minor logical details (such as “what will traffic be like on both days? Will it be different?”) and can’t just say “Thursday will be fine.”


2. Getting people to align with groups, even arbitrarily defined ones, is extremely easy. Researchers have found that dividing people by shirt color, or even by a flip of a coin, will produce hostility towards the other group (this is called the Minimal Group Paradigm). Research has also found that groups will even do things that sharpen the distinction between themselves and other groups, even when doing so hurts their own group!


3. Most of the research on Illusory Superiority has been done in the United States, and there is some evidence that the effect is partly cultural: for instance, there is evidence that Asians tend to view themselves as lower in ability than the rest of the population.