Sunday, April 12, 2020

Stimulus Checks & The Veil of Ignorance


How do I decide what is ethical? And just as importantly, what will make me feel warm and fuzzy about my decision? It's not always easy, but my favorite ethical principle for thinking through such questions comes from John Rawls' social contract theory: the Veil of Ignorance. It's based on the following thought experiment:

Imagine you are creating a society, but you know nothing about yourself, and have no idea what your place in this society will be. You don't know if you will be a man or woman, tall or short, rich or poor. You do not know your race, your social status, sexual orientation, if you will have any disabilities, etc. To quote Rawls, “no one knows his place in society, his class position or social status; nor does he know his fortune in the distribution of natural assets and abilities, his intelligence and strength, and the like.” Everything about you is behind the veil.

In this society, should people without a sense of smell be allowed to leave their house? Well, I might end up not having a sense of smell, and I would probably like to leave my house… so that is not a society I would want. Should all people with brown hair be thrown into the ocean? Well, I might end up having brown hair, and I don't want to be thrown into the ocean, so that rule would have no place in my society either.

More seriously, should gay people be allowed to get married? Should women have the right to vote? Should the color of your skin determine where you are allowed to live? All of these questions are easily answered via The Veil of Ignorance.

While I don't think there is a perfect ethical principle that works in every situation, The Veil of Ignorance has served me well in bringing clarity to many (though not all) difficult ethical dilemmas and social issues.

This brings me to a very current issue: the upcoming government stimulus checks.

My understanding is that because the bill was pushed through so quickly, there was little consideration given to who gets a check and who doesn't. Regardless of speed, the government cannot possibly figure out who still has a job, who has a spouse that still has a job, who lost their job but won the Powerball last month, etc. As a result, many people who don't need the check will receive one. I am one of those people, as I am lucky enough to still have a job with a steady paycheck. But many people, especially in the service sector, aren't so lucky. Your barber, barista, bartender, handyman, massage therapist, personal trainer, server, etc. may be nervously hoping this all passes before they are no longer able to pay their bills or buy food for their kids.

If I were creating a society, would I create one in which house cleaners were unable to pay their bills because of a global pandemic? When the veil was lifted, I could find that I was one of those house cleaners… so my answer is no.

But we do live in that society. As such, I will be doing one very small thing: giving my stimulus check away. I want it, but I don't need it. If you are lucky enough to be in a similar situation, I encourage you to do the same. I realize how self-righteous this sounds, and posting this will certainly come off as “look how generous I am.” So be it.

Of course, if I wanted to pick holes in my argument, I'm sure it would be easy enough.* But I would have to ask myself: am I truly concerned about ensuring the argument is flawless from every angle, or am I just trying to justify keeping money I don't actually need? The answer would obviously be the latter. 

I am NOT saying that if you currently have a job and don't give your stimulus check away, you are making an unethical decision. Only you know your situation, and this check could be a huge lifesaver even if you are still employed. But if you are in a situation where you don't need the money, seriously consider whether it would do more good going to someone who may not be able to pay their bills at the end of the month.

If you have ever complained about economic or wealth inequality, here is your chance to put your money where your mouth is. If you have ever argued that equality isn't good enough and that we need equity, here is your chance to help balance the scales. If you have ever mocked the “thoughts and prayers” mentality, here is your chance to do something that actually helps. If you have ever railed against the failures of capitalism, here is your chance to show that you can resist the greed that drives it. Heck, if you have ever been frustrated about government handouts, here is your chance to reject one by giving it away, proving that your bootstraps are doing just fine.

But despite my moralizing, there is also a personal benefit to giving your stimulus check away. Research backs this up: “correlational and experimental studies have shown that people who spend money on others report more happiness.” What better way to cure the quarantine blues than doing something that makes you feel warm and fuzzy? But don't take my word for it: try it and see for yourself :)

WWJD? (What Would John [Rawls] Do?)


*"I should keep my money. While I have a job now, with all of the uncertainty in the world, I'm not sure what will happen in a few months." But the people who currently don't have jobs are not sure what is going to happen next week. So this argument doesn't do much for me.

Sunday, January 12, 2020

Magic Bullet Arguments


There are rarely “magic bullet” arguments for any sort of complex issue. However, I have found through the “joys” of debating people on the internet that there are a handful of arguments, related to topics I'm interested in, that are extremely effective. These arguments are quick, easy to understand, and in my experience, devastating to the person arguing against them. Out of all the times I have discussed, debated, and argued these topics, I have never once heard a reasonable response to any of the arguments I will outline below.

Of course, it's possible that I'm just so deeply biased that no one could possibly give me a response I would consider reasonable. While that is possible, by a “reasonable response” I mean one that actually deals with the premises of the argument, instead of ignoring them, changing the subject, insulting me, or dismissing them by asserting some sort of conspiracy theory. However, the only way to know for sure if these arguments are unanswerable is to try them out on people who disagree!

You might agree with me on some (or all) of the following three issues… if that's the case, I hope I can provide a simple argument that you might use if you are ever discussing the topic. If you don't agree with me, that's okay too, as now you know what I think some of the best arguments for my views are. Heck, maybe someone will even have a good response to one of them!

So as not to start with too controversial a topic, let's just go alphabetically…


Abortion

A standard pro-life argument is that since it would be wrong to kill a newborn baby, it would be wrong to kill a baby one day before it was born. And since that's wrong, killing the baby two days before it is born is also wrong, and so on. It's sort of a reverse slippery slope, and you can follow this logic all the way back to the moment of conception, when, we are told, a new human life appears. “Life begins at conception.” The implication is that if you interrupt the development of a fertilized egg, it's the moral equivalent of murdering a two-day-old baby, a 36-year-old adult, etc.

The arguments have changed a bit in the past decade or so. It used to be a more religious argument, with people claiming that conception is when the soul is implanted in the body, marking the moment the embryo becomes a person with the same moral weight that you and I have. Over time, the religious aspect has been dropped, as no one is swayed by soul talk. The secularized argument is that conception is when personhood is formed, or when the embryo becomes ontologically different from what it was moments before, when it was just a sperm and an egg.

However, the moment of conception is not actually a moment at all. The conception process takes between 24 and 48 hours, and a lot is happening during that time. To begin with, there are often multiple sperm cells that have penetrated the outer membrane of the egg, and it takes time for the egg to reject all but one of them. Once this has happened, genes from the sperm and egg combine, creating a zygote with a new, unique genome. However, this doesn't mean much yet, since the new genome doesn't yet have control of the cell it is sitting in. Once the new genome takes over, the cell can start to divide, and a blastocyst is formed, which will eventually become a fetus and then a baby. So at what point in this process did the “new person” emerge? Was it when the first sperm reached the egg? Or when the egg rejected all but one sperm? Was it when the new genome zipped together? Or was it when that new genome took control of the cell and started to divide?

I have asked this exact question multiple times, and have never been given an answer. It is usually dismissed, and I am told “okay, well however long it takes—at the end of that process, a new person exists, and it's immoral to kill them.” Though, I doubt anyone will start to take the position that “life begins approximately two days after conception.”

Of course, things are still complicated. At this point, the embryo can split, making identical twins, triplets, etc. If the twin is now also a new person, where did this ontologically different person emerge from? Personhood (the self) can't be divided, so either one of the twins is not a person (which is absurd), or the idea that "a blastocyst is a person—no different from you and I" is wrong.

But the complexity continues. In some cases, two eggs can be fertilized, resulting in fraternal twins. In rare cases, these two fertilized eggs can combine, creating a single embryo called a genetic chimera. When this happens, the baby will have one genome in some cells, and another genome in other cells. According to the logic of “life begins at conception”, we had two unique people who combined into a single person. The second person didn’t die, so what happened to them? As is tradition, I have never been given an answer to this.

Of course, we haven't yet touched on implantation—the process of a newly fertilized egg adhering to the uterine wall. This occurs around a week after conception, and it often fails, as the woman's body rejects the blastocyst. The reasons for the rejection are not entirely known, but regardless, it is estimated that around 50% of all blastocysts fail to implant and are “spontaneously aborted” by the woman's body. If we accept the pro-life position that a fertilized egg has the same moral weight as a one-year-old baby, a 36-year-old man, etc., this means that every year, approximately 4 million Americans die as a result of spontaneous abortion. (For scale: roughly 4 million babies are born in the US each year, and if implantation fails about as often as it succeeds, the number of failed implantations is at least that large.) The next biggest killer of Americans is heart disease, which pales in comparison, *only* killing around 610,000 Americans each year.

If life begins at conception, failed implantation is the most common cause of human death, by a HUGE margin. There are research programs looking to improve the chances of implantation for women wishing to get pregnant, but there isn't a single “life begins at conception” politician, activist, or organization that supports funding such research. Why?

I actually posed this question to the pro-life subreddit. The most common answer I received was “since death by failed implantation is natural, it's not a moral concern.” A textbook appeal to nature (often labeled the naturalistic fallacy). When I pressed the issue, asking “would you say that death by disease, starvation, or other natural causes is also not a moral concern?”, I was never given an answer.

Of course, there is a lot more to discuss about abortion—but in my experience, these points work very well in showing the incompatibility of “life begins at conception” and a modern understanding of reproductive science.


The Afterlife

Discussing religion was my jam for many, many years. I eventually got pretty bored with it, but at the time, the most fun I had was bringing up neuroscience and observing what creative ways people would come up with to wiggle around our modern understanding of the brain and its implications for consciousness surviving death.

If you are in a car accident and suffer brain damage, there is no part of your conscious experience that can't be destroyed. You can lose the ability to perceive motion, the ability to perceive anything on one side of your visual field, or the ability to recognize faces. You can lose all of your long-term memories, or just your visual memories. You can lose the ability to produce words, or the ability to put words into a coherent sentence. The conscious mind can be divided up in a seemingly infinite number of ways, and the more brain damage that occurs, the more aspects of your conscious experience you will lose.

In a different, more extreme situation, a degenerative brain disease such as Alzheimer's slowly erodes the brain. As it progresses, the patient's ability to understand the world, as well as their own self, slips away, and the patient's consciousness slowly dims until the self no longer exists.

To recap: minor brain damage will cause a person to lose small amounts of their conscious experience, and more severe brain damage will ensure that consciousness slips away even further. If we were to follow that trajectory, what seems more likely when the brain is entirely destroyed at death: that consciousness would completely slip away and that person would cease to exist—just like before they were born? Or, that consciousness would re-emerge, fully intact, in another dimension, understanding English and recognizing grandparents?

So far, I have never been given anything close to an argument explaining why consciousness surviving the destruction of the brain is the more likely outcome.


Genetically Modified Organisms

GMOs are one of the best technological advances humans have ever conceived. The plant scientist Norman Borlaug is credited with saving over a billion people from starvation by developing strains of wheat that could thrive in areas of the world that had historically struggled to grow the crop.

Of course, when people think of GMOs, they often think of syringes injecting “toxins” into tomatoes while evil Monsanto CEOs laugh maniacally in the background. We are told that we are playing god, and that mixing and matching genes in ways we don't understand could hurt us in unknown ways down the line.

In reality, GMO foods are created in a variety of ways. Sometimes that means turning a gene on or off at a certain time. Other times it means adding a single gene from one organism into a specific location in another organism's genome—and this method is what people tend to have the biggest problem with.

When it comes to adding a gene to an organism, a well-understood gene is placed in a precise location in the target genome, where its outcome is well understood. As a result, we are told by anti-GMO activists that this is unsafe. However, when we crossbreed plants (as we have done for 160 years), tens of thousands of unknown genes (at least) are combined without the slightest idea of how they will interact. This process, we are told, is safe and natural. Similarly, anti-GMO activists don't have any problem with mutagenesis—the process of exposing plant DNA to radiation or certain chemicals in order to mutate the DNA and produce plants with (hopefully) desirable traits.

If you change one single gene, the potential unintended consequences are supposedly so great that we shouldn't risk it. But if we change or combine tens of thousands of genes, that is considered safe. Why?

I have asked this question tons of times, and per usual, I have never received an answer as to why crossbreeding or mutagenesis is considered safe when the insertion of a single, well-understood gene is not.

Saturday, February 2, 2019

The Explicit Data for Implicit Biases


In 1995, two psychologists found that if you had subjects rapidly sort images of people of different races alongside positive or negative words, most people were quicker to associate black people with negative words and white people with positive words. This was done via a clever method called the Implicit Association Test (IAT), which you can take here. This was huge, as it seemed to be a window into people's unconscious minds, and subjects could see exactly which types of people they held unconscious biases towards. Pretty cool! The thinking was that these unconscious biases would manifest as explicit biases in one way or another, and that if you were made aware of your implicit bias, you could make an effort to reduce any explicit biases that might bubble up.
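
For the curious, the IAT's headline number comes from reaction times: you sort faces and words under two different category pairings, and your score reflects how much slower you are under one pairing than the other. Below is a minimal sketch of that scoring idea in Python. It is a simplified version of the commonly described D-score (the real algorithm adds trial filtering and error penalties), and all of the data and names here are hypothetical, for illustration only.

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT D-score: the difference in mean reaction time
    between the two critical blocks, divided by the pooled standard
    deviation of all trials. Larger positive values mean slower
    responses when the "incongruent" categories share a response key.
    (Real IAT scoring adds error penalties and trial filtering.)"""
    latency_gap = mean(incongruent_ms) - mean(congruent_ms)
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return latency_gap / pooled_sd

# Hypothetical reaction times (in milliseconds) for one subject
congruent = [650, 700, 620, 680, 710, 640]    # e.g., white faces + positive words
incongruent = [820, 790, 860, 800, 770, 840]  # e.g., black faces + positive words

print(round(iat_d_score(congruent, incongruent), 2))
```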

The research became very popular, with hundreds of other studies being done, looking for similar effects with various groups of people. Once the general public caught wind of this, an entire cottage industry popped up, with so-called implicit bias experts promising companies that they could educate and dismantle their employees’ implicit biases… for a small fee, of course.

There are four main premises that the IAT is based on:
  • IAT is a reliable psychological tool which shows that…
  • People often have unconscious biases towards certain groups of people
  • These unconscious biases can be used to predict explicitly biased behavior
  • Being made aware of these unconscious biases can help mitigate explicitly biased behavior 
Unfortunately, it turns out that every one of the above premises is incorrect, which I will show by citing numerous studies that contradict each claim. The research I will be referring to is not just a handful of small, one-off studies that I found by combing through the data in an attempt to be a grumpy contrarian. Instead, these are often very large meta-studies that look at trends across multiple research papers. And while there is always debate over complex scientific topics, these results are not controversial at all among researchers who study implicit biases.

Premise #1: IAT is a reliable psychological tool

I once had a psychology professor tell me that “psychology is a soft science, but it's also the hardest science.” She meant that because psychology studies complex human beings, it is also the most difficult science. If you kick 10 different soccer balls, you will get similar results, but if you kick 10 different people, your results may vary!

Because humans make things so complicated, researchers have to be careful with their tests, ensuring that their studies aren't so vague as to elicit wildly different responses from a participant who takes the test multiple times. This idea is called repeatability, or test-retest reliability. If you take some sort of psychological test 10 times and get the same response 9 out of 10 times, researchers can be confident they have tapped into some sort of psychological reality in your mind. But if you get wildly different results each time you take the test, something is wrong. Or at the very least, the test is not reliable.

You can read about repeatability and the scores that are used here and here. Though, the basic breakdown is that on a scale of 0 to 1, any score below 0.5 is considered unacceptable. The IAT has a score somewhere between 0.44 and 0.5. On its best day, the IAT is unacceptably bad at producing any sort of clear picture of what is supposedly going on in a person's mind.
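
To make the idea concrete, here is a minimal sketch of how a test-retest coefficient is typically computed: give the same people the same test twice, then correlate the two sets of scores. The numbers below are made up purely for illustration.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between paired scores from two sessions."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical IAT D-scores for six people, each tested twice
session_1 = [0.45, 0.10, 0.80, 0.30, 0.62, 0.05]
session_2 = [0.50, 0.35, 0.45, 0.70, 0.20, 0.40]

# A coefficient near 1 means the test gives stable results;
# below 0.5 (where the IAT sits) is considered unacceptable.
print(round(pearson_r(session_1, session_2), 2))
```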

This low score (high variation in test results) is mostly attributed to people getting better at the task as they take the test multiple times. Either way, the fact that the IAT’s repeatability coefficient is so low makes it incredibly unlikely that the IAT is telling us anything meaningful or useful about an individual’s mental processes. 

Premise #2: People often have unconscious biases towards certain groups of people 

People absolutely have biases towards different groups of people—this is not in question. The IAT, however, claims to be able to tap into hidden, unconscious biases that we are not aware of. Though, when subjects were asked to predict the results of their IAT tests, their predictions were quite accurate! How could they accurately predict what their unconscious biases are if those biases are unconscious? This calls into question the “implicit” part of implicit bias.

A 2006 study concluded that while people may not be aware of the origin of their biases, “there is no evidence that people lack conscious awareness of indirectly assessed attitudes.” 

Likewise, a 2014 study reported that “the research findings cast doubt on the belief that attitudes or evaluations measured by the IAT necessarily reflect unconscious attitudes.” 

Another 2014 study found that “there is compelling evidence that people are consciously aware of their implicit evaluations.”

The fact that the IAT cannot discover unconscious biases is a problem for the test, but it does not mean people do not have biases they may not be aware of. People absolutely do; it is just that the IAT is not a reliable method for discovering them.

Premise #3: These unconscious biases can predict explicitly biased behavior 

This is a fairly intuitive and very reasonable premise. Thoughts and actions are closely related, so it would make sense that if you had a bias towards a certain group of people, your behavior might reflect that, even subtly. However, this premise claims that there is a relationship between the unconscious biases supposedly discovered by the IAT and biased behavior. The evidence for this is not good.

You can find studies here and there that show predictive power between implicit and explicit biases. However, since the “replication crisis” in psychology started, small studies with small effects are no longer good enough. We need to use large-scale studies, or meta-studies, to see what the larger trends are. In this case, several meta-studies have shown that there is essentially zero correlation between implicit biases and real-life behavior or attitudes. Meaning, if the IAT shows you are biased towards a certain group of people, this has no correlation with, or ability to predict, how you actually treat people of that group.

A 2008 study found that (among other things) “The implicit association test (IAT) is the most widely used measure of implicit attitudes, and strong claims have been made about its ability to reveal high rates of unconscious racism. Empirical evidence does not support these claims.” 

A 2013 study looked at data from an earlier meta-study, concluding “across diverse methods of coding and analyzing the data, IAT scores are not good predictors of ethnic or racial discrimination.”

Another 2013 meta-study found that “The IAT provides little insight into who will discriminate against whom, and provides no more insight than explicit measures of bias.”

A 2016 meta-study found that “there is also little evidence that the IAT can meaningfully predict discrimination, and we thus strongly caution against any practical applications of the IAT that rest on this assumption.” They continue, “the overall effect of discrimination in the literature is virtually zero. There are only a handful studies that in isolation demonstrate clear levels of discrimination, and even fewer do so without having methodological problems that may plausibly have produced the result. Accordingly, there appears to be a very small amount of variance that can reliably be predicted from the IAT.”

The above studies are damning enough as it is. Though, even the authors of the original implicit bias research stated in a 2014 paper that “IAT measures have two properties that render it problematic to use them to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability, and small-to-moderate predictive validity effect sizes.”

The third premise, in my opinion, is the most important of all. If there is no relationship between supposedly implicit biases and explicit ones, the test is all but useless with regard to its stated purpose.

Premise #4: Being made aware of these unconscious biases can help mitigate explicitly biased behavior 

Since the third premise has failed, the fourth one also fails, as the idea that we can change explicit biases by learning about our implicit biases assumes there is a causal link—which we have seen there is not. However, there is research that looks specifically at the fourth premise, so I think it is important to cover it as well. 

A 2015 meta-study looking at 492 studies with over 87,000 participants found that “changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior.”

And with that, the claims of the IAT have completely failed, as none of them are supported by the data. 

Conclusion 

So now what? Probably nothing. This data is not new, is not a secret, and definitely is not sexy. No one is against evolution because they are interested in the debate over the levels of selection, or over when amphibians started to transition into reptiles. People who are in denial about evolution are worried about the moral and religious implications.

Similarly, I doubt that many non-psychologists who are interested in the IAT are actually interested in the research. They are interested in eliminating racism, sexism, etc., which is a good thing to be working toward! However, if they have attached too much moral or ideological weight to the IAT, they might deny the evidence above, just like creationists do with evolution. Similarly, people who make their living running anti-bias training programs will never admit that the concepts much of their work is based on are not backed up by the data. To quote Upton Sinclair, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

The IAT is often treated as a magic bullet for uncovering unconscious biases and helping eliminate them. Science is a wonderful tool—but it can also be a cruel mistress. If you are going to use the findings of science, you have to be willing to change your mind if the evidence later points in another direction.* The IAT's inability to expose unconscious biases and reduce explicit ones doesn't mean people aren't biased or bigoted—it just means we have to find other methods and tools to help combat those biases. In order to reach that goal, we need to be honest with ourselves, admit when we are wrong, and utilize methods that actually work.

*The same goes for me and these arguments. I am not an expert, and research could come out tomorrow that completely contradicts me. Double check everything, do your own research and come to your own conclusions!