Sunday, December 18, 2016

Convincing People You Are Right (And They Are Wrong) - Part Three


Alright, so we’ve worked through a few concepts to help figure out whether our opinions line up with reality, so our next step is to figure out how to structure those ideas in a simple, logical manner. This might not seem important now, but it is something I will return to in a later post. I also want to point out that this is a pretty basic overview of how to build a deductive argument—I realize there are other types of arguments (inductive, for example), but I find deductive arguments to be the most useful for our purpose, and I want to try to limit your boredom.

So how do we go about setting up an actual argument? What we need to do is write out a series of statements, called premises, which lead to our conclusion. If our argument is valid (meaning the conclusion logically follows from the premises) and all of our premises are true, then our conclusion must also be true. Logicians call such an argument sound. For example:

Premise 1: Stubby is a cat.
Premise 2: All cats are mammals.
Conclusion: Therefore, Stubby is a mammal.

Nothing too crazy. Both premises are true, so we can be confident that the conclusion is also true. Now let’s add a little twist to it.

Premise 1: Stubby is a cat.
Premise 2: All cats are black.
Conclusion: Therefore, Stubby is black.

Like the first example, this argument is completely valid. It is absolutely true that if Stubby is a cat, and if all cats are black, then Stubby is black. But the obvious problem is that not all cats are black! Because one of the premises is false, the argument is unsound, and it gives us no reason to accept the conclusion (Stubby might happen to be black, but this argument can’t tell us that).
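The valid-versus-sound distinction can even be sketched in a few lines of code. This is just a toy model of the bookkeeping (the function name and premise values are made up for illustration), not a real logic engine:

```python
# Toy model: a valid argument only guarantees its conclusion
# when every premise is actually true.

def sound(valid: bool, premises: list[bool]) -> bool:
    """An argument is sound when it is valid (the conclusion
    follows from the premises) AND all premises are true."""
    return valid and all(premises)

# "Stubby is a cat" + "all cats are mammals" -> "Stubby is a mammal"
print(sound(valid=True, premises=[True, True]))   # True: trust the conclusion

# "Stubby is a cat" + "all cats are black" -> "Stubby is black"
# The logic is fine, but a false premise breaks the guarantee.
print(sound(valid=True, premises=[True, False]))  # False: no guarantee
```

Note that a result of `False` here doesn’t mean the conclusion is false—only that the argument gives you no reason to believe it.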

When formulating and analyzing arguments, break them down into their premises and conclusion. Not only will this help you better understand your own argument, it will give you the opportunity to really investigate whether or not each premise is true. Let’s take another example that at first seems simple, but then becomes much more complicated once we scratch the surface.

P1: Whatever begins to exist has a cause.
P2: The universe began to exist.
C: Therefore, the universe has a cause.

Easy enough. There are only two premises, both of which seem true, and they logically flow to the conclusion. You are probably convinced by this argument, as it seems fairly straightforward. However, let’s expand the first premise to make it a bit more specific.

P1: Whatever begins to exist in space and time has a cause.
P2: The universe began to exist.
C: Therefore, the universe has a cause.

When we add the clarification of “in space and time”, things suddenly don’t seem so simple. Of course, one might object, saying “Zak, you just changed the argument, so of course it doesn’t work as well anymore.” But notice I didn’t actually change the argument—I just exposed a hidden assumption in the first premise. When we talk about something beginning to exist, we are talking about things within the universe—within space and time. There are also a few more hidden assumptions.

P1: Whatever begins to exist in space and time has a cause that obeys the laws of physics.
P2: The universe began to exist independent of space and time, and not according to any laws of physics.
C: Therefore, the universe has a cause.

At this point, the argument is just getting crazy, and completely breaking down. You may have been convinced by the first version of the argument, but as we teased out more and more hidden assumptions, you may have become more and more skeptical, until the whole thing fell apart. We will return to the importance of this point in a later post.

The lesson I want to stress for the moment is that we need to be very careful with our premises! We need to make sure we aren’t sneaking in any hidden assumptions, or overlooking something that, if pointed out, could cause our argument to collapse. As I explained in a previous post, trying to argue for something that isn’t true doesn’t help anyone.

When constructing your argument, you want it to be as simple and solid as possible. You aren’t limited in the number of premises you can use, but I recommend using as few as possible, to give critics fewer opportunities to find something to make a fuss over and pull your argument apart. Let’s look at another example for practice:

P1: Genetically modified crops can grow in areas where other crops can’t survive.
P2: Having more crops will prevent more people from dying of starvation.
P3: Preventing starvation will help make the world a better place.
C: Therefore, genetically modified crops are good.

Look over each of the first three premises, and consider whether or not they are true. Are there any hidden assumptions? And if so, what are they? 

Let’s take the first premise. This is absolutely not true. Yes, SOME GM crops can grow in harsh environments, but not all can. Some are engineered for entirely different traits. As such, we need to change that premise to something more specific, such as “GM wheat can produce much more grain than traditional strains of wheat.”

The second premise seems pretty straightforward and true, though, since we are now being specific, we should update the language to be just about GM wheat, and not GM crops in general. As for the third premise, more food means fewer people will be hungry, and the fewer hungry people, the better. However, “better place” isn’t very specific. In what way do we mean things are better? We should clarify that. At this point, our argument is:

P1: GM wheat can produce much more grain than traditional strains of wheat.
P2: Having more wheat will prevent more people from dying of starvation.
P3: Preventing starvation will help reduce pain and suffering in the world.
C: Therefore, genetically modified crops are good.

The argument looks pretty good—but we don’t want any room for error, and that conclusion seems pretty broad. The premises are discussing wheat, but suddenly the conclusion busts out genetically modified crops in general. Sure, many GM crops might make the world a better place as a whole, but we don’t want to leave the door open to anyone who may have an example of a GM crop doing harm. To prevent that, we need to update our conclusion so that it’s more precise and more accurate. Our final argument would look like this:

P1: GM wheat can produce much more grain than traditional strains of wheat.
P2: Having more wheat will prevent more people from dying of starvation.
P3: Preventing starvation will help reduce pain and suffering in the world.
C: Therefore, GM wheat is good for the world.

Are there any hidden premises left? Sure. The first premise assumes that GM wheat can produce more grain than traditional wheat in certain environments. However, unlike the argument regarding the cause of the universe, adding that detail in doesn’t disrupt the rest of the premises or the conclusion.

I realize that formulating your argument, breaking down the premises and conclusion, analyzing and critiquing your premises to make sure that they are true, and then reformulating your argument… isn’t very exciting. However, as I mentioned before, this process will help clarify your views and home in on what you are trying to argue. If you are struggling to fit your argument into a handful of premises, your argument may be too vague, which will force you to rethink it. Either way, it’s an important step in helping determine whether you are right or not—or at the least, whether your argument makes logical sense. All of this will be extremely helpful when trying to convince others that you are right, and they are wrong.

----
While making arguments, we need to watch out for logical fallacies. If we accidentally slip a fallacy into an argument, a keen observer will pounce on it and use it as a reason to completely dismiss the point we are making. As such, we must be diligent!

There are tons of fallacies, and I have no desire to go over all of them. Instead, I am going to cover one that I’ve seen pop up more and more frequently. In future posts, I will probably mention others, but in the interest of not boring you too much, I will try to space them out.1

The Genetic Fallacy. This is the mistake of going after the origin or source of a topic or argument, rather than the topic or argument itself. Creationists have been doing this for years—they don’t accept arguments from biologists, because the researchers are a bunch of atheists. Similarly, many atheist mythicists (atheists who don’t think Jesus existed) will often discount what New Testament historians say, since “they are a bunch of Christians, so they are biased.” Some feminists don’t accept arguments from certain fields of science, because the researchers are often men. Holocaust deniers won’t accept evidence from historians, since they are apparently a bunch of Jewish sympathizers. Every ideologically based group will try to dismiss disconfirming evidence any way they can, and that often means trying to discredit the source.

When criticizing an argument, the ONLY acceptable approach is to go after its logic, or the methodology of the research behind it. If Big Tobacco funded a study which found that nicotine isn’t addictive, it’s not enough to say “well of course that was the conclusion. I don’t believe it, since Big Tobacco had their stinky hands all over it.” While such a conflict of interest is certainly a reason to be skeptical, you actually have to look into the methodology of the research and explain why it’s wrong before you can claim that the research or argument is false.

The above examples discuss the mistake of going after the source of an argument—but there is also a very common mistake of going after the origin of something (usually words) to try and argue against an idea. For example, “you shouldn’t say Happy Holidays, since holiday comes from ‘holy day’, and not everyone is religious.” Well, sure, holiday did originate as meaning holy day, but language is extremely fluid and changes constantly. As such, saying “holiday” no longer invokes a HOLY day. Likewise, “Christmas” did originally refer to the Christian holiday, but it has changed so much in secular culture that to many people, it’s about family, presents, decorating an evergreen tree, Santa Claus, etc.—all things that have nothing to do with Christianity.

I find that people struggle with this quite frequently. If you tend to question whether it’s okay to say a certain word because of what it meant at one time or another, consider the fact that “bad” comes from the Old English word “baeddel”, which meant a hermaphrodite, or womanish man. You must consider what something means or refers to in its current context, rather than its origin.

----
To wrap up, remember that if your argument can be laid out in a logical manner, with true premises and no fallacies, you can be confident that you have a good argument. If you try to discredit someone’s argument based on who made it (a fallacy), you aren’t going to get very far. On the other hand, if you are trying to advance an argument on the basis of who said it (rather than on the logic and evidence of the argument), you won’t get very far either. In the spirit of the Richard Feynman quote: if your argument disagrees with logic, it’s wrong. That’s all there is to it.

In the next post, we will discuss a few helpful concepts from psychology that we should always keep in mind when working towards the goal of convincing people that we are right, and they are wrong.

----
1. If you're interested, this site covers the most common fallacies.


Saturday, December 10, 2016

Convincing People You Are Right (And They Are Wrong) - Part Two



“If you would be a real seeker after truth, it is necessary that at least 
once in your life you doubt, as far as possible, all things.” 
–Rene Descartes 

Before we can start convincing people that we are right, we have to make sure that we actually ARE right! I imagine that all honest people would agree that spreading misinformation is extremely unhelpful. If we are trying to convince someone that vaccines cause autism, we better first make sure that they do! Convincing a parent that we are right, only for their kid to get whooping cough certainly isn’t a desirable outcome.

How do we know if we are right? Well, there are a few simple things we can do to help us get an approximate idea, which I will discuss. Of course, this will be a very simple overview with some basic tricks—I have no intention of getting into some philosophical debate about the nature of knowledge or anything like that. I do also want to acknowledge that not everything is necessarily right or wrong. In some cases, we might not have a clear enough understanding to determine the accuracy of a claim. For example, what is the cause of schizophrenia? There are a variety of potential causes, and to say the cause is genetic is neither right nor wrong—there appears to be a genetic component, as well as other factors. Regardless, the following tips should put us on the right track with most things, most of the time.

----
The first thing you should do is find out what the experts think. If you aren’t sure, search Wikipedia1 and check for a criticism or controversy section. For example, the article on evolution doesn’t have any such section. It does have a cultural response section, highlighting the religious objection to it, but there isn’t anything about scientists opposing it. From that, we can conclude that there isn’t a scientific objection to evolution.

When we look up the Wikipedia article regarding mirror neurons, unlike evolution, there is a section covering scientists’ doubts over some of the grandiose claims regarding them. From that, we can conclude that there is some strong scientific criticism of those claims, or at the least, that there is no consensus one way or the other regarding them.

Finally, if we look up the New Age documentary “What the Bleep Do We Know?”, we find that the scientific criticism section is nearly as long as the rest of the entire article! After reading it, we can be fairly confident that What the Bleep Do We Know has no scientific credibility whatsoever. 

Once you get an idea of what the general consensus on a topic is (assuming there is one), ask yourself if your views line up with that consensus. If they don’t, you’re probably wrong. Of course, if you are an expert in that field, perhaps your disagreement will have some merit, but generally, the fact that you are in disagreement with the experts is an indication that you just don’t know enough about the topic, and have come to a mistaken conclusion.2 

If you find that the experts disagree with you, and you are still certain that you are correct, come up with the best argument for your position, and email an expert. See if the best argument you can put forward can stand up to scrutiny from someone who has spent decades studying the topic.

Of course, there is the issue of trying to decide whether someone is an actual expert or not. My general test is that they have to have a PhD in a field relevant to the topic, have published academic books and papers, and have presented at conferences. A quick look at their bio, CV, resume, etc. will give you all of that info.3

----
Sigmund Freud claimed that young children are sexually attracted to their parent of the opposite sex, and as they grow up, they repress these feelings and thoughts. So if Freud asked “were you sexually attracted to your mom/dad as a child?” you could either answer “yes”, which proves Freud's idea right, or you could answer “no”, demonstrating that you have indeed repressed those memories and feelings, and Freud is still right. Heads Freud wins, tails you lose.

This brings us to the concept of falsifiability. In the case of Freud’s Oedipus Complex, the claim wasn’t falsifiable. This means that no matter the outcome of investigating his claim, there was no way it could be shown to be false. That is to say, the Oedipus Complex is non-falsifiable.4

At first, it’s tempting to think that holding a position that nothing can show to be wrong is a strength. “If I am right about something, regardless of the outcome of any observation, go me!” However, one quick reminder of the Oedipus Complex demonstrates why non-falsifiable ideas don’t help anyone in terms of getting to the truth. And how would you view someone with the opposing viewpoint if they told you nothing would ever change their mind? That any result of any observation, no matter what, would only further demonstrate their position? I can’t imagine you’d take such a person seriously.

Falsifiability is an extremely important concept, and it means that every theory, fact, hypothesis, statement, or claim that you make about the world must be capable of being shown false. To be clear, that doesn’t mean it IS false, just that it could be. More simply put, the idea has to be testable. For example, I might say “cinder blocks sink in water.” This is a falsifiable statement, because we can take some cinder blocks, put them in water, and see if they sink. If the cinder blocks float, we will have falsified my claim that cinder blocks sink.
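The "claim, prediction, test" loop can be sketched in code. This is purely a toy illustration with made-up observation records (no real cinder blocks were dunked):

```python
# Toy model of "claim -> prediction -> test", with made-up
# observation records standing in for real experiments.

def claim_survives(prediction, observations) -> bool:
    """A falsifiable claim commits to a prediction about every
    observation; a single contradicting observation falsifies it."""
    return all(prediction(obs) for obs in observations)

def prediction(block) -> bool:
    # Claim: "cinder blocks sink in water." The prediction is that
    # every tested block sank.
    return block["sank"]

trial_1 = [{"block": 1, "sank": True}, {"block": 2, "sank": True}]
print(claim_survives(prediction, trial_1))  # True: claim survives (so far)

trial_2 = trial_1 + [{"block": 3, "sank": False}]
print(claim_survives(prediction, trial_2))  # False: claim falsified
```

A non-falsifiable claim is one where no possible observation record could ever make this check come out `False`.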

Another way to think about falsifiability is this: falsifiable ideas make predictions (“if X is true, then we should see Y. If we don’t see Y, then X is wrong”), while non-falsifiable ideas make excuses (“if X is true, then we should see Y. If we don’t see Y, then X is still true for Z reasons”).5 Let’s look at a couple of examples:

Years ago, I was a recurring guest on a Christian radio show, and one time we were talking about prayer as evidence of God. The host, who knew I didn’t have a sense of smell, had an idea. He suggested that everyone listening to the show pray for my sense of smell to be returned. He then asked me if I woke up the next day and was able to smell, would I count that as evidence of God? I said yes, absolutely. But then I continued, if tomorrow comes, and I still can’t smell, the host has to count that as evidence against the existence of God. He immediately objected, stating that that’s not how God or prayer works. How convenient! If I get my smell back, it’s evidence of God, but if I don’t, it’s evidence that God chose not to answer the prayers. Heads the host wins, tails I lose.

I recently heard of an amusing discussion regarding safety on campus. One person stated that many students don’t feel safe on campus. Another person rebutted, pointing out that a recent, anonymous survey showed that in fact, the vast majority of students feel quite safe on campus. The first person responded that the people who didn’t feel safe probably didn’t state such a thing on the survey. Ah, of course! To the first person, any evidence that contradicts their position is flawed and wrong. The first person wasn't making predictions (such as "if we polled the student body, we would find that many students don't feel safe")—they were making excuses. Or to modify the Henry Morris quote from above, "when the data and my opinions differ, the data is clearly wrong."

Once you have confirmed that your idea is falsifiable, you then need to try to falsify it! Take a claim that you would like to convince people of, and write out a list of things that, if true, would change your mind regarding it. For example, if cinder blocks float, then my claim that they sink is wrong, and I would change my mind. If your view on a position you hold is “nothing would change my mind!”, please refer back to the third paragraph of this section.

Anyway, once you have your list of things that would change your mind, investigate those claims, and see if any of them are true. If any are, then your claim is falsified, and you should change your mind. However, don’t be afraid of being wrong! If you find that you are mistaken, just change your mind, and then you are right again. Yay! On the other hand, if you do find that the cinder blocks don’t float, then you can be fairly confident that your claim is true.

----
Next up is my favorite concept in all of psychology: the Dunning-Kruger Effect (DKE). The DKE is twofold. Part one states that the less someone knows about a topic, the more certain they are regarding that topic. So when people claim to be absolutely certain about something, it’s not because they know a great deal about the subject—usually just the opposite: they only know a small amount. College students are extremely prone to this, as they take a class or two on something and then view themselves as experts.6 With the internet, people will watch a YouTube video, or read a blog post, and walk away viewing themselves as having a strong understanding of the issue. This is why creationists who have never opened a biology book in their life are convinced that evolution is an atheist myth, why climate change deniers who have never heard anything on the topic outside of talk radio are sure it’s a liberal conspiracy, why people with gender studies degrees can assure us that biological sex is a social construct, and why anti-GMO people (who don’t even know what DNA is) are certain that genetically modified food is dangerous. The DKE creates an unfortunate disconnect between competence and confidence that can be very difficult to correct for.

Part two of the DKE states that the more we actually do know about a topic, the more we understand the complexity and nuance of it. As a result of this deeper understanding, we are less likely to be extreme in our opinion regarding it. It's harder to view the world as black or white if we understand that it is often different shades of grey.

There is a striking difference when comparing the language used by a novice and an expert. The novice tends to use language of certainty, such as “THIS is the way it is! It’s a settled issue, and everyone who disagrees is either an idiot or a shill. Maybe both!” Meanwhile, the expert, knowing the intricacies of the issue, tends to use much more reserved language, such as “we have reason to believe… our best evidence suggests… currently we think that”, etc. Unfortunately, this can lead other novices to side with the person who knows very little, but appears very confident.

Overall, there is a very strong negative correlation between our knowledge of a topic and our confidence in our knowledge of the topic. We should always reflect on things that we feel extremely certain about, and consider the possibility that we hold such strong views, not because we are experts, but because we actually know very little.

----
In conclusion, in order to convince people we are right, we first need to make sure we actually ARE right! This is not only the responsible and honest thing to do, but it’s MUCH easier to convince someone we are right when we have reality on our side. To try and help make sure we are on the right side of reality, there are four easy things we can do:

  • Try and find criticism sections on relevant Wikipedia articles.
  • Find experts in the relevant field and see if they agree with you.
  • Come up with a list of things that would make you change your mind regarding your position.
  • Realize that extreme confidence is generally a sign of a lack of expertise regarding a topic.

In the next post, I will cover how to make sure your argument will hold up to some basic logical principles, as well as cover a few other random concepts that will help us on our quest to convince people that we are right, and they are wrong.

----
1. For people skeptical of the reliability of Wikipedia, check out this previous post

2. For a more thorough discussion, check out this previous post.

3. It’s amazing how frequently academics feel like they have something relevant to say that is outside their area of expertise. It’s especially embarrassing when they are in contention with the actual experts. Here we have Jerry Coyne (a biologist) arguing that historical Jesus never existed. Here we have Ben Carson (a neurosurgeon) claiming that the pyramids were built by the Biblical character, Joseph, for storing grain. Here we have Michio Kaku (a theoretical physicist) saying all sorts of nonsense about the Yellowstone caldera. Smart people letting their brains go to their head… there are endless examples.

4. The theoretical physicist Wolfgang Pauli used to use the phrase “not even wrong” to describe ideas that weren’t falsifiable. He considered non-falsifiable ideas to be so bad, that calling them wrong wasn’t enough. Years later, the theoretical physicist, Peter Woit, wrote a book criticizing String Theory, titling it “Not Even Wrong.”

5. Two great parables that eloquently explain the problem with non-falsifiable claims: Theology and Falsifiability by the philosopher Antony Flew, and The Dragon in my Garage, by the astronomer Carl Sagan.

6. One of the most interesting classes I took in college was on the psychology of language. The class was taught by Roger Fouts, who was one of the main people behind Project Washoe, which taught chimps to use sign language. After taking one class, and reading only one book (written by Fouts, which is phenomenal and I highly recommend), I was 100% convinced that chimps had the capacity for language, despite the fact that such a claim is HIGHLY controversial (at best). Several years later, I had the opportunity to meet Steven Pinker, and asked him what he thought about the issue. He said what basically all cognitive scientists say, which is that no, chimps don’t have the capacity for language. A couple years later, I asked a professor of psycholinguistics about it, and she also confirmed what Pinker stated. It was then that I realized I should probably reassess my views, and read a bit more on the topic. Shortly after, I discovered I was wrong. Yes, while chimps can use some basic signs to communicate in a simple fashion (sort of a proto-language), they aren’t capable of what we describe as language.

7. As these quotes attest, the general idea of the DKE isn’t new. Though it is nice to have a name for it, as well as experimental evidence to back it up!