
Chapter 2 - Staying in the Real World

2 - 1 Misinformation matters

Science is a way to teach how something gets to be known, to what extent things are known (for nothing is known absolutely), how to handle doubt and uncertainty, what the rules of evidence are, how to think about things so that judgments can be made, how to distinguish truth from fraud and from show.

– Richard Feynman

While the story I am about to tell is fictional, this fact is true: a young, immunocompromised woman recently died from measles. Her death was the first confirmed measles death in the United States in over a decade.

She had so much to live for: her family, her fiancé, her future career as a chef. But then breast cancer interrupted those plans. Shawna was lucky; her diagnosis had come early, and the physicians told her that her chances of survival were good. Chemotherapy was a tough trial, but she had come through it, and the cancer was gone! A couple of months had passed, and it appeared she had beaten it. There would always be that little concern in the back of her mind, but the chances of recurrence grew smaller with each passing checkup. Life was getting back on track, and this fall was the perfect time to return to chef school and finally start making wedding plans.

The last checkup at the clinic had gone well, but this weekend Shawna started feeling just off and then began having trouble breathing. The coughing seemed non-stop, and Steve, her fiancé, finally insisted they go to the emergency room. Shawna was feeling more and more tired and could not catch her breath.

It had been a slow day in the ER for Dr. Christine Smith, which wasn't unusual for a Thursday. Weekend evenings were when things got crazy, and she was glad not to be working that shift. Then an unexpected case came in: a young woman having difficulty breathing. Her face was flushed, and she was sweating a little. After a few questions, Dr. Smith took her temperature, which was slightly elevated, and then listened to her lungs. The characteristic rattle of pneumonia was present. Dr. Smith's concern increased considerably as she learned of Shawna's medical history and realized she was probably still immunosuppressed from the recent chemotherapy. After admission to the hospital, Shawna underwent aggressive treatment with the antibiotic levofloxacin, which seemed the logical course. However, lab tests showed no evidence of bacterial infection. By this time, Shawna was barely breathing and was quickly admitted to intensive care, but to no avail. In a matter of days, she was gone.

What had killed her? Most viral cases of pneumonia are not this aggressive, even in immunocompromised hosts. The medical examiner requested an autopsy, and an examination of the lungs showed multinucleated inclusion-bearing giant cells, a characteristic sign of measles infection. Shawna had died of measles! An investigation by the CDC revealed that the clinic where she had her checkup had seen children with measles that same day. Measles is one of the most contagious diseases known, and it had spread from those children to Shawna. But why? An aggressive campaign in the second half of the 20th century had eliminated measles from the United States, so how could this deadly disease make a comeback? In one word: misinformation. The population of unvaccinated children susceptible to measles was on the rise because a large enough group of parents had misled themselves into believing that the risk of vaccination was greater than the potential harm of the disease. Herd immunity had dropped low enough that the virus could spread again, laying the groundwork for an epidemic. Those parents were wrong, so wrong that their mistake cost a young woman her life. Measles cases continue to increase in the US, with 2019 seeing the worst outbreak in nearly three decades.

Not knowing the truth can be deadly, and this chapter will teach you how to find the truth and how to distinguish it from fraud and from show. These are among the most powerful skills a scientist, or anyone, can have, and they will serve you well.


2 - 2 The Age of Misinformation

We live in amazing times. I have been around long enough to know the world before there was a global network connecting us all. Before the internet, people got their information from newspapers, radio, television, and books written by experts. If you wanted to dig deeply into a topic, it meant a trip to your local library and searching manually through something called the card catalog. In every library, there was a room full of cabinets with tiny drawers, each packed with 3 x 5 index cards with subjects and locations written on them. You dug through them to find the topic of interest and then walked the stacks of books and periodicals to obtain what you needed. It was a slow, arduous process. At best, you would find maybe a dozen articles of interest after a day of searching. What you could learn was restricted by the labor involved in locating it. It was also limited by who could author such information, because publishing anything was expensive. Which voices had the megaphone was under the control of the publishing industry, which served as gatekeeper.

The internet has changed our modern world. Here are two of the more significant impacts:

The rise of search engines, presently dominated by Google, has made it trivial to find the answer to almost any question with a few search terms. No more heading to the library to locate what you want; in seconds, you can have hundreds of articles on any topic. One caveat to this panacea is that the desired information has to be available digitally. There was some initial resistance to this. The previous stakeholders, traditional publishers of information, feared competition from the internet, and rightly so. Newspaper and magazine subscriptions have dropped precipitously as the internet has expanded. But the demands of the market won out, and all significant traditional publishers now have a web presence. Publishers still struggle today to find business models that let them publish high-quality work while generating enough revenue.

The second consequential change has a light side and a dark side. The gatekeepers have lost control. Before the internet, if you wanted to reach a broad audience, you had to convince a publishing company that your work was worth mass-producing. If you could not persuade them, you were out of luck. Even gifted authors who went on to great success have stacks of rejection slips. Theodor Geisel's (Dr. Seuss) first book, And to Think That I Saw It on Mulberry Street, was rejected by at least 20 publishers. Alex Haley, most famous as the author of Roots, kept his rejection slips and received over 200 of them. Publishers passed over J. K. Rowling's Harry Potter and the Sorcerer's Stone 12 times. I wonder if the editors who rejected Harry Potter ever think about that mistake.

With the rise of the internet, the barriers to publishing have fallen away, diminishing the power of the gatekeepers. Anyone who has access to the internet can begin writing, and if an audience finds their ideas appealing, they can garner a following. This distribution of publishing power has immense upsides. Different perspectives have bloomed from every corner of the internet.

But along with the many roses that are thriving, there are an awful lot of stinkweeds. Numerous bad, wrong, or dangerous ideas have found a following. In the past, experts could check written works for factual accuracy and keep unworthy ideas from seeing the light of day. Today, there are even folks pushing the idea that the world is flat and that the moon landing never happened.

Another danger of the multitude of information and information sources is that you can wall yourself off in like-minded communities and never expose yourself to opposing views that will challenge your misconceptions. A great example of this is political communities such as DailyKos and Redstate, where folks of similar political viewpoints gather. Many in these communities are intelligent, fair-minded people, but it is far too easy to lull yourself into an alternate reality that doesn’t match the facts. In 2012, many conservatives were confident that Mitt Romney was going to win against Barack Obama. In 2016, many liberals were equally convinced that Hillary Clinton was going to triumph easily over Donald Trump. Both of these communities willfully ignored inconvenient facts that challenged their perceptions.

With the rise of simplified, ungoverned publishing and instantaneous access to information, we now live in an age of misinformation. How can you separate fact from fiction? How do you expose yourself to alternative viewpoints? Is there a routine you can use to keep yourself honest? In this chapter, I will lay out a strategy to answer all of these questions. First, we will talk about who in this vast universe of knowledge you can trust. Then we will delve into the scientific literature and its four levels of information. Next, I will lay out a useful recipe to follow for truth and understanding. Finally, I will walk through three example topics and demonstrate applying these methods.


2 - 3 Who Can You Trust?

I am going to give you the punchline first. Initially, you should trust no one. Those who want your trust have to earn it. How then does a source earn your trust? By the use of solid, fair arguments and verifiable facts. So, what is a good, fair argument? To answer that question, we are going to delve a little into logic and argument analysis. Don’t close the book! Trust me. I will try to make it entertaining.

Here is a logical argument:

  1. Socrates is human

  2. Therefore Socrates is mortal.

That seems logical enough, and I think most people would agree with it. But before we verify it, we need to find the hidden premise. What is it? Here is the argument again, including the hidden premise:

  1. Socrates is human

  2. All humans are mortal

  3. Therefore Socrates is mortal.

Now we have all the premises. How do we verify that the argument is valid and sound? And no, those are not the same thing. An argument is valid if the conclusion must be true whenever all the premises are true. There is no alternative. In our case, if Socrates is human, and all humans are mortal, then Socrates must be mortal. If we assume premises one and two are true, then three must be true. An argument is sound if it is valid and all of its premises are true. In this case, Socrates was a real human, and humans are most certainly mortal, so the argument is both valid and sound. Let's look at another, more elaborate argument:
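This definition of validity, that the conclusion must hold in every situation where all the premises hold, can even be checked mechanically for simple arguments. Here is a small Python sketch of that idea (my illustration, not part of the argument above; the helper name `is_valid` is invented). It encodes the syllogism with two propositions, h ("Socrates is human") and m ("Socrates is mortal"), and brute-forces every truth assignment looking for a counterexample:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is valid if there is no assignment of truth values
    that makes every premise true while the conclusion is false."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample: premises true, conclusion false
    return True

# Premise 1: Socrates is human (h).
# Premise 2: All humans are mortal, rendered here as "if h then m".
# Conclusion: Socrates is mortal (m).
premises = [lambda e: e["h"], lambda e: (not e["h"]) or e["m"]]
conclusion = lambda e: e["m"]
print(is_valid(premises, conclusion, ["h", "m"]))  # True: the argument is valid
```

Note that this only tests validity, not soundness; whether the premises are actually true about the world is something no program can decide for you.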

There are two options for this tax increase:

1 (a) Being in favor of the tax increase and funding research that has given us technological and medical benefits, children’s health through the CHIP program, and healthcare for the elderly

1 (b) not caring about these things and giving all our money to the rich.

2. Funding the tax increase will not put an imposing burden on the rich.

3. It is morally wrong to be callous and uncaring toward children and the elderly

4. It is also unwise to limit funding for scientific research because of all the benefits it brings

5. Therefore you should cast your vote for the tax increase.

So let's go through and analyze the argument. Are there any missing premises? I can see two: increasing taxes increases taxes most on the rich, and increasing taxes results in increased funding for the listed programs. So let's put those in:

There are two options for this tax increase:

1 (a) Being in favor of the tax increase and funding research that has given us technological and medical benefits, children’s health through the CHIP program, and healthcare for the elderly

1 (b) not caring about these things and giving all our money to the rich.

2. Increasing taxes increases taxes most on the rich.

3. Funding the tax increase will not put an imposing burden on the rich.

4. But, it is morally wrong to be callous and uncaring toward children and the elderly

5. Increasing taxes, increases funding for research, CHIP, and healthcare.

6. It is also unwise to limit funding for scientific research because of all the benefits it brings

7. Therefore you should cast your vote for the tax increase.

Is the argument valid? If you accept every premise as being true, does that mean that the conclusion is true? It seems as though the premises support the conclusion. If they are all true, the conclusion is true, so the argument seems to be valid.

To determine if the argument is sound, we first need to introduce two new concepts: normative and empirical premises. A normative premise relies on a value judgment to determine if it is true; it is an "ought to" premise. An empirical premise is free from subjectivity and can be measured; it is an "is" premise. For example, Frank Kaminsky, the remarkable former center of the Badger basketball team, is 7 feet tall. You can measure his height with a tape measure, so that is an empirical statement. A normative premise would be "Frank Kaminsky ought to play basketball." Whether he should play basketball is a judgment, a subjective opinion, so it is a normative statement. Normative statements can still be true or false, but deciding which requires a judgment. Let's take our previous argument and label each premise as normative or empirical.

There are two options for this tax increase:

1 (a) Being in favor of the tax increase and funding research that has given us technological and medical benefits, children’s health through the CHIP program, and healthcare for the elderly.(empirical)

1 (b) not caring about these things and giving all our money to the rich. (normative)

2. Increasing taxes increases taxes most on the rich. (empirical)

3. Funding the tax increase will not put an imposing burden on the rich. (normative)

4. But, it is morally wrong to be callous and uncaring toward children and the elderly (normative)

5. Increasing taxes, increases funding for research, CHIP, and healthcare. (empirical)

6. It is also unwise to limit funding for scientific research because every dollar spent on scientific research generates many more dollars in economic activity (empirical)

7. Therefore you should cast your vote for the tax increase.

We can now test each premise.

1a. You can measure whether the tax money has flowed to these programs and it has, so this one is true.

1b. This premise asserts that you don't care about these programs if you are not in favor of the tax increase. Depending on the person, this may or may not be true, and it is difficult to prove, so its truth depends on your viewpoint.

2. If you increase taxes, those who make more money have to pay more. Again, this can be measured and is true.

3. Whether the tax will inconvenience the rich is a judgment, but it could be determined by surveying rich people and asking them how much of a burden it would be. You could also ascertain what percentage of their income is being taken away and make a judgment on whether this tax is a large burden or not.

4. Whether it is callous and uncaring to not be in favor of the tax increase is a judgment.

5. You can read the proposed bill and see where the money is going, so you can determine whether it will help CHIP and science.

6. You can measure the economic impact of funding research. Studies have shown that funding research does indeed foster economic activity above what the funding costs.

So the key here seems to be whether you accept that 1b and 4 are true. If you do, then the argument is sound, and you will be in favor of the tax increase. If you do not accept them, then you will oppose it. As you can see, many arguments come down to judgments about the normative premises.

One final point. This argument about taxes is spurious because it sets up a false dichotomy. You should immediately reject statements 1a and 1b because they present an either/or choice when, in fact, there are many ways to generate revenue and spend it. The two choices presented sit at opposite ends of a spectrum. Political arguments often use this type of framing: if you accept the initial dichotomy, it becomes difficult to hold an opposing view. If you ever detect a false dichotomy, walk away and ignore the argument; the author is using unfair tactics and probably not presenting a balanced view. I hope this little foray into logic can help you break down information. Getting good at breaking down arguments will help you make decisions, and we will use these methods when we examine several significant controversies in society today.


2 - 4 The Four Levels of Information

If you are going to test arguments, you need information from experts to measure the truthfulness of statements that you hear or read. Where can you get reliable information? One of the magical things about the world we live in today is that much of the human knowledge generated from antiquity to the present day is online and available through a smartphone or computer in a matter of seconds. Stop and think about that! That is wondrous. Humans today would be considered wizards and sorcerers by those living a hundred years ago. This information comes in many forms, but to simplify things, we are going to sort it into four groups: primary literature, secondary literature, tertiary literature, and the popular press. Let's define each class.

Primary literature consists of research articles published in peer-reviewed journals. These articles investigate rather specific empirical questions and contain data that support or refute them. Each article reports the materials and methods used to generate the data, presents the data in tables or graphs, and provides arguments for or against the question at hand.

Primary literature is the closest to the data since it will discuss data generated by the authors of the paper and analyze it. This interpretation is often of high quality because the scientists working on it are experts in their fields and have exceptional mastery of the subject. Also, peers reviewed the experiments and analyses and deemed them to be worth publishing. There is no filtering through interpretation and summation as in other forms of information. However, that doesn’t mean the primary literature is the unquestioned truth. Every human has bias and blind spots. Pet theories of the scientists can cloud their interpretation of the data, and flawed experimental designs can skew the data in ways the scientists doing the research may not understand. Peer review can help to mitigate these errors, but sometimes weak papers still get published. Despite these caveats, primary literature is the most trustworthy.

Also, be wary of what I like to call abstract readers. There are examples in many fields, but I have found them especially in the health and fitness field. These “experts” will cite articles they have “read” to back up behaviors they are promoting. In reality, they have only read the abstract, or even worse, the title of the article. Abstract readers will often misinterpret the data and make far-reaching conclusions that the authors of the study they are reading would never make.

Secondary literature is an interpretation of primary sources. The audience for secondary literature is often the group of scientists working in the field of study that it reviews. Secondary sources do not report data they generate, but rely on information in the primary literature and summarize it. Often, they will contribute commentary by experts on the current subject and discuss evidence. Most secondary literature is peer-reviewed and contains numerous citations to the primary literature. Examples of secondary literature include review articles, monographs, and in some cases, even books devoted to one topic. Non-peer-reviewed secondary literature includes editorials, opinion pieces, commentary, histories, or perspective papers.

Secondary literature is also highly trustworthy but comes behind the primary literature. While experts write these articles, they did not carry out every experiment and thus are farther from the data. Personal viewpoints may hinder honest interpretation in areas of dispute, and they can sometimes advocate for only one side of controversies.

Tertiary literature is a further distillation of secondary and primary sources and often has a broader audience that includes non-scientists. Tertiary literature usually does not give credit to any particular author, but may highlight those who have made significant contributions to the field. Often they will provide more generalized coverage and have a broader subject than secondary literature. Experts in the field of study still frequently write tertiary literature. Examples of tertiary literature include this book, textbooks, dictionaries, manuals, and Wikipedia. Yes, Wikipedia! This resource has grown up over the last decade, and much of the information there can be trusted but should be scrutinized like any other information that you read.

Tertiary literature comes in third in trustworthiness. While experts write these articles, they are covering broader topics, and they can't have the first-hand experience with much of the experimental data and methods. Interpretations can be incorrect, and the process of making complex, detailed experiments understandable to a target audience can distort the facts. A careful reading of the literature and thoughtful editing and writing can minimize these errors, and most tertiary literature is again, very trustworthy.

The popular press will often cover important scientific topics and attempt to inform the general public about them. They may refer to a specific primary research article or to a few, but they do not have the long citation list of a secondary research article. These articles are most often not written by experts.

When it comes to science in the popular press, it is the least trustworthy. Most popular press articles, while they may pass through an editorial process, are not peer-reviewed and thus have less credibility. Also, the authors of these pieces are more subject to bias. Finally, the popular press has the goal of attracting eyeballs to their work and will often sensationalize or lose the nuance of the original scientific research to make the findings more exciting.

The most dangerous place to get information on any topic is from blog posts or other internet sites written by amateurs. The authors often have no training in the matter they are discussing and considerable bias. While it is possible to find exceptional content in a blog post, be wary of random opinions on the internet. They are no more reliable than that crazy guy down the street who was always shouting about lizardmen.

Where to find reliable information

In the early days of the information superhighway (yes, that is what we old-timers used to call it), it was laborious to find information if you didn't know where to look. Helpful individuals, often working in academia and government, began posting long lists of excellent links on various topics. These lists were valuable because they gave you maps and signposts to a subject you might be interested in, and a real human curated them. But curated lists have huge downsides: you were subject to the whims of the curator and whatever piqued their interest, and they could not cover everything. Very rapidly, an obvious solution arose: search engines.

It may surprise you, but scientists have been thinking about and working on search engines since just after World War II. Vannevar Bush published an article entitled As We May Think in The Atlantic Monthly in July 1945, where he laid down the first ideas on how human knowledge could be indexed and retrieved. I don't have space to go through the long history of search engines, but their rise dramatically increased the ease of information retrieval.

These engines work by sending out bots or spiders, little programs that crawl the web, collect the text of web pages, and send it back to a central server. The information on each web page is indexed and then ranked. How this ranking takes place determines where on a search results page the information appears. When you search, the engine first matches what you ask for to various pages and then ranks those pages using a complicated algorithm. The specifics of the algorithm are secret (to prevent webmasters from gaming the system), but Google has a helpful set of pages describing what they deliver to you. Every search engine's algorithm works differently, and that is part of its brand. Among many other factors, search engines evaluate the popularity of a page (how many other pages link to it), how often your search terms appear on the page, and whether the page offers a good user experience.
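The "popularity" part of ranking, where pages that many other pages link to rank higher, can be illustrated with a toy version of the original PageRank idea. This is a simplified sketch for intuition only, not Google's actual algorithm, and the three-page "web" in it is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: each page repeatedly shares its score equally
    among the pages it links to, plus a small 'random surfer' bonus."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# A tiny invented web: pages "a" and "b" both link to "hub",
# so "hub" accumulates the highest score.
web = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # hub
```

The real systems layer hundreds of additional signals (term frequency, freshness, user experience) on top of link popularity, but the core intuition is the same: being cited widely pushes you up the results page.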

The go-to search engine for most people is Google. You know you have made it when your business becomes a verb. In order of rank, the other top sites are Bing, Yahoo, Ask, and AOL Search. All of these sites index and search all web pages unless a webmaster asks otherwise. Many sites track what you search and use that information to show you advertisements; others, such as duckduckgo.com, make it a point not to. Aside from DuckDuckGo, search engines also keep that information and may sell it to third parties.

No matter what search engine you use, the results returned can be authored by anyone, and while search engines do their best to provide relevant results, they cannot discern the truth or quality of any page. If you are looking for a place that searches only information written by experts in peer-reviewed journals, several search engines cater to that specialty. The dominant service provider is the National Center for Biotechnology Information (NCBI), which is funded by the National Institutes of Health (NIH). The NIH is part of the U.S. Department of Health and Human Services and funds scientific research. As part of that mission, it created the NCBI to facilitate the research process. The journal search engines PubMed and PubMed Central are a subset of NCBI's mission, but a particularly important one. NCBI catalogs, indexes, and provides a search interface for over 27 million citations in the biomedical literature. If you want to find primary and secondary literature about any topic in biology or medicine, PubMed is a great place to look. Google and Microsoft also offer search functions restricted to scholarly articles: Google Scholar and Microsoft Academic. Many of the materials found through these services are open access, meaning anyone can read them, but others are behind subscription walls. Several organizations consolidate their collections of journals and make them available to the public for searching. I list a few of my favorites, but there are many more: ScienceDirect, the Directory of Open Access Journals, PLOS ONE, and BioMed Central. All of these organizations sponsor open access journals that anyone can read for free.

There are multiple ways to find information that interests you. When you want answers on a topic, pick the one you like and use it! In the past, you had to trust authorities in positions of power: newscasters, doctors, scientists, and politicians, who most often reached you through the news media and popular periodicals. Today, you can check the facts yourself. But how do you sift through the B.S. and find the kernels of truth? You have to discover for yourself what works, but here are a few suggestions.


2 - 5 A Recipe for Truth and Understanding

Our goal in this section is to help you find the information you can trust. Here you will learn habits that can help you to become and remain an informed citizen. Briefly, you should: read widely, verify references, be aware of data manipulation, know the difference between causation and correlation, identify the shams and flimflams, trust peer-review, consider the source, and be skeptical.

One significant pitfall of the wealth of information available online is that it is easy to fall into like-minded groups that agree with your worldview. You wrap yourself in a cocoon of self-assurance with friends who reinforce what you think and say, rarely challenging your ideas. The cure for this is to read widely. Look at publications and information that present a point of view with which you don't agree. Exploring other viewpoints will expose you to varying opinions and will also help you practice countering spurious arguments.

A critical part of a search for truth is using credible sources. I pointed above to places to search, but how do you verify the integrity of a source? First, identify the level of the source: primary, secondary, tertiary, or popular press. Remember that primary literature, which presents data generated by the authors to support their ideas, is the most trustworthy, and trustworthiness decreases from there. However, any information source can have problems (see the examples at the end of the chapter). So what diminishes the quality of an article?

Do they have citations to support the factual claims they make? If there are none, be suspicious. If they do cite other literature, whom do they cite? Do they cite a credible source you trust? Does an established and respected organization publish it? Again, linking to other blog posts doesn't count for much compared to information obtained from the Centers for Disease Control and Prevention. Follow a couple of citations from the article you are reading. Do the claims made in the article match the references to which they link? If they do not, be skeptical. I have run across writing that cites scholarly articles to support its point of view, yet checking these citations against the original articles reveals that they have nothing to do with it or, in the worst case, actually refute the author's claims.

Data can lie too

To take liberty with a phrase that was popularized by Mark Twain, “There are lies, damned lies, and statistics.” It is easy to manipulate information and use statistics and graphs in marginal ways to create convincing arguments. Misleading data can be notoriously hard to spot, and it sometimes takes some expertise, but I am going to try to equip you with a few rules that may be able to help.

Sample size and distribution

When presented with a statistic, immediately ask, what is the sample size, and how is the data distributed? All statistics, be they from experiments or surveys, are a collection of observations. The hope is that this sample is a good representation of the observed phenomenon. If you are looking at polls for an upcoming political race, remember that they don’t ask the entire electorate. They try to find a sample that closely represents the whole population. If they do a good job modeling the folks who eventually go out to vote, the poll will be accurate. If their model makes poor estimations of who will vote, they can be wildly off. The presidential election of 2016 was a great example of this. The likely voter screens that almost all pollsters use discounted infrequent voters who came out in higher numbers than expected.
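The effect of sample size is easy to see in a quick simulation. This is a hedged sketch of my own: the "electorate" here is an invented normal distribution with a true average of 50 (think percent support), not real polling data. Small samples bounce around the true value; large ones settle close to it:

```python
import random
import statistics

random.seed(1)  # fixed seed so the run is repeatable

# Invented population: true average support of 50, spread (sd) of 10.
TRUE_MEAN, TRUE_SD = 50, 10

for n in (10, 100, 10_000):
    # Draw a random sample of n "respondents" from the population.
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)]
    print(n, round(statistics.mean(sample), 2))
# The larger the sample, the closer its average tends to sit to 50.
```

Of course, a large sample only helps if it is drawn fairly; a million respondents recruited from one political forum will be precisely wrong, which is exactly the likely-voter-model failure described above.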

Two of the more important measures of the quality of a data set are the size of the sample and its distribution. In general, the larger the sample, the more accurately it should mimic the entire population under observation. Second, you want a normal distribution. There are complex statistical formulas and ideas behind this, but in simple terms, a normal distribution is one where values are spread symmetrically above and below the average. An example you are probably familiar with is the bell curve for grade distributions. When a sample is skewed and does not follow a normal distribution, many standard statistical analyses are not valid. For example, imagine you are doing a study of weight gain when consuming the latest bodybuilding supplement, SuperSupp. (By the way, I made that up. Any real or imagined likeness to an actual supplement is coincidental.) You put together a test group of 30 bodybuilders who take the supplement to see if they gain muscle mass. You also have a control group of 30 bodybuilders who do not take it. After three months, you measure their change in muscle mass. Here is the data that you generate:

Subject | SuperSupp Group | Control Group | Subject | SuperSupp Group | Control Group
   1    |       -4        |      -2       |   16    |        0        |      -1
   2    |        0        |      -5       |   17    |        0        |       2
   3    |       -2        |       3       |   18    |        2        |       1
   4    |        3        |       0       |   19    |       -1        |       0
   5    |        4        |       3       |   20    |       25        |      -1
   6    |        0        |      -4       |   21    |       -4        |       1
   7    |       19        |      -4       |   22    |        2        |      -2
   8    |       -3        |       5       |   23    |        4        |       4
   9    |        2        |      -5       |   24    |       -4        |      -1
  10    |       -3        |       5       |   25    |       -3        |       5
  11    |        0        |       1       |   26    |        4        |       0
  12    |        0        |       1       |   27    |       -2        |      -5
  13    |       19        |      -4       |   28    |        4        |       0
  14    |        5        |       2       |   29    |        4        |       5
  15    |        5        |      -1       |   30    |        2        |      -3

Table 2.1. Comparison of muscle mass gain of SuperSupp users and a control group. A control group and a test group that consumed SuperSupp were observed for 12 weeks. Muscle mass was measured at the beginning and the end of the study, and the change (in kg) is reported for each subject.

If you take the average of these values, you will see that the SuperSupp group gained 2.6 kg of muscle, and the control group gained none. If you run a statistical test called a t-test, you get a p-value of 0.06. Roughly speaking, the p-value is the probability that a difference this large would appear by chance if the two groups actually gained the same amount of muscle. A p-value of 1 means chance alone would produce the observed difference essentially every time; a p-value near 0 means chance would almost never produce it. Since 0.06 is close to 0, there seems to be only a 6% chance that the difference is a fluke, meaning SuperSupp works! Well, not really, because there is a problem. Figure 2.1 shows a graph of the data.

SuperSupp and Weight Gain

Figure 2.1. SuperSupp and Weight Gain. A graph of the data shows the muscle gain of SuperSupp users and a control group.

You can see that the data from the two groups largely overlap, but three individuals in the SuperSupp group are far beyond the average and are skewing the data. The distribution of the SuperSupp group is not normal, so you cannot use statistical tests, such as the t-test, that assume normality. Also, the number of test subjects is quite small; a better study, using hundreds of participants, would be much more powerful.
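If you have Python handy, you can check both the averages and the distribution yourself. The sketch below is a minimal illustration using only the standard library, with variable names of my own choosing: it computes a pooled two-sample t statistic from the Table 2.1 values, approximates the two-tailed p-value with the normal distribution (reasonable at 58 degrees of freedom), and then computes the skewness of the SuperSupp data, which is exactly the warning sign a bare p-value hides.

```python
from math import sqrt
from statistics import mean, stdev, NormalDist

# Muscle mass change (kg) for each subject, from Table 2.1
supersupp = [-4, 0, -2, 3, 4, 0, 19, -3, 2, -3, 0, 0, 19, 5, 5,
             0, 0, 2, -1, 25, -4, 2, 4, -4, -3, 4, -2, 4, 4, 2]
control = [-2, -5, 3, 0, 3, -4, -4, 5, -5, 5, 1, 1, -4, 2, -1,
           -1, 2, 1, 0, -1, 1, -2, 4, -1, 5, 0, -5, 0, 5, -3]

n = len(supersupp)                       # 30 per group
m1, m2 = mean(supersupp), mean(control)  # 2.6 kg vs 0 kg

# Pooled two-sample t statistic (what a t-test computes under the hood)
s1, s2 = stdev(supersupp), stdev(control)
pooled_var = ((n - 1) * s1**2 + (n - 1) * s2**2) / (2 * n - 2)
t = (m1 - m2) / sqrt(pooled_var * (2 / n))

# With 58 degrees of freedom, the t distribution is nearly normal,
# so the normal CDF gives a good two-tailed p-value approximation
p = 2 * (1 - NormalDist().cdf(t))
print(round(p, 2))  # ~0.06

# The catch: three extreme gainers (19, 19, 25) skew the SuperSupp
# data far from normal, so the t-test's key assumption fails
skewness = sum((x - m1) ** 3 for x in supersupp) / (n * s1**3)
print(round(skewness, 1))  # strongly positive; a normal sample would be near 0
```

Run on these numbers, the skewness alone tells you the SuperSupp sample is badly lopsided, regardless of what the p-value says.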

Let’s take a real-world example to drive the point home. Suppose a politician claims that if a tax cut bill passes, the average American will get a $4,000 tax cut. While this may be technically true, the reality is that there is no “average American” as far as income goes, because wealth and income in the United States do not follow a normal distribution. Now before you call me a lefty socialist, watch this video and pay particular attention to the chart that appears at 4:24. Income is heavily skewed toward the top earners. Therefore, a calculation of an average tax cut is close to meaningless. A much better method is to find the median income and figure out the tax cut from that value. The median is the point in a number set where half the numbers are below and half are above. For income, it is the level that half the population makes less than and half makes more than. If you do that, you get a much smaller number than the average tax cut.
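A tiny sketch, with made-up incomes purely for illustration, shows how a single high earner drags the mean far above the median:

```python
from statistics import mean, median

# Hypothetical household incomes in thousands of dollars: nine modest
# earners plus one very high earner, mimicking a right-skewed distribution
incomes = [30, 35, 40, 45, 50, 60, 70, 90, 120, 1000]

print(mean(incomes))    # 154 -- inflated by the single top earner
print(median(incomes))  # 55.0 -- closer to what a typical household makes
```

Any policy pitched in terms of the "average" benefit should make you ask which of these two numbers is being quoted.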


2 - 6 A Recipe for Truth and Understanding Part 2

Graphics can lie to you

If you want to persuade people, display the data in a chart! Charts are great for taking complex, hard to understand data sets, and creating a visual representation of the data that is easier to interpret. However, honesty is crucial when converting numbers into a chart. The hucksters of the world have found all sorts of magic tricks to lie with graphs. This section will point out some of the more egregious methods.

If you want to persuade your audience that something significant has happened, but your data won’t cooperate, what is the charlatan to do? Truncate the scale. You can make almost any chart look like something important has happened if you mess with the y-axis. Look at this chart that was published by the U.S. Department of Education (Figure 2.2).

Graduation rate by year

Figure 2.2. Graduation rate by year. Graphic from the U.S. Department of Education. (Public Domain)

First, this type of chart is aesthetically pleasing. Look at all those beautiful books, and it’s about education, fancy! It sure looks like there was an enormous change in graduation rates during President Obama’s time in office. However, the scale doesn’t make any sense. If five books represent 75%, then each book is worth 15%. The 2013-2014 stack represents 82%, which should not add even one full book. Figure 2.3 is a more accurate representation of that data.

A more accurate representation of graduation rates by year.

Figure 2.3. A more accurate representation of graduation rates by year. Note how the scale begins at 0; while there is still an increase, it does not look as dramatic as in Figure 2.2.

In this chart, the scale starts at 0, and while there is an increase, it is not nearly as dramatic as in the first chart. There has been some movement, but it is incremental; laudable, but incremental.
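You can even quantify the distortion. In this back-of-the-envelope sketch, only the 75% and 82% rates come from the figures; the axis floor of 74% is my own guess at what the original graphic implies:

```python
start, end = 75.0, 82.0  # graduation rates, as in Figures 2.2 and 2.3

# Honest view: the rate rose by about 9% relative to where it started
true_change = (end - start) / start
print(round(100 * true_change, 1))  # 9.3 (percent)

# Truncated view: if the axis starts at a hypothetical 74%, bar heights
# are drawn proportional to (value - 74), so the later bar looks 8x taller
axis_floor = 74.0
apparent_ratio = (end - axis_floor) / (start - axis_floor)
print(apparent_ratio)  # 8.0
```

A roughly 9% real improvement rendered as an eightfold visual jump: that is the whole trick.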

Unfair comparisons by ignoring population sizes

Another fun trick to prove a point is to compare raw counts of some event without normalizing for population size. Take murders in the United States. If a politician wanted to make Chicago look bad, they could report the total number of murders in several cities (Figure 2.4). Chicago’s homicide problem seems twice that of any other city listed. The leadership in Chicago needs to change! Or does it?

Total murders in U.S. cities.

Figure 2.4. Total murders in U.S. cities. A comparison of the number of murders in select cities in America that, because of the choice of data, distorts the problem.

So what’s the problem? These cities are not the same size. In fact, Chicago is much bigger. A fairer comparison is to examine the homicide rate per 100,000, as that takes into account the population of each city (Figure 2.5).

Murder rates in U.S. cities.

Figure 2.5. Murder rates in U.S. cities. A second graph that accurately compares the murder rates in the most dangerous cities in America. This graph is a fair comparison.

As you can see, while all these cities need to decrease their murder rates, Chicago does not have the biggest problem. In fact, it is not even in the top five.
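Normalizing is one line of arithmetic. Here is a minimal sketch with invented numbers; the counts and populations are hypothetical, not the Figure 2.4 data:

```python
def murder_rate(murders: int, population: int) -> float:
    """Murders per 100,000 residents."""
    return murders / population * 100_000

# Hypothetical figures: the big city has more total murders...
big_city = murder_rate(650, 2_700_000)
small_city = murder_rate(300, 600_000)

print(round(big_city, 1))    # 24.1
print(round(small_city, 1))  # 50.0 -- ...but the small city's rate is double
```

Whenever you see raw counts compared across groups of different sizes, do this division yourself before drawing any conclusion.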

Hiding things you don’t want people to notice

Ever have an unpleasant fact you needed to report to your boss? Say you spent too much of the budget on travel, and you have to report the department’s expenses. Even worse, you took over the travel booking in 2014. If you were honest, you might create a chart like this, take the wrath of your boss, and maybe get fired.

Travel expenses 2011-2017.

Figure 2.6. Travel expenses 2011-2017. This graph accurately represents the growth in travel expenses. Note the significant increase in costs from 2014 to 2017.

Or you could hide the travel expenses within the entire budget and present a more opaque graph.

Increase in business expenses by year, 2011-2017.

Figure 2.7. Increase in business expenses by year, 2011-2017. Note how the travel expenses are still increasing, but the increase does not seem as significant due to all the other items listed.

You are still reporting the expenses of the business; you have just hidden the travel increase among all the other expenses. The costs of running the business have increased by $315,000, and 80% of that increase is in travel expenses, but that is much harder to see in the second chart.
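To see how the arithmetic works, here is a sketch with invented line items; only the $315,000 total increase and the 80% travel share come from the text, and every budget number below is made up to match them:

```python
# Hypothetical department budgets in dollars (2011 vs 2017)
expenses_2011 = {"salaries": 900_000, "rent": 200_000,
                 "supplies": 120_000, "travel": 60_000}
expenses_2017 = {"salaries": 940_000, "rent": 215_000,
                 "supplies": 128_000, "travel": 312_000}

total_increase = sum(expenses_2017.values()) - sum(expenses_2011.values())
travel_increase = expenses_2017["travel"] - expenses_2011["travel"]

print(total_increase)                              # 315000
print(round(travel_increase / total_increase, 2))  # 0.8 -> 80% of the growth is travel
```

Stacking travel on top of the much larger salary and rent lines is what makes the Figure 2.7 version look so innocuous.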

Abuse of charts

Some folks simply love all the amazing things their graphing program can do. They can add volume and perspective. They can highlight certain aspects. Graphing programs give the user tremendous power over how the data is displayed. Unfortunately, users sometimes ignore all chart-making conventions and do whatever they want to make a point. The graph in Figure 2.8 is a simply breathtaking example of chart abuse.

Energy subsidies by industry.

Figure 2.8. Energy subsidies by industry. It is nearly impossible to get any useful information out of this chart. (Source: Tommy McCall/Environmental Law Institute)

Figure 2.8 does not come close to displaying a clear picture of where energy subsidies are going. It clouds the point, probably on purpose, to obscure the money going to the fossil fuel industry. The “pie chart” shown does not clearly portray the proportion spent in each area, and the purpose of the inner figures is not apparent until you realize that they break down each quadrant. If your reader has to work this hard to understand your chart, you have failed. Figure 2.9 shows a clearer display of the data.

Federal spending on energy.

Figure 2.9. Federal spending on energy. A more accurate representation of federal spending on energy subsidies. As you can see, the fossil fuel industry gets the majority of government funding.

There are more ways to cheat with graphs, but you get the picture; it is easy to be dishonest with statistics and charts. If any of these tricks turn up in an article, run! They are the purview of charlatans and con artists.

Correlation does not mean causation

Confusing correlation with causation is a common problem. Many people make the mistake of thinking that if two things occur at the same time, one must have caused the other. An amusing example is baseball players and their superstitions. Before a game, Mark Teixeira of the New York Yankees accidentally put on one of CC Sabathia’s socks, which had ended up in his locker. He didn’t notice until the game started that he was wearing one sock with a 25 and one with a 52, Sabathia’s number. That day he hit two home runs and had six runs batted in, a very productive day. From that point on, Teixeira wore mismatched socks. I am sure even Mr. Teixeira knows that this has nothing to do with his skill as a baseball player, just as you know that wearing your team’s colors isn’t going to make them play better.

A more serious, health-related example is the classic study by Armstrong and Doll that compared incidence rates for 27 cancers in 23 countries to the diet in those countries. Correlations were seen between meat consumption and colon cancer and between fat consumption and breast cancer. While these studies were the first to show trends linking cancer rates to particular diets, they showed only a correlation. It took decades of work to find out why meat consumption increases colon cancer risk and why fat consumption increases breast cancer risk, and this work continues today.


2 - 7 Things to look for

Practice looking for the shams and flimflams. Always be on the lookout for things that don’t seem right. Remember, an argument is only valid if all its premises are true. If you find one untrue premise, then the argument falls apart. I once bought a book that claimed to have a sure-fire cure for acid reflux. It presented all sorts of background information on the illness, which was correct, and then changes to diet and behavior that seemed like they might help. It then started talking about how having Candida overgrowth can aggravate acid reflux and how you can determine this by taking a spit test to detect it. Being a microbiologist, I was immediately suspicious. Candida is part of the normal microbiota of the gastrointestinal tract of humans and causes no problems in a healthy human. The “test” was even more ridiculous. It proposed you spit in a glass and watch to see if your spit floats or sinks. Supposedly if it sinks, you have a Candida infection.

Dozens of websites will help you manage your overgrowth. They cannot cure it, but buy their magic pills, and you will feel much better. This spit test surpasses quackery and goes straight to outright fraud. So let’s ask some questions. How would having more Candida in your spit cause it to sink? If you come up with a hypothesis to explain that, how could it be tested? Has there ever been a study of the spit test showing that more Candida in the gut causes dense spit that sinks? Since Candida is always present in our gut, how much is too much? If you search PubMed for studies about Candida overgrowth, you will find them. However, they are all in patients with suppressed immune systems or those who have undergone antibiotic therapy. Candida overgrowth is not something that affects 80% of the human population, as some of these sites claim. You will not find anything on the spit test for the diagnosis of Candida. When the author of the acid-reflux cure treated this as legitimate information, he lost my trust. As a general rule, if you catch an expert, or someone who claims to be one, in an apparent falsehood, it’s time to find other sources.

What type of review process was there for the article? The highest standard is an article written by experts that is reviewed by other experts in the field (peer review). However, even in these cases, there are caveats. Not everything appearing in a journal is peer-reviewed; for example, news and commentary about research and letters to the editor are not. Peer review is also anonymous, except to the editor, and voluntary. A reviewer may not take the time to review the article carefully or may not have the depth of expertise that is needed. Finally, personal bias and social interactions may enter the mix. Reviewers often know the authors of an article, and that opinion, positive or negative, may influence their decisions. Or, if the research contradicts a pet theory of the reviewer, they may reject the article and take further steps to prevent the study from being published. Science is a human endeavor, and bad actors exist who make trouble. But the self-correcting nature of science exposes these frauds in the end.

What is the goal of the publication? Is it to describe a topic or do the authors want to convince you of something? Persuasive articles must pass a higher standard than descriptive ones. Another good rule to live by is that extraordinary claims require extraordinary proof. If someone is claiming a miracle cure for weight loss, they better have lots of data from the primary literature to back it up.

Is the article even-handed and does it stick to the facts? If there are important questions up for debate or interpretation, is more than one side presented? Is the treatment even-handed? Are pitfalls or caveats discussed and do the authors avoid exaggerating or overstating conclusions?

Does the article slander any group or use inappropriate language? When presenting the opposing view, is the treatment respectful and earnest, or is the other side denigrated?

Finally, what is their funding source? One should be more skeptical of articles written by authors with an apparent conflict of interest. For example, a discussion of the validity of climate change penned by an author funded by the American Petroleum Institute would not hold as much credibility as a similar study written by a climate scientist funded by the National Institutes of Health. There is just too much financial incentive for the API scientist to bend the truth.

Predatory journals

I want to take a short sidetrack and talk about a disheartening phenomenon that has arisen in the last decade in academic publishing. Ideally, a peer-reviewed journal has an editor and an editorial board made up of scientists. When a paper comes in, two or more qualified researchers in the subject (they are familiar with the methods and overall topic) review it and pass judgment on the quality of the experiments. They can recommend accepting the paper, accepting it after revisions, rejecting it but inviting corrections and resubmission, or rejecting it with no invitation to resubmit. The editor makes the final decision on acceptance. The point is that experts review the article and only publish it if it is of sufficient quality and a contribution to the field. Part of the financing of many journals comes from authors paying page charges to get their work published. You heard right; you have to pay to get your research published. It is one of the ways journals finance their operations.

The rise of the internet caused an unfortunate side effect in journal publishing: the appearance of predatory journals. These are journals that will happily publish your article on the web and collect a hefty fee for doing so, but they have no standing in their fields. They first started to appear around 2008, and thousands are now available. Predatory journals have the following traits:

  • The rapid acceptance of submitted articles with little or no peer review.

  • Academics who publish in the journal find out about fees only after the article is accepted.

  • Academics are sought for editorial boards using aggressive recruitment tactics. This includes adding some prominent scientists to editorial boards without their permission. Once you are on a board, the journals refuse to let you resign.

  • The journals spam scientists’ email inboxes, begging them to submit papers.

  • Predatory journals mimic the style of legitimate journals and use names that closely match them.

It’s no wonder that these publishers have fooled some academics. Because of the flood of predatory journals, lousy research is being published and polluting the scientific literature. Established publications, such as Science, Nature, and PLOS ONE, have taken up the mantle of exposing predatory journals with sting operations that can get quite comical. For example, in 2015 four researchers created a fictitious scientist named Anna O. Szust (oszust is Polish for “fraud”). Dr. Szust was a terrible candidate for an editor: she had never published a scientific paper, and her only writings were non-existent books from made-up publishers. Dr. Szust applied for an editor position at 360 journals. Some were suspected predatory journals, and some were control journals known to be legitimate. Of the 120 predatory journals she applied to, 40 accepted her. Of the 120 open-access journals, only eight accepted her, and none of the established journals did.

Academic groups have begun to catalog these predatory journals on websites that list them. Currently, an excellent one is Stop Predatory Journals (https://predatoryjournals.com/journals/). If you run across a research article published in a journal you are unfamiliar with, check it against these lists to make sure it is legitimate. There are thousands of authentic publishers, and it is easy to check their quality. Make sure you do it!

Be a skeptic, it’s good for you

Skepticism has gotten a bad name in our culture. Being skeptical has come to mean doubting or being cynical about a person or idea when, in reality, it means not being easy to convince. That is not a bad thing! You should scrutinize any new piece of information you come across and test its validity. Most people fall into the trap of accepting unquestioningly anything that validates their current views and rejecting, without investigation, anything that contradicts them. It is essential to fight this tendency. First, investigate any new piece of information. Be most skeptical of ideas that support your worldview, and do the work to make sure the data is sound. Second, and this is a hard one, if something challenges your worldview, be willing to approach it honestly. Keep your ego out of it. Finding out that your thinking on a topic is incorrect will often make you feel stupid. Everyone is wrong sometimes; get over yourself and work to be a humble learner. The truly brilliant mind is willing to change when presented with evidence. Decide what evidence you would need to change your mind. Make an honest effort and read up on the idea-challenging topic in trustworthy sources. If, at the end of your search, you find convincing evidence that you were wrong, don’t move the goalposts. In other words, don’t decide that you need another, more rigorous standard of proof before you admit defeat. Instead, change your views.


2 - 8 Some examples of ideas and the evidence for and against them

So, now you know that being awash in information is a double-edged sword. I have shown you the types of information at your fingertips, what’s most reliable and why, and I have given you a blueprint for how to manage it all and find the truth. We end this chapter by looking at a few examples that show you how to use these methods.

Vaccines and autism

We began this chapter by showing the harm caused by a small proportion of the population losing their trust in the MMR vaccine. Where did this mistrust originate? Autism Spectrum Disorder (ASD) is a devastating condition, and its symptoms typically appear between one and three years of age, which is also when many children are getting their initial vaccines. A diagnosis of ASD around the time of vaccination leads many parents to conclude that vaccination had some role in precipitating the autism. This concern was certainly worthy of investigation. No health authority would want to ignore the possibility that vaccines cause harm.

In 1998, Dr. Andrew Wakefield published a study examining children who had ASD symptoms and whose parents blamed the MMR vaccine for the problem. The paper dropped a bombshell. It suggested a link between the MMR vaccine, a new inflammatory bowel disease described in the study, and the onset of ASD. The article added fuel to a movement against vaccination that was beginning to gain a following. Vaccination rates in Britain dropped from 92% to below 80%, resulting in the return of measles epidemics in England and Wales. Fear of vaccination has since spread worldwide, and public health departments are in a battle with the anti-vaccine movement to get vaccination rates back up to protective levels.

None of this had to happen. The paper is remarkably flawed and should have never been accepted for publication. Let’s apply our recipe of truth and see if we can find the flaws.

First, what’s the sample size? The study group was small (n=12). A small group makes statistical analysis difficult, especially when drawing correlations between MMR, ASD, and a new bowel disease. This small sample size should have immediately raised concerns. In addition, after publication, investigators found that the authors cherry-picked the data included in the study to support their pet conclusions.

Second, this was just a correlation. While there was evidence for a new bowel disease, it was unclear how it could cause ASD. Even worse, the study claimed that ASD symptoms occurred immediately after the MMR injection, relying on the parents’ recollections that symptoms appeared after the vaccine. Later examination of the medical records indicated this wasn’t the case, with some ASD symptoms being mentioned by parents before vaccination and some occurring months later.

Third, look for flim-flam. GI inspections done by hospital pathologists reported the children’s GI tracts to be normal, but the study team later changed these findings to abnormal. These changes are very serious because they are an outright falsification of data.

Fourth, follow the money. The most damning revelation was that Wakefield had failed to disclose a financial conflict of interest. He had been funded by lawyers who were preparing a case against the manufacturer of the vaccine.

This single paper in The Lancet will go down in history as one of the worst examples of scientific fraud. Scientists spent years, and a great deal of time and money, investigating vaccines and autism. The overwhelming consensus of numerous studies is that there is no link or causal relationship. Governments wasted scarce medical research funding on a wild goose chase. Thousands of parents were frightened into not vaccinating their children, and many of those children suffered from preventable diseases. Decades later, we are still fighting the effects of this paper and the misinformation it spread. Some of the blame falls on the public. We, as a society, have to work harder; finding truth matters.

Probiotics

Thirty years ago, scientists thought the bacteria that lived on humans were mostly harmless and somewhat beneficial to the host. However, this benefit was principally from helping to train the immune system and taking up space that pathogens might otherwise occupy. An important function, but thought not to be that consequential. A few researchers started to do experiments with specific strains of bacteria. These strains were demonstrating a positive effect, but scientists not involved in the research were skeptical, and the field seemed to belong with acupuncture, aromatherapy, and divination. While my view wasn’t that harsh, I will admit to not recognizing the importance of this topic to human health.

In 2007, a landmark experiment by Sinead Corr and coworkers demonstrated that a Lactobacillus salivarius strain could protect mice against infection with Listeria monocytogenes. Listeria infection can cause a disease that, once established, can have a mortality rate as high as 50%. In this elegant series of experiments, the authors first showed that feeding mice L. salivarius for three days protected them from L. monocytogenes infection, decreasing the number of bacteria infecting the spleen and liver by over 10-fold. They then hypothesized that a small antimicrobial peptide made by L. salivarius, a bacteriocin, was the cause of the protection. Consistent with this, a mutant L. salivarius strain that was unable to make the bacteriocin no longer conferred protection against L. monocytogenes.

Let’s again apply our recipe of truth to this experiment.

What’s the sample size? The experiments used five mice per strain tested, so for many of the tests, that is 10 to 40 mice in total, a small number. However, given the size of the effect, a 10- to 100-fold decrease in L. monocytogenes infection, that is sufficient to give the results statistical power. They used the t-test, which requires a normal distribution, and judging from the error bars, the data met that requirement. The resulting p-values were in the 0.001 to 0.05 range, meaning there is at most a 5% chance that differences this large would arise by chance alone. So, the sample was big enough to show an effect. However, this alone is merely correlational: when L. salivarius is present, L. monocytogenes is inhibited. What makes this paper so good is that they took it to the next level.

What’s the cause? L. salivarius was known to secrete a bacteriocin, an antimicrobial peptide, and the researchers hypothesized that it was the cause of the inhibition. To test this idea, they mutated L. salivarius to create a strain that could no longer make the bacteriocin and then used it in the same experiment they had run with normal L. salivarius. The mutant strain was no longer protective against L. monocytogenes. They also purified the bacteriocin from L. salivarius and showed that the purified peptide could inhibit L. monocytogenes.

Follow the money. Who supported the research? At the end of the paper, the authors state that the government of Ireland provided funding for the work. I can see no reason that demonstrating this probiotic effect would be of any financial or other benefit to the researchers or the government of Ireland. There was no conflict of interest. In this paper, Corr and coworkers definitively demonstrated that microbes can have a probiotic effect and discovered its cause through some elegant experiments.

A large body of further research supports the existence of probiotics. Other microbes can prevent or modulate certain infections. Probiotic bacteria also stimulate the immune system, aid digestion, and modulate the pH of the gut. The microbiome inhibits inappropriate inflammation (when the immune system attacks harmless factors in the body). The human microbiome’s effect on inflammation can even have a protective effect on bones.

As one example of an important bacterium, a newly discovered anaerobe, Akkermansia muciniphila, is a critical part of our gut microbiome. It is one of a handful of bacteria found in essentially all humans and constitutes 0.5 to 5% of the bacteria found in the intestines. The presence of A. muciniphila has numerous protective effects. People in several unhealthy states (obesity, type 2 diabetes, inflammatory bowel disease, hypertension, and liver disease) carry lower concentrations of A. muciniphila. Also, several interventions that are known to help treat diabetes, such as metformin administration and bariatric surgery, increase the population of A. muciniphila. There is a large body of evidence suggesting that the presence of this bacterium can protect against metabolic disorders and heart attacks. Your body even recruits this microbe to your gut by producing mucin, its preferred food. Thus, the bacterium does not rely on your diet for its nutrients.

Many of the positive effects of A. muciniphila stem from its ability to prevent weight gain. Feeding mice a diet that mimics a human western diet, high in fat and low in fiber, will cause them to gain excess weight much as humans do. However, if one group also consumes A. muciniphila, they have a 50% lower body weight gain when compared to a control group. The presence of A. muciniphila restores mucus production, causes the production of antimicrobial peptides, and increases the production of anti-inflammatory lipids. Also, killing the microbe by autoclaving eliminates its protective effects, which indicates that the bacterium has to be active and growing to have its positive impact. How A. muciniphila causes these responses in the intestine is still unknown, but the development of a probiotic may be coming soon.

A large body of work has shown that there are numerous probiotic bacteria. Proper administration of probiotics could bring potential health benefits. Food manufacturers have taken notice and jumped on the probiotic bandwagon hawking products of dubious quality. Lactobacillus and Bifidobacterium are the most popular genera today for formulating probiotic foods, but it is unclear if these formulations are beneficial. If you are interested in probiotics, and I do think there are reasons to be, be wary and choose those products that have actual scientific experiments behind them.

Exercise will not help you lose weight

In the United States, two-thirds of the adult population is overweight, with one-third being obese, making it a vital public health concern. It is not just the US; over 600 million adults, and 100 million children, are obese worldwide. Obesity increases risk factors for cardiovascular disease, type 2 diabetes, high blood pressure, and joint problems. Medical costs associated with obesity are estimated to be $140 billion annually. The military even sees the obesity epidemic as a security risk, turning away many recruits because they weigh too much. America needs to lose weight!

To combat this problem, health experts have recommended more physical activity and a healthier diet containing more fruits and vegetables and less processed food, but is it that simple? Ironically, headlines in the last few years in Vox, The Washington Post, The Telegraph, and The LA Times scream that exercise does not help you lose weight. Several studies in the last decade have found only a weak link between physical activity and weight loss, and from this, many have concluded that exercise is useless for losing weight. What discouraging news, but is it true? It seems counter-intuitive that burning more calories does not help. What’s going on?

If you spend some time digging into these studies, especially their methods, you find some critical caveats. First, the research involves humans, and controlling confounding variables is much more challenging than in animal models. Imagine you want to study the effect of adding a walking regimen to a program where overweight individuals are trying to lose weight. You want to know each person’s food intake and then split them into groups where one group walks and the other does not. In most cases, food intake is measured by surveys, which raises two issues. One, you depend upon the participants to measure the food they eat accurately. Since food labels themselves can have caloric errors of up to 20%, this is harder than many people anticipate. Two, you are trusting participants to be honest and chart everything. They may forget to log the cookie they had at a work meeting; that could be an extra 100 calories, and such omissions add up. You are also counting on people to exercise, and the best studies directly supervise the exercise sessions to verify participation. A perfect study would house the participants for its entire duration and monitor every morsel they consumed, but that kind of experiment is prohibitively expensive.

Another problem with many of these studies is that physical activity can mean anything from gardening, to walking, to high-intensity interval training. Studies have used walking (>8,000 steps a day), aerobic exercise (heart rate at 60% of maximum), high-intensity aerobic exercise (70% or more of maximum heart rate), and resistance training (challenging your muscles to move things). Not surprisingly, how your body responds to exercise depends upon the exercise that you do. Many folks burn 200-500 calories in a workout and are hungry when they get done. If you immediately eat the wrong thing, say a jelly donut, you have wiped out any gain the exercise gave you. Caloric compensation is especially a problem with moderate exercise. If you walk or jog lightly, your body will empty the contents of your stomach in preparation for more food. In contrast, if you exercise at 70% effort, stomach emptying is delayed, and thus so is hunger. Most studies use moderate exercise and hence make it harder for participants to resist eating.

If you chart each person in a weight-loss study, instead of focusing on the overall average, you see wide variation in success (Figure 2.10). In one study, the average weight loss was 3.8 kg (8.3 pounds) in 12 weeks. That is excellent progress. However, if you chart the individual results, you can see a wide range of outcomes, from losing 14.3 kg to gaining 3 kg. Even in closely monitored studies using intense exercise (>70% effort), some folks lose weight while others gain it! Why? Investigation of these non-responders found a series of compensatory behaviors. They would eat more calories to make up for the ones spent on exercise, or they would decrease their regular activity outside of the exercise sessions, so their overall daily calorie expenditure did not increase much.

Figure 2.10. Weight loss of individuals in a twelve-week weight reduction program. The success of most weight-loss programs is highly dependent upon the individual. In this study, researchers closely monitored the amount of exercise, but nothing else. Data adapted from Figure 1 of King NA, Hopkins M, Caudwell P, Stubbs RJ, Blundell JE. 2009. Beneficial effects of exercise: Shifting the focus from body weight to other markers of health. Br J Sports Med 43:924–927.
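The point that an average can hide wide individual variation is easy to demonstrate with a quick calculation. The sketch below uses made-up placeholder numbers, not the actual data from King et al.; only the idea, a mean loss near 3.8 kg spanning responders and non-responders, is taken from the text.

```python
# Hypothetical weight changes (kg) for ten participants in a 12-week program.
# Negative values are weight lost; positive values are weight gained.
# These numbers are illustrative only.
changes = [-14.3, -9.1, -6.5, -5.2, -3.8, -2.4, -1.0, 0.0, 1.2, 3.0]

mean_change = sum(changes) / len(changes)
print(f"Mean change: {mean_change:.1f} kg")        # prints -3.8, a healthy-looking average loss
print(f"Best responder: {min(changes)} kg")        # -14.3
print(f"Worst responder: {max(changes):+} kg")     # +3.0, this participant gained weight
```

The same mean of roughly 3.8 kg lost is consistent with very different individual stories, which is exactly why charting each participant, as in Figure 2.10, is more informative than reporting the average alone.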

The take-home message here is that success in losing weight with exercise is highly variable. The popular press unfortunately latched on to the people who failed. The scientific studies showed a range of weight loss, but this subtlety got lost in translation to the general public. To be fair to the press, many of the best articles did point out that exercise alone will not get you to your weight-loss goals. Research shows that long-term, consistent efforts, where a person exercises, eats well, and works on the psychological aspects of eating, are what succeed. A person who wants to lose weight and keep it off learns to resist the temptation of food and shows discipline and restraint in their eating habits. Indeed, you cannot outrun a bad diet. It is still an open question which exercises, and at what intensity, are best for weight loss, but exercise does play an important role. No matter what current findings indicate about weight loss, higher rates of physical activity have numerous physical and mental health benefits and are worth maintaining.

This example demonstrates that it is essential to see past the headlines and dig into the details. If you read only one press article on the topic, you may walk away seeing little benefit to exercise. However, if you read more widely, you will find numerous well-written articles; the one in Vox is exceptionally good and gives a more balanced view. If you then investigate weight loss in the primary literature, you will develop a much more nuanced view and even learn behaviors that could make your weight-loss journey a success. (Although, I am not saying you need to lose weight. You look great!)

I hope this chapter has shown you a toolset you can apply to any subject you are learning. Be curious, be skeptical, be open to change, read widely, and read critically.

Most importantly, think well and think for yourself. Don’t let anyone make up your mind for you. The world needs independent thinkers.