
Identify Confounding Variables

A confounding variable is a characteristic that was not included or controlled for in the study but that can influence the results. That is, the real effects due to the treatment are confounded, or clouded, by this variable.

For example, if you select a group of people who take vitamin C daily, and a group who don't, and follow them all for a year's time, counting how many colds they get, you might notice the group taking vitamin C had fewer colds than the group who didn't take vitamin C. However, you cannot conclude that vitamin C reduces colds. Because this was not a true experiment but rather an observational study, there are many confounding variables at work. One possible confounding variable is the person's level of health consciousness; people who take vitamins daily may also wash their hands more often, thereby heading off germs.

How do researchers handle confounding variables? Control is what it's all about. Here you could pair up people who have the same level of health consciousness and randomly assign one person in each pair to take vitamin C each day (the other person gets a fake pill). Any difference in the number of colds found between the groups is more likely due to the vitamin C than it was in the original observational study. Good experiments control for potential confounding variables.
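If you like to see ideas as code, here's a minimal sketch (not from the book) of that pairing-and-randomizing step in Python. The names and pairings are made up; the point is simply that a coin flip, not the researcher, decides who gets the vitamin C within each matched pair.

```python
# A minimal sketch of matched-pairs random assignment, assuming each pair
# holds two people with the same level of health consciousness.
import random

pairs = [("Ann", "Bob"), ("Cara", "Dan"), ("Eve", "Frank")]  # hypothetical pairs

for person_a, person_b in pairs:
    # Flip a fair coin: one member of the pair gets vitamin C, the other a placebo.
    if random.random() < 0.5:
        vitamin_c, placebo = person_a, person_b
    else:
        vitamin_c, placebo = person_b, person_a
    print(f"Vitamin C: {vitamin_c:6s}  Placebo: {placebo}")
```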

Assess Data Quality

To decide whether or not you're looking at credible data from an experiment, look for these characteristics:

Reliability:
Reliable data get repeatable results with subsequent measurements. If your doctor checks your weight once and you get right back on the scale and see it's different, there is a reliability issue. Same with blood tests, blood pressure and temperature measurements, and the like. It's important to use well-calibrated measurement instruments in an experiment to help ensure reliable data.

 

Unbiasedness:
Unbiased data contain no systematic favoritism of certain individuals or responses. Bias is caused in many ways: by a bad measurement instrument, like a bathroom scale that's sometimes 5 pounds over; by a bad sample, like a drug study done on adults when the drug is actually taken by children; or by researchers who have preconceived expectations for the results ("You feel better now after you took that medicine, don't you?").

 

Bias is difficult, and in some cases even impossible, to measure. The best you can do is anticipate potential problems and design your experiment to minimize them. For example, a double-blind experiment means that neither the subjects nor the researchers know who got which treatment or who is in the control group. This is one way to minimize bias by people on either side.

 

Validity:
Valid data measure what they are intended to measure. For example, reporting the prevalence of crime using the number of crimes in an area is not valid; the crime rate (number of crimes per capita) should be used because it factors in how many people live in the area.
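To see why the raw count falls short, here's a quick back-of-the-envelope calculation in Python with made-up numbers for two hypothetical areas:

```python
# Made-up numbers: raw crime counts versus crime rates per capita.
areas = {
    "Big City":   {"crimes": 5000, "population": 1_000_000},
    "Small Town": {"crimes": 200,  "population": 10_000},
}

for name, info in areas.items():
    rate = info["crimes"] / info["population"]  # crimes per person
    print(f"{name}: {info['crimes']} crimes, "
          f"{rate * 1000:.1f} crimes per 1,000 residents")

# Big City has far more crimes in total, but Small Town's per-capita rate is higher.
```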

 

Check Out the Analysis

After the data have been collected, they're put into that mysterious box called the statistical analysis. The choice of analysis is just as important (in terms of the quality of the results) as any other aspect of a study. A proper analysis should be planned in advance, during the design phase of the experiment. That way, after the data are collected, you won't run into any major problems during the analysis.

As part of this planning you have to make sure the analysis you choose will actually answer your question. For example, if you want to estimate the average blood pressure for the treatment group, use a confidence interval for one population mean (see Chapter 7). However, if you want to compare the average blood pressure for the treatment group versus a control group, you use a hypothesis test for two means (see Chapter 8). Each analysis has its own particular purpose; this book hits the highlights of the most commonly used analyses.
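As a rough illustration (not from the book), here's how those two analyses might look in Python using SciPy. The blood pressure readings are made up, and the function calls (stats.t.interval, stats.ttest_ind) are standard SciPy routines, not anything specific to this study.

```python
# Matching each analysis to its question, with hypothetical blood pressure data.
import numpy as np
from scipy import stats

treatment = np.array([128, 131, 119, 125, 133, 122, 127, 130])  # made-up readings
control   = np.array([135, 140, 129, 138, 133, 141, 136, 139])  # made-up readings

# Question 1: estimate the average blood pressure for the treatment group
# -> confidence interval for one population mean (Chapter 7)
mean = treatment.mean()
ci_low, ci_high = stats.t.interval(0.95, len(treatment) - 1,
                                   loc=mean, scale=stats.sem(treatment))
print(f"Treatment mean {mean:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")

# Question 2: compare the treatment group to the control group
# -> hypothesis test for two means (Chapter 8)
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"Two-sample t = {t_stat:.2f}, p = {p_value:.4f}")
```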

You also have to make sure that the data and your analysis are compatible. For example, if you want to compare a treatment group to a control group in terms of the amount of weight lost on a new (versus an existing) diet program, you need to collect data on how much weight each person lost (not just each person's weight at the end of the study).

Scrutinize the Conclusions

Some of the biggest statistical mistakes are made after the data have all been collected and analyzed. When it's time to draw conclusions, some researchers get it all wrong. The three most common errors in drawing conclusions are the following:

Overstating their results

 

Making connections or giving explanations that aren't backed up by the statistics

 

Going beyond the scope of the study in terms of whom the results apply to

 

Overstated results

When you read a headline or hear about the big results of the latest study, be sure to look further into the details of the study — the actual results might not be as grand as what you were led to believe. For example, suppose a researcher finds a new procedure that slows down tumor growth in lab rats. This is a great result but it doesn't mean this procedure will work on humans, or will be a cure for cancer. The results have to be placed into perspective.

Ad-hoc explanations

Be careful when you hear researchers explaining why their results came out a certain way. Some after-the-fact ("ad-hoc") explanations for research results are simply not backed up by the studies they came from. For example, suppose a study observes that people who drink more diet cola sleep fewer hours per night on average. Without a more in-depth study, you can't go back and explain why this occurs. Some researchers might conclude the caffeine is causing insomnia (okay…), but could it be that diet cola lovers (including yours truly) tend to be night owls, and night owls typically sleep fewer hours than average?

Generalizing beyond the scope

You can only draw conclusions about the population that's represented by your sample. If you want to draw conclusions about the opinions of all Americans, you need a random sample of Americans. If your random sample came from a group of students in your psychology class, however, then the opinions of your psychology class are all you can draw conclusions about.

Some researchers try to draw conclusions about populations that have a broader scope than their sample, often because true representative samples are hard to get. Find out where the sample came from before you accept broad-based conclusions.

Chapter 14: Ten Common Statistical Mistakes

In This Chapter

Recognizing common statistical mistakes

Avoiding these mistakes when doing your own statistics

This book is not only about understanding statistics that you come across in your job and everyday life; it's also about deciding whether the statistics are correct, reasonable, and fair. After all, if you don't critique the information and ask questions about it, who will? In this chapter, I outline some common statistical mistakes made out there, and I share ways to recognize and avoid those mistakes.

Misleading Graphs

Many graphs and charts contain misinformation, mislabeled information, or misleading information, or they simply lack important information that the reader needs to make critical decisions about what is being presented.

Pie charts

Pie charts are nice for showing how categorical data are broken down, but they can be misleading. Here's how to check a pie chart for quality:

Check to be sure the percentages add up to 100%, or close to it (any round-off error should be small).

 

Beware of slices labeled "Other" that are larger than the rest of the slices. This means the pie chart is too vague.

 

Watch for distortions with three-dimensional-looking pie charts, in which the slice closest to you looks larger than it really is because of the angle at which it's presented.

 

Look for a reported total number of individuals who make up the pie chart, so you can determine "how big" the pie is, so to speak. If the sample size is too small, the results are not going to be reliable.
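If you have the chart's numbers handy, the first two checks take only a few lines of Python. The slice percentages below are made up for illustration:

```python
# Hypothetical pie-chart slices; the checks mirror the first two bullets above.
slices = {"Brand A": 38.0, "Brand B": 27.5, "Other": 34.5}  # percentages

total = sum(slices.values())
print(f"Slices add up to {total:.1f}%")  # should be 100%, give or take round-off
if abs(total - 100) > 1:
    print("Warning: percentages are off by more than round-off error")
if slices.get("Other", 0) >= max(slices.values()):
    print("Warning: 'Other' is the biggest slice; the chart may be too vague")
```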

 

Bar graphs

A bar graph breaks down categorical data by the number or percent in each group (see Chapter 3). When examining a bar graph:

Consider the units being represented by the height of the bars and what the results mean in terms of those units. For example, the total number of crimes versus the crime rate (total number of crimes per capita).

 

Evaluate the appropriateness of the scale, or amount of space between units expressing the number in each group of the bar graph. Small scales (for example, going from 1 to 500 by 10s) make differences look bigger; large scales (going from 1 to 500 by 100s) make them look smaller.
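To see the scale effect for yourself, here's a small sketch using matplotlib with made-up counts: the same two bars are drawn twice, once on a narrow scale and once on a scale that starts at zero.

```python
# The same made-up counts plotted twice: a narrow axis range exaggerates the
# difference between groups, while a scale starting at zero flattens it.
import matplotlib.pyplot as plt

groups = ["Group A", "Group B"]
counts = [480, 500]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(groups, counts)
ax1.set_ylim(470, 505)       # narrow scale: the 20-unit gap looks huge
ax1.set_title("Narrow scale")
ax2.bar(groups, counts)
ax2.set_ylim(0, 500)         # scale starting at zero: the gap looks small
ax2.set_title("Full scale")
plt.tight_layout()
plt.show()
```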

 

Time charts

A time chart shows how some measurable quantity changes over time, for example, stock prices (see Chapter 3). Here are some issues to watch for with time charts:

Watch the scale on the vertical (quantity) axis as well as the horizontal (timeline) axis; results can be made to look more or less dramatic by simply changing the scale.

 

Take into account the units being portrayed by the chart and be sure they are equitable for comparison over time; for example, are dollars being adjusted for inflation?

 

Beware of people trying to explain why a trend is occurring without additional statistics to back themselves up. A time chart generally shows what is happening. Why it's happening is another story.
