Evaluating & Interpreting Research
The Humane Research Council provides the HumaneSpot.org database to animal advocates for several purposes. Studies on animal well-being and behavior and human-animal interactions can be used to document campaign talking points. Marketing and psychology studies can help advocates understand how to best reach their audiences. We also include industry research, which advocates may wish to critique. But how can a person with no particular training understand and evaluate complex research projects?
First, let's briefly review the underlying process of research: the scientific method. Generally speaking, the process goes something like this:

1. Observe something and ask a question about it.
2. Form a hypothesis, a testable proposed answer to the question.
3. Design a study to test the hypothesis, and predict the outcome.
4. Collect and analyze the data.
5. Draw conclusions: did the results support the prediction, contradict it, or leave the question open?
6. Share the results so that others can review and repeat the work.
The Sections of a Research Article
An article about a study will usually include some variation on the following sections:
Introduction
This section puts the study in context. It usually explains why it would be useful to answer the question that is being researched. It often discusses previous work that may have refined the current question or suggested the hypothesis that is being tested. External factors that precipitated the study, such as legislative or regulatory requirements, or community pressures and needs, may also be described.
Methodology
This section describes each aspect of the study in detail, including study subjects, instruments and equipment used, measuring procedures, and any testing or analysis that was performed in preliminary stages of the study to help finalize the design.
Results
This section provides the data that resulted from the study, often in the form of tables or graphs. It may also describe statistical tools that were used to correct, adjust, crosscheck, or otherwise validate the integrity of the data.
Discussion/Conclusions
This is the section that interprets the data. A good Discussion/Conclusions section will briefly restate the original hypothesis and the outcome that was predicted. It will then review the data in that context. Alternate explanations for any part of the results should be noted. Limitations in the design of the study that became apparent during the research should also be considered. Finally, additional questions raised by the results may be suggested as areas for future research.
How Can I Tell Whether This Research is Credible?
Here are some things to look for:
Study Design. Was a specific question asked to begin with, and was this question evaluated in the conclusion? (This helps prevent researchers from reframing the results to fit their biases.)
Survey Questions. Generally speaking, all questions used in the study should be disclosed so you can review the language. Questions should be brief, unambiguous, and grammatically simple; they should use neutral language that does not encourage one response over another, avoid absolute terms, and ask about only one thing at a time. Multiple choice answers should not overlap, and should offer at least one option for everyone (e.g., choices for "What's your favorite color?" should include "I don't have a favorite" as well as a list of colors). Watch out for embedded assumptions, in which any answer affirms a statement in the question (for example, "In the list below, please check all of the clubs that you went to on the day you robbed the bank.").
Subjects. If the study involved subjects, was the subject group a good match for the study goals? Many social science studies, for example, use college students, since there are lots of them handy where studies are performed. However, if you want to study how people handle retirement, students are not a demographically appropriate subject pool. If a study is exploring attitudes or behaviors across the general population, make sure the subject group includes a corresponding mix of age, gender, race, economic status, education level, or any other characteristic that is relevant to the question being posed.
Full Disclosure. Both the methodology and the results should be recorded in detail and shared in the published report. Any statistical adjustments made to the data and the justifications for them should also be fully disclosed.
Peer Review. This can be valuable, but it's not a guarantee. There are many factors that can influence how peers evaluate research, and not all of them are related to the integrity of the data. It can also be difficult to determine whether a study has been peer-reviewed or not - this article explains why. While the advent of online publication has broadened and complicated the world of research publication, it should be noted that print publication has not been exempt from similar issues.
Double-Blinds and Controls. A control group is a group that closely matches the subject group but is not subjected to the experiment, so that researchers have a baseline to compare their results to. In a drug test, for example, the control group will receive a placebo. A "double-blind" experiment is one in which information about the experiment is withheld from researchers who are performing it, so that their knowledge can't bias the results. If they were administering the above drug test, for example, they would not know who received the placebo, and who received the experimental drug. Both the control and double-blind concepts can also be used in experiments without subjects. Not all types of experiments can or should include these two components, but they are essential to many types of studies. Apply common sense. Are the results meaningful without a baseline for comparison? Was double-blinding possible? Who collected the data? Were the implications of measurements upon the outcome obvious at the time the measurements were made? Is bias adjusted for in statistical analysis? Are alternate interpretations offered in the discussions section?
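To make these two ideas concrete, here is a minimal Python sketch of a double-blind, placebo-controlled design. The participant IDs and outcome numbers are entirely made up for illustration; the point is the structure, in which the assignment key is kept separate from the people recording the data.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Randomly assign 20 hypothetical participants to two groups.
participants = [f"P{i:02d}" for i in range(20)]
random.shuffle(participants)
treatment, control = participants[:10], participants[10:]

# The assignment key is held by a third party, NOT by the researchers
# who administer the pills or score the outcomes (the "double blind").
key = {**{p: "drug" for p in treatment}, **{p: "placebo" for p in control}}

# Researchers record outcomes knowing only the coded IDs.
# (These outcome scores are illustrative random numbers, not real data.)
outcomes = {p: random.random() for p in participants}

# Only after all outcomes are recorded is the key used to compare the
# treatment group against the control (placebo) baseline.
drug_mean = sum(outcomes[p] for p in treatment) / len(treatment)
placebo_mean = sum(outcomes[p] for p in control) / len(control)
```

Without the control group, there is no baseline to compare `drug_mean` against; without the separate key, the people scoring outcomes could unconsciously favor the group they knew received the drug.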
Response Rate. If the response rate is low, the degree to which the respondents represent the target subjects as a whole comes into question. It is possible for studies with low response rates to produce results as accurate as, or even more accurate than, studies with high response rates. However, this only occurs when those who participate represent the full range of possible responses, in the same proportions as the larger group. Unfortunately, there is usually no way to know whether this is the case.
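A small arithmetic sketch, using hypothetical numbers, shows how a low response rate can mislead when the people who respond are not representative:

```python
# Hypothetical town: 1000 voters, 600 of whom (60%) support a ballot measure.
population_size = 1000
true_supporters = 600
true_rate = true_supporters / population_size  # 0.60

# Suppose only 100 people answer the survey (a 10% response rate), and
# strong supporters are much more likely to respond than everyone else:
# 80 of the 100 respondents turn out to be supporters.
respondents = 100
responding_supporters = 80
estimated_rate = responding_supporters / respondents  # 0.80

# The survey overestimates support by 20 percentage points.
bias = estimated_rate - true_rate
```

The same 10% response rate would have been harmless if respondents had mirrored the town's true 60/40 split; the problem is not the low rate by itself but the unrepresentative self-selection that often comes with it.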
Cause or Correlation? If the study seeks to establish a connection between one thing and another, consider what kind of connection has been established. Cause and correlation are often confused. A causal relationship is where A causes B. Correlation is where A and B occur together, but tells us nothing about why. It could probably be documented that a high percentage of people with dental cavities have chairs in their homes, but would that prove that chairs cause cavities? Or even that cavities and chairs are related in some way? This is where the control group comes in. If a comparable percentage of similar but cavity-free control group members also have chairs in their homes, we can see that chairs neither cause nor are correlated with cavities.
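The chair-and-cavity example can be checked with simple arithmetic. The counts below are hypothetical, but they show what "no association" looks like in data: the same chair-ownership rate in both groups.

```python
# Hypothetical survey: chair ownership among 200 people with cavities
# versus a cavity-free control group of the same size.
cavity_group = {"n": 200, "have_chairs": 190}
control_group = {"n": 200, "have_chairs": 190}

rate_cavity = cavity_group["have_chairs"] / cavity_group["n"]      # 0.95
rate_control = control_group["have_chairs"] / control_group["n"]   # 0.95

# If the rates are (nearly) equal in both groups, chairs are not
# associated with cavities, despite being common among cavity-havers.
associated = abs(rate_cavity - rate_control) > 0.05
```

Note that even a genuine difference between the groups would only establish correlation, not cause; it would take further work to rule out coincidence and confounding factors.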
Follow the Money. Obviously, it is relevant to check how research was funded. Businesses or industry advocacy organizations that fund research have an interest in the outcome that may influence the research, intentionally or otherwise. You may choose to give more weight to studies that are funded by those with no apparent commercial interest in a particular outcome, but bear in mind that such interests aren't always obvious. Many universities receive large corporate grants from private industries. Government grantmaking can reflect political climates. It is even possible that overzealous animal advocates can commission research which may not stand up to close scrutiny, something to be conscious of when selecting documentation for your positions.
Researcher Bias. Researchers, like other human beings, bring their personal beliefs to anything they do. Having opinions should not be confused with producing biased results, however. Researchers of integrity, whose top priority is to produce an accurate answer (whatever it turns out to be), may be found on every side of a debate. Such researchers will use techniques like double-blinding and control groups, discuss discrepancies between the prediction and the outcome, and consider alternate interpretations of the data.
Statistical Significance. You will often see a "p value" in an article abstract, or in the methodology section of an article. The p value is the probability of obtaining results at least as extreme as those observed if chance alone were at work. A p value of 0.05 (5/100ths) means that, if the effect being tested did not really exist, results like these would still turn up about 5% of the time purely by chance. In the social sciences, a result with a p value below 0.05 is conventionally considered statistically significant. The lower the p value, the harder it is to explain the results by chance alone. (Note that a low p value does not prove the conclusion is correct; it only makes pure chance a less plausible explanation.) Where the stakes are high, as in medical research, the thresholds used may be much stricter than in social science research.
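As a concrete illustration (a simple coin-flip example of my own, not one from any particular study), a p value can sometimes be computed exactly. Suppose a coin is flipped 100 times and comes up heads 60 times. How surprising would that be if the coin were actually fair?

```python
from math import comb

# One-sided p value: the probability of seeing 60 or MORE heads in
# 100 flips of a fair coin (the "chance alone" scenario).
n, observed_heads = 100, 60
p_value = sum(comb(n, k) for k in range(observed_heads, n + 1)) / 2**n

# p_value is about 0.028, under the conventional 0.05 threshold, so a
# result this extreme would ordinarily be called statistically significant.
```

Even here, "significant" only means the result would be unusual for a fair coin, not that the coin is certainly biased; roughly 3 fair coins in 100 would produce a run this lopsided anyway.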
What If I Just Don't Know Enough to Judge?
It may be difficult for readers without training in statistical analysis to evaluate the mathematical methodology of a study, which can be at the core of its validity. This critique of a bird predation article, by feral cat advocate Peter J. Wolf, demonstrates how familiarity with the figures used in a particular field can be essential to accurately evaluating research. When you have to rely on someone else's judgment, look for someone who has been following the subject for years, who questions assumptions, and who acknowledges well-researched findings (and criticizes poor ones) regardless of whether they align with their own position.