Summary of Weisberg (2010), Chapter 1: What is Bias? in: Bias and Causation, Models and Judgment for Valid Comparisons


Causes? What causes? What causes what? And why should we be concerned? As (aspiring) social work researchers, our concern for the well-being of our clients leads us to invest passion, time, and energy in examining the outcomes of social phenomena and in investigating their possible reasons and causes. We conduct research that examines theoretically aligned variables and analyze the acquired data to report conclusions about the phenomenon under study. But what happens when the analysis shows a significant association between “A” and “B” (indicating a possible causal relationship) and yet, on a practical level, something seems ‘off’? What if something else also had an impact? Weisberg’s (2010) chapter, What is Bias?, discusses the meaning and significance of bias, the complexities surrounding its detection, and some of its implications within existing studies.

Weisberg refers to bias as “… the extent to which a particular measure of a causal effect has been systematically distorted”. In other words, bias casts doubt on the validity of the claimed causal effect. This doubt may result from problems with the design of a study, with the way it was conducted, and/or with the way its data were analyzed, which Weisberg refers to as methodological bias. Because these errors are systematic rather than random, the statistical procedures used to manage random variability, such as significance testing, confidence intervals, and regression modeling, cannot correct for them. A biased study is one whose “…methods employed have resulted in systematic error in the estimation of a causal effect”.
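The distinction between systematic and random error can be made concrete with a small, purely illustrative simulation (the effect size and bias below are invented numbers, not from Weisberg): random noise averages away as the sample grows, but a systematic distortion does not.

```python
import random

random.seed(0)

TRUE_EFFECT = 2.0  # hypothetical true causal effect (invented for illustration)
BIAS = 0.5         # hypothetical systematic distortion, e.g. a flawed measure

def estimate(n):
    """Average n observations that carry both random noise and a fixed bias."""
    samples = [TRUE_EFFECT + BIAS + random.gauss(0, 1) for _ in range(n)]
    return sum(samples) / n

# Random error shrinks as n grows, but the systematic offset remains:
print(abs(estimate(100) - TRUE_EFFECT))      # noisy, roughly BIAS give or take
print(abs(estimate(200_000) - TRUE_EFFECT))  # still roughly BIAS, not 0
```

This is why a significance test or confidence interval, which only quantifies the random part of the error, offers no protection against a biased design: collecting more data makes the biased estimate more precise, not more correct.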

Weisberg’s chapter discusses how classical statistical methods were originally devised to address random error, and it highlights some assumptions inherent in the current statistical paradigm: a “…hypothetical infinite population”, a normal distribution of variables, and, most importantly, that “the probability distribution is regarded as stable, reflecting a fixed set of underlying conditions”. This assumed stability allows us to predict conditional probabilities; for example, one could predict future satisfactory quality of life based on our current understanding of the conditions that positively affect quality of life. Prediction also plays an important role in causal inference; however, causal inference relies not on stable conditions but on what “…systemic alteration would occur if the circumstances were to change in a specific manner”. The main point of Weisberg’s discussion is that although “…traditional statistical methods can … reveal… association”, they cannot determine how change in one variable causes change in another. Weisberg’s intention for the chapter is not to “solve the problem of bias…” but to make the point that what are needed are “…data-analytic tools… that will facilitate the exercise of logic and scientific judgment to reach conclusions that are supported by the weight of the available evidence”.
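To illustrate why revealing an association is not the same as establishing causation, here is a minimal, hypothetical sketch (all variable names and values are simulated for illustration): a third variable C drives both A and B, so A and B are strongly correlated even though neither has any causal effect on the other.

```python
import random

random.seed(1)

n = 50_000
# A hidden confounder C influences both A and B; A has no effect on B.
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 1) for c in C]
B = [c + random.gauss(0, 1) for c in C]

def corr(x, y):
    """Pearson correlation, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = (sum((a - mx) ** 2 for a in x) / len(x)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / len(y)) ** 0.5
    return cov / (sx * sy)

# A and B are strongly associated despite zero causal effect between them.
print(corr(A, B))
```

A traditional analysis of A and B alone would report a strong, statistically significant association; only knowledge of the conditions that generated the data, here the confounder C, reveals that intervening on A would not change B.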

Conducting research that validates causal claims is complicated, yet the implications of such studies carry great weight and are hugely important in informing policy, programming, and service delivery to vulnerable members of society. Weisberg’s chapter offers examples of studies that contain fundamental methodological biases and that, in my opinion, highlight some grave concerns, and hence responsibilities, of researchers:

1) Unless we understand the complexities of designing, conducting, and analyzing research, we will unfailingly take research results at face value, as did the general population aware of the study Evaluating the Efficacy of Antityphoid Vaccine. Furthermore, given that the general population does not have research design knowledge (my assumption), it is easy (although highly unethical, in my opinion) for researchers to omit a discussion of known biases that might alter their desired research outcomes. Although no harm resulted from the continued and undisputed use of the antityphoid vaccine, given that it is often the (uninformed) general public who is the intended audience for research, researchers have an ethical obligation to produce research that is as free of bias as possible.

2) Statistically sophisticated evidence does not an argument make. Such was the case before the US Supreme Court, which declined to overturn a death sentence alleged to result from disproportionate and discriminatory bias. In McCleskey v. Kemp, described in the study Racial Disparities in Death Sentencing, “…issues of potential bias were dissected in great depth from both a statistical and a legal perspective…”. Despite the in-depth “dissection”, however, important selection and measurement biases remained, and these created enough doubt that, after several further appeals, Mr. McCleskey lost his legal battle and was executed.

3) Big names equate to truth. Despite tenuous epidemiological evidence about the causal effect of phenylpropanolamine (PPA) on hemorrhagic stroke, the Food and Drug Administration (FDA) commissioned “a large-scale case-control” study from Yale University. Following the conclusion of the ensuing Hemorrhagic Stroke Project (HSP) that “PPA in appetite suppressants, and possibly in cold and cough remedies is an independent risk factor for hemorrhagic stroke in women”, the FDA deemed PPA unsafe for “over-the-counter use”; manufacturers were urged to remove it from their products, and lawsuits from individuals who had suffered strokes ensued. However, “biostatistics experts” later identified several biases that cast doubt on the results of the initial study, though not before some damage was done. The product has since been removed from the market, but perhaps this case highlights how easily ‘we’ believe ‘big, reputable’ names.

The bottom line is that biases can greatly distort estimates of causal effects, yet they are difficult to detect and present in virtually all research. Given the great impact research has on social work intervention and resources, and on the general public, it is imperative to conduct research that is as free of bias as possible, even though eliminating bias entirely is likely impossible.
