Week 6: Understanding and Interpreting Effect Size Measures (LeCroy & Krysik, 2007)


Introduction

Effect size: an index of the magnitude (strength) of a relationship that is not directly affected by sample size.

-Used for a) power estimation, b) sample size determination, and c) interpreting findings.

No social work (SW) journals are known to require the reporting of effect sizes.

Different statistical measures call for different effect size indices, so reporting practices vary across studies.

Hence, this paper will help the reader understand:

a) What the effect size means

b) How they differ

c) How to present outcomes for easier interpretation (the paper gives special focus to this point).

This matters because SW researchers are increasingly asking for effect sizes.

The Basics

Effect size (ES): the magnitude of the effect, i.e., whether a result is practically significant.

It is an error to rely on the p value alone, which indicates only the likelihood that a finding is due to chance or sampling error.

We should use p values as “guidance rather than sanctification”.

A common error: when the sample is small, the author falsely concludes “no difference” when a large effect size is in fact present, indicating a meaningful difference.

Instead, one should replicate the study with a larger sample size.

The reverse also occurs: a significant result (with a large sample) can yield a small effect size of little practical importance.

Unlike the p value (a significance test), an effect size a) is independent of sample size, b) is expressed in standardized units, making it easier to compare across studies, and c) shows the magnitude of the difference.

Different Measures of Effect Size: Two Main Types (d-based and r-based).

1) Cohen’s d (standardized mean difference): the most common measure. The difference between the two means expressed in terms of their common (pooled) standard deviation.

E.g., d = .66 means that two-thirds of a standard deviation separates the two means. A positive d indicates improvement; a negative d indicates deterioration.

Used most often in meta-analyses.

Expresses the difference between the treatment and control groups in standard deviation units.

Can exceed 1. Can be calculated from r as d = 2r / √(1 − r²).
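To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not from the paper; the scores and function names are hypothetical):

```python
# A minimal sketch (not from the paper): computing Cohen's d from raw scores
# and converting an effect size correlation r into d.
import math

treatment = [12, 15, 14, 16, 13, 17, 15, 14]  # hypothetical post-test scores
control = [11, 12, 13, 12, 10, 14, 12, 11]

def cohens_d(group1, group2):
    """d = (mean1 - mean2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # Pooled SD weights each group's variance by its degrees of freedom.
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def d_from_r(r):
    """d = 2r / sqrt(1 - r^2), the conversion noted above."""
    return 2 * r / math.sqrt(1 - r ** 2)

print(cohens_d(treatment, control))  # positive = treatment scored higher
print(d_from_r(0.3))                 # ~0.63
```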

2) Point-biserial r: the effect size correlation for intervention research.

Computed between a dichotomous IV (e.g., treatment: yes/no) and a continuous DV.

3) r²: the proportion of variance in the dependent variable explained by the independent variable; it indicates the strength of the effect size correlation.

E.g., r = .3 gives r² = .09, i.e., 9% of the variance explained.

Also called “goodness of prediction”: how much variation in the outcome is attributable to variation in the predictor scores. (A sketch covering both r and r² follows.)
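A similar sketch for the point-biserial r and r² (again hypothetical data, with group coded 0 = control, 1 = treatment):

```python
# A minimal sketch (hypothetical data): the point-biserial r is simply
# Pearson's r computed with one dichotomous variable; squaring it gives
# the proportion of variance explained.
import math

group = [1, 1, 1, 1, 0, 0, 0, 0]            # dichotomous IV
outcome = [15, 17, 14, 16, 12, 11, 13, 12]  # continuous DV

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(group, outcome)
print(r, r ** 2)  # r^2 = proportion of outcome variance explained by group
```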

-Omega squared (ω²): the analogous measure for use with ANOVAs.
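For ω², a brief sketch using the standard one-way ANOVA formula (the formula is textbook-standard, not quoted from this paper; the group data are made up):

```python
# A minimal sketch (made-up groups) of omega squared for a one-way ANOVA:
#   w^2 = (SS_between - (k - 1) * MS_within) / (SS_total + MS_within)
def omega_squared(*groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_within = ss_within / (n_total - k)
    ss_total = ss_between + ss_within
    return (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

print(omega_squared([4, 5, 6], [6, 7, 8], [9, 10, 11]))  # 0.8
```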

-Binomial Effect Size Display (BESD): a 2×2 contingency table that displays effect sizes meaningfully (detailed in the next section).

Note: 1) even small effect sizes can be important; 2) this is often overlooked, even when an effect size is similar to those in other studies, because of a lack of knowledge about effect sizes.

To solve this problem, the paper proposes the BESD (see below).

Improving Effect Size Interpretation: The Binomial Effect Size Display

BESD: helps to interpret the effect size when r² is small.

It is a 2×2 contingency table.

Rows=dichotomous IV (treatment or control)

Columns=Dichotomous outcome DV (improved vs. not improved).

Alternatively, a continuous DV can be presented in dichotomous categories.

The r-based BESD illustrates the difference in treatment success if one-half of the population received one condition and one-half received the other. The BESD assumes a 50% base rate for both the experimental and control groups.

The question the r-based BESD answers: “What would the correlationally equivalent effect of the treatment be if 50% of the participants had the occurrence and 50% did not?” (equal group sizes are assumed).

If there were no difference between the two groups, each cell would equal 50%.

The correlation r equals the difference (A − B) in outcome rates between the experimental and control groups.

Each row (read left to right) shows the IV condition (control vs. experimental); each column (read top to bottom) shows the dichotomous outcome (e.g., worked well vs. did not work well). Row and column totals each equal 100, so the cell percentages are standardized.
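To show how the table is built, here is a small sketch that constructs a BESD from r (the r = .30 value and table layout are illustrative assumptions, not the paper’s example):

```python
# A sketch of building a BESD from r: treatment success = 50 + 100*r/2 and
# control success = 50 - 100*r/2, so the two rates differ by exactly r
# (in percentage points).
def besd(r):
    treat = 50 + 100 * r / 2
    ctrl = 50 - 100 * r / 2
    print(f"{'':12}{'Improved':>10}{'Not improved':>14}{'Total':>7}")
    print(f"{'Treatment':12}{treat:>10.1f}{100 - treat:>14.1f}{100:>7}")
    print(f"{'Control':12}{ctrl:>10.1f}{100 - ctrl:>14.1f}{100:>7}")

besd(0.30)  # r = .30 (r^2 = .09) -> 65.0 vs. 35.0 success rates
```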

In their body image example, the authors note: “What is useful about the BESD is that it provides the difference between success rates, whereas r² as a measure of the strength of the effect size correlation is not very intuitive.” The BESD “increases understanding, interpretability, and comparability.”

Some Final Comments About Interpreting Effect Size Measures

Providing and interpreting effect sizes is important.

Problem: there is still no agreed-upon yardstick for determining what is “meaningful” and what is “not meaningful.”

Answer: become proficient with the studies in your chosen field to understand how small an effect can be and still be meaningful.

Problem: what counts as “clinically significant” and “reliable change” in practice also remains unsettled.

E.g., the aspirin and heart attack study: because the result was so clearly significant (a very small p value), the study was ethically obliged to stop so that both groups could receive the drug; however, because the effect size was small, many scientists remained uncertain of its importance.

So the study was reanalyzed with the BESD, which showed 3.4% fewer heart attacks in the aspirin group (48.3% vs. 51.7% in the control group), indicating that the findings are indeed meaningful.
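Working backward from these percentages (a 3.4-point difference implies r = .034), the BESD cells can be reproduced in a few lines:

```python
# Reproducing the aspirin BESD cells: a 3.4-point difference implies r = .034.
r = 0.034
print(50 - 100 * r / 2)  # 48.3 -> heart attack rate, aspirin group
print(50 + 100 * r / 2)  # 51.7 -> heart attack rate, control group
print(r ** 2)            # ~0.0012: a tiny r^2 can still be meaningful
```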

Conclusions and Recommendations

Providing effect sizes also allows researchers to conduct meta-analyses, provides outcome expectations for future studies, and allows comparisons between studies.

So, provide both a significance test (p value) and an interpretation of meaning (effect size).

Always provide Cohen’s d, even if using other effect size methods, as it is the most widely known.

Calculate the BESD for additional interpretability.

Go a step further and compare your effect size with what other studies have found.

-A paper that helps readers better understand how to use and interpret BESD effect sizes can be found at:

http://pareonline.net/pdf/v10n14.pdf
