
Summary of Green et al. (2010): Enough Already about “Black Box” Experiments: Studying Mediation is More Difficult than Most Scholars Suppose.

Understanding that vitamin C mediates the relationship between lime consumption and a lowered incidence of scurvy, like numerous other scientific discoveries, highlights the importance of understanding causal effects. But concern with causation is not confined to scientific experimentation. We social workers (and other social scientists) are also a curious people, not content with mere associations and outputs but continually wanting to understand more: when an intervention or program has worked (or failed to work), we want to know why. I may (in the future) conduct research indicating that parents of children with neurodevelopmental disorders (NDD) have an improved quality of life (QOL) once they receive support services, or that the children themselves report higher levels of QOL following their parents’ receipt of such supports, but I want to understand what other factors may (or may not) have also impacted that outcome. In short, I want to open the “black box”; however, is it possible to do so effectively?

Green et al. state that social science journals “abound” with articles purporting to present “… well established…” mediation results, with a “… growing enthusiasm for regression models…”. However, their view is that these models “…rest on naïve assumptions… that it is a relatively simple matter to establish the mechanism by which causality is transmitted”. In Enough Already about “Black Box” Experiments: Studying Mediation is More Difficult than Most Scholars Suppose, Green et al. highlight complex issues to consider about mediation and causation:

1) Mediation (which grew in popularity following the publication of Baron and Kenny, 1986) is conventionally tested using regression approaches that are flawed and that rest on strong assumptions, e.g., that “… M be independent of unmeasured factors that affect Y”. Researchers often examine several mediators (“one at a time or in different combinations”), since establishing “causal pathways” and/or their direction of causation is difficult. This leads to two critiques: concerns about omitted variables, and poor measurement that leads to the underestimation of M’s effect (a minimal sketch of the conventional regression approach follows this list). According to Green et al., even the use of structural equation modeling, albeit a “… step in the right direction insofar as it addresses the problem of measurement error… does nothing to address the problem of omitted variables”.

2) Although “… experiments are the gold standard for estimating causal parameters…”, designing an experiment that will only manipulate the M we are interested in rather than other Ms that may also mediate the effects is very challenging.  In addition, conclusions of mediation effects that are based on “single interventions” cannot truly be generalized to a larger population unless “enough experimental interventions” are conducted, which is “a formidable undertaking”.

3) “Unobserved sources of variation in effect size can throw off any attempt to draw inferences about mediation”. Green et al. state that it is possible for subjects within the same study to be governed by different “causal laws”, which counters the usual assumption that all observations within a study are structured by the same parameters. Green et al. demonstrate this using four different models (that I understand with great difficulty). What I easily understand is that, as a result, it is possible to have very different outcomes. Again, Green et al. suggest that “… multiple experiments- maybe decades worth- will be necessary” to deal with “heterogeneous treatment effects”.
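
To make the regression critique in point 1 concrete, here is a minimal sketch of the conventional mediation regression (in the spirit of Baron and Kenny) run on simulated data. All variable names and numbers are invented for illustration; the unmeasured factor u produces exactly the bias Green et al. warn about.

```python
# Conventional mediation regression on simulated data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)                           # unmeasured factor affecting M and Y
x = rng.binomial(1, 0.5, size=n)                 # randomized treatment
m = 0.5 * x + u + rng.normal(size=n)             # mediator M
y = 0.3 * x + 0.4 * m + u + rng.normal(size=n)   # outcome Y; true effect of M is 0.4

# Regress Y on X and M, as the standard approach prescribes
fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
print(fit.params)  # the coefficient on M comes out well above the true 0.4,
                   # because M is not independent of the unmeasured u
```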

Contrasting with Guo (2014), who strongly advocates for the use of quantitative statistical methods in social work research, Green et al. seem to question whether it is necessary to open the black box at all. They state that many theoretical and practical contributions are derived by establishing significance in the effects of variables on each other without having to understand their causal pathways. In fact, they suggest that researchers “… measure as many outcomes as possible when conducting experiments” rather than invest time and resources into trying to manipulate mediators, “… as there is no guarantee that the experimental intervention will produce a substantively interesting average effect on the outcome”.

To return to my opening examples, in order for me to promote the importance of support services effectively, is it enough to demonstrate that parents and children have improved QOL following receipt of support, or would understanding the mediator effects provide a sounder argument? Should we simply be content that lime consumption reduced the incidence of scurvy, or, as the saying goes, “When life gives you limes, rearrange the letters so it says smile” (Unknown)? Perhaps the answer lies in who our audience is and what we are trying to accomplish in our studies.

Reflection: Thinking about your future study: a) When and for what reasons do you think it might be wise to open the black box? b) When and for what reasons might it be wise to keep the box shut?

References:

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.

Guo, S. (2014). Shaping social work science: What should quantitative researchers do? Research on Social Work Practice, 1-12. doi:10.1177/1049731514527517

Summary of Stone and Rose (2011): Social Work Research and Endogeneity Bias

Endogeneity bias is a concern when making causal inferences in social and behavioral science research. Stone and Rose (2011), in their paper “Social Work Research and Endogeneity Bias”, discuss different sources of endogeneity bias and different approaches to addressing this bias from the perspective of social work research. They mention that social work as a discipline is lacking control-capable knowledge (knowledge intended to directly inform practitioners’ change-related strategies) generated by research using experimental designs. They also mention another school of social work philosophy, which assumes that persons live in a complex system or environment comprising biological, psychological, social and other sub-systems that continuously generates bi-directional relationships between persons and the relevant systems and produces a given set of outcomes. This perspective cautions against reliance on experimentation by pointing to threats to the ecological validity of findings. In my view, this reality of the social work domain increases the threat of endogeneity bias in social work research, but it does not discourage experimentation.

To explain endogeneity, the authors define it with reference to a causal relationship between any two variables in a given system of variables (Stone and Rose, 2011). That is a rather complicated definition of endogeneity. We can explain it in a different way. In any experimental design, we expect that our dependent variable will be endogenous, or an insider, and our independent variable will be exogenous, or an outsider. In reality, if the independent variable becomes an insider, that is, becomes part of the dependent variable to some extent or co-exists with it, then the independent variable suffers from endogeneity. In this case, the research design suffers from endogeneity bias.

With reference to studies in the field of social work, the authors discuss three different sources of endogeneity bias: measurement error, omitted variables, and simultaneity. In the reality of measurement, we are rarely capable of observing the independent variable “x” itself; rather, we observe an indicator “x*”, which is not a perfect measure of “x” and is vulnerable to measurement error. This measurement error may become a source of endogeneity bias. Omitted variables can be a source of endogeneity when both or either of the dependent and independent variables is related to other variables that are not included in the model and that have an influence on the outcomes. Simultaneity occurs when one or more independent variables are jointly determined with multiple outcome variables, such that none of the outcomes can be expressed solely as a function of the others. Thus reverse causation arises, and it can be a source of endogeneity. The authors explain these sources of endogeneity very well with diagrams.
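
A toy simulation makes the omitted-variable case concrete: when an unmeasured variable drives both X and Y, the naive regression slope is systematically wrong. Everything below (variable names, coefficients) is invented for illustration.

```python
# Toy simulation of omitted-variable endogeneity (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                       # unmeasured variable
x = z + rng.normal(size=n)                   # x is partly driven by z -> endogenous
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true effect of x on y is 2.0

# Naive OLS slope of y on x, with z omitted from the model
slope, _ = np.polyfit(x, y, 1)
print(round(slope, 2))  # ~3.5 rather than 2.0: the omitted z inflates the estimate
```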

The authors briefly discuss several statistical and econometric tools that address endogeneity bias. They list a number of approaches: propensity score matching, fixed effects, interrupted time series, regression discontinuity, instrumental variable techniques, and natural experiments, which they found in use across key social work journals. In this paper, the authors urge social work researchers to become familiar with the concept of endogeneity bias, to become aware of the conditions under which such bias occurs, and to develop suitable approaches to address it. Most of the approaches discussed in this article were developed in statistics or in econometrics to deal with very field-specific issues. Social work researchers should come forward to develop approaches suited to dealing with our field-specific endogeneity bias, which will enhance the control-capable knowledge base of social work.

Stone, S. I., & Rose, R. A. (2011). Social work research and endogeneity bias. Journal of the Society for Social Work and Research, 2(2), 54-75.

Summary of Guo (2014): Shaping Social Work Science: What Should Quantitative Researchers Do?

Social work has not contributed greatly to scientific knowledge, lagging behind other professions such as nursing, clinical psychology and psychiatry, despite the recent national and international “…movement towards shaping the science of social work”, and despite the fact that “…the call for strengthening the scientific base of social work practice was declared 50 years ago”. This is Guo’s (2014) opening message in Shaping Social Work Science: What Should Quantitative Researchers Do? As a social worker, it really hits home for me. My first instinct is a defensive rant, as social work has long been considered a “soft science” and I often find myself defending the profession’s integrity and ‘honor’. As an aspiring researcher, I read on as Guo borrows from Brekke (2012) to bring forth questions he feels have not been addressed in the literature, and proceeds to try to answer them:

What role should quantitative methods play in shaping the science of social work?

Is social work different from other scientific disciplines? If so, in which ways?

To enhance the level of scientific research, what should quantitative researchers do?

Guo introduces the reader to the notion of striving for a scientific social work by discussing economics, a field that experienced similar challenges in attaining “respectability”, an ongoing journey that began in the 1940s with a focus on the advantages and disadvantages of incorporating mathematics in the field. Advocates of mathematics believe(d) that elaborate usage of quantitative techniques is necessary for the “development of any scientific discipline”, while opponents of mathematics fear(ed) that math “…has made contemporary economics less relevant to the social problems that formed the subject matter of classical political economy”. It is mathematical techniques that Guo suggests need to be at the basis of social work inquiry, and it seems to be these two voices that are currently present in social work discourse: the seeming battle between the concreteness, hence ‘coldness’, of numbers and the humanity and ‘warmth’ of ‘doing’ social work. How can these two poles coexist?

To ease the transition into empirical social work research, Guo recommends that social work researchers engage in three main actions:

1) Social work researchers should “Follow the positivist tradition and use empirical data to test theories.” Evidence-based practice, the new buzzword in social work practice, has pushed a greater usage of quantitative research methods, which aligns with the positivist paradigm of using “…empirical data to test theoretically derived research hypotheses”. Utilizing techniques for observation, experimentation and comparison, social work does not differ from the ‘natural sciences’, where empirical data are used to test theories. Guo states that fundamentally, “…any theory remains to be hypothetical and cannot be termed as theory if it is not tested by empirical data”, and social workers should not stray from empirical data because of its potential complexity.

2) Social work researchers should “incorporate the latest developments of methods from adjacent disciplines”. Adopting the understanding that determining causation is at the basis of most sciences, and that the “gold standard for research”, the randomized clinical trial, is not always possible, Guo suggests borrowing from other disciplines that have “recognized the need for more efficient, effective approaches for assessing treatment effects when evaluating programs based on quasi-experimental design”. Namely, Guo discusses propensity score analysis (PSA), a technique for “estimating causal effects from observational data”, a contribution from Paul Rosenbaum and Donald Rubin (statisticians) and James Heckman (economist)[i]. (A toy sketch of the propensity score idea appears after this list.)

3) Social work researchers should “address the most pressing and challenging issues of social work research and practice”. Despite “lagging behind its adjacent disciplines”, Guo states that social work research has evolved and has added rigor to its research in three significant ways: a) The Society for Social Work and Research, created in 1994, has promoted social work research through annual conferences and other scientific activities; b) Advanced quantitative methods are being used and are published in social work publications; c) The quality of social work research has been enhanced by the “information technology revolution”.
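
As flagged above, here is a minimal sketch of the propensity score idea on simulated data, using inverse-probability weighting rather than matching for brevity. This is not Guo’s own implementation; all covariates and coefficients are invented for illustration.

```python
# Propensity-score weighting on simulated observational data (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000
age = rng.normal(40, 10, n)
income = rng.normal(30, 8, n)

# Treatment uptake depends on the covariates (selection, not randomization)
p_true = 1 / (1 + np.exp(-(0.05 * (age - 40) + 0.04 * (income - 30))))
t = rng.binomial(1, p_true)
y = 1.0 * t + 0.03 * age + 0.05 * income + rng.normal(size=n)  # true effect = 1.0

# Estimate each case's propensity score from the observed covariates
X = np.column_stack([age, income])
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Inverse-probability weighting reweights treated and control cases
ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print(round(ate, 2))  # close to the true effect of 1.0
```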

Despite these accomplishments, Guo states that quantitative methods are unevenly distributed among substantive areas and there remains a need for more “rigorous quantitative research”. To illustrate, he presents the following results from a review of 213 articles published between January 1, 2012, and December 16, 2013[ii]:

1) Quantitative methods exceed all other research methods: 173 (81.2%) quantitative; 24 (11.3%) qualitative; 16 (7.5%) mixed methods.

2) The mathematical methods used in social work research have been for “producing knowledge” or “… hypothetic-deductive tests”, as opposed to deriving theory.

3) A wide variety of statistical methods have been used in social work research: 22 studies used only descriptive or bivariate methods, while the other 167 used at least one multivariate statistical model.

4) Of 14 areas of social work research, only 3 (health/mental health, 50 studies; social welfare and poverty, 35 studies; and child welfare, 30 studies) used quantitative methods extensively.

5) More rigour is required, as most studies use non-probability sampling methods and “low-level designs” such as cross-sectional designs (41.3%).

Play nice now. Until recently, I lived on that very fine line between holding onto (what I consider to be) ‘human’ social work practices and transforming lived experience into analyzable numbers; seeing the benefit of both yet not wanting to ‘sell my soul to the devil’ by venturing from interacting with people to quantifying them. Guo’s push for empirical social work research has not been gentle. His usage of statistical language and detailed explications of analytical procedures may be enough to scare any social worker away from ‘crossing over’ into the world of empirical data, which is counterproductive to his goal. Still, I welcomed the brain-twisting linguistics and concepts: I find that the more I sit back and allow myself to ‘make friends’ with these concepts, the less overwhelming they are and the easier they are to wrap my head around.


[i] Many other statistical methods, sometimes used in conjunction with PSA, have also been developed: hierarchical linear modeling; robust standard error estimation; SEM for analyzing latent variables; methods for analyzing categorical and limited dependent variables; and methods for analyzing time-to-event data, as well as marginal approaches to clustered event data.

[ii] Articles were selected from these journals: Social Work; Social Work Research; Research on Social Work Practice; Social Service Review; and the Journal of the Society for Social Work and Research.

Summary of Weisberg (2010), Chapter 1: What is Bias? in: Bias and Causation, Models and Judgment for Valid Comparisons

Causes? What causes? What causes what? Why should we be concerned? As (aspiring) social work researchers, our concern for the well-being of our clients leads us to invest passion, time and energy into examining outcomes of social phenomena and into investigating possible reasons and causes for these. We conduct research that examines theoretically aligned variables and analyze the acquired data to report derived conclusions about the phenomenon. But what happens when the analysis shows significant associations between “A” and “B” (indicating a possible causal relationship) but to us, on a practical level, something seems ‘off’? What if something else also had an impact? Weisberg’s (2010) chapter, What is Bias?, discusses the meaning and significance of bias, the complexities surrounding its detection, and some of its implications within existing studies.

Weisberg refers to bias as “… the extent to which a particular measure of a causal effect has been systematically distorted”. In other words, doubt is cast about the validity of the expressed causal effect. This doubt may result from problems with the design of the study, the way in which it was conducted, and/or the way in which the data were analyzed, what Weisberg refers to as methodological bias. As these biases are systematically rooted, the statistical procedures used to help manage random variability, such as significance testing, confidence intervals, and regression modeling, are not useful against them. A study that is biased is one whose “…methods employed have resulted in systematic error in the estimation of a causal effect”.

Weisberg’s chapter discusses how classical statistical methods were originally devised to address random error, and it highlights some assumptions that are inherent in the current statistical paradigm: a “…hypothetical infinite population”, a normal distribution of variables, and most importantly, that “the probability distribution is regarded as stable, reflecting a fixed set of underlying conditions”. This assumed stability allows us to predict conditional probabilities; for example, one could predict future satisfactory quality of life based on our current understanding of the conditions that positively impact quality of life. Prediction also plays an important role in causal inference; however, causal inference relies not on stable conditions but on what “…systemic alteration would occur if the circumstances were to change in a specific manner”. The main point of Weisberg’s discussion is that although “…traditional statistical methods can … reveal… association”, they cannot determine how change in one variable can cause changes in another. Weisberg’s intention for the chapter is not to “solve the problem of bias…” but to make the point that what are needed are “…data-analytic tools… that will facilitate the exercise of logic and scientific judgment to reach conclusions that are supported by the weight of the available evidence”.
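
A tiny simulation illustrates why the usual machinery (standard errors, confidence intervals) cannot rescue a systematically biased estimate: the interval shrinks as the sample grows, but it shrinks around the wrong value. The numbers below are invented for illustration.

```python
# Random error shrinks with sample size; systematic bias does not (toy numbers).
import numpy as np

rng = np.random.default_rng(3)
true_effect = 1.0
bias = 0.5  # systematic distortion, e.g. from a flawed design

for n in (100, 10_000, 1_000_000):
    sample = true_effect + bias + rng.normal(0, 1, n)
    est = sample.mean()
    half_ci = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    # The 95% CI narrows with n but stays centered on the biased value ~1.5
    print(f"n={n:>9}  estimate={est:.3f}  95% CI +/- {half_ci:.3f}")
```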

Conducting research that validates causation is complicated, yet the implications of these studies carry great weight and are hugely important in informing policy, programming and service delivery to vulnerable members of society. Weisberg’s chapter offers examples of studies that contain fundamental methodological biases and that, in my opinion, highlight some grave concerns and hence responsibilities of researchers:

1) Unless one has an understanding of the complexities behind designing, conducting and analyzing research, we will unfailingly take research results at face value, as did the general population who were aware of a study on Evaluating the Efficacy of Antityphoid Vaccine. Furthermore, given that the general population does not have research design knowledge (my assumption), it is easy (although highly unethical, in my opinion) for researchers to omit a discussion of known biases that might alter their desired research outcomes. Although no harm resulted from the continued and undisputed use of the antityphoid vaccine, given that it is often the (uninformed) general public who is the intended audience for research, researchers have an ethical obligation to produce research that is as free of biases as possible.

2) Statistically sophisticated evidence does not an argument make. Such was the case when the US Supreme Court declined to overturn a death sentence alleged to result from disproportionate and discriminatory biases. In McCleskey v. Kemp, described in the study Racial Disparities in Death Sentencing, “…issues of potential bias were dissected in great depth from both a statistical and a legal perspective…”. However, despite the in-depth “dissection”, important selection and measurement biases remained, and these created enough doubt that, following several other appeals, Mr. McCleskey lost his legal battle and was executed.

3) Big names equate to truth. Despite tenuous epidemiological evidence about the causal effect of phenylpropanolamine (PPA) on hemorrhagic stroke, the Food and Drug Administration (FDA) commissioned “a large-scale case-control” study from Yale University. Following the results of the ensuing Hemorrhagic Stroke Project (HSP), that “PPA in appetite suppressants, and possibly in cold and cough remedies, is an independent risk factor for hemorrhagic stroke in women”, the FDA deemed that PPA was unsafe for “over-the-counter use”, all manufacturers were urged to remove it from their products, and lawsuits from individuals who had strokes ensued. However, several biases were discovered and raised by “biostatistics experts”, and these cast doubt on the results of the initial study, but not before some damage was done. The product has since been removed from the market, but perhaps this case highlights how easily ‘we’ believe ‘big, reputable’ names.

The bottom line is that biases have a great impact on validating causal effects, yet they are difficult to detect and are present in all research. Given the great impact research has on social work intervention and resources, and on the general public, it is imperative (although likely impossible) to conduct research that is as bias-free as possible.

Summary of Duncan et al. (2004): The Endogeneity Problem in Developmental Studies

In social science research, removing bias when inferring causal relationships is always a difficult matter. In this article, the authors focus on the endogeneity problem in developmental studies, which is well-known but inadequately addressed in most empirical studies. The endogeneity problem occurs when the resulting correlations between the outcomes (dependent variable, Y) and their hypothesized determinants (independent variable, X) may in fact be the result of unmeasured characteristics of the individuals themselves or their parents (Z). In an experimental study, endogeneity is not a problem: because subjects are randomly assigned, we can obtain an unbiased result. However, in most nonexperimental developmental studies, the potential endogeneity problem is unavoidable. This article describes the nature of the endogeneity problem in theory and practice, and how to solve it.

Although developmental theories inform the linkage between developmental outcomes and their family and contextual determinants, they often cannot explain the processes by which family and contextual conditions arise. As a result, omitted variables (unmeasured determinants) may bias estimates of the coefficients of the independent variables (family and contextual determinants). Therefore, to address the endogeneity bias problem and infer causal relationships precisely, developmental studies often conduct multivariate regression procedures to control for all relevant covariates. If the problem of endogeneity can be thought of as one of unmeasured variables, the measure-the-unmeasured approach is attempted as an alternative. However, as the authors argue, serious bias may still remain in this approach. One obvious problem with a measure-the-unmeasured approach is the question of what to measure, because measuring everything relevant would be impossible. Moreover, especially with cross-sectional data, some measures are themselves endogenous, which could lead to over- or underestimation of the other coefficients in the regression model, depending on the inter-correlations among all the explanatory variables and their separate correlations with the outcomes. Because of these limitations of non-experiments, the authors argue that the experimental study using random assignment is the “best practice” for overcoming endogeneity. They also argue that there are possible ways to implement randomized experiments in the area of human development, setting aside practical or ethical issues. However, I wonder about these arguments. For example, in a randomized experimental study to evaluate a welfare-to-work program (page 71), once the control group realizes the research situation, they could resist because they feel they are being deprived of the opportunity to participate in the program. On the other hand, the experimental group could act differently than usual, which could distort the results. And in this iPhone era, it is totally infeasible to keep people from knowing about the research situation.

Besides the randomized study, this article suggests various ways to remove the endogeneity problem in developmental studies. First, with individual fixed-effects models using longitudinal data, unmeasured variables that are constant over time can be removed, and with sufficiently long panels, more elaborate methods may be used to control for unmeasured variables whose values change over time in specific ways (a small sketch of the fixed-effects idea follows below). Another set of methods for reducing bias exploits within-family variation, such as using differences between siblings. Key to the success of sibling models is an understanding of, and statistical adjustment for, the process by which children from the same family end up in different contexts of interest; however, this requires considerable time from the researcher. Other developmental research has been able to take advantage of novel natural experiments involving family and extra-familial contexts. However, finding an analogous natural research design can be very difficult, and such designs seem to be rare in practice. Moreover, even when we can find one, we should be careful about using it, because the result of an analogous natural experiment could turn out not to be the one we really wanted to investigate, and even research that we believe to be a natural experiment could be biased.
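
Here is a minimal sketch of the individual fixed-effects idea: demeaning each person’s observations over time removes any time-invariant unmeasured variable. The panel below is simulated, and all names and coefficients are invented for illustration.

```python
# Individual fixed effects via within-person demeaning (simulated panel).
import numpy as np

rng = np.random.default_rng(4)
n_people, n_waves = 500, 4
ability = rng.normal(size=n_people)  # unmeasured, time-invariant characteristic
x = rng.normal(size=(n_people, n_waves)) + ability[:, None]  # x correlates with it
y = 2.0 * x + 3.0 * ability[:, None] + rng.normal(size=(n_people, n_waves))

# Naive pooled slope is biased by the omitted 'ability'
naive = np.polyfit(x.ravel(), y.ravel(), 1)[0]

# Demeaning within person cancels the time-invariant term entirely
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
fe = (xd * yd).sum() / (xd ** 2).sum()

print(round(naive, 2), round(fe, 2))  # naive is inflated; fe is ~2.0, the true effect
```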

All in all, the authors emphasize the importance of implementing natural experimental studies to deal with the endogeneity problem. I partly agree with this idea, but I also worry that it could restrict researchers’ imagination and limit richer studies. Therefore, I think discussions should be promoted among social science researchers on strategies toward more rigorous research with observational data, such as propensity score or difference-in-differences estimates.


Summary of Saunders et al. (2006): Imputing Missing Data: A Comparison of Methods for Social Work Researchers

For social work researchers, dealing with missing data is always a challenge. Often, missing values are simply ignored, but this can distort the accuracy of data analysis and undermine valid and efficient inferences about a population. In this article, Saunders et al. (2006) present six methods of handling missing data and apply them to two data sets. The most common and easiest method of dealing with missing data is listwise deletion: the computer program automatically deletes any case that has missing data for any bivariate or multivariate analysis. However, this method induces sample loss, so it may be appropriate only with a large sample and a relatively small amount of missing data. The second method is mean substitution, which uses the mean of the total sample as the substitute for all of the missing values in that variable. It may be appropriate only if a small number of cases have missing values, because it reduces the estimates of the standard deviation and variance and results in biased, deflated standard errors. The third method is hotdecking, in which the missing values for variable X are replaced with a value from a case that has similar characteristics. Hotdecking is better than mean substitution at approximating the standard deviation, but bias is still likely to occur in regression equations. The fourth method is regression imputation, or conditional mean imputation. The first step is to select the best predictors with complete data: the variables highly correlated with the variable having missing values. In the regression equation, the predictor is used as the independent variable and the variable with missing values is used as the dependent variable. For example, if the income variable has missing values, they could be predicted through a regression equation using other variables such as age, education or occupation that have complete data and a high correlation with income. Because this method assumes a linear relationship among the variables used in the regression equation, it may result in overestimated model statistics and lower significance values. The last two imputation methods in this study are more sophisticated than the models mentioned above.
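
A quick sketch of the simpler strategies in pandas may help fix the ideas; the toy data frame and column names below are invented for illustration, and this is not the authors’ own code.

```python
# Listwise deletion, mean substitution, and regression imputation (toy data).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [25, 32, 47, 51, 38, 29],
    "income": [30.0, np.nan, 55.0, np.nan, 42.0, 33.0],
})

listwise = df.dropna()                                 # drop any case with a gap
mean_sub = df.fillna({"income": df["income"].mean()})  # shrinks the variance

# Regression (conditional mean) imputation: predict income from age
obs = df.dropna()
slope, intercept = np.polyfit(obs["age"], obs["income"], 1)
reg_imp = df.copy()
gaps = reg_imp["income"].isna()
reg_imp.loc[gaps, "income"] = intercept + slope * reg_imp.loc[gaps, "age"]
print(reg_imp)
```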

For me, it was hard to understand these methods from this article alone, so I had to do a lot of searching to understand them (honestly, I am still not sure about it). The EM algorithm is a method that uses the relationship between missing values and parameters, because missing values and parameters can provide useful information about each other. The EM algorithm consists of two steps. First, in step E, parameters are estimated from the observed data, and with the estimated parameters, missing values are imputed. Next, in step M, using the observed data and the missing values imputed in step E, new parameters are estimated again. The EM algorithm iterates these steps until the variation in the estimated parameters is minimized. Even though the estimated parameters are unbiased and efficient, the EM algorithm generally tends to underestimate the standard error and overestimate the precision of the inference (Kang & Kim, 2006). The final method is introduced as “multiple implicates” in this article, but other articles call it multiple imputation (MI). Among the articles I found about the definition of MI, the explanations by Wayman (2003) and Graham (2009) are the most understandable. Wayman (2003) puts it this way: “In multiple imputation, missing values for any variable are predicted using existing values from other variables. The predicted values, called ‘imputes’, are substituted for the missing values, resulting in a full data set called an ‘imputed data set.’ This process is performed multiple times, producing multiple imputed data sets. Standard statistical analysis is carried out on each imputed data set, producing multiple analysis results. These analysis results are then combined to produce one overall analysis.” Graham (2009) presents the key point of the multiple imputation method step by step: “The key to any MI program is to restore the error variance lost from regression-based single imputation. In order to restore this lost variance, the first part of imputation is to add random error variance. The second part of restoring lost variance relates to the fact that each imputed value is based on a single regression equation. In order to adjust the lost error completely, one should obtain multiple random draws from the population and impute multiple times.”
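
Below is a minimal sketch of that multiple-imputation workflow, using scikit-learn’s IterativeImputer (still flagged experimental) with different random seeds to mimic the multiple imputed data sets Wayman and Graham describe. The data and the pooled quantity are invented for illustration.

```python
# Multiple imputation sketch: impute m times, analyze each, pool the results.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
X[:, 2] += X[:, 0]                       # make the columns related
X[rng.random(200) < 0.2, 2] = np.nan     # knock out ~20% of one column

estimates = []
for seed in range(5):  # m = 5 imputed data sets
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = imputer.fit_transform(X)
    estimates.append(completed[:, 2].mean())  # "analysis" on each completed set

# Pool the per-data-set analyses into one overall estimate
print(round(float(np.mean(estimates)), 3))
```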

To compare the results of these imputation methods, Saunders et al. (2006) use variables with missing values from two data sets and conduct statistical analyses. Despite some variation in terms of F values or slope coefficients, I think this study fails to reveal significant differences among the imputation methods because of the characteristics of the examples: a large sample size with only a small percentage of missing values. According to Graham (2009), when the amount of missing values in the data set is small (i.e., under 5%), multiple imputation can be applied but is not essential. Therefore, in my opinion (and as the authors admitted), this study should be conducted in a more sophisticated way, such as with a Monte Carlo simulation to compare the results, and should use other data sets that can generate statistically significant results.

 

References:

Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. doi:10.1146/annurev.psych.58.110405.085530

Kang & Kim. (2006). Review for imputing missing data methods in public administration and policy research. Korean Public Administration Review, 40(2), 31-52.

Wayman, J. C. (2003). Multiple imputation for missing data: What is it and how can I use it? Paper presented at the 2003 Annual Meeting of the American Educational Research Association, Chicago, IL. Retrieved from http://www.csos.jhu.edu/contact/staff/jwayman_pub/wayman_multimp_aera2003.pdf

Summary of Singleton (2010): Survey Research

This chapter is a basic guideline on how to design survey research, the most widely used method of collecting data in the social sciences (Bradburn and Sudman, 1988). According to this chapter, the survey offers the most effective means of social description, but it affords less confidence in causal inference than experimental research. That is, while experimental research is good at eliminating plausible rival explanations through randomization and other control procedures, a survey must first anticipate and measure relevant extraneous variables and then exercise statistical control over these variables in the data analysis. However, in social science research, since the ability to manipulate situations as required by experimental designs is limited, observational studies using surveys are more common and feasible than experimental studies. Therefore, to enhance the precision of inferring cause-and-effect relationships and to remove bias, designing and conducting a survey can be one of the biggest challenges for the social science researcher.

Singleton offers three broad steps in doing surveys: (1) planning, (2) field administration, and (3) data processing and analysis; this chapter focuses on planning and field administration. Planning a survey consists of several key decision points, such as formulating research objectives, selecting the unit of analysis and variables, developing a sampling plan, and constructing the instrument, and these decision points require simultaneous consideration rather than a linear series of decisions. In planning a survey, the main concern of researchers is minimizing four types of error that threaten the accuracy of survey results: (1) coverage error, (2) sampling error, (3) non-response error, and (4) measurement error, especially coverage error and non-response error. The various survey modes, such as face-to-face interviews, telephone interviews and computer-assisted self-interviews, each have their own advantages and disadvantages in dealing with these errors. To offset the weaknesses of one mode with the strengths of another, a mixed-mode survey is sometimes suggested as an alternative, combining the various survey modes sequentially or concurrently. Even though the mixed mode is good at reducing the non-response rate and research cost, we should be careful in using it because of the uncertain comparability of responses collected by different modes. There is no perfect way to select a survey mode, but this chapter gives us a guideline for finding the best way to plan a survey design depending on the research objective. This guideline can also be adopted as criteria for evaluating a secondary data source.

Once the planning is completed, the next step is fieldwork. The survey’s fieldwork phase is illustrated as a flow: interviewer selection and training, gaining access to respondents, interviewing, and follow-up efforts. To establish reliable survey data, it is very important to select experienced and qualified interviewers, and this should also be considered a criterion in choosing a secondary data source. Depending on the skills of the interviewers, the level of precision and bias in the data can vary considerably, and the researcher should take care in employing, training and supervising interviewers, even though this carries a high cost.

Summary of Remler & Van Ryzin (2011), Ch. 6: Secondary Data

Secondary data is the most commonly used form of quantitative data for the purposes of statistical analysis in the fields of social and policy research, with the majority of published quantitative studies focusing on the analysis of this type of data. This makes sense, since it is data that already exists, is much less time-consuming and costly than collecting original primary data, and is also widely available through published data tables from various national (and sometimes international) surveys and databases. It is important to note that most published datasets are aggregated and thus not as detailed as the original datasets. However, according to Remler and Van Ryzin (2011), the aggregated (and abbreviated) data are much more manageable and are still useful to address certain research questions, such as trends over time (assuming that the dataset is longitudinal in nature). An example would be examining classroom disciplinary climate trends at the classroom or school level over the course of several school years. The authors caution that it is very important for researchers to familiarize themselves with the dataset prior to analysis by reading any notes and codebooks carefully, and to be aware of any updates made to the dataset after obtaining it.

Secondary data can also include administrative data, which is often collected by government agencies, non-governmental organizations and private firms for the purposes of planning, managing and monitoring programs and service delivery performance. However, these data are often not readily available for public use due to substantial privacy issues, and have to be de-identified by the agency prior to being released to the researcher. The data also have to be cleaned, coded and reformatted by the researcher once obtained, since the databases in which they are stored (Management Information Systems) are formatted not for statistical analysis but for record keeping and case management purposes. Some administrative data, even to this day, are still recorded in paper file format (e.g., court proceedings); this can pose even more of a challenge for the researcher, since the data have to be converted into a quantitative format and are very difficult to adapt for statistical analysis. However, despite these potential barriers, Remler & Van Ryzin emphasize that the analysis of administrative data is crucial for researchers to inform policy decisions and reform, as well as for practitioners to monitor certain client trends and outcomes pertaining to the interventions they deliver.

The authors make a distinction between survey and data collection tool design and research design, as the latter has to do with what you actually do (analytically) with the data; confusing the two is a common mistake researchers make when outlining their research design. They also outline the various forms of data that can be obtained through secondary datasets, which vary depending on their level of aggregation (individual/micro or group/macro) and time dimension (snapshot or longitudinal). Aggregate (macro-level) data can contain information about groups such as households, schools, classrooms or geographical areas such as neighbourhoods, regions, provinces and even countries. Time dimensions of data can include snapshot data from one point in time (aka cross-section data) or longitudinal data gathered over defined periods of time. Longitudinal data can be collected for various purposes, including pre- and post-treatment comparisons (paired-sample data), repeating measures on the same people over time (panel data for a repeated measures study), examining longitudinal outcomes that happen only after a certain amount of time has passed (panel data for a cohort study), or repeating the same measures on new cohorts (pooled cross-sectional data). Table 6.1 outlines the various types of quantitative data in an easily understandable format.

The authors also mention that linking various types of secondary data can be useful to provide a broader and more unique picture of the issue or phenomenon being observed. For instance, linking Geographic Information System (GIS) data with socio-economic status (SES) data can allow researchers to create SES maps by province, region and even neighbourhood. This can be helpful when trying to analyse certain region-specific social issues. Linking quantitative and qualitative data can also be useful; for instance, collecting additional focus group information on the reasons underlying high school dropout rates could be pertinent in explaining the phenomenon more clearly.

However, there are limitations to using secondary data for the purposes of statistical analysis, and these should be kept in mind when deciding on your research design. For instance, secondary data availability can distort the social work research field, since researchers often have to settle for what they can get or have access to, as opposed to going directly into the field and obtaining the data they need. Public secondary data can also be outdated if not collected longitudinally, can lack certain vital information and variables, and oftentimes cannot be narrowed down to smaller units of analysis such as cities, neighbourhoods or individuals. Oftentimes privacy issues prevent the release of pertinent data such as postal codes, which could assist with creating SES indicators.

It is important to note that Remler and Van Ryzin’s (2011) book and the chapters contained therein are based on American examples; thus we should be familiarizing ourselves with public secondary databases and datasets available in Canada, such as census data and the National Longitudinal Survey of Children and Youth (NLSCY) via Statistics Canada, or those made available to university students via Research Data Centres (RDC) and various university research centres, such as the Canadian Incidence Study of Reported Child Maltreatment (CIS) through the Centre for Research on Children and Families (CRCF) at McGill University.

Here are some examples of useful links to available public secondary databases and administrative data:

http://leddy.uwindsor.ca/adc/guides/health/canadian

http://publicrecords.searchsystems.net/Canada_Free_Public_Records/Public-Records-Canada-Provinces/

http://www23.statcan.gc.ca/imdb/p2SV.pl?Function=getSurvey&SDDS=4450

http://www12.statcan.gc.ca/census-recensement/index-eng.cfm

Spurious Correlations et al.

Did you know that the divorce rate in Maine has a strong positive relationship with the US per capita consumption of margarine (r = .99)? Discover a new spurious correlation every day! A law student from Harvard created an amazing website that helps emphasize the fact that correlation does not equal causation.
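
You can reproduce the flavor of these spurious correlations in a few lines: two independent random walks, standing in for any two trending series, will often correlate strongly despite having no causal link whatsoever. The series below are pure noise, invented for illustration.

```python
# Two independent random walks frequently show a striking correlation.
import numpy as np

rng = np.random.default_rng(6)
walk_a = np.cumsum(rng.normal(size=200))  # pretend: divorce rate in Maine
walk_b = np.cumsum(rng.normal(size=200))  # pretend: margarine consumption
r = np.corrcoef(walk_a, walk_b)[0, 1]
print(round(r, 2))  # often far from 0, despite zero causal connection
```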

Also, the following infographic really drives home the difference between type I and type II errors…

[Infographic: Type I and Type II errors]

From Marginal Revolution: http://marginalrevolution.com/marginalrevolution/2014/05/type-i-and-type-ii-errors-simplified.html

Article about publishing replication studies

Came across this call for replication studies in the Public Finance Review (link below). It discusses the importance of, and challenges to, publishing replication studies and offers guidelines/standards. Granted, the authors are referring specifically to publishing in the Public Finance Review, a journal that most of us are not likely to target, but it does contain valuable information and guidance that certainly has broader application.

http://www.sagepub.com/upm-data/36845_Replication_Studies11PFR10_787_793.pdf

