
Summertime

Winter classes have ended, and today is a semi-holiday here, so the campus is quieter than normal. Summer is a great time to catch up on projects, of course, but it’s also a great time to get extra training, or even a jump on coursework (for those of us still taking classes). So, I thought I’d quickly post some great methods training opportunities here, as I was inspired by this excellent essay to maximize summer.

Free Online Course: Evaluating Social Programs

MITx (in partnership with the online education platform edX) is hosting a free online course called Evaluating Social Programs. Here's some information on the course:

“This four-week course on evaluating social programs will provide a thorough understanding of randomized evaluations and pragmatic step-by-step training for conducting one’s own evaluation. Through a combination of lectures and case studies from real randomized evaluations, the course will focus on the benefits and methods of randomization, choosing an appropriate sample size, and common threats and pitfalls to the validity of the experiment. While the course is centered around the why, how and when of Randomized Evaluations, it will also impart insights on the importance of a needs assessment, measuring outcomes effectively, quality control, and monitoring methods that are useful for all kinds of evaluations. JPAL101x is designed for people from a variety of backgrounds: managers and researchers from international development organizations, foundations, governments and non-governmental organizations from around the world, as well as trained economists looking to retool.”

You can sign up for the course (running from 1-30 April 2014) here.

If you are considering taking the course, please let me know (email bree.akesson@mail.mcgill.ca), and maybe a group of us can meet at some point for discussion.

Qualitative data sharing

The Center for Qualitative and Multi-Method Inquiry at Syracuse University (only a few hours away from McGill!) hosts a well-known Summer Institute for Qualitative and Multi-Method Research. I recently found out that the center also houses a Qualitative Data Repository (QDR). The topic of whether or not to share qualitative data has come up in brown bag discussions in the past. Undoubtedly, there are drawbacks to sharing qualitative data, but the QDR website outlines some interesting rationales for qualitative data sharing. It states:

QDR provides leadership and training in—and works to develop and publicize common standards and practices for—managing, archiving, sharing, reusing, and citing qualitative data. QDR hopes to expand and improve the use of qualitative data in the evaluation of research, in scholarly production, and in teaching.

Qualitative data are used by social scientists to advance a range of analytical, interpretive, and inferential goals. Yet in the United States, traditionally such data have been used only once: social scientists collect them for a particular research purpose, and then discard them. The lack of a data-sharing custom is due in part to an infrastructure gap – the absence of a suitable venue for storing and sharing qualitative data.

QDR hopes to help to fill this gap. First, the repository expands and eases access to qualitative social science data. This access empowers research that otherwise would not be conducted, and promotes teaching and learning about generating, sharing, analyzing, and reusing qualitative data. Further, the repository contributes to making the process and products of qualitative research more transparent. This increased openness facilitates the replication, reproduction, and assessment of empirically based qualitative analysis. Finally, by increasing researcher visibility, the repository induces intellectual exchange, promoting the formation of epistemic communities and serving as a platform for research networks and partnerships.

It will be interesting to see whether data sharing in qualitative research comes to be seen as a best practice, as it increasingly is in quantitative research.

Social work dissertations in Canada: Preliminary findings

Below I present selected findings from our (Lucy Lach, Anne Blumenthal, Bree Akesson) review of social work dissertation research in Canada.

This work is now a working paper at the Social Science Research Network.

Method

The main objective of this study is to describe the nature of doctoral social work scholarship in Canada over the ten-year period from 2001 to 2011. The research is guided by two sets of research questions. The first set uses overall data across the ten-year period and provides an overall picture of the output of doctoral dissertations. The second set of sub-questions adds a time component to examine trends; these questions show how social work knowledge production fluctuated over the first decade of the 21st century.

The study is a scoping review of publicly available dissertations (Arksey & O'Malley, 2005). The dataset was created by Rothwell, Lach, and Blumenthal (2013) and is available on the Dataverse Network under the name Social Work Doctoral Scholarship in Canada. The dataset is free and can be accessed by writing to the authors for permission.

Findings

Production of PhD dissertations.

The first step of the analysis examined the production of dissertations across Canadian Schools/Faculties of Social Work. Results are shown in Figure 1. By far, the University of Toronto has produced the most PhD graduates in the country (n=76). Calgary follows with n=44. McGill and Wilfrid Laurier University tied for the third most graduates (n=30). The average number of graduates per school was 24.8. The productivity of schools is related to the age of the program: the University of Toronto's program, started in 1952, is the oldest, while the University of British Columbia has one of the newer programs.

[Figure 1: PhD dissertations produced by Canadian schools of social work]

Dissertation topic.

Second, we wanted to understand the diversity of topics studied in social work. We accomplished this by applying the research topics used by the Society for Social Work and Research (SSWR) for abstract submissions to its annual conference.

[Figure 2: dissertation topics]

Considering the diverse research topics under study in social work, we next examined how research methods differed across topics. Each dissertation was coded as qualitative, quantitative, or mixed methods. We selected the five most prevalent topics in our list of SSWR research categories (international social work, health and disability, child welfare, race and ethnicity, and mental health) and reported the proportions of methods used. The results revealed considerable diversity (see below). Child welfare and mental health had a much higher than average proportion of studies using quantitative methods. For dissertations examining race and ethnicity, qualitative methods dominated. Mixed methods were most common in international studies.

[Figure: research method by dissertation topic]

Dissertation by research method over time.

The next part of the analysis examined trends over time. The figure below shows how research method varies across years. Across years, qualitative methods are by far the most common. The ratio of qualitative to quantitative to mixed studies ranged from a high of 23:4:5 in 2008 to a low of 7:2:2 in 2005. On average, 15 graduates per year in Canada completed a qualitative dissertation, 4 a quantitative dissertation, and 4 a mixed methods dissertation. The number of graduates who completed a quantitative thesis in any given year was never more than six.

[Figure: dissertations by research method over time]

Discussion

A few points stand out.
1. Canada produced 248 doctorates in social work between 2001 and 2011 (counting publicly available dissertations). The University of Toronto is by far the most productive program; it is also the oldest.
2. Social work dissertations focused on a variety of topics, and there was a strong relationship between research method and topic studied. Methods employed to study mental health and child welfare were relatively balanced, while race and ethnicity and health and disability were studied almost exclusively with qualitative methods.
3. Across time, qualitative methods dominate other research methods. There is a very limited supply of quantitative social work researchers being produced in Canada.

Future directions

This work opens several lines of inquiry.

1. Diversity of qualitative methods used

2. Diversity of quantitative methods used

3. Robert Oprisko et al.'s work on placement efficiency.

4. Analysis of research methods / institutions by tri-council funding award.

We welcome your feedback. Stay tuned for the final paper.

Thoughts on participatory research

CRCF Workshop on Participatory Research

some unpolished thoughts by David W. Rothwell

December 4, 2013

Thinking about the topic of participatory research allowed me to reflect on my path to this current position. My motivation for engaging in a career of research and teaching on poverty-related issues started in a community-based research setting. I was working at HACBED and had the opportunity to work on the Hawaii site for the Family Independence Initiative. My gut instinct at the time was that the FII program had some real impacts on participants, but the data collection and evidence gathering were not very good. There were a lot of holes in the data. As I worked with several community partners, I started to realize a tremendous gap between what I was learning in the PhD program and the reality of doing applied and relevant social science research. This continued as I worked with ALU LIKE for my dissertation examining Individual Development Accounts. Later I worked on community-based research on cash transfers in Singapore, and I am now working on community-based research projects at the Old Brewery Mission and in Kahnawake.

Across these experiences I have struggled with at least four issues. For these reasons and others, I am somewhat skeptical about the implementation of participatory research.

  1. The role of the university “expert”. In most community-based research settings where I’ve worked, the agency administration and leadership are not familiar with a participatory model. They have approached the university or others because of perceived expertise in research and the belief that this perceived expertise can help the agency achieve its goals. Often, the agency stakeholders look to the researcher for the expert opinion. Re-negotiating these roles takes considerable time and skill.
  2. Negotiating ownership and responsibility. A second issue is the idea of ownership. I don’t mean ownership of data; I mean ownership of activities. To be truly participatory, the users of the information must participate in all levels of data collection, analysis, writing, etc. This runs contrary to most organizational structures, which are not flat.
  3. Expectations around time and resources. A third issue is time and resource expectations. Agency stakeholders, understandably, have often not had the experience of taking a research project from start to finish. There is often a lack of appreciation for the level of effort and amount of time that go into conducting quality research. Activities that take longer than expected include pilot testing instruments, database creation, data cleaning, validity and reliability checks, writing, and proofreading.
  4. Underlying motivation/purpose of the research. Oftentimes, organizations need research to achieve certain objectives. Sometimes a project is initiated under a participatory frame, but participation may not be the most important motivation for engaging in the collaboration. It’s not “dope on the table” as in The Wire (see 0:30), but charts and graphs on the wall.

Divergences

Thinking about the differences between doing “academic research” and “community-based research,” I want to start by talking about the differences, asymmetries, and divergences. When I think about these differences, culture comes to mind.

Culture

Culture is a surprisingly difficult concept to define. To reflect on culture in community agencies, I draw on a recent paper on the culture of poverty by Small et al. (2010).

  1. Values
    • Rodman (1963) explained that the poor don’t have different values but rather a wider set of values, to which they are less committed.
      –> When it comes to community research, I think the values are not different (see convergence below).
  2. Frames – how people act depends on how they cognitively perceive themselves, the world and their surroundings
    • Different people perceive the same events differently based on prior experiences and understandings.
    • Important cultural heterogeneity within poor
    • Frames allow for more than a cause-effect relationship, what Small calls a “constraint and possibility” relationship: rather than causing behaviour, frames make actions possible or likely.
      –> in community-based research how people act depends on how they cognitively perceive their role in the research process and the value and product of the research
  3. Repertoires
    • Repertoires of action
      • People have a list or repertoire of strategies and actions in their minds
      • People are unlikely to engage in an action unless the strategy to pursue it is part of their repertoire
      • need to explain why some repertoires are chosen while others are not.
        –> in community-based research, “research” or “data analysis” is rarely owned as a repertoire for workers, similarly “practice” or “intervention” is not seen as part of the repertoire of the researcher
  4. Narratives
    • People interpret their lives as a set of narratives or stories that have a beginning, middle, and end and contain a linked sequence of events.
      –> more specifically, people interpret their professional careers as a set of narratives and stories.
  5. Symbolic boundaries – conceptual distinctions that we make between objects, people and practices
    • Guide interaction by affecting who comes together
    • See Lamont (2000) for comparison of US v. French labor views.
      –> there are meaningful symbolic boundaries that divide research and community/practice. University and agency. etc.
  6. Cultural capital – institutionalized widely shared high status cultural signals.
    –> peer-reviewed publications as a signal of status and cultural capital in the university
  7. Institutions – formal rules of behavior that are codified through laws/regulations, norms of appropriate behavior enforced through informal sanctions, taken-for-granted understandings that structure or frame how actors perceive their circumstances.
    –> the incentive systems in practice settings rarely reward research and knowledge creation. Similarly, the university tenure and promotion system does not weight community engagement equally with other types of research and knowledge production (although this differs depending on the field)

Convergences

Valuing knowledge

Both settings are committed to understanding the complex social reality better.

Commitment to better service – improved conditions

Necessary conditions for more meaningful collaboration

  1. openness to self-critique; humbleness
  2. willingness to engage with new ideas
  3. comfort with uncertainty

How I reduced the clicking

I’ve recently been working on an analysis of a large-scale complex survey to understand asset poverty in Canada. The analysis takes place in the QICSS datalab and requires vetting before results are released. The analysis produces a massive volume of information across two years of the survey. My goal is to transform that information into an understandable format that can be digested by me and, eventually, others. Below I describe the great improvement in my workflow toward that goal.

Awful workflow:
SAS –> .rtf –> Excel –> Microsoft Word
(1) SAS output to .rtf. For each year of the survey, SAS would do the analysis and generate an .rtf file containing hundreds of tables and running 237 pages long (think poverty rates calculated several different ways across numerous demographic categories). (2) A very capable research assistant would then manually enter the data into an Excel workbook with five sheets corresponding to the tables. (3) The RA or I would then copy the information into a Microsoft Word document and format the tables for presentation.
* Problems: There are a number of obvious problems here. First, it’s horribly inefficient moving from format to format to format. Second, it takes a long time to coordinate the labor involved. Third, and most importantly, the process is error prone. The probability of errors in the final product rises with the number of mouse clicks required to copy, paste, or type data manually. I confronted these errors and corrected them when I found them, but this was also time consuming, and I was constantly wondering what I had missed. I thought, “there must be a better way.”

Slightly less awful workflow:
SAS –> Excel1 –> Excel2 –> Microsoft Word
Same as the awful workflow, but without step one. Via the SAS ODS output statement, I learned to bypass the .rtf stage and put all the output into one Excel file directly. That Excel file still has over 4,000 lines of information that must be reduced to a manageable number. Moving from Excel1 to Excel2 requires copying, pasting, and deleting: moving from a lot of information to a little information. Importantly, that information does not change (hopefully). The tables are then analyzed and prepared at the Excel2 stage and moved to Microsoft Word for presentation. At this last stage, much work goes into adjusting column and row heights and widths, text alignment, etc. A lot of clicking happens.
* While the likelihood of error has been reduced, it’s still a long process getting from SAS output to Microsoft Word. When you finish all this work and then discover at the end that you recoded a variable wrong and have to redo everything, you want to pull your hair out.

A much improved but not perfect workflow:
SAS –> Excel1 –> Excel2 –> R –> .pdf
The first three steps are the same. The big difference comes at the R stage. With the help of another research assistant (thanks, Chris) I learned the basics of a few packages in R (Sweave, ggplot2, xtable). These allowed me to load the data from Excel2, saved as a series of flat .csv files (not Excel worksheets), into R. From R, I could do two important things entirely by writing code. No clicks required! First, I could generate tables to accompany the manuscript. Once the data are in R, it takes less than 5 seconds to run the .Rnw file and generate the .pdf output (via LaTeX). A minimal sketch of this R stage appears at the end of this post.
* Assuming no errors happened in the Excel1 –> Excel2 step, there will be no errors in the output. Further, when I discover my next coding error, the time needed to reproduce the results will be a fraction of what it would be under the slightly less awful scenario: all I need to do is load the new data and rerun. And when the next cycle of the survey is released, I can again just load the new data output and rerun. I estimate my production time will be cut by about 75%.
* An important feature that I will save for another post is that R also allows me to generate presentations (beamer class) that include the tables and bar charts built from the data imported into R. Again, in one step I can generate an entire presentation that integrates text, tables, graphs, images, and charts. The best thing about it: no more clicks! (Well, at least a lot fewer.)
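For readers curious what that R stage looks like, here is a minimal sketch under some assumptions: the file name, column names, and poverty measure are invented placeholders, and the real analysis runs on the vetted QICSS output rather than anything shown here.

library(xtable)
library(ggplot2)

# Read a flat .csv exported at the Excel2 stage
# (hypothetical file and columns: group, year, poverty_rate)
rates <- read.csv("asset_poverty_rates.csv", stringsAsFactors = FALSE)

# Build a LaTeX table that the .Rnw/Sweave document can pull into the manuscript
tab <- xtable(rates, caption = "Asset poverty rates by group and year", digits = 1)
print(tab, file = "poverty_rates_table.tex", include.rownames = FALSE)

# Draw the same information as a grouped bar chart and save it as a PDF
p <- ggplot(rates, aes(x = group, y = poverty_rate, fill = factor(year))) +
  geom_bar(stat = "identity", position = "dodge") +
  labs(x = NULL, y = "Asset poverty rate (%)", fill = "Survey year")
ggsave("poverty_rates.pdf", p, width = 6, height = 4)

Once code like this is woven into the .Rnw file, re-running the whole pipeline after a recoding fix really is a single command.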

Statistics One MOOC

Andrew Conway of Princeton's psychology department teaches a massive open online course on statistics. The course is meant to be comprehensive. I see they are using R and covering many of the concepts that would lay the foundation for our discussion of research design in the PhD 724 class. For example, the course covers everything from null hypothesis significance testing to multiple regression. At least one first-year PhD student is taking the course.
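To give a feel for the range the course covers, here is a small self-contained R example, using simulated data with invented variable names, that runs from a simple significance test to a multiple regression.

# Simulated data spanning the course's range: NHST through multiple regression
set.seed(724)
n <- 200
group       <- rep(c("control", "tutored"), each = n / 2)
hours_study <- rnorm(n, mean = 10, sd = 3)
prior_gpa   <- rnorm(n, mean = 3.0, sd = 0.4)
exam_score  <- 50 + 2 * hours_study + 8 * prior_gpa +
               5 * (group == "tutored") + rnorm(n, sd = 5)

# Null hypothesis significance test: do the two groups differ in mean exam score?
t.test(exam_score ~ group)

# Multiple regression: exam score predicted by study time, prior GPA, and tutoring
fit <- lm(exam_score ~ hours_study + prior_gpa + group)
summary(fit)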

Statistics One by Andrew Conway

Another option, on the edX MOOC platform, is Stat2.1x: Introduction to Statistics, offered in January 2014.

 

On the benefits of cone figures

Cone-shaped objects can be found everywhere: ice cream cones, wizard hats, … and presentation slides! A number of software programs offer a variety of graph types, including ‘doughnut’, ‘candlestick’, and cone charts. Indeed, a quick look at the charting feature in Microsoft Office reveals a choice of clustered, stacked, and 3-D columns, cylinders, cones, and pyramids.

At a recent conference I participated in, one presenter chose to present his data using 3-D cones. I commend the presenter for sparing us tables of unreadable numbers, yet his choice of cones was hardly an improvement over presenting the data in tabular form. Flashy features, color, and the random alternation of horizontal, vertical, and upside-down cones added to the distraction. The difficulty of staying focused and understanding what the presenter was trying to convey got me thinking about graphic representations of data and the importance of honing our skills to deliver effective presentations.

A lot has been said about the importance of using the right chart to convey a presentation's idea(s). For example, pie charts are useful tools for proportions but not for rankings. Some of the latest entries on this blog have commented on the power of certain figures to rapidly summarize or compare rich data and effectively communicate an idea. In principle, cone charts are similar to bar charts; some claim that they help “achieve a better visual appearance of your data.” Personally, I find it difficult to read cone graphs well enough to compare values across categories. Only stacked cone charts such as the nutrition pyramid (may) have a positive effect. And to round out your presentation skills, check out the Potent Presentations initiative of the American Evaluation Association. Cool!
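Since cone charts encode nothing that a plain bar chart cannot, here is a small sketch, using made-up numbers, of how the same comparison could be drawn as ordered horizontal bars in R; ordering the bars makes the ranking readable at a glance.

# Made-up category values drawn as ordered horizontal bars instead of 3-D cones
library(ggplot2)

dat <- data.frame(
  category = c("Program A", "Program B", "Program C", "Program D"),
  value    = c(42, 35, 58, 12)
)

# Order the categories by value so the ranking is obvious
dat$category <- reorder(factor(dat$category), dat$value)

ggplot(dat, aes(x = category, y = value)) +
  geom_bar(stat = "identity") +
  coord_flip() +                    # horizontal bars are easy to label and compare
  labs(x = NULL, y = "Value")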

Social Statistics Speaker Series 2013-2014

The list of speakers is now up. The series is a great opportunity to hear about innovative methodologies applied to a range of topics from different disciplines. Be sure to jot down some notes and report back to us at the next Brown Bag meeting.

And I’ll be speaking on asset poverty on October 30.

Visualizing Data: Tips & Resources

[Figure: the first graph of statistical information (a continuous distribution function) ever published (Huygens, 1669), taken from http://www.datavis.ca]

During the ICPSR summer stats camp, I had the honour and pleasure of taking two courses with William Jacoby, a political scientist well known for his contributions to the fields of measurement theory and data visualization. One of the courses I took with him was a short course on statistical graphics for visualizing data. In this post, I will briefly share some of the resources and takeaways I garnered from him.

This course focused a lot on analytical graphs (e.g., the graphs we use to gain insight into our data by making sense of patterns or relationships). Why might a researcher go to the trouble of coding a graph that they have no intention of including in a publication? Because graphics prevent mistakes. Using graphics to analyze data was extended and popularized by the statistician John Tukey (developer of the boxplot, among other innovations).
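In that exploratory spirit, here is a minimal example of the kind of quick, throwaway analytical graphics one might run before any modelling; it uses a built-in R dataset rather than anything from the course.

# Quick analytical graphics on a built-in dataset: never meant for publication,
# just to catch skew, outliers, and odd patterns before modelling
data(mtcars)

# Tukey's boxplot: distribution of fuel economy by number of cylinders
boxplot(mpg ~ cyl, data = mtcars,
        xlab = "Number of cylinders", ylab = "Miles per gallon")

# Histogram of the outcome variable
hist(mtcars$mpg, breaks = 10, main = "Distribution of mpg", xlab = "Miles per gallon")

# Scatterplot matrix to scan bivariate relationships at a glance
pairs(mtcars[, c("mpg", "wt", "hp")])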

Presentational graphs, in contrast to analytical graphs, communicate the researcher’s main point to the intended audience. Thus, the purposes and uses of these two kinds of graphical displays of quantitative information are very different. However, researchers often treat them as the same thing, which can be a problem. According to Cleveland, the components of interpreting or decoding presentational graphics are detection (can you see the data?), assembly (can you put things together into a structure?), and estimation (to what extent does the graphic facilitate accurate estimation?).

In all fields of social research, and particularly in social work research, the presentation of results is highly important. Often we research phenomena for and with communities that may not be familiar with scientific or statistical methods. We can and should make the salient points of our analyses easier to understand through graphical representation. If we fail to do so, our research is unlikely to have the kind of individual, community, and social impacts that we would like it to have.

Resources for Information on Data Visualization (not at all exhaustive):

New Directions for Evaluation has special issues on data visualization; see the Autumn (no. 139) and Winter 2013 issues.

Anscombe, F. J. Graphs in Statistical Analysis [An amazing paper]

Whatever you do, do not do this [example of a very bad graph].

Cleveland, W. S. Statistics Research Homepage [An excellent resource for using and understanding the trellis package from S-PLUS (lattice in R); see the brief lattice sketch after this list]

Glenn, R. W. Data Graphics [A basic overview of data visualization history and theory]

Jacoby, W. Statistical Graphics for Visualizing Data [Slides, code, and lecture notes from the ICPSR course]

Tufte, E. R. The Graphical Display of Quantitative Information [A bible of sorts]
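For anyone following up on the Cleveland resource above, here is a brief sketch of a trellis-style (small multiples) display using R's lattice package on a built-in dataset; it is only an illustration of the idea, not material from the course.

# Trellis-style display with lattice: the same panel repeated across subsets
library(lattice)

# Petal length vs. width, conditioned on species (built-in iris data)
xyplot(Petal.Length ~ Petal.Width | Species, data = iris,
       layout = c(3, 1),
       xlab = "Petal width (cm)", ylab = "Petal length (cm)")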
