COMPARATIVE CASE STUDIES: A BRIEF DESCRIPTION
A case study is an in-depth examination, often undertaken over time, of a single case – such as a policy, program, intervention site, implementation process, or participant. Comparative case studies cover two or more cases in a way that produces more generalizable knowledge about causal questions – how and why particular programs or policies work or fail to work.
Comparative case studies are undertaken over time and emphasize comparison within and across contexts. Comparative case studies may be selected when it is not feasible to undertake an experimental design and/or when there is a need to understand and explain how features within the context influence the success of programs or policy initiatives. This information is valuable in tailoring interventions to support the achievement of intended outcomes.
Comparative case studies involve the analysis and synthesis of the similarities, differences, and patterns across two or more cases that share a common focus or goal. To be able to do this well, the specific features of each case should be described in-depth at the beginning of the study. The rationale for selecting the specific cases is directly linked to the key evaluation questions (KEQs) and, thus, to what needs to be investigated. An understanding of each case is important in establishing the foundation for the analytic framework that will be used in the cross-case comparison.
Comparative case studies often incorporate both qualitative and quantitative data. Given the focus on generating a good understanding of the cases and case context, methods such as fieldwork visits, observation, interviews, and document analysis often dominate among the various data collection methods employed. While the strategies used in data collection for single and comparative case studies are similar, comparative case studies require more extensive conceptual, analytic, and synthesizing work. The synthesis across cases extends beyond the comparison of similarities and differences to using these similarities and differences to support or refute propositions as to why an intervention succeeds or fails. While some degree of comparison is at the heart of any design in which multiple cases are used, the distinguishing feature of comparative case studies is the emphasis on examining causality. As is the case for experimental and quasi-experimental designs, comparative case studies are time and resource-intensive. This is mostly due to the iteration between propositions, evidence collection, and synthesis.
As a design option, comparative case studies are suitable in the following circumstances:
• When ‘how’ and ‘why’ questions are being posed about the processes or outcomes of an intervention.
• When one or more interventions are being implemented across multiple contexts, and there is little or no opportunity to manipulate or control the way in which the interventions are being implemented.
• When there is an opportunity for iterative data collection and analysis over the time frame of the intervention.
• When an understanding of the context is seen as being important in understanding the success or failure of the intervention.
• When experimental and/or quasi-experimental designs are unfeasible for practical or ethical reasons, or to supplement evidence from such evaluation designs.
As with other evaluation designs and methods, it is essential that the evaluation team undertaking comparative case studies have the necessary knowledge and skills to implement a comparative approach.
Skills are required in qualitative and quantitative methods, concept development and theory testing, and synthesis; in particular, the team must be able to systematically investigate causal questions using techniques such as qualitative comparative analysis and process tracing.
HOW TO CONDUCT COMPARATIVE CASE STUDIES
Comparative case studies essentially involve six steps, which should ideally be undertaken in the order shown below (see also figure 1). The program’s theory of change and the KEQs guide the focus and selection of the cases and the dimensions that will be studied. Steps 4, 5, and 6 are likely to involve several iterations and further data retrieval or analysis.
1. Clarify the KEQs and purpose of the evaluation to determine whether the use of comparative case studies is an appropriate design.
2. Identify initial propositions or theories (see below) to focus the comparative case studies, drawing on the program’s theory of change.
3. Define the type of cases that will be included and how the case study process will be conducted.
4. Identify how evidence will be collected, analyzed, and synthesized within and across cases, and implement the study.
5. Consider and test alternative explanations for the outcomes.
6. Report findings.
The sequence of these steps allows for explanatory evidence to be collected and tested iteratively (a major difference that sets comparative case studies apart from experimental and quasi-experimental designs for understanding causality). The cases included in the study can be conducted concurrently or sequentially, depending on how the program is being implemented (e.g., linking the cases to staggered program implementation) and the evaluation time frame and budget.
ETHICAL ISSUES AND PRACTICAL LIMITATIONS
Ethical issues
A range of ethical issues must be addressed when undertaking comparative case studies. The most pressing issue is that the level of description required to portray the richness of the cases may mean that the cases, and the participants within the cases, are identifiable. This is not necessarily a problem, but it must be clearly discussed and negotiated with those participating. Details that are not essential to an understanding of the case (e.g., level of school education, income) may be modified and an explanatory note included in the report to indicate this has been done to protect participant identities.
Practical limitations
Comparative case studies require a range of skills and expertise. As with other designs, it is important to assess the match between the required skill set and the evaluation team. Firstly, if the comparative case studies are to include quantitative and qualitative evidence, the evaluator or evaluation team must have the requisite skills in both methodologies. Secondly, the evaluator/team must possess strong synthesis skills and the capacity to integrate convergent and divergent evidence. Thirdly, the evaluator/team must be able to embrace the complexities of each case and to employ critical reasoning in making sense of the evidence and presenting a coherent argument. A good synthesis relies on good description and analysis; the evaluator/team requires skills in constructing propositions that build on these elements.
Comparative case studies have disadvantages in some contexts. One key issue is that they are often highly resource-intensive, particularly if extensive fieldwork is required. Depending on the purpose of a particular study, it may be better to purposively select a small number of cases or cluster cases of a particular type if the study budget prohibits more extended or in-depth study of a larger number of cases. Comparative case studies can be based entirely on secondary data analysis, and thus require no fieldwork or primary data collection at all.
The quality of available evidence must be suitably strong for this to be an appropriate option, however. Findings can be less reliable if there is too much of a time lag between cases in the comparison activity. The time lag may make comparisons across cases problematic due to the likely influence of other historical, social, and/or programmatic factors.
WHICH OTHER METHODS WORK WELL WITH THIS ONE?
A comparative case studies design may be a stand-alone design or it may be nested within other designs. If nested, the comparative case studies may be used to examine a specific element or the context in more detail. For example, within an experimental design that aims to ascertain the effectiveness of an intervention, comparative case studies may be used to explain why and how the intervention did or did not work across the intervention and control groups.
The nesting of comparative case studies within other designs generates evidence about how the context has influenced patterns of outcomes. Where causal mechanisms have already been identified or proposed, comparative case studies may test the scope, applicability, and transferability of those mechanisms in other contexts. If comparative case studies make up a single component of a larger study, it is likely that the cases will be supplementary to the core design.
Cases may be used to illustrate similarities and differences in outcomes across the contexts in which a program or policy was implemented. In this scenario, there may be a focus on the description of similarities and differences rather than on the development and testing of causal propositions. In terms of data collection methods, comparative case studies are likely to use a combination of qualitative and quantitative data, including data from project documentation, performance measures, surveys, interviews, and observation. Comparative case studies might include two particular data analysis methods: qualitative comparative analysis and process tracing.
Qualitative comparative analysis documents the configuration of conditions associated with each case, usually in the form of a ‘truth table’. The analysis identifies the simplest set of conditions that can account for all of the observed outcomes. The method can be used when there is a single cause for an outcome and also for more complicated causal relationships, for example, where there are multiple ways of achieving an outcome.
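To make the idea of a truth table concrete, the following is a minimal, purely illustrative sketch in Python (using pandas). The cases, condition names, and outcome are hypothetical and are not drawn from any particular study; the sketch only shows the tabulation step, not a full analysis.

import pandas as pd

# Hypothetical cases, each coded for three binary conditions and one outcome.
# 1 = condition present, 0 = condition absent; 'influence' is the outcome of interest.
cases = pd.DataFrame([
    {"case": "A", "local_partner": 1, "policy_window": 1, "donor_support": 0, "influence": 1},
    {"case": "B", "local_partner": 1, "policy_window": 1, "donor_support": 1, "influence": 1},
    {"case": "C", "local_partner": 0, "policy_window": 1, "donor_support": 1, "influence": 0},
    {"case": "D", "local_partner": 1, "policy_window": 0, "donor_support": 0, "influence": 0},
    {"case": "E", "local_partner": 1, "policy_window": 1, "donor_support": 0, "influence": 1},
])

conditions = ["local_partner", "policy_window", "donor_support"]

# The truth table lists each observed configuration of conditions once,
# together with how many cases show it and the outcome associated with it.
truth_table = (
    cases.groupby(conditions)
         .agg(n_cases=("case", "count"), outcome=("influence", "mean"))
         .reset_index()
)
print(truth_table)

In a full qualitative comparative analysis, the configurations in such a table would then be minimized (for example, with a Quine–McCluskey-style procedure, as implemented in dedicated QCA software) to find the simplest combination of conditions that accounts for the observed outcomes.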
Process tracing focuses on the use of clues within a case to adjudicate between possible explanations. It uses a series of different causal tests to see if the outcomes are consistent with the theory of change and whether alternative explanations can be ruled out.
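As a simple illustration of this test logic, the sketch below encodes two common process-tracing tests in Python: a ‘hoop test’ (evidence that must be present for an explanation to remain viable) and a ‘smoking gun’ test (evidence that, if present, strongly supports it). The explanations, evidence items, and test assignments are hypothetical and are not taken from the brief.

# Hypothetical evidence recorded for one case: True = observed, False = not observed.
evidence = {
    "study_published_before_reform": True,   # hoop test for the research-influence explanation
    "minister_cited_study_in_debate": True,  # smoking-gun evidence for the same explanation
    "reform_drafted_before_study": False,    # hoop test for the rival explanation
}

def assess(explanation, hoop_items, smoking_gun_items, evidence):
    # An explanation that fails any hoop test is ruled out.
    if not all(evidence.get(item, False) for item in hoop_items):
        return f"{explanation}: ruled out (failed a hoop test)"
    # Passing a smoking-gun test strongly supports the explanation.
    if any(evidence.get(item, False) for item in smoking_gun_items):
        return f"{explanation}: strongly supported (smoking-gun evidence observed)"
    return f"{explanation}: still viable, but not strongly confirmed"

print(assess("Research findings influenced the reform",
             hoop_items=["study_published_before_reform"],
             smoking_gun_items=["minister_cited_study_in_debate"],
             evidence=evidence))
print(assess("The reform would have happened without the research",
             hoop_items=["reform_drafted_before_study"],
             smoking_gun_items=[],
             evidence=evidence))

In practice, process tracing also weighs the probative value of each piece of evidence (for example, straw-in-the-wind and doubly decisive tests) rather than treating all tests as binary, but the sketch conveys the basic adjudication logic.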
PRESENTATION OF RESULTS AND ANALYSIS
There are no set rules or defined requirements for the presentation of comparative case studies. Negotiation and agreement on the format and presentation of the cases should occur early on in the process, however. Case study reports often have the advantage of being more accessible to a broader range of audiences than other types of evaluation reports. One of the dilemmas in crafting an account is what to include and what to leave out; choices about what will be communicated need to be made.
The evaluator synthesizes the gathered evidence in the report, so it is important that they provide sufficient information to communicate succinctly but transparently (see also Brief No. 4, Evaluative Reasoning). A good report should include examples from the data to provide evidence for claims made. Given the diverse sources and methods of data collection that typically characterize a comparative case studies approach, a technical appendix may be used to supplement the more succinct formal report.
EXAMPLE OF GOOD PRACTICES
The Knowledge to Policy study5 looked at the effects of research in international development, exploring what kind of research is most effective in informing decisions in policy councils, and the conditions under which it is most effective. The study selected multiple cases representing a series of research studies that had been designated as influential in some way in policy decision-making. Although the cases were selected to represent a good cross-section of research studies funded by the host agency, the International Development Research Centre (IDRC), they constituted a ‘positive’ sample – i.e., cases where the researchers involved claimed that the study had influenced policy.
The rationale was that examination of the factors that supported influence across a relatively small number of cases would shed light on how influence was produced. The identification of commonalities across a range of contexts and content areas was seen as reinforcing the influence of these factors on decision-making. At the beginning of the study, 16 cases were selected, but over time key stakeholders identified additional cases as influential.
A final sample of 23 cases across a range of countries and content areas (such as poverty, water management, trade and finance, and resource management) was selected for in-depth examination and comparative analysis (with most of the cases conducted concurrently). A range of domains was identified to examine each case. For example, the study authors explored how the research studies were initiated, described the politics inherent in the context at the time, reviewed the proposal process, and examined the conduct, findings, and dissemination of the research results.
The study drew on both qualitative and quantitative data collection, with an emphasis on qualitative interviews and consultation approaches. Fieldworkers adopted a common data collection instrument or protocol to collect consistent data across all cases. Matrices and typologies were used to illustrate the dimensions of the cases that were compared and contrasted. Regional analysis workshops were conducted to build stakeholder ownership of the process and to generate insights into the features that facilitated and/or inhibited the influence of research on policy.
The findings from each of the 23 cases were described according to the common domains of interest, with learnings presented at the end of each case description. The rich, descriptive narrative indicated a thorough understanding of the cases. This is particularly impressive given that, in several instances, the fieldworkers had to rely on the recollections and reflections of the project participants.
The study authors triangulated different information sources to address the inherent weaknesses of retrospective memory and self-report. The approach to analysis was iterative; the initial data collection and analysis informed subsequent phases of data collection and analysis. The report presented description, interpretation, and judgment about the factors that support research influence in a development context. The study authors did not claim that there was a single cause of research influence, but instead identified multiple, intersecting factors that support policy influence.
The rigor of the methodological approach and the level of engagement and ownership of stakeholders in both the design and analysis stages of the comparative case studies were identified by the study authors as contributors to organizational (policy) changes within IDRC (the funding agency).
The study paid careful attention to case selection and ensured iterative processes of data collection and analysis, guided by a consistent data collection protocol that aided comparability. While the study could have been strengthened by a greater focus on searching for alternative explanations about research influence, the examples and the robust data generated through the iterative process increased confidence in the findings. The cases were positive examples where research influence was apparent (as identified by the researchers). A useful extension of this study, to strengthen causal attributions, would be to select cases where the causal features were present but where a positive outcome (influence) did not occur.