Welcome back. In this session we're going to talk about the third step in the program evaluation research process, which is focusing the design and gathering credible evidence. What I want to do in this session is go through some jargon with you. There are some really important terms used in research designs for the evaluation of interventions, and I need to make sure these are terms you understand well and are comfortable with. So in this session, we're going to define some really important terms. One is research design: what do I mean when I say that? We're also going to talk about independent and dependent variables in the context of evaluation research, and then a really important concept in evaluation research, the counterfactual.

Research design: what is that all about? Well, a research design is the planning of some scientific inquiry. Research designs, first of all, specify research questions: what is it that people want to ask, and then answer, about some phenomenon? This cuts across all different kinds of science, obviously. Based on the research questions, the research design then specifies how the research will actually proceed. There is a standard set of components to any research design, whether we're talking about laboratory research in microbiology, anthropological research, or our research, which is policy and program evaluation research.

The first component is the purpose: is it just to explore, is it to describe a phenomenon, or is it to explain relationships? Here we're talking about causal relationships between two or more things. More on that in a minute.

Research designs also have units of analysis: what are the units that the research is focusing on? "Unit" sounds very technical and abstract. In our case, when we're thinking about government policies and interventions, the unit of analysis is almost always people, individual people, but it could also be communities or populations. The unit of analysis for program and policy evaluation could also be businesses, schools, churches, or nonprofit organizations. The unit of analysis needs to be clear: often individual people, but it could be groups of people or other organizations.

Also, what's the topic? What are we looking at in our unit of analysis: actions; orientations, which means things like knowledge, attitudes, behaviors, and practices; or conditions, such as environmental conditions? Again, the topic needs to be clear. It could be one or more things in one specific research design, but we have to be clear about what we care about and are focusing on. Then also, every research design has a time dimension, which becomes very important when we lay out the design in more detail.

Now, back to the issue of purpose. Program evaluations are doing what is the hardest thing in science: we're trying to explain what happens when a new policy or program is implemented. That means we're actually trying to establish a causal relationship between the implementation of, and exposure to, a program or policy and something that happens afterwards. Causal inference is what we're talking about here. Again, in science, establishing causal relationships is very hard to do. It requires a specific research design that makes comparisons between what happens when something is exposed to an intervention and what the situation would be without exposure to that intervention. That's the counterfactual.
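To make that idea concrete, here is a minimal sketch in Python. The numbers and variable names are invented for illustration; they don't come from the lecture or from any real study.

```python
# A minimal sketch of the counterfactual idea (illustration only;
# all numbers here are invented, not from any real evaluation).

# Suppose we could observe the same unit in two parallel worlds:
outcome_with_intervention = 72     # hypothetical: the world WITH the program
outcome_without_intervention = 65  # hypothetical: the world WITHOUT it (the counterfactual)

# The causal effect for this unit would simply be the difference:
effect = outcome_with_intervention - outcome_without_intervention
print(effect)  # 7

# In reality we only ever observe ONE of these two outcomes per unit.
# Research designs exist to approximate the missing one -- the
# counterfactual -- with something observable, such as a pre-test
# observation or a control group.
```

The point of the sketch is the final comment: we never get to see both outcomes for the same unit, which is exactly why research designs have to approximate the counterfactual with something we can observe.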
The counterfactual is what the world, or our situation, would look like if we didn't have this new policy or program intervention. What would the world look like without it, what does the world look like with it? We compare the two and see if they're different.

Some of you might have experience working on research in laboratories. The counterfactual in laboratory research is usually what's referred to as the control condition, and we use that term, control, in program evaluation research as well. The control condition is what happens in the lab when the experimental treatment, the exposure, is not introduced. In the real, messy world of program and policy evaluation, we're not in the controlled environment of a laboratory, and that makes our work much harder. We're trying to do the same thing: establish a causal relationship between some change, an intervention exposure, and something that happens afterwards. But again, it's so much harder in the real world.

Let's now also talk about independent and dependent variables. These are terms used in every science as well. I'm going to introduce a framework to you that uses a bunch of X's and O's. When we talk about program evaluation research designs, we actually try to write them out, map them out on paper, in this case a PowerPoint presentation, so we can actually see the research design, see where we have a counterfactual, and see where we have our independent and dependent variables. The X in this case is our intervention, which is exposure to a program, service, or policy. O is an observation of variables that the intervention is attempting to influence. Again, we're trying to establish a causal relationship between X, the intervention, and O, observing the things we care about.

Let me show you an example. Here we have a research design called the pre-test/post-test design. We have an observation of our units on the topics we care about, the pre-test; then there's exposure of this group to an intervention; then we look at them again. In this design, we compare the pre-test to the post-test to see if there is a change, and then we might attribute that change to the intervention. Again, O1 is the first data point, pre-intervention; X is a new policy implemented; and O2 is the second data point, post-intervention.

Variables are characteristics or attributes of units under observation that can vary and that are of interest to us. A dependent variable is something that depends on exposure to another variable, an independent variable. The dependent variable is the outcome variable, the thing we're trying to explain or predict in program evaluation. It's those O's, the outcomes we care about and think the intervention is going to affect. The independent variable is the explanatory variable, the variable we think is causing changes in the outcome, the dependent variable. In addition to that explanatory variable, which in our case is the intervention, we might have other independent variables that are correlated with the dependent variable and that we have to control for in our analysis. If you've taken statistics before, the terms independent variable, dependent variable, and control variable should not be unfamiliar to you.
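To see how the pre-test/post-test comparison produces an effect estimate, here is a minimal sketch with simulated data. Everything in it, the sample size, the five-point shift, the noise, is invented for illustration.

```python
# A minimal sketch of the one-group pre-test/post-test design (O1 X O2).
# The data are simulated for illustration; nothing comes from a real study.
import random

random.seed(42)
n = 100

# O1: pre-intervention observations of the dependent (outcome) variable
pretest = [random.gauss(50, 10) for _ in range(n)]

# X: exposure to the intervention (the independent variable). Here we
# simply assume it shifts each unit's outcome by about +5 points.
true_effect = 5
posttest = [y + true_effect + random.gauss(0, 5) for y in pretest]  # O2

# The design's effect estimate is the pre/post change...
estimate = sum(posttest) / n - sum(pretest) / n
print(round(estimate, 2))  # close to 5

# ...but only IF O1 is a valid counterfactual, i.e. nothing besides X
# would have changed the outcome between the two observation points.
```

Notice that the estimate is only meaningful if the pre-test can stand in for the counterfactual, an assumption we'll scrutinize in a moment.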
Let's visualize this. Again, here's our pre-test/post-test design, and the O's are our dependent, or outcome, variables. We're looking at these dependent variables pre- and post-intervention. X is our independent variable; it's the thing we believe is going to cause a change in that dependent variable. We look at the dependent variables at two points in time, knowing that in between those two points there was exposure to the independent variable, a program or policy change, and then we look to see if there is a change in the outcome variable.

A strong research design provides a valid assessment of the counterfactual, or again, what would have happened in the absence of the intervention, policy, or program. The counterfactual is what the world, or the part of the world we're looking at, would have looked like without the intervention. In this design, what do you think the counterfactual is? Is there even a counterfactual here? Well, yes: in this design, the first observation point is the counterfactual. We're making the assumption that without the intervention, the world would look the same at time 2 as it did at time 1. Now, that's a leap, and maybe too strong an assumption. But again, in this design, we look at pre and post and assume that changes at the post-test might be the result of exposure to that intervention. There's a lot to worry about in that design, because it assumes the observations at time 1 are a good counterfactual. We'll talk about those concerns in more detail.

Here's another research design, where we actually have a control group. We have a group that we're looking at at the same two points in time, but they're not exposed to the intervention, and so this is a stronger research design. In this design, our counterfactual is actually the control group: what happened in that group? Here we're assuming the two groups should look alike at time 2 because they look alike at time 1, or close enough, and then if there are differences between the two groups at observation point 2, we might treat part or all of that difference as an estimate of the effect of the intervention; the sketch after this paragraph shows that comparison in code. I'm sure you have a lot of questions about this. We're going to talk in a lot more detail about the strengths and weaknesses of different research designs and their counterfactuals.
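As promised, here is a minimal sketch of the control-group comparison. The simulated data include a background trend that affects everyone, so you can see why the control group's pre/post change, not the treatment group's pre-test alone, supplies the counterfactual. All numbers are invented for illustration.

```python
# A minimal sketch of the two-group pre-test/post-test design with a
# control group. Simulated data, for illustration only.
import random
from statistics import mean

random.seed(7)
n = 100

# Both groups are observed at time 1 (pre) and time 2 (post).
treat_pre = [random.gauss(50, 10) for _ in range(n)]
control_pre = [random.gauss(50, 10) for _ in range(n)]

# A background trend affects everyone between the two observations;
# only the treatment group is also exposed to the intervention X.
trend, true_effect = 3, 5
treat_post = [y + trend + true_effect + random.gauss(0, 5) for y in treat_pre]
control_post = [y + trend + random.gauss(0, 5) for y in control_pre]

# The control group supplies the counterfactual: its pre/post change
# shows what would have happened WITHOUT the intervention.
change_treat = mean(treat_post) - mean(treat_pre)        # trend + effect
change_control = mean(control_post) - mean(control_pre)  # trend only
estimate = change_treat - change_control
print(round(estimate, 2))  # close to 5, the true effect
```

The one-group design from earlier would have reported change_treat, background trend and all, as the effect; the control group lets us subtract that trend back out.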