Alright, let's talk about experiments. We've been talking about randomized controlled trials. Now, they can look different ways. I've presented this graphic to you before: this is your classic randomized controlled trial, where there are two groups, randomly assigned to be exposed to an intervention or not, and comparisons are made at two points in time.

Now, randomized controlled trials can have a lot of theme and variation in their designs. Here's another one for you to look at, and I hope you're getting more comfortable with looking at these graphic depictions of research designs and just intuitively knowing what happens. In this case, we also have two groups randomly assigned to get an intervention, a new program, a policy change. Both groups are observed before the intervention group gets the intervention, but then they're also observed at three points in time after the intervention. A lot of times, especially in the public sector, we want to know not just what the short-term effects of an intervention might be, but how they play out over time. And so intervention effects are looked for perhaps immediately after an intervention, but then six months later and twelve months later, or one and three and five years later. So, again, there's a time dimension to these research designs that we need to be very specific about. But just generally, I want you to be aware that there are research designs where there is more than one observation afterwards and people are followed up over time.

Now, it also is the case that randomized controlled trials are used to test the effects of more than one intervention at the same time. I hope when you look at this, you easily see that this is a research design where there are three groups randomly assigned to get intervention number one, intervention number two, or nothing, the status quo, for that third group. All groups are observed before groups one and two are exposed to their interventions. Then there are two different interventions, and a lot of times the interventions being compared differ only slightly from each other, because we really don't know whether those slight variations matter or not. And then the groups are followed, in this case, at two points in time after intervention exposure.

So the key to this, what makes something a randomized controlled trial, is the random assignment to either an intervention group or a control group. And this is the key thing that is controlling the threats to internal validity. In the research design, we're creating a situation, through the random assignment, where we're on pretty good ground assuming that these groups are the same except for their exposure to an intervention. This is our counterfactual.

Alright, well, what are the strengths and weaknesses of a research design that uses random assignment, a randomized controlled trial? The strengths are many. It is the gold standard of research design: it's what's used in laboratory research, and it's what's used in almost all clinical and medical research. It is the gold standard because it does control for the threats to internal validity. It does that by having a strong counterfactual, a control group that gives us a very good sense of what would have happened without the intervention. So in this case we can be pretty sure about both the direction of an intervention effect, positive or negative, and also its size.
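To make that concrete, here's a minimal sketch in Python of the classic two-group, pre-and-post design just described. Everything in it is made up for illustration: the sample size, the outcome scale, and the built-in effect of +3 points are assumptions, not data from any real study. The point is simply that random assignment lets a straightforward comparison against the control group recover both the direction and the size of the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical sample size

# Pre-intervention observation for everyone.
baseline = rng.normal(50, 10, n)

# Random assignment: each unit has an equal chance of intervention or control.
treated = rng.integers(0, 2, n).astype(bool)

# Post-intervention observation: everyone drifts a little over time,
# and the treated group also gets an assumed "true" effect of +3 points.
post = baseline + rng.normal(1, 5, n) + np.where(treated, 3.0, 0.0)

# Because assignment was random, the control group stands in for the
# counterfactual, so a simple difference in post-test means estimates the
# intervention effect. Using the pre-test as well (difference-in-differences)
# just makes the estimate more precise.
diff_in_means = post[treated].mean() - post[~treated].mean()
diff_in_diff = ((post[treated] - baseline[treated]).mean()
                - (post[~treated] - baseline[~treated]).mean())

print(f"difference in post-test means: {diff_in_means:+.2f}")
print(f"difference-in-differences:     {diff_in_diff:+.2f}")
```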
Using this design, we not only can answer the question of whether an intervention worked at all, that general question that is just way too vague. We can also say it worked, and that it worked by increasing something by X percent or decreasing a negative thing by X percent. Or we can be pretty sure, and this happens a lot, that there's no difference: the intervention didn't do anything.

Now, what are some of the weaknesses? They're very expensive to conduct. I don't really need to say more about that, but for lots and lots of reasons they're expensive to conduct. Also, especially with their applications in the public sector, there are ethical issues in random assignment. People don't like to feel like the government is experimenting on them. Communities don't like to think that they are being used as a control group, that data about people or organizations or institutions in their community are being collected for no benefit because they were the control group. And also, from the point of view of external validity, or generalization, it's sometimes hard to generalize from randomized controlled trials. Everything is so controlled in this environment, and oftentimes, with process evaluations going on, interventions are implemented with very high quality from an implementation point of view. If we were then to generalize to the real world outside of the research environment, the effects might not be the same. So there are trade-offs between internal validity and external validity in the case of randomized controlled trials.

Now, given their expense and some of the ethical issues, when should governments think about doing randomized controlled trials? Well, in cases where you really want to be certain about both the direction and the magnitude of an intervention effect, it's really good to be thinking about using an experimental design, a randomized controlled trial. Also in cases where there's an intervention and there actually are concerns that this intervention might harm people. A lot of times interventions are designed to serve purposes, to meet aims, and they might be meeting those aims, but they also might cause harm to people in unintended sorts of ways. And so if there are concerns that an intervention might have some downsides, some unintended but potential negative side effects, then it's really good to do a randomized controlled trial to be very, very sure about all the intervention effects, both positive and negative.

Also, a lot of people say that if there's going to be a pilot program or a demonstration project that has resources for an evaluation, this is the time to use as strong a research design as possible. You don't want a weak research design, one without good internal validity, on a pilot program: you make erroneous conclusions about the effect of the intervention, and then it gets spread and scaled when it's really not working. So it's really good to be certain in a pilot situation by using a strong design. That doesn't happen all the time, but it's a good situation for it.

Also, for people like me who do a lot of this kind of research and are really interested in publishing the results in academic journals, because we want to share what's learned with other jurisdictions, both the positive and the negative effects found in evaluation research: if you want to publish the results in an academic journal, they're going to want strong research designs. And so from a scientific standard, it's good to think about using as strong a design as possible. And randomized controlled trials are that gold standard.
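Tying back to the earlier point about being able to say "it worked by X percent" or "there's no difference": here's a minimal sketch of how an evaluator might put an uncertainty interval around a difference in rates between an intervention group and a control group. The counts and the normal-approximation interval are illustrative assumptions, not results from any particular trial.

```python
import numpy as np

def effect_with_ci(hits_t, n_t, hits_c, n_c, z=1.96):
    """Difference in outcome rates (intervention minus control) with a
    normal-approximation 95% confidence interval."""
    p_t, p_c = hits_t / n_t, hits_c / n_c
    diff = p_t - p_c
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return diff, (diff - z * se, diff + z * se)

# Made-up counts: 200 people per arm, 60 successes in the intervention group,
# 45 successes in the control group.
diff, (lo, hi) = effect_with_ci(60, 200, 45, 200)
print(f"estimated effect: {diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
# If the interval comfortably excludes zero, we can report "it increased the
# outcome by roughly X percentage points"; if it straddles zero, the honest
# report is "no detectable difference", which, as noted above, happens a lot.
```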
Alright, so let's go through a few designs and think about their strengths and weaknesses. Now again, some theme and variation. Here's a different design. What's different about this one? Take a second to look: there's no pretest in this one. So again, it's a randomized controlled trial. We have two groups, one group gets the intervention, the other one doesn't, and then there are observations only afterward. What do we think about this design? It's actually a strong design, but only in certain circumstances. This is a strong design when you're positive, absolutely positive, that both of these groups, the intervention and the control group, are equivalent regarding the main outcome or dependent variable at the baseline point of observation.

Let's do a couple of examples. There's an example of an intervention done with smokers at public health clinics that used this design; in this case it's a smoking cessation intervention, and what's being observed at the second observation point is cessation rates. The only people who are going to get this intervention are people who are already smoking. So we know at baseline we're dealing with smokers, and then they're randomly assigned to get the cessation intervention or not. We don't really need the baseline data, because to get into the study they have to be smokers already.

As another example, there are a lot of interventions that have been aimed at teen pregnancy, with the goal of helping teens who have babies achieve some of their life goals, including completing high school. In this case, everyone in the research design is pregnant and in high school, and the goal is to see if the intervention helps them complete high school. So we don't really need the pre-observation: the post-observation is completion of high school, which isn't relevant at the pretest.

So, again, there are opportunities for theme and variation on randomized controlled trials. If there's random assignment between the intervention group and the control group, we're on pretty strong ground in terms of internal validity.
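Here's a minimal sketch of the post-test-only idea, built loosely around the smoking-cessation example. The sample size and quit rates are hypothetical; the point is that when the eligibility rule guarantees everyone starts out the same on the outcome, in this case everyone is a smoker, random assignment alone justifies comparing the groups only after the intervention.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800  # hypothetical: everyone enrolled is a current smoker at baseline

# Random assignment to the cessation program or usual care.
program = rng.integers(0, 2, n).astype(bool)

# Hypothetical quit status six months later: assume an 18% quit rate with the
# program and 10% with usual care. There is nothing to measure at baseline,
# since by design every participant starts out as a smoker.
quit = rng.random(n) < np.where(program, 0.18, 0.10)

# Because assignment was random and the eligibility rule made the groups
# identical on the outcome at baseline, a post-test-only comparison of quit
# rates is enough to estimate the program's effect.
print(f"quit rate, program:    {quit[program].mean():.1%}")
print(f"quit rate, usual care: {quit[~program].mean():.1%}")
print(f"estimated effect:      {(quit[program].mean() - quit[~program].mean()):+.1%}")
```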