Moving on to the trial execution phase and the elements of quality assurance that can be incorporated there, there are numerous ways to promote quality during the trial itself. One of these is to conduct site visits. Site visits are often required for heavily regulated trials and are a good idea for almost all trials; they can be conducted in person, virtually, or through a combination of the two. During a site visit, study monitors review extensive information about that site's conduct of the trial, often including training and certification materials and elements of protocol adherence, making sure that the site is actually conducting the trial the way it was designed to be conducted. Monitors also review individual data collection instruments to make sure they have been completed properly, that there are no elements that appear to have been completed incorrectly, and that no pieces of information are missing. An important element of a site visit for quality purposes is to ensure that consent documentation is being completed fully and properly, and that other study documents are being organized and retained appropriately. This can be especially important for regulated studies, for example those that might be subject to subsequent Food and Drug Administration review if conducted in the United States. Data monitoring is another important element of quality assurance for a trial. It can help detect systematic errors or misunderstandings: the study design team intended a particular element to be collected in a certain way, but examination of the data reveals that the element is being collected or understood in a completely different way. Frequent examination of the data can help detect these problems, whether they are study-wide, situated only at a particular site, or sometimes confined to a specific person at a particular site.
Therefore, data monitoring both across the whole study and across the different sites, comparing the sites against one another, can help identify misunderstandings or other anomalies in the way the data are being collected or entered. Unfortunately, it is sometimes also necessary to be aware of the possibility of data manipulation or fraudulent data entry. Another element of quality assurance in data monitoring for a trial is to try to identify suspicious data, or patterns of suspicious data, within a particular site or across an entire study. One example of this is what is often referred to as digit preference. Consider a study that collects resting heart rate: at one of the sites there is a pattern in which all of the collected heart rates end in a zero or a five, whereas at other sites the final digits are much more evenly distributed. Or perhaps the proper procedure for collecting a resting heart rate is to measure the pulse for 30 seconds and then multiply by two, in which case the final digit should never be an odd number if that procedure is being followed. Examining digit preference can sometimes help identify either outright fraudulent data or misunderstandings about how the data are supposed to be collected. Another way to identify suspicious patterns in the data is to look for unlikely outcomes, or outcomes that are inconsistent with one another: symptoms that should be related to the study outcome or the use of the study intervention, yet whose profiles at one site are very different from those at other sites. These patterns do not always indicate misconduct; they may simply reflect misunderstanding. Either way, they should be identified whenever possible so that the quality of the study can be preserved.
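The digit-preference screen described above can be automated. Here is a minimal sketch in Python; the site data are entirely hypothetical, and the two flags shown (odd final digits, and an excess of values ending in 0 or 5) are just the examples from the discussion above, not an exhaustive check.

```python
from collections import Counter

def flag_digit_preference(heart_rates):
    """Screen resting heart rates for two suspicious patterns:
    - odd final digits (impossible if the pulse was counted for
      30 seconds and doubled, as the hypothetical procedure requires)
    - a high share of values ending in 0 or 5 (suggests the recorder
      is rounding rather than measuring)
    """
    counts = Counter(v % 10 for v in heart_rates)
    n = len(heart_rates)
    odd = sum(c for digit, c in counts.items() if digit % 2 == 1)
    zero_five = counts[0] + counts[5]
    return {
        "odd_final_digits": odd,
        "pct_ending_0_or_5": 100 * zero_five / n if n else 0.0,
    }

# Hypothetical per-site heart rates (beats per minute)
site_a = [72, 68, 74, 80, 66, 78]          # all even: consistent with 30 s x 2
site_b = [70, 75, 80, 85, 70, 75, 80, 90]  # every value ends in 0 or 5
print(flag_digit_preference(site_a))
print(flag_digit_preference(site_b))
```

Running this kind of tally per site, and comparing sites against one another, is one concrete way to surface the anomaly before it contaminates much of the data.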
In other modules in this course, we have also talked about various ways to identify what are sometimes referred to as data quality queries, and different approaches to managing them. An important quality assurance feature of a DQQ system is rapid feedback of potential problems in the data to the sites that collected the data and can address those queries, followed by monitoring that the queries have actually been addressed, and perhaps even monitoring how quickly they were addressed. Another element of trial execution amenable to good quality assurance processes is safety monitoring. This can include keeping careful track of both the number and the nature of adverse events, tracked separately from serious adverse events. Serious events are typically subject to expedited reporting and therefore will often be closely tracked, but monitoring non-serious events as well can help ensure that the data are being collected properly and that the safety profile of the intervention or interventions being studied is being described appropriately. This is another example in which overall and by-site data can help illuminate potential problems, by comparing the results of one site against another. Another important area for quality assurance is what I will refer to as intervention monitoring: keeping track of numerous characteristics of the study intervention in the trial. One element of this is monitoring for any potential problems with adherence to the randomization table, verifying that treatment assignments are occurring according to the predetermined schedule, if a predetermined schedule is being used. If a predetermined schedule is not being used, then careful monitoring is needed to make sure that the accumulating intervention assignments match what was expected. Another element of intervention monitoring relates to the labeling of masked study interventions when masking is used.
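The query-tracking idea above, monitoring not only whether queries are addressed but how quickly, can be sketched with a simple per-site tally. The query log below is hypothetical, as are the site names; a real DQQ system would pull these records from the study database.

```python
from datetime import date

# Hypothetical data-quality-query log: (site, date opened, date resolved or None)
queries = [
    ("site_01", date(2024, 3, 1), date(2024, 3, 5)),
    ("site_01", date(2024, 3, 10), None),               # still open
    ("site_02", date(2024, 3, 2), date(2024, 3, 3)),
]

def query_metrics(queries):
    """Per-site count of open queries and mean days to resolution."""
    out = {}
    for site, opened, resolved in queries:
        m = out.setdefault(site, {"open": 0, "resolved_days": []})
        if resolved is None:
            m["open"] += 1
        else:
            m["resolved_days"].append((resolved - opened).days)
    for m in out.values():
        days = m.pop("resolved_days")
        m["mean_days_to_resolve"] = sum(days) / len(days) if days else None
    return out

print(query_metrics(queries))
```

Reporting these two numbers by site in each performance cycle makes a slow-responding site visible early, while the underlying data problems can still be corrected.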
For example, in a study using masked study drug, there should be some form of verification that participants assigned to a particular treatment group are actually receiving the intervention that was expected. In our group, we will often take study drug bottles randomly selected from the distribution center and send them for independent third-party analysis to make sure that the study drug preparation and labeling are correct. In some studies, blood samples are collected and analyzed for blood levels of the study drug. This is usually done for scientific purposes rather than as a quality assurance procedure, but it can also serve quality assurance. The same can be done for metabolites, in blood, urine, or some other specimen, to verify that participants expected to receive, for example, an active study drug actually are receiving it, and that participants expected to be in the placebo group do not show levels of that same study drug. A further element of intervention monitoring is careful tracking of the distribution and storage conditions of study drug. If a study intervention is supposed to be stored within a certain temperature range, then the clinical site ought to be able to document, through a daily temperature log or some other record, that it has in fact maintained the intervention within that range. If a study intervention requires refrigeration, then the site should be able to document that refrigeration has been maintained even over holidays or weekends. These records would often be reviewed during site visits, but they could also be reviewed centrally, without in-person visits, by having the documents scanned and emailed or otherwise transmitted to a central facility for monitoring. Many elements of quality assurance in a trial are collected together in what is often referred to as a performance report or performance monitoring report.
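A central review of a scanned temperature log amounts to checking two things: that no days are missing (including weekends and holidays) and that every reading falls inside the required range. A minimal sketch, assuming a 2 to 8 degrees Celsius refrigeration requirement purely for illustration, with a hypothetical log:

```python
from datetime import date, timedelta

def audit_temperature_log(log, start, end, low=2.0, high=8.0):
    """Check a daily temperature log (date -> degrees C) for
    missing days and out-of-range readings. The 2-8 C range is a
    common refrigeration requirement, used here as an assumption."""
    missing, out_of_range = [], []
    day = start
    while day <= end:
        if day not in log:
            missing.append(day)
        elif not (low <= log[day] <= high):
            out_of_range.append((day, log[day]))
        day += timedelta(days=1)
    return missing, out_of_range

# Hypothetical log with a skipped Sunday and one excursion
log = {
    date(2024, 6, 7): 4.5,   # Friday
    date(2024, 6, 8): 5.0,   # Saturday
    date(2024, 6, 10): 9.2,  # Monday, above range
}
missing, bad = audit_temperature_log(log, date(2024, 6, 7), date(2024, 6, 10))
print(missing, bad)
```

A gap over a weekend, as in this example, is exactly the kind of finding a monitor would raise with the site, whether the log was reviewed in person or centrally.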
We'll spend a few minutes talking specifically about that element of a quality assurance plan. The performance report, we think, should be quite transparent: how the metrics are determined and what the results are should be shared widely within the study team. This can help sites understand their performance better and allow them to see how they are doing compared with other sites. Sometimes sites become sensitive to this and do not want their performance shared with other sites. We work hard to minimize those sensitivities, helping everyone understand that this is how a trial is best positioned to improve or maintain high performance. Everyone needs to understand that there is always room for improvement, and that identifying issues and potential problems is good for a trial: it helps the trial achieve its goals in ways that are most protective of the participants' time and burden. Some metrics accumulated in a performance monitoring report are fairly obvious and are almost always included: recruitment; data completion, meaning what percentage of the data collection instruments expected by a certain date have actually been collected; visit completion, meaning what percentage of expected encounters have actually been reported as completed; and protocol deviations. I flag protocol deviations with a "perhaps," though, because many trials accumulate large numbers of relatively unimportant deviations, for example a visit that occurred a day outside the protocol-prescribed timing because the participant could not get to the clinic on time. Those protocol deviations are common and can overwhelm the ability to discriminate the important ones.
One protection against that is to write a protocol that does not require the reporting of such deviations, for example by leaving visit timing as a guideline instead of a protocol-required specific timing schedule. Another is to categorize protocol deviations, when reported, as important or less important, and then catalog them and compute metrics on them separately, so that actual potential problems are easier to identify. This is not to suggest that a site with many more visit-timing deviations than others is unimportant; depending on the study, it may be quite important. But there are other protocol deviations that might matter more. There are also some less obvious performance monitoring metrics that could be considered as part of a quality assurance plan. These include statistical summaries, such as box plots or means and medians, for elements of data collection that are critically important, so that it can be understood whether the sites are collecting and reporting information the same way. Another is adherence rates: are the participants in the study adhering to the protocol and taking the study intervention? How well is that happening across the study, and how does one site compare with another? Early on, if adherence rates are low, perhaps there are things the study can do to improve adherence, rather than waiting until the end, finding that adherence was low, and having no way to correct for it. It is generally considered acceptable to share the baseline demographics of the participants enrolling in a study with the study team, typically not separated by treatment group, but overall. This can help make sure that the population being enrolled is appropriate to the study, and that there will be no threats to the potential generalizability of the study once it is completed.
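The by-site means and medians mentioned above are straightforward to compute. A minimal sketch, using Python's standard `statistics` module; the measurement (systolic blood pressure) and all site data are hypothetical, chosen so that one site stands out:

```python
from statistics import mean, median

def site_summaries(measurements):
    """Per-site n, mean, and median of a key measurement, so a site
    whose distribution differs from the others is easy to spot."""
    return {
        site: {"n": len(vals),
               "mean": round(mean(vals), 1),
               "median": median(vals)}
        for site, vals in measurements.items()
    }

# Hypothetical systolic blood pressure readings (mmHg) by site
systolic_bp = {
    "site_01": [128, 134, 122, 140, 131],
    "site_02": [130, 126, 138, 124, 133],
    "site_03": [150, 158, 149, 161, 155],  # noticeably higher: worth a query
}
print(site_summaries(systolic_bp))
```

Whether the divergent site reflects a different measurement technique, a different population, or a data entry problem cannot be determined from the summary alone, but the summary tells the study team where to look.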
Another less obvious metric is the certification status of staff who have performed study procedures: for example, a report that shows how often a particular study procedure was performed by someone who was not certified by the study to perform it. Maybe it is appropriate that this happens occasionally, because staff were away and someone had to complete the procedure, but keeping track of it in reporting can help ensure that it does not become a threat to the study's quality. Here I show an example of a table of contents from a performance report that was included in a much larger report to the steering committee of a study, produced on a regular basis: this report was compiled once every quarter, or every three months, and shared with the entire study team. Each of these elements showed information overall and by site. An example of a full performance report, redacted for the particular sites that participated in that study, is in the reading materials associated with this module.
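The certification check described above is essentially a join between a procedure log and a certification roster. A minimal sketch, with hypothetical staff identifiers, procedures, and data:

```python
def uncertified_procedures(procedure_log, certifications):
    """Flag procedures performed by staff not certified for them.
    procedure_log: list of (staff_id, procedure) events
    certifications: staff_id -> set of procedures the study has
    certified that person to perform. All names are hypothetical."""
    return [
        (staff, proc) for staff, proc in procedure_log
        if proc not in certifications.get(staff, set())
    ]

certs = {"tech_01": {"spirometry", "ecg"}, "tech_02": {"ecg"}}
log = [
    ("tech_01", "spirometry"),
    ("tech_02", "spirometry"),  # not certified for this procedure
    ("tech_02", "ecg"),
]
print(uncertified_procedures(log, certs))
```

Counting these events per site over each reporting period turns an occasional, possibly justified exception into a trend the study team can see and act on.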