How do you identify where your stewardship program is doing well and where there are opportunities for improvement, and how can you tell what sort of impact your stewardship program efforts have made? Collecting and reporting stewardship metrics is not just a matter of good practice, but will soon be a requirement of government and regulatory bodies. We're going to focus on two questions in this section. The first is what your stewardship program should measure. There are many potential areas you could measure to assess your stewardship program, but two are the focus of recent guidelines. Then we'll spend the rest of the time discussing the specifics of how to perform these measurements by working through examples.

The world of antimicrobial stewardship has finally been noticed by key regulators of healthcare system quality, including the Joint Commission and the Centers for Medicare and Medicaid Services. These groups have recently promulgated guidelines for antimicrobial stewardship activities at participating institutions. These guidelines, along with recent ones from the Infectious Diseases Society of America, address issues of measurement and reporting in antimicrobial stewardship. The two common elements across these guidelines are recommendations to measure the aggregate use of antimicrobials and the patterns of antimicrobial resistance. In future years, these data elements may even become publicly reportable, so let's focus on these two metrics.

The first step in measuring antimicrobial use is determining what you will actually be measuring. Most clinicians would consider use of antimicrobials to be administration of a drug to a patient, but it turns out that most studies that report antimicrobial use are actually measuring a different step in the antimicrobial use process: from as far removed as the drug being purchased by the pharmacy, through the various steps of ordering and delivering the drug, to steps that occur after the use has actually occurred, such as billing data. It's important to know which step in the process you are measuring, since the data may exist in different places depending on the step, and because comparisons are most valid when performed at the same step.

After you decide the step and source of your data, you'll want to consider what numbers you actually want to measure. This depends, not surprisingly, on what you want to know. If you want to know how often patients are getting any antibiotic, or a particular antibiotic, you'll be interested in prevalence: either point prevalence at a particular time, such as ICU admission, or period prevalence, for example over the course of an admission. If you're less interested in the start of antibiotics and more in their finish, you can examine the mean or median duration of antibiotics, for all causes or for a particular infection. Obviously, both of these contribute to the total amount of antibiotic use, and so a commonly used metric is the incidence density rate of defined daily doses or days of therapy per thousand patient days. Adjusting for patient days allows comparisons between time periods and across institutions and services with different numbers of patients and different lengths of stay.
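To make the patient-day adjustment concrete, here is a minimal Python sketch of the incidence density calculation described above. The function name and the counts are invented purely for illustration; they are not part of any standard reporting tool.

```python
def usage_rate_per_1000_patient_days(total_use, patient_days):
    """Return antimicrobial use (DDDs or DOTs) per 1,000 patient days."""
    return total_use / patient_days * 1000

# Hypothetical example: a service with 420 days of therapy
# over 3,800 patient days in one quarter.
days_of_therapy = 420
patient_days = 3800
print(round(usage_rate_per_1000_patient_days(days_of_therapy, patient_days), 1))
# -> 110.5 DOT per 1,000 patient days
```

The same calculation works whether the numerator is Defined Daily Doses or Days of Therapy; only the label on the result changes.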
There are two primary metrics for measuring aggregate antimicrobial use, which are meant to allow valid comparisons across institutions and over time. Historically, the Defined Daily Dose methodology, which is endorsed by the World Health Organization, has been the most common metric reported in the literature. However, DDDs, as they are called, are very sensitive to a number of influences, including the mix of pediatric patients, patients with renal dysfunction, the use of milligram-per-kilogram dosing, and trends in the dosing of antibiotics. The end result can be substantial and unstable variability between the number of doses actually given and the DDD results. Days of Therapy is an alternative metric that is less sensitive to these influences. It is the methodology endorsed as the standard metric by the CDC's antimicrobial utilization module, and it will likely be the primary metric in the US going forward.

Defined Daily Doses can be measured from a variety of data sources, and involve summing the total grams of drug used during the period of interest and dividing by a number set by the World Health Organization representing an average, or defined, daily dose. Days of Therapy involves summing the total number of days on which a patient received any number of doses of a drug. Both are then adjusted for some measure of time at risk, such as patient days, bed days, admissions, et cetera. These numbers are then typically multiplied by 1,000 simply to avoid small fractions.

This table is an example, at the level of an individual patient, of how the two measures, Defined Daily Doses and Days of Therapy, are calculated. For DDDs, the total number of grams of drug is summed and divided by the Defined Daily Dose, which is supposed to be the standard dose given. For Days of Therapy, you simply count the number of discrete days on which the drug was given. Depending on the drug, the dose given, and the WHO's definition of a daily dose, sometimes the DDDs and DOTs give the same answer, as illustrated in this case by vancomycin and moxifloxacin. Sometimes they don't, as illustrated by ampicillin/sulbactam and cefazolin. So, while either can be a valid measure, they really shouldn't be compared to each other.
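As a rough illustration of how the two calculations can diverge at the patient level, here is a small Python sketch, assuming a made-up set of administration records. The drug names, doses, and DDD reference values are placeholders for illustration only; in practice the DDD denominators come from the WHO ATC/DDD index.

```python
from collections import defaultdict

# Hypothetical administration records for one patient: (hospital day, drug, grams given).
# Doses and DDD reference values are placeholders, not real WHO values.
administrations = [
    (1, "drug_a", 4.0), (2, "drug_a", 4.0), (3, "drug_a", 4.0),  # 3 days at 4 g/day
    (1, "drug_b", 1.0), (2, "drug_b", 1.0),                      # 2 days at 1 g/day
]
ddd_reference_grams = {"drug_a": 2.0, "drug_b": 1.0}  # assumed defined daily doses

def defined_daily_doses(records, reference):
    """Sum the grams of each drug and divide by that drug's defined daily dose."""
    grams = defaultdict(float)
    for _day, drug, g in records:
        grams[drug] += g
    return {drug: total / reference[drug] for drug, total in grams.items()}

def days_of_therapy(records):
    """Count the discrete calendar days on which each drug was given at all."""
    days = defaultdict(set)
    for day, drug, _g in records:
        days[drug].add(day)
    return {drug: len(d) for drug, d in days.items()}

print(defined_daily_doses(administrations, ddd_reference_grams))  # {'drug_a': 6.0, 'drug_b': 2.0}
print(days_of_therapy(administrations))                           # {'drug_a': 3, 'drug_b': 2}
```

For drug_b the two metrics agree, but for drug_a, which was dosed at twice its defined daily dose, the DDD count is double the DOT count, which is exactly the kind of divergence the patient-level table illustrates.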
Next, let's discuss measuring antimicrobial resistance. Now, at a particular institution, the Infection Control Department or the Microbiology laboratory may have assumed responsibility for aggregating and reporting antimicrobial resistance data. However, it is essential for the Antimicrobial Stewardship Program to understand how the data were gathered, filtered, and reported. As we'll see, decisions about which isolates to include can have substantial impacts on an institution's apparent level of antimicrobial resistance.

So, let's talk about what we mean when we talk about aggregate antimicrobial resistance, with a very simple example looking at the results of culture and susceptibility testing for four patients and a single drug. There are two primary ways to quantify these data in the aggregate. The first is to express them as a period prevalence: the percentage of resistant or susceptible isolates over a defined time period. This is most useful for clinicians selecting empiric therapy based on past patterns of susceptibility, as on an institutional antibiogram. Alternatively, resistance can be expressed as a rate: the number of resistant isolates divided by the number of admissions. In large data sets, this is typically multiplied by a thousand. This is most useful for surveillance, as it captures the number of infections as well as the burden of resistance. Because period prevalence is most commonly used in the context of Antimicrobial Stewardship Programs, we'll concentrate on that here.

A key consideration in understanding the reporting of aggregate antimicrobial resistance is the variety of rules that may be applied to these reports before the end users see them. It is rare that reports of aggregate antimicrobial susceptibility simply reflect the sum total of all bacterial specimens tested by the microbiology laboratory. Whenever you're reviewing an aggregate antimicrobial susceptibility report, it's important to ask which rules were applied to the data. Common examples include the inclusion of only isolates submitted from inpatient locations, which leaves the emergency department as a gray area in terms of whether or not its isolates should be included in an inpatient antibiogram; the exclusion of isolates taken for surveillance purposes only, for example, nasal swabs for MRSA colonization; the use of only the first isolate per patient per time period for inclusion in the antibiogram, which we'll talk more about later; and, finally, the use of very specific location data, like ICU-only isolates.

To attempt to standardize the application of these various rules and reporting formats, the Clinical and Laboratory Standards Institute (CLSI) has developed a set of recommendations for reporting aggregate antimicrobial susceptibility data in antibiograms. For example, it's recommended that reports be aggregated on a yearly basis. More frequent reports are possible, but for meaningful conclusions, there should be a minimum number of isolates in the time period. Only routinely tested antibiotics should be included, to avoid the bias introduced when only a subset of drugs is reported. There can be substantial differences in results based on whether the isolates represent inpatient, ICU, or outpatient isolates, so the source of the isolates in an antibiogram should be clearly described. Similarly, in some cases there may be value in reporting specific infection sites, such as urine, which should also be specified in the antibiogram. The report should reflect cultures taken for diagnostic purposes only, and be quantified as percentage susceptible rather than percentage resistant. Finally, patients may contribute multiple isolates of the same organism over time. The recommendation is to include only the first isolate that each patient contributes of a particular organism over the time period under study.

Let's take a further look at why this last point is important. Consider a small group of patient isolates collected over a one-month time period, all of the same organism, maybe something like Pseudomonas. Some patients have multiple isolates of the organism over the time period. If the susceptibility results from all of the isolates were included, we'd find that 50 percent of the isolates were susceptible. But some of the patients contribute multiple isolates, which can bias the number if the goal is to get the best idea of how to select empiric therapy for a new patient. Applying rules that exclude repeat isolates within a given time window, such as seven days or 30 days, can give conflicting results depending on the window chosen. Thus, the CLSI recommends including only the first isolate per patient per time period for aggregate reports meant to guide empiric therapy. In this case, you can see a substantial difference in the reported susceptibility compared to when all isolates are included.
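To show how the first-isolate rule can change the headline number, here is a small Python sketch using invented isolates of a single organism over one month. The patients, dates, and results are hypothetical; the 50 percent all-isolates figure is chosen only to mirror the example above.

```python
from datetime import date

# Hypothetical isolates of a single organism: (patient, collection date, susceptible?).
isolates = [
    ("pt1", date(2023, 6, 2), True),
    ("pt1", date(2023, 6, 9), False),   # repeat isolate from the same patient
    ("pt1", date(2023, 6, 20), False),  # repeat isolate
    ("pt2", date(2023, 6, 5), True),
    ("pt3", date(2023, 6, 12), False),
    ("pt4", date(2023, 6, 25), True),
]

def percent_susceptible(records):
    """Percentage of the supplied isolates reported as susceptible."""
    return 100 * sum(1 for _, _, s in records if s) / len(records)

def first_isolate_per_patient(records):
    """Keep only the earliest isolate from each patient in the reporting period."""
    earliest = {}
    for patient, collected, susceptible in sorted(records, key=lambda r: r[1]):
        earliest.setdefault(patient, (patient, collected, susceptible))
    return list(earliest.values())

print(round(percent_susceptible(isolates)))                             # 50, all isolates
print(round(percent_susceptible(first_isolate_per_patient(isolates))))  # 75, first isolates only
```

With every isolate counted, the repeat cultures from a single patient drag the percent susceptible down; once only first isolates are kept, the figure better reflects what a clinician choosing empiric therapy for a new patient would face.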
In summary, the place to start with antimicrobial stewardship metrics is making sure you can collect and report two measures: aggregate antimicrobial use and aggregate antimicrobial susceptibility. In terms of the best methodologies for doing this, Days of Therapy is becoming preferred over Defined Daily Doses for measuring aggregate antimicrobial use, and methods of reporting aggregate antimicrobial susceptibility should conform to the guidelines put out by CLSI. Acquiring and manipulating these data can consume a substantial amount of time, so a key component of a successful stewardship program is the funding and personnel necessary to report out this information. In part two, we'll discuss how to put the data that you've gathered into context for actionable use.