Activity 6: Establish performance measures/outcomes/system scorecard.
Performance measures are tools for managing the performance of an agency, organization, or even an entire system. They provide benchmarks that indicate whether the criminal justice system (and the entities within it) is achieving optimum performance and, more importantly, whether the system is achieving what it intends to achieve under the evidence-based decision making (EBDM) framework. The use of performance measures provides a way to understand, quantitatively, the business processes, products, and services in the justice system. In a nutshell, performance measures help inform the decision making process by ensuring that decisions are based on clearly articulated, objective indicators. Moreover, undertaking and institutionalizing performance measurement throughout the criminal justice system allows policy discussions and decisions to be "data-driven," which in turn helps build the foundation for additional local evidence about what works.
In general, performance measures for the justice system fall into four categories:
Performance measurement is often confused with program evaluation because both attempt to capture quantitative information about desired goals and outcomes. Some key differences should be noted. First, program evaluation involves the use of specific research methodologies to answer select questions about the impact of a program. Performance measurement, on the other hand, is simply the articulation of performance targets and the collection/analysis of data related to these targets. Second, program evaluation is designed to establish causal relationships between activities and observed changes while taking into account other factors that may have contributed to or caused the changes. On the other hand, performance measurement simply provides a description of a change, but cannot be used to demonstrate causality. Third, program evaluations are usually one-time studies of activities and outcomes in a set period of time, whereas performance measurement is an ongoing process.
As you begin the process of defining performance measures, there are seven rules that need to be kept in mind. Performance measures should be
This starter kit is designed to help jurisdictions understand performance measures and to provide a guide for the development and implementation of performance measures systemwide. Information about the key steps in performance measurement is provided in addition to sample performance measures. It is important to note, however, that performance measures should be locally defined and driven; as such, the sample measures may or may not be relevant in a specific jurisdiction, depending on the focus of the local initiative. Finally, tips are offered for the implementation and use of performance measures.
Development of performance measures should involve a variety of stakeholders. At a minimum, the leadership of the various components of the justice system, along with some line-level representatives, should be part of the process. The leadership can provide the broad systemic perspective about how the system should be performing under an EBDM initiative and how each agency/entity within the justice system contributes to overall system performance. The inclusion of line personnel, however, provides a different level of detail and, to some extent, a reality check about how the system is currently performing and what its capacity for performance is. Participants should also include representation from groups that have an interest in the justice system—city/county government budget officers and managers, health/mental health treatment providers, etc. The community and the media can also be important stakeholders to include as, ultimately, it is through these groups that performance is communicated and legitimacy is established. The point is that for performance measures to have validity (not necessarily in the statistical sense), they must be meaningful for others who judge the performance of the system.
Jurisdictions may wish to consider engaging an outside facilitator with experience in performance measurement to provide guidance and assistance through the process. Local universities are an excellent resource for finding this kind of assistance.
To develop and implement performance measures, the stakeholders identified above should undertake four key steps:
Detailed guidance for each of these steps is provided below.
The first step for articulating performance measures is to define what is meant by "optimum performance," i.e., establishing harm reduction goals and objectives for the criminal justice system. Several questions can help focus the discussion on what the jurisdiction hopes to accomplish:
The answers to these questions then need to be articulated as quantifiable goals and objectives. It is important to understand that goals and objectives are not synonymous. Goals represent the desired end result of the system. Objectives define the short-term indicators that demonstrate progress toward goal attainment and that describe who or what will change, by how much, and over what period of time. For example, broadly stated, one goal might be that the recidivism rate be no higher than 20%. An objective might be a 5% annual decrease in the percentage of offenders who commit new offenses in a three-year period.
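The relationship between a short-term objective and a long-term goal can be made concrete with a little arithmetic. The sketch below assumes a hypothetical baseline recidivism rate of 28% and treats the 5% annual decrease as a relative decrease; both assumptions are for illustration only and are not part of the example above.

```python
# Hypothetical illustration: how a short-term objective (a 5% relative
# decrease in the recidivism rate each year) accumulates toward a
# long-term goal (a rate no higher than 20%). The 28% baseline is an
# invented figure for illustration.
baseline_rate = 28.0    # hypothetical current recidivism rate (%)
goal_rate = 20.0        # goal: recidivism rate no higher than 20%
annual_decrease = 0.05  # objective: 5% relative decrease per year

rate = baseline_rate
years = 0
while rate > goal_rate:
    rate *= (1 - annual_decrease)
    years += 1

print(f"Goal reached after {years} years (rate = {rate:.1f}%)")
# → Goal reached after 7 years (rate = 19.6%)
```

Framing the objective this way also shows why the SMART principle (discussed next) matters: whether "5% decrease" means five percentage points or a 5% relative change dramatically alters how long goal attainment takes.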
Another important consideration in defining goals and objectives is adherence to the SMART principle:
Once goals and objectives have been defined, the stakeholders should compare them to the impacts and outcomes identified in the system-level logic model. Each goal and objective should align with the intended impacts and outcomes articulated in the logic model. Although there does not need to be complete overlap, there should be no contradictions.
The second step in defining performance measures encompasses a number of activities:
Well-articulated goals and objectives should lend themselves nicely to the identification of key indicator data. Using the worksheet in Appendix 1, jurisdictions will need to "break down" the goals and objectives into specific types of data that can be collected. Using the example from Step 1 above, the table below shows the goal, the objective, and the types of indicator data that are needed to measure performance:
Goal: Our jurisdiction will have a recidivism rate of less than 20%.
Objective: A 5% annual decrease in the percentage of offenders who commit new offenses in a three-year period.
As indicator data are being identified, jurisdictions should note whether the data already exist; if so, they should identify who "owns" the data and, if not, they should determine whether the capacity for obtaining the data exists. To the extent that data are not already being collected or the capacity to collect them does not exist, consideration should be given to the relative importance of the indicator. The next step in the process will help refine the list of performance measures.
An ideal performance measurement system must be manageable; as such, the number of performance measures for each goal and objective should be limited. Generally, there should be no more than three or four measures per goal or objective and, in fact, there may be fewer. Jurisdictions should aim to select those measures that are the strongest indicators of performance for which data already exist or for which the capacity for the data to be collected is in place. In refining the list, it is important to consider the following seven questions:
Using Information Dashboards To Make Law Enforcement Decisions
Law enforcement has long understood the importance of routine performance measurement. By using the "dashboard" approach—that is, putting a spotlight on key information on a routine basis—law enforcement agencies around the country are using data to assess performance and adjust activities based on key outcome measures.
Police Chief Bence Hoyle, of Cornelius, North Carolina, states that such dashboards should
- identify and disseminate information about criminal activity to facilitate rapid intervention;
- identify and disseminate information about crime to assist in long- and short-term strategic solutions;
- allow agencies to research the key incident data patterns, such as modus operandi, repeat offender locations, or other related information, such as traffic stops near the scene, so suspects can quickly be identified;
- provide data on the effectiveness of specific tactics, in near real-time, through focused views; and
- support the analysis of workload distribution by shift and geographic area.
For more information, see "Dashboards Help Lift the 'Fog of Crime'" at http://www.theomegagroup.com/press/articles/dashboards_help_lift_the_fog_of_crime.pdf.
The question of performance targets is a particularly important one and requires more than a simple "yes/no" answer. As the list of measures is refined, jurisdictions should begin thinking in terms of what the specific performance targets should be. In other words, what is the "magic number" that demonstrates optimum performance? For example, if the intent is to implement pretrial risk assessments in order to decrease jail operating costs, the performance target might be that 90% of release decisions are consistent with assessment results. The logic model may provide some guidance in answering this question.
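A performance target of this kind is easy to check once the underlying data are in hand. The sketch below uses invented sample data (the paired lists of recommendations and decisions are hypothetical, not a real caseload) to show how the 90% consistency target from the example above might be computed.

```python
# Hypothetical sketch: measuring the share of pretrial release decisions
# that were consistent with the risk assessment recommendation, against
# a 90% performance target. The paired lists are invented sample data.
recommended = [True, True, False, True, False, True, True, False, True, True]
released    = [True, True, False, False, False, True, True, False, True, True]

# A decision is "consistent" when it matches the recommendation.
consistent = sum(r == d for r, d in zip(recommended, released))
consistency_rate = consistent / len(recommended)
meets_target = consistency_rate >= 0.90

print(f"consistency: {consistency_rate:.0%}, target met: {meets_target}")
# → consistency: 90%, target met: True
```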
Because performance measurement is an ongoing process, it is important to have a well-defined data collection plan in place prior to the actual collection of data. As shown in Appendix 2, the data collection plan should include the following:
Once the data collection plan has been agreed upon by the key stakeholders and the agencies/persons that will be responsible for collecting the data, the jurisdiction should collect baseline data for each performance measure against which progress can later be measured.
It is rare that the data in raw form will be sufficient for assessing performance; quantitative analysis of the data is generally needed. The quantitative analysis will require basic statistical calculations such as ratios, percentages, percent change, and averages (mean, median, or mode). In some instances, depending on the measures selected, more complex statistics will be necessary and may require the involvement of persons with statistical analysis experience. Employees in the city/county manager's offices may be resources, or even employees within criminal justice agencies that have analysis units. Local universities are also good resources for statistical analyses.
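For the basic calculations named above, no specialized software is required; a spreadsheet or a few lines of a general-purpose language will do. The sketch below uses Python's standard library on a hypothetical sample of pretrial jail stays (the figures are invented for illustration).

```python
# Minimal sketch of the basic statistics named above (percentages,
# percent change, mean, median, mode) using Python's standard library.
# The jail-days figures are hypothetical sample data.
from statistics import mean, median, mode

# Hypothetical pretrial jail days for a sample of defendants
jail_days = [3, 5, 5, 8, 12, 5, 20, 9]

print("mean:", mean(jail_days))      # 8.375
print("median:", median(jail_days))  # 6.5
print("mode:", mode(jail_days))      # 5

# Percentage: share of defendants released within 7 days
released_within_7 = sum(1 for d in jail_days if d <= 7) / len(jail_days)
print(f"released within 7 days: {released_within_7:.0%}")  # 50%

# Percent change: this year's average stay vs. last year's (hypothetical)
last_year_avg = 10.5
pct_change = (mean(jail_days) - last_year_avg) / last_year_avg * 100
print(f"percent change in average stay: {pct_change:+.1f}%")
```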
Once the performance data have been collected and analyzed, they should be reported to stakeholders in a clear and easily understood manner. Although there is no right or wrong way to report data, the following reporting formats should be considered:
Jurisdictions should also establish a regular mechanism for communicating and discussing performance that includes target dates for the release of information. Possible mechanisms include
The actual performance measures selected by the jurisdiction should reflect the goals and objectives that the stakeholders have identified as part of the EBDM initiative. The following list of possible performance measures is provided for illustrative purposes only:
- XX% of low risk arrestees cited and released
- XX% of defendants screened with a pretrial risk assessment tool
- No more than XX% of cases in which pretrial release decisions deviate from risk assessment results
- XX% of jail beds occupied by low risk defendants awaiting adjudication
- XX% of defendants/offenders with low risk assessment scores placed in diversion programs
- Risk assessment information provided to judges in XX% of cases
- XX% of cases in which sentencing conditions align with assessed criminogenic needs
- XX% of offenders placed in interventions specifically addressing assessed criminogenic needs
- XX% of offenders who commit new offenses in a three-year period
- XX% of victims who report satisfaction with the handling of their cases
1 Satisfaction can be measured on different levels but generally represents the satisfaction of justice system "consumers" such as victims, witnesses, and defendants. However, in certain instances, it may be desirable and important to measure satisfaction among those working in the justice system.
2 For more information on developing a scorecard, see 6b: Developing a Systemwide Scorecard.
Boone, H. N., Jr., & Fulton, B. (1996). Implementing performance-based measures in community corrections (NCJ 158836). National Institute of Justice Research in Brief. Retrieved from http://www.ncjrs.gov/pdffiles/perform.pdf
Boone, H. N., Jr., Fulton, B., Crowe, A. H., & Markley, G. (1995). Results-driven management: Implementing performance-based measures in community corrections. Lexington, KY: American Probation and Parole Association.
Bureau of Justice Statistics. (1993). Performance measures for the criminal justice system. Retrieved from http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentI.html
Dillingham, S., Nugent, M. E., & Whitcomb, D. (2004). Prosecution in the 21st century: Goals, objectives, and performance measures. Alexandria, VA: American Prosecutors Research Institute.
Hatry, H. P. (2007). Performance measurement: Getting results. Washington, DC: Urban Institute Press.
Hoyle, B. (2011). Dashboards help lift the 'fog of crime.' Retrieved from http://www.theomegagroup.com/press/articles/dashboards_help_lift_the_fog_of_crime.pdf
National Center for State Courts. CourTools. Retrieved from http://www.ncsconline.org/D_Research/CourTools/index.html
National Research Council. (2003). Measurement problems in criminal justice research. Washington, DC: National Academies Press.
Pennsylvania Commission on Crime and Delinquency, Office of Criminal Justice Improvements. Criminal justice performance measures literature review, calendar years 2000 to 2010.
Rossman, S. B., & Winterfield, L. (2009). Measuring the impact of reentry efforts. Retrieved from http://cepp.com/documents/Measuring%20the%20Impact.pdf