A Taxonomy of Barriers to Producing Performance Indicators on Human Service Programs

© Copyright 2012 by Derek Coursen, All rights reserved.

Introduction

Over the last few decades, human service programs have come under enormous pressure to measure their performance. The trend affects both government agencies and nonprofit service organizations. Funders now routinely expect their grantees to identify and track performance indicators, and an impressive infrastructure has been built up around that expectation: graduate courses, specialized consulting firms, and a growing body of academic and professional literature. Most of the available guidance aims at helping stakeholders answer two questions. Most basically, what should be measured? And once measures have been obtained, how can they be used for management? But between that starting point and that end point there is a peculiar gap: the not-so-minor matter of actually producing the numbers.

Performance measurement projects typically begin with a group process to decide on a list of desired indicators. Stakeholders may create a logic model representing the program as a flow from inputs through outputs to outcomes, or they may use less formal methods. Either way, the group ends up with a notion of its program as a theoretical system that it wants to measure in specific ways. But at that point, every project must confront the feasibility of obtaining real data about the real-life system. That can be a difficult moment. The sad fact is that in most performance measurement projects, only a small proportion of the desired indicators, usually between 20% and 40%, will be feasible to produce.

Why is this so? There is a confusing and far-ranging muddle of barriers. They can have to do with everything from the structure of existing information systems to problems in administering surveys, from relationships between autonomous agencies to the human realities of a program’s day-to-day operations. Because there is such a grab-bag of problems, they are difficult to make sense of as a whole; at present, there is no structured tool to help people find their way through the morass.

To shed brighter light on the barriers, it will be helpful to organize them in a taxonomy. Traditionally, a taxonomy is a listing which, from the top down, logically divides a generic concept into the more specific concepts of which it is composed.[1] The items in a taxonomy should, according to the classic rules, be mutually exclusive, jointly exhaustive, and chosen according to the same rationale. Building a taxonomy for this problem might seem like analytical overkill. In fact, it can be fruitful in several ways.

First, proposing a taxonomy is an invitation to others to help develop a shared understanding of what has so far remained murky. Others may look at the list, find it incomplete or otherwise limited according to their experience, and contribute to improving it. Second, a taxonomy can become a tool for practitioners. It can be used as a check-list of possible barriers to assess, and can point to the specific kinds of work that project teams would need to do in order to produce indicators. Finally, a taxonomy could raise awareness of systemic patterns and thus point toward areas where the human service sector as a whole might do things differently.

A Taxonomy of Barriers

Below is a proposed taxonomy. The items are intended to be mutually exclusive and jointly exhaustive. Production of any particular desired indicator may, of course, be obstructed by more than one barrier at the same time, and often is. (For example, a necessary data element might be stored in a non-digital format and be collected optionally and be of doubtful accuracy.) The highest-level distinction is between cooperation and cost: the taxonomy assumes that the barriers grouped under cost, unlike those grouped under cooperation, could in principle be overcome with unlimited resources for extra personnel, more software development and the like. At the lower levels, each item represents a distinct aspect of the organizational or data environment that would need to be addressed in order for the indicator to become feasible. The depth of the taxonomy could be extended to further levels of detail. A minimal sketch of how the taxonomy might be encoded as a practitioner’s checklist follows the list.

1.0    COOPERATION
1.1    Data elements cannot be collected because of ethical concerns
1.2    Clients or other informants are unavailable or decline to answer questions
1.3    Organization controlling data store will not share it
1.3.1    Legislative or administrative prohibition
1.3.2    Leadership unwillingness
2.0    COST
2.1    Data on the topic of the indicator is not collected at all
2.2    Data on the topic of the indicator is stored in a non-digital format
2.3    Data on the topic of the indicator is embedded in unstructured text
2.4    Data on the topic of the indicator is organized in a structure that cannot support the indicator
2.4.1    Definition of entity or attribute does not support indicator
2.4.2    Domain of values does not support indicator
2.4.3    Relationships between entities do not support indicator
2.5    Data on the topic of the indicator is not collected consistently for all necessary cases
2.5.1    Data collection is optional
2.5.2    Compliance with required data collection is spotty
2.6    Data collected is of doubtful accuracy
2.7    Indicator requires merging client/service records from different data stores
2.8    Indicator requires components from data stores controlled by different organizations
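
As noted earlier, one use of the taxonomy is as a practitioner’s checklist. The sketch below is a minimal illustration, in Python, of one way the taxonomy might be encoded for that purpose; the dictionary layout, the assess() helper and the example assessment are illustrative assumptions rather than part of the taxonomy itself.

    # A minimal sketch of the taxonomy as a practitioner's checklist.
    # Keys mirror the outline numbers above; assess() and the example call
    # are illustrative assumptions, not part of the taxonomy itself.
    BARRIERS = {
        "1.1": "Data elements cannot be collected because of ethical concerns",
        "1.2": "Clients or other informants are unavailable or decline to answer questions",
        "1.3.1": "Controlling organization: legislative or administrative prohibition",
        "1.3.2": "Controlling organization: leadership unwillingness",
        "2.1": "Data on the topic is not collected at all",
        "2.2": "Data is stored in a non-digital format",
        "2.3": "Data is embedded in unstructured text",
        "2.4.1": "Definition of entity or attribute does not support the indicator",
        "2.4.2": "Domain of values does not support the indicator",
        "2.4.3": "Relationships between entities do not support the indicator",
        "2.5.1": "Data collection is optional",
        "2.5.2": "Compliance with required data collection is spotty",
        "2.6": "Data collected is of doubtful accuracy",
        "2.7": "Indicator requires merging client/service records from different data stores",
        "2.8": "Indicator requires components from data stores controlled by different organizations",
    }

    def assess(indicator, barrier_codes):
        """Record which barriers obstruct a desired indicator (several may apply)."""
        unknown = [code for code in barrier_codes if code not in BARRIERS]
        if unknown:
            raise ValueError(f"Unknown barrier codes: {unknown}")
        return {"indicator": indicator,
                "barriers": {code: BARRIERS[code] for code in barrier_codes}}

    # Example: one indicator blocked at the same time by non-digital storage,
    # optional collection and doubtful accuracy (barriers 2.2, 2.5.1 and 2.6).
    assessment = assess("% of positive screenings that are successfully linked to services",
                        ["2.2", "2.5.1", "2.6"])
    print(assessment)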

Implications for Data Administration

Leaving aside the many organizational and management issues at play, the taxonomy points to a couple of ways in which approaches to data administration can help make performance measurement more feasible.

Several barriers have to do with interagency cooperation and merging data. This is because certain indicators (e.g., “% of program participants who are not rearrested within a year”) tend to require data from more than one store. Merging data has long been a challenge for formal social science research, and it can be even more of a barrier for performance measurement projects, which often have few resources allocated to them. But the trend toward building cross-agency data warehouses for multiple purposes means that government, at least, is developing greater capacity to produce indicators from merged data.[2] At the same time, the advent of the National Information Exchange Model (NIEM) is enabling government agencies to share data with each other bilaterally more than they have in the past. Nonprofit service providers, unfortunately, have generally not yet begun to use data warehousing technology, nor do they usually have easy access to data from government agencies and other nonprofits. This gap could be filled through collective effort among funders to develop common formats for collecting and storing data from grantees.[3]
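
As a concrete illustration of the merging problem, the sketch below computes the rearrest indicator mentioned above from two hypothetical extracts: program enrollments and arrest records held by a partner agency. It is a minimal sketch, assuming exact matches on name and date of birth and a 365-day window; the column names are invented, and real projects typically need probabilistic matching and careful cleaning of identifiers.

    import pandas as pd

    # Hypothetical extracts. Column names and the exact-match linkage rule are
    # illustrative assumptions; real projects usually need probabilistic matching
    # and extensive cleaning of names and dates.
    enrollments = pd.DataFrame({
        "first_name": ["ana", "luis", "mei"],
        "dob": pd.to_datetime(["1990-02-14", "1985-07-01", "1993-11-30"]),
        "exit_date": pd.to_datetime(["2011-03-01", "2011-05-15", "2011-06-20"]),
    })
    arrests = pd.DataFrame({
        "first_name": ["luis", "omar"],
        "dob": pd.to_datetime(["1985-07-01", "1979-01-09"]),
        "arrest_date": pd.to_datetime(["2011-09-10", "2011-04-02"]),
    })

    # Link the two stores on the shared identifying fields.
    linked = enrollments.merge(arrests, on=["first_name", "dob"], how="left")

    # Flag arrests falling within a year (approximated as 365 days) of program exit.
    rearrested = (
        linked["arrest_date"].notna()
        & (linked["arrest_date"] > linked["exit_date"])
        & (linked["arrest_date"] <= linked["exit_date"] + pd.Timedelta(days=365))
    )

    # A participant counts as rearrested if any linked arrest falls in the window.
    per_person = rearrested.groupby([linked["first_name"], linked["dob"]]).any()
    pct_not_rearrested = 100 * (1 - per_person.mean())
    print(f"% of participants not rearrested within a year: {pct_not_rearrested:.1f}%")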

There is also a particularly frustrating kind of situation in which a program expends great effort collecting data on a particular topic in an information system, yet still cannot produce the needed indicators because of the way the data is structured. For example, many programs screen clients for particular conditions and then refer them to specialized services elsewhere, so “% of positive screenings that are successfully linked to services” is commonly of interest. Yet even such a seemingly simple indicator can be difficult to produce: dropdown lists describing the referral may use sloppy categorization schemes, and in some databases the screening and referral records cannot even be easily linked with each other. This illustrates a widespread problem: human service concepts tend to be loosely defined and myopically framed, and thereby lay a poor foundation for designing information systems. The solution is for the sector to collectively develop clear and holistic conventions for modeling its environment.[4] Until the sector has strong domain models, poorly conceived data structures upstream will continue to undermine performance measurement efforts.
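
For contrast, the sketch below shows how readily that indicator falls out of a well-structured database. It assumes a hypothetical schema in which each screening has its own identifier and each referral carries a screening_id foreign key plus a controlled flag recording whether the client was linked to the service; the table and column names are illustrative assumptions, not a prescription for any particular system.

    import pandas as pd

    # Hypothetical, well-structured extracts: screenings and referrals are separate
    # records joined by an explicit foreign key. All names here are assumptions.
    screenings = pd.DataFrame({
        "screening_id": [1, 2, 3, 4],
        "result": ["positive", "positive", "negative", "positive"],
    })
    referrals = pd.DataFrame({
        "referral_id": [10, 11],
        "screening_id": [1, 4],
        "linked_to_service": [True, False],
    })

    # Join positive screenings to their referral outcomes via the foreign key.
    positive = screenings[screenings["result"] == "positive"]
    joined = positive.merge(referrals, on="screening_id", how="left")

    # A positive screening counts as successfully linked if at least one of its
    # referrals ended in a confirmed service linkage; screenings with no referral
    # count as not linked.
    joined["linked_to_service"] = joined["linked_to_service"].fillna(False).astype(bool)
    linked = joined.groupby("screening_id")["linked_to_service"].any()
    print(f"% of positive screenings linked to services: {100 * linked.mean():.0f}%")

The point is not the few lines of code but the data model behind them: once screenings and referrals are distinct, linkable entities with controlled values, the indicator becomes a trivial query.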

References:

  1. Chisholm, M. (2012) “The Celestial Emporium of Benevolent Knowledge”, Information Management (February 17). Available at http://www.information-management.com/newsletters/IM-data-classification-taxonomy-Borges-Chisholm-10021966-1.html [Accessed 22 July 2012]
  2. Goerge, R. M. and Lee, B. J. (2002) “Matching and Cleaning Administrative Data” in M. Ver Ploeg, R.A. Moffitt and C.F. Citro (Eds.), Studies of Welfare Populations: Data Collection and Research Issues. National Research Council. Available at http://aspe.hhs.gov/hsp/welf-res-data-issues02/ [Accessed 22 July 2012]
  3. Coursen, D. (2011) “A Route to Streamlined Performance Reporting and Stronger Information Systems: Nonprofit Human Service Agencies and the National Information Exchange Model”, Data Administration Newsletter (October). Available at http://www.tdan.com/view-articles/15551 [Accessed 22 July 2012]
  4. Coursen, D. (2012) “Why Clarity and Holism Matter for Managing Human Service Information (And How the Sector Can Achieve Them)”, Data Administration Newsletter (April). Available at http://www.tdan.com/view-articles/15967 [Accessed 22 July 2012]

Derek Coursen

Derek Coursen develops information systems strategy and data architecture for public service organizations. He has led informatics departments at two major nonprofit agencies in NYC and has been adjunct faculty at NYU’s Wagner School of Public Service. Derek holds master’s degrees in information science (Pratt Institute), information systems (Pace University) and management (NYU). He can be contacted via Derek Coursen Consulting LLC or LinkedIn.
