By Robin Ghertner, MPP
Why You Should Assess Your Language Access Program
The most successful programs are the ones that regularly assess their progress toward their goals. This assessment is important for the success of a language access program because it can inform decisions about resource allocation, program improvement, and growth. There are three broad steps to measuring the progress of your program: selecting measures; collecting data on your measures; and using your measures.
Step 1: Select Appropriate Measures
The most important decision to make is what you will actually measure. Take care in selecting a small number of relevant measures - having too many can be just as bad as having too few. In general, measures fall into three categories: Inputs are those things that go into making your program work, such as finances, staff, technology, contractors, and volunteers. Outputs are those things that are directly produced by your program, such as translated documents, interpreted conversations, and interpreted workshops. Outcomes are the results of your outputs, for example, increased involvement or participation by Limited English Proficient (LEP) clients, or decreased complaints by LEP clients.
There are benefits and drawbacks to measuring each one of these, and what may work best is a combination of them. Inputs are often the easiest to measure - it is simple to count how many bilingual staff you have or how many staff are dedicated to managing your language access program - but that data may not convey meaningful information about the program's progress. In other words, having resources doesn't necessarily lead to effective service delivery. Outputs are the most direct to measure since they are what your program produces, but doing more doesn't always translate to reaching your goals. For example, an increase in the number of documents translated or more frequent use of interpreters does not provide information on the quality or impact of your agency's translation and interpretation program. Measuring outcomes gets at those goals, but because a number of factors outside the control of your program influence outcomes, outcome measures may reflect more than just your program's effectiveness.
The measures chosen need to cover the full range of the services provided. This means they need to cover both the depth and breadth of language access. Depth refers to how much service is provided, for example how many documents are translated or how many hours of interpretation are provided. Breadth refers to how many ways services are provided, for example the range of languages served or the variety of ways services are provided, such as translation of documents, in-person interpretation, interpretation over the telephone, and provision directly in the foreign language. The table below gives examples of good measures I have seen for language access programs that cover depth, breadth, and both.
Table 1. Program Elements Used to Create Performance Metrics
Category | Depth | Breadth | Depth and Breadth
Step 2: Collect Data on Your Measures
After you have selected what to measure, you have to design a process for incorporating data collection into your assessment program. There are two aspects to collecting data on your measures: how you measure them, and how often you measure them.
The how depends on the measure. Some data may come from existing databases within your agency. Hours of staff time may already be collected in a human resources database, budgetary expenditures are probably tracked by a budget analyst, and participation in other activities may be collected on sign-in sheets. Other measures may require you to create data collection systems. If you don't already track the number of documents translated or the number of LEP individuals served by an interpreter, you will likely need to determine the best way to do that reliably. In many cases, you can write data collection requirements into contracts with external translation or interpretation vendors. See, for example, a recent Practitioner's Corner on writing effective Requests for Proposals and contracts, here.
If your agency relies largely on internal staff to provide interpretation and translation services, ask staff to keep track of and report to you the services they provide. Alternatively, some agencies responsible for providing language access have assigned an LEP coordinator, a key point of contact tasked with keeping track of and assessing data. In some cases, there could be a risk that staff may not provide reliable data if they perceive their pay or position could be affected by what they report. In that case, you may want to rely on LEP recipients of services to give you data for your measures through a short questionnaire, filled out after services have been provided. This questionnaire could also ask recipients about their satisfaction with services, capturing an outcome measure. For example, see the Georgia Department of Human Services' customer feedback form in Spanish and English.
Turning to how often you collect data, in general, the greater the frequency the better. For example, a weekly report gives you more opportunities to notice problems and identify their causes than a monthly report. Writing protocols into your agency’s strategic plan or annual work plans for how often data are to be collected is a good practice to ensure data are collected consistently.
Step 3: Use Your Measures to Inform Program Design Decisions
An important final step in developing an assessment program is designing a method for using the data you have collected. There are two steps to using your measures. First, you need to analyze the data, and second, you need to determine how you will use the analysis results. One way to look at your measures is across program areas. If your language access program spans multiple locations, you can use your measures to compare the sites to each other. This can help you determine where greater or fewer resources are used, and where they may be more or less effective.

Another way to analyze your measures is to look at them over time. To do so, you need a baseline, or starting point. Without some place to call the beginning, you can't tell if your services are increasing, staying the same, or decreasing. The first round of data you collect on your measures can be considered your baseline. The second round of data can then be compared to your first round, and your third round compared to your first and second, and so on. As you get more rounds of data to analyze, you may start to notice trends. Trends may go upward, downward, or they may be cyclical, which means they are up during certain periods (like during the beginning of a school year or calendar year) and down during others (during summer vacation or during the holiday season).
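For readers who track their measures in a spreadsheet or database, the baseline and cross-site comparisons described above can be done with a few lines of code. The sketch below uses entirely hypothetical site names and counts of interpreted sessions; it simply computes each round's percent change from the baseline round, by site.

```python
# A minimal sketch of baseline comparison across sites.
# All site names and counts here are hypothetical examples.

# Quarterly counts of interpreted sessions by site; Q1 is the baseline round.
rounds = {
    "Q1": {"Site A": 120, "Site B": 80},
    "Q2": {"Site A": 135, "Site B": 70},
    "Q3": {"Site A": 150, "Site B": 60},
}

baseline = rounds["Q1"]

def change_from_baseline(round_data, baseline):
    """Percent change of each site's measure relative to the baseline round."""
    return {
        site: round(100 * (value - baseline[site]) / baseline[site], 1)
        for site, value in round_data.items()
    }

for quarter, data in rounds.items():
    print(quarter, change_from_baseline(data, baseline))
```

Laying the results out this way makes both kinds of analysis visible at once: reading down a column shows a site's trend over time, while reading across a row compares sites within a single round.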
After your analysis is complete, you want to interpret what the results mean, look for causes for any differences, and then make appropriate decisions to alter your program. Interpreting the results can be difficult, and in doing so you need to keep in mind what your measures are actually measuring. It is a natural tendency to assume that if your measures go up, you're improving, and if they go down, you're getting worse. But that may not be the case at all. If you're measuring an output, such as the number of interpreted parent-teacher conferences, an increase literally means your program is doing more. That could mean you are providing more needed services to the community, which is a good thing. However, it could also mean your community is changing and there are more LEP parents requiring services. This type of information will have implications for the resources your program needs to adequately serve your clients.
If your measures are higher in one place than another, or if they have gone up or down over time, you will want to start investigating the causes for these differences. The reasons could be obvious. For example, a decrease in interpretation services requested could be easily explained by cuts in staffing. They could also be more complicated and take some digging. You may have to speak with staff members, clients, or outsiders who regularly observe your program in order to get a better sense of why the interpretation services you provide are not being fully utilized. You may also want to look at your measures together with other data sources, like data on community demographics (for national and state-level data on LEP populations, click here). After identifying causes for changes in measures, you can decide on the appropriate action to take. This could include shifting resources among sites, changing types of service delivery, providing staff development opportunities, or increasing awareness of your services.
Further Reading & Resources:
The following sources provide some more detailed information and examples of assessment of language access services.
Laglagaron, Laureen. 2009. Is This Working? Assessment and Evaluation Methods Used to Build and Assess Language Access Services in Social Service Agencies. Migration Policy Institute. http://www.migrationinformation.org/integration/language_portal/files/Language-Access-in-Social-Services.pdf
Ghertner, Robin. 2011. Implementing Meaningful Access: Language Access in New York City Schools. Report to the New York City Department of Education. http://www.migrationinformation.org/integration/language_portal/files/DOE-Language-Access-Report.pdf
For examples of how different state and local agencies across the country are selecting performance measures, collecting data, and analyzing data, do an advanced search of the MPI Language Portal by “Service Delivery Type” and select “Performance Measure.”
Robin Ghertner has spent his professional career implementing and assessing social programs, including language access, at the local and federal levels. He is an analyst for the United States Government Accountability Office (GAO), where he conducts performance assessments and evaluations of federal programs across the country. In addition, Mr. Ghertner conducts program evaluations and statistical analyses for a variety of nonprofit and for-profit entities. He recently completed an assessment of language access in New York City Public Schools. This article represents the views of the author and does not represent the views of GAO.