National Safety Net Scorecard

A New Framework for Assessing Safety Net Delivery

The existing system for evaluating state safety net programs does not adequately assess program delivery, because it does not capture the human experience of accessing services.

This new National Safety Net Scorecard offers a more meaningful set of metrics that can serve as “vital signs,” assessing the true state of the current program delivery landscape and measuring progress over time. These metrics reframe what quality service looks like for benefits administration, driving us closer to a human-centered safety net.

The Safety Net Scorecard

Safety net delivery and progress should be measured against these three key indicators of program success:

[Illustration: steps leading up to an open door]

Equitable Access

Accessibility means removing barriers so that all people can easily access services across languages, racial and ethnic identities, ages, and levels of ability. It also means providing a dignified, respectful, and welcoming experience that makes it as easy as possible to access benefits. An accessible program has many open doors that successfully and equitably invite in all eligible people.

Online Accessibility

Is the online experience of the service simple and easy to use?

Measure the following:

  • % of applications submitted via paper, desktop, mobile, landline phone, in person
  • Of applications where verification was required, % with at least one verification document submitted electronically
  • Of website visitors who begin the application process, % who are unable to successfully create an account, if applicable
  • Of website visitors who begin the application process, % who are unable to successfully complete remote identity proofing, if applicable

Account creation and remote ID proofing can be significant barriers to entry for many online applicants.

An estimated 35-54 million Americans don’t have enough credit history to complete remote ID proofing.
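
These online-access measures can be computed from routine application records. Below is a minimal sketch in Python, assuming a hypothetical list of application records with illustrative fields such as channel, verification_required, account_created, and id_proofing_passed; the schema is an assumption for illustration, not a prescribed standard.

```python
from collections import Counter

def online_access_metrics(applications, online_sessions):
    """Compute the online accessibility measures listed above.

    `applications` is a list of submitted-application dicts with illustrative fields:
      channel: 'paper' | 'desktop' | 'mobile' | 'phone' | 'in_person'
      verification_required, verification_doc_electronic: booleans
    `online_sessions` is a list of dicts for website visitors who began the
    online application, with booleans account_created and id_proofing_passed.
    """
    # % of applications submitted via each channel
    channel_counts = Counter(a["channel"] for a in applications)
    channel_share = {c: n / len(applications) for c, n in channel_counts.items()}

    # Of applications where verification was required, % with at least one
    # verification document submitted electronically
    needing = [a for a in applications if a["verification_required"]]
    electronic_verification_rate = (
        sum(a["verification_doc_electronic"] for a in needing) / len(needing)
    )

    # Of visitors who began the online application, % unable to create an
    # account and % unable to complete remote identity proofing
    account_failure_rate = (
        sum(not s["account_created"] for s in online_sessions) / len(online_sessions)
    )
    id_proofing_failure_rate = (
        sum(not s["id_proofing_passed"] for s in online_sessions) / len(online_sessions)
    )

    return {
        "channel_share": channel_share,
        "electronic_verification_rate": electronic_verification_rate,
        "account_failure_rate": account_failure_rate,
        "id_proofing_failure_rate": id_proofing_failure_rate,
    }
```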

Mobile Accessibility

Does the service work easily on a mobile phone?

Measure the following:

  • Of applications submitted online, % of applications submitted from mobile, tablet, and desktop
  • % of recertifications and periodic reports completed via mobile, tablet, and desktop

83% of Americans own smartphones.
25% of households with low incomes rely on a smartphone for internet access.

To be truly accessible, websites need to be built with a mobile-first design.

“My phone is my computer.”
Client in Colorado

Call Center Accessibility

Is the call center easily accessible to all users?

Measure the following:

  • Average # of calls per day
  • Average call wait times (by language)
  • Call abandonment rate: % of calls dropped before the customer speaks to an agent (by language)
  • First call resolution rate: % of issues resolved with one call (by language)

“Honestly, I have like a million accounts with CAFÉ because I done forgot my CAFÉ information and I just don't have the time to … be waiting on the phone for an hour or two trying to get them on the phone.”
Client in Louisiana
“Yes I know how to call. I have been calling every day for several weeks. The system tells me there are too many callers and then ends the call.”
Client in California
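
Several of these call center measures can be computed directly from call logs. The minimal sketch below assumes a hypothetical list of call records with illustrative fields (language, wait_seconds, abandoned, resolved_first_call); a real system would need to map its own log format onto these.

```python
from collections import defaultdict

def call_center_metrics(calls):
    """Compute wait time, abandonment, and first-call resolution by language.

    `calls` is a list of dicts with illustrative fields:
      language: e.g. 'en', 'es', 'vi'
      wait_seconds: time on hold before reaching an agent (or hanging up)
      abandoned: True if the caller hung up before speaking to an agent
      resolved_first_call: True if the issue was resolved on this call
    """
    by_language = defaultdict(list)
    for call in calls:
        by_language[call["language"]].append(call)

    metrics = {}
    for language, group in by_language.items():
        answered = [c for c in group if not c["abandoned"]]
        metrics[language] = {
            "calls": len(group),
            "avg_wait_minutes": sum(c["wait_seconds"] for c in group) / len(group) / 60,
            # % of calls dropped before the customer speaks to an agent
            "abandonment_rate": sum(c["abandoned"] for c in group) / len(group),
            # First-call resolution among calls that actually reached an agent
            "first_call_resolution_rate": (
                sum(c["resolved_first_call"] for c in answered) / len(answered)
                if answered else 0.0
            ),
        }
    return metrics
```

Reporting each measure by language, as above, is what makes gaps in access for callers who do not speak English visible.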

Local Office Accessibility

Are in-person locations easily accessible to all visitors?

Measure the following:

  • # of visitors to each local office relative to staffing levels
  • Average wait times (by language)

Application Burden

How difficult is the application process for the service?

Measure the following:

  • Average # of minutes to complete an application and renewal, via desktop and mobile
  • Application completion rate: # of complete, submitted applications compared with applications started via desktop or mobile (by language)

Our 50-state report found that the time it takes to complete online applications varies dramatically, from 15 minutes in some states to 120 minutes in others. For our GetCalFresh service in California, the average application completion time is 15 minutes, with less than 1% of applicants submitting minimal applications.
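
A sketch of how these burden measures might be computed from application session data, assuming hypothetical records with started_at and submitted_at timestamps broken out by device; the field names are illustrative, not an actual GetCalFresh schema.

```python
def application_burden_metrics(sessions):
    """Compute completion time and completion rate from application sessions.

    `sessions` is a list of dicts with illustrative fields:
      device: 'desktop' | 'mobile'
      started_at: datetime the application was started
      submitted_at: datetime submitted, or None if the application was abandoned
    """
    metrics = {}
    for device in {s["device"] for s in sessions}:
        group = [s for s in sessions if s["device"] == device]
        submitted = [s for s in group if s["submitted_at"] is not None]

        # Average minutes to complete, among applications that were submitted
        avg_minutes = sum(
            (s["submitted_at"] - s["started_at"]).total_seconds() / 60
            for s in submitted
        ) / len(submitted)

        # Completion rate: submitted applications relative to applications started
        metrics[device] = {
            "avg_minutes_to_complete": avg_minutes,
            "completion_rate": len(submitted) / len(group),
        }
    return metrics
```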

Customer Satisfaction

How satisfied are clients with the application experience?

Measure customer satisfaction rates via user responses to these recommended questions:

  • How easy or difficult was it to complete the application?
  • How confident were you about which programs to apply for?
  • How confident are you that you answered the questions correctly?
  • How confident are you that you know what the next steps in the process are?

Customer satisfaction questions can be thoughtfully integrated at multiple stages of the process. For our benefits application pilot in Minnesota, we provide a space for quick feedback at the end of the application, and we also send an automated post-application survey to applicants within one hour of applying. We track responses over time to know how clients are feeling about the application experience, then adjust the design accordingly.
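
One way to track those responses over time is to roll the scores up by month, so the team can see whether design changes move the numbers. Below is a minimal sketch, assuming hypothetical survey records with a submitted_at date and a 1–5 ease_score for the “How easy or difficult was it?” question; both names are placeholders.

```python
from collections import defaultdict

def monthly_ease_scores(responses):
    """Average the 'How easy or difficult was it?' score by month.

    `responses` is a list of dicts with illustrative fields:
      submitted_at: a date or datetime for when the survey was returned
      ease_score: 1 (very difficult) through 5 (very easy)
    """
    by_month = defaultdict(list)
    for r in responses:
        key = (r["submitted_at"].year, r["submitted_at"].month)
        by_month[key].append(r["ease_score"])

    # Average score per month, in chronological order
    return {
        month: sum(scores) / len(scores)
        for month, scores in sorted(by_month.items())
    }
```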

[Illustration: satisfaction survey question]

Improving Outcomes with Data

“We shouldn’t measure data for the sake of measuring it. Let’s say that … we identify a disparity. Unless [technical assistance] is able to truly help me understand policy or business processes that may be contributing to that disparity, or is able to show me what other states with better performance on that measure are doing, it doesn’t help. You can’t just measure something for the sake of measuring it … You have to actually help states drive outcomes on that measure.”
State SNAP director

Federal Agencies

Federal agencies should be collecting comparison data to track changes and trends across the nation over time. These comparisons also highlight opportunities for technical assistance and peer-to-peer learning, and can direct policy initiatives and pilot programs for solutions like demonstration waivers.

State Agencies

State agencies should be tracking data progressions over time, observing trends in key “vital signs” metrics to measure progress.

State agencies should be comparing metrics across jurisdictions (counties, offices) to identify trends, bright spots, and challenge areas. These comparisons highlight opportunities for business process changes, technical assistance, and peer-to-peer learning.
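
One simple way to surface bright spots and challenge areas is to flag jurisdictions whose results sit well above or below the statewide figure. The sketch below assumes a hypothetical mapping from county name to application completion rate; the 10-percentage-point threshold is arbitrary and for illustration only.

```python
def flag_outliers(completion_rate_by_county, threshold=0.10):
    """Flag counties far above or below the statewide average completion rate.

    `completion_rate_by_county` maps county name -> completion rate (0.0-1.0).
    `threshold` is the gap from the statewide average that triggers a flag.
    """
    statewide = sum(completion_rate_by_county.values()) / len(completion_rate_by_county)

    bright_spots = {
        county: rate for county, rate in completion_rate_by_county.items()
        if rate >= statewide + threshold
    }
    challenge_areas = {
        county: rate for county, rate in completion_rate_by_county.items()
        if rate <= statewide - threshold
    }

    return {
        "statewide": statewide,
        "bright_spots": bright_spots,
        "challenge_areas": challenge_areas,
    }
```

Comparing counties against the statewide figure keeps the focus on relative performance, which is what makes peer-to-peer learning between jurisdictions possible.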

Vendors or IT Systems

State agencies should ensure their vendors or IT systems regularly monitor and report back on the usability of client-facing digital products and tools, and confirm that they can respond quickly if a usability issue is discovered. Usability includes things like the questions below (a sample reporting sketch follows the list):

  • Is the language easily understood by all users (written in plain language, at a fifth-grade reading level, etc.)?
  • Are websites accessible to all users (readable by screen readers, readable on all browsers, etc.)?
  • Are systems able to quickly recover from bugs and breaks (monthly hours of website downtime, time to recover from downtime incidents, etc.)?
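
As one example of the kind of reporting a vendor or IT system could provide, the sketch below summarizes monthly downtime hours and average time to recover from a hypothetical list of outage incidents with start and end timestamps; the record format is an assumption, not a standard.

```python
from collections import defaultdict

def downtime_report(incidents):
    """Summarize website downtime from outage incident records.

    `incidents` is a list of dicts with illustrative fields:
      start, end: datetimes marking when the outage began and was resolved
    """
    hours_by_month = defaultdict(float)
    durations = []
    for incident in incidents:
        duration_hours = (incident["end"] - incident["start"]).total_seconds() / 3600
        durations.append(duration_hours)
        # Attribute the outage to the month in which it began (a simplification)
        month = (incident["start"].year, incident["start"].month)
        hours_by_month[month] += duration_hours

    return {
        "monthly_downtime_hours": dict(hours_by_month),
        "avg_hours_to_recover": sum(durations) / len(durations) if durations else 0.0,
    }
```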

For more on building people-centered digital services, check out our Blueprint for a Human-Centered Safety Net.