Starting in March, Code for America, like most nonprofits and tech companies, pivoted to full-time remote work. The shift didn’t initially disrupt my research, but as the weeks turned into months and new projects arose, so did the issues. While it’s very common for companies to conduct research remotely, remote-only research doesn’t give a full and equitable picture of the clients we are trying to reach.
On the GetCalFresh team, we work with clients and families whose gross income cannot exceed 200% of the federal poverty level—for a household of one, that’s $2,126 per month. The folks we work with often lack the economic resources to access technology like smartphones, laptops, or reliable internet. Not only can their resources be limited, but so can their overall familiarity and comfort with technology. If someone has never owned a laptop before, how can we expect them to install or even use Zoom?
As the Bay Area began loosening COVID-related restrictions and as our team worked through internal safety protocols, we felt we were able to try our hand at in-person research once more.
One of the regular research methods we use at Code for America is usability testing, which is a great way to evaluate new features and to get real-time feedback on your product from folks who would actually be using it. It involves putting your product in front of someone, observing how they interact with it, and then making recommendations or changes to increase people’s ability to use the product.
While we did conduct usability testing remotely in the pre-COVID times, we also heavily relied on conducting it in person. This not only allows us to meet and get to know our clients, but also removes the barriers associated with using online testing services or video chat software. We know that many of our clients have limited resources and lower comfort levels with technology, so we do what we can to be accessible to them. This notion of accessibility is something we have had to think about even more critically now, when face-to-face research is far more challenging.
A couple of months ago, I conducted a few remote usability testing sessions with families who were potentially eligible for Pandemic EBT (P-EBT). The challenges made themselves clear immediately. Since the research sessions would be remote, we had to be sure participants had access to Zoom or another video chat service, a requirement that made several of them hesitant. During one phone screening, a participant mentioned that their child had a Chromebook on loan from their elementary school and that she could use that for our call.
Thanks to the San Francisco Unified School District’s loaning of Chromebooks to their students, we were able to find participants for remote research sessions. But that reveals the key issue: Not one family we were trying to reach had reliable computer access without the borrowed Chromebooks.
A month later, I was tasked with another round of usability tests with a very similar demographic. It stood to reason that the same technology barriers would present themselves, so I decided to see how I could conduct these in person, without technology—in a way that was safe for all parties involved.
For those same accessibility reasons, I decided to skip one of our usual research recruitment methods of Craigslist ads and instead rely on the other: outreach through community-based organizations (CBOs). We find a CBO that works with the population we are looking to test with, since they tend to know their clients on a more personal level, and they screen and schedule participants who might be a good fit. Regardless of how we recruit, all participants are compensated for their time and energy with a Visa gift card.
In this case, the CBO I worked with was a middle school. All participants were either monolingual or primarily Spanish speaking, all were households containing children, and all were single income and below the income requirements for the P-EBT program. After speaking with the participants about computer access, my hunch was confirmed: One participant had been gifted a used laptop but unfortunately, it was broken. Another participant lived in a household of eight and the only computer belonged to her oldest daughter, who was using it for school.
The first thing we needed to think about was the research location. Normally, we let the participant decide where they would like to meet (often a cafe, library, or even their own home), but closures and safety precautions meant these options were out. Instead, the school assured us that they were taking the COVID-19 situation very seriously with masking, sanitizing, and contact tracing, and offered us their own space.
In non-pandemic times we conduct research in pairs, with one person interviewing and one taking notes. This time, I went alone to reduce the amount of contact with others, and recorded all conversations on my phone to transcribe later. I conducted two rounds of research several months apart, with everyone masked, more than six feet apart, and all surfaces and objects sanitized.
In a non-socially distant usability testing session, I would sit next to the participant while they navigated the site on their phone, observing exactly how they interacted with our product. Watching how fast or slow they scroll, or noticing how many times they reread a specific section of a page, is very useful information. So in the first session, I had participants use their own phones to navigate our site. Right away, this posed some challenges. It was very difficult to control the flow of the conversation and ask questions about content from six feet away, especially when the screen was pointed toward their faces. I asked participants to keep their phones on the table so I could see better, but it remained difficult.
The second time around, I decided to take a step back: instead of relying on phones, I printed out the product flow I was testing. Each participant had their own stack of pages to avoid potential contamination. During the session I laid the pages out and let them navigate the “site.” They read through the content, and “clicked” (pointed) at buttons depending on how they would expect to continue to the next step. This gave me the opportunity to pause and ask specific questions, as well as control the pace of the conversation. It also gave the participant the opportunity to make suggestions for a better client experience directly on the page, providing them with their own sense of control throughout the session.
Masks were, obviously, unavoidable. While they did muffle voices to an extent, the real difficulty was the inability to read facial expressions. Normally, researchers can tell when someone is stumped by the way they purse their lips. We can also see when someone is confused, such as when they start reading and rereading specific parts of the application to themselves. While not something you can necessarily work around, it is something that you can work through. Eye contact became even more important than in the pre-COVID times, and I found myself nodding my head and being more verbally affirmative and expressive throughout our conversations.
The importance of equitable, accessible research
At Code for America, we talk about working to make government services more accessible and more equitable. This is also true in the way that we conduct our research. In terms of the products we test and build, access to technology was already a huge consideration—in the pandemic, it’s become even more important.
When your clients have been systemically discriminated against for generations, leading to fewer resources in their communities as a whole, it’s critical that we include their voices in the process of building products that are meant to serve them. And when we do, we need to be sure we do it in a manner that is accessible to everyone. Travel restrictions and social distancing guidelines have made our work as a research community much more difficult these days, but with a bit of creativity and a lot of consideration, it’s still possible to find ways to meet people where they are.
Learn more about our research methodology and practices in our Qualitative Research Practice Guide.