In partnership with MassMEDIC, I gave a webinar last October on working with qualitative UX data. In particular, I spoke about the qualitative data researchers collect during the early stages of development (e.g., ethnography). The idea was to provide an overview of best practices for data collection, and then focus on scrubbing, analyzing, and reporting complex qualitative data from, say, a series of O.R. case observations. I used a hypothetical case study to give the talk structure, with concrete examples of how best practices can be applied in research. I also shared a few thoughts on why discussions like this are crucial to advancing UX research practices in MedTech development. View the webinar replay.
Why qualitative data?
‘Big data’ and data science are increasingly popular tools among product researchers, but these practices tend to rely predominantly on quantitative data. At their best, quantitative data provide excellent summaries, predict patterns, and help us differentiate between meaningful and non-meaningful variables. Such information, when properly analyzed, provides valuable insight that can propel products and businesses to the top of their industries. Admittedly, numbers and models are, in some ways, faster and less messy to work with than their qualitative counterparts. This is why it can be tempting to quantify the qualitative with dummy codes, and in some cases that is a sound way to work. That said, qualitative data themselves are rich and nuanced, and they serve as some of the most compelling evidence on which to base design, strategic, and risk-based decisions (think root cause analyses). Nonetheless, I’ve seen consultants and OEMs alike shy away from relying on — or admitting they relied on — qualitative findings, even in the generative research stages. Too often, we classify subjectivity as a liability rather than an asset.
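To make the idea of "dummy coding" concrete, here is a minimal Python sketch of turning categorical qualitative observations into 0/1 indicator variables. The observation labels are invented for illustration, not drawn from any real study:

```python
# Hypothetical qualitative codes tagged during, say, O.R. observations.
observations = ["workaround", "hesitation", "workaround", "close call"]

# Build one 0/1 indicator ("dummy") variable per unique category,
# so each observation becomes a row of binary flags.
categories = sorted(set(observations))
dummies = {
    cat: [1 if obs == cat else 0 for obs in observations]
    for cat in categories
}

print(dummies)
# {'close call': [0, 0, 0, 1], 'hesitation': [0, 1, 0, 0],
#  'workaround': [1, 0, 1, 0]}
```

The counts and co-occurrences of these flags can then feed quantitative summaries, but note what the transformation discards: the context around each tagged moment, which is exactly the richness the rest of this post argues for preserving.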
As a researcher and strategist, I tell clients that performing statistical analysis is not a purely objective science; rather, it means taking objective data and subjectively analyzing them. There are myriad ways we could choose to treat our data, and it’s therefore difficult to label any single method ‘right’ or ‘wrong’ without knowing why the decisions were made. So how do we let others know they can trust our conclusions? If you can justify each choice and are transparent about the assumptions your method makes, you’ve given your analysis robust integrity. And by acknowledging the alternatives you considered, any disagreements will be better informed, and therefore more likely to be productive.
The same is true of qualitative data, the interpretation of which is traditionally subjective. I’m sure many of you reading this have different experiences with — and therefore opinions about — qualitative data analysis, and those differences are at the core of why I wanted to hold this webinar. Progress in any industry is born from educated debate, and my hope in starting this discussion is that our increasingly high-efficiency, data-driven mentality doesn’t discard the value of qualitative analysis in favor of a more convenient, but potentially less beneficial, approach.