From Academia to Industry: Qualitative Research in the Digital Age

As a qualitative researcher at The Sound, I am often asked whether the research I did throughout my graduate career is relevant to the work I do now: helping brands and companies understand the ways that people think, behave, purchase, and live. The answer is a resounding yes. I recently completed my PhD in Communications at the University of Michigan, where I researched race, media, and technology, and that work has made me keenly aware of the complicated role technology plays in our lives. The world around us is advancing at breakneck speed, with large jumps being made in nearly every sector imaginable. The rise of AI is no longer an abstract concept or a boogeyman in a sci-fi movie; it is here, present in our apps and devices, being integrated into our workplaces, schools, and other day-to-day activities. Algorithms help us decide what to wear, watch, and eat – from the recommendation systems of Spotify and Netflix to the “you may like” sections of platforms like Amazon.

 

The Impact of Technology 


But let’s take a step back for a moment. For many of us, the first thing that comes to mind when we hear the terms digital or technology is the internet. The rise of digital media, and specifically the internet, was revolutionary in many ways, and many believed this technology would start a new era of democracy, equality, and a post-racial world. For them, the growth of the internet meant the start of a time when people were no longer separated by race and other identity markers. People thought the digital revolution would solve social inequality along racial and gendered lines. The digital divide was closing, and growing access to the internet and other technologies like cell phones, computers, and laptops was supposed to usher in a wave of equality and progress like we had never seen.

This has not been the case; in fact, digital spaces and technologies have complicated notions of identity. To capture attention and make decisions for people, algorithmic systems must take in significant amounts of data to build algorithmic versions of users and ultimately decide what is best suited to each individual (Kyung Lee, 2018). Authors like John Cheney-Lippold argue that our data is shaping our identities through the online purchases we make, the posts we like, the sites we visit, and so on. As this happens, advertisers, marketers, and governments decide who we are online and what we mean to them. These categorical identities are not concerned with who people actually are, only with how people can be quantified through their data and the choices they make online. Once people are measured as data, it becomes easier to tune and personalize models for each person and predict what they want. It is tempting to think of people as just data points, individual variables that can help us figure out problems and come up with solutions. And while researching people can get us there, we need to make sure we are still thinking of people as people, and understanding them as complex beings who hold multiple identities.
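To make that abstraction a little more concrete, here is a minimal, hypothetical sketch (in Python) of how a system might reduce a stream of clicks into one of these categorical identities. Every name in it, the behavior categories, the segment labels, the scoring rule, is invented for illustration; real ad-targeting and recommendation systems are vastly more complex, but the logic of sorting people into pre-built, marketable types is the same.

```python
from collections import Counter

# Hypothetical segment profiles: the behaviors each pre-defined
# "categorical identity" is expected to exhibit. All names invented.
SEGMENT_PROFILES = {
    "young_urban_professional": Counter(streaming=5, rideshare=4, takeout=3),
    "suburban_parent": Counter(groceries=5, kids_apparel=4, streaming=2),
    "budget_student": Counter(fast_food=4, thrift=4, streaming=3),
}

def infer_segment(events: list[str]) -> str:
    """Reduce a raw behavior stream to a single marketable label.

    The person is never asked who they are; the label comes purely
    from what their clicks make quantifiable.
    """
    observed = Counter(events)

    def overlap(profile: Counter) -> int:
        # Crude similarity: how much of the observed behavior matches
        # the behavior this segment profile expects.
        return sum(min(observed[k], v) for k, v in profile.items())

    return max(SEGMENT_PROFILES, key=lambda s: overlap(SEGMENT_PROFILES[s]))

clicks = ["streaming", "streaming", "takeout", "rideshare", "streaming"]
print(infer_segment(clicks))  # -> young_urban_professional
```

Note what is missing: at no point does the system ask the person anything, and whoever falls between the pre-built profiles is simply forced into the nearest one.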

 

Rethinking Race in the Digital Age


As we think about identity, I am always interested in how race is transformed through these digital channels. The digital has radically reorganized media systems, but it also raises questions about how race itself is reorganized. Several moments in recent years reveal how race is misrepresented and flattened by digital technologies. In 2020, a pixelated image of President Obama was reconstructed into a white man’s face by an artificial intelligence tool called Face Depixelizer. The photo spread on Twitter, and users were quick to call out the problem of bias in artificial intelligence and machine learning. An even earlier example of technology sidestepping the variable of race can be found in Facebook’s “Ethnic Affinity” category, part of its ad-targeting tool. Even though this category was listed under demographics, Facebook executives stated that it was different from a racial category because the company doesn’t ask its users about their racial identity (Angwin and Parris Jr., 2016). In 2016, however, it was found that advertisers could exclude Black and Hispanic ethnic affinity groups from seeing their ads. The attempt to avoid categorizing race in more traditional ways produced exactly the negative outcomes that were (supposedly) being avoided. This is a trap that many researchers fall into; as we employ new technologies that are exciting and groundbreaking, there are people who get left by the wayside.

The use of technological power to oppress marginalized groups and exploit individuals, communities, and societies is a vital framework to keep in mind when we examine the relationship between marginalized people and technology (Fouché, 2004). Walton (1999) also reminds us how certain stories about technology are presented to the American public. These stories usually cast technology as new and progressive, improving health and making life easier. But they mostly ignore the weak, and sometimes even hostile, relationship that marginalized groups have with technology. Technological advances can have, and have had, horrifying impacts on marginalized groups of people. Many of these technologies are said to be forward-thinking, new, fair, and good for society; but when you look more closely, they carry serious consequences (Werth, 2019; Hannah-Moffat, 2019). This is why it is important for us as researchers to think about our history: how we got to this point, and our collective relationship to technology and identity.

 

Exploring the Cultural Layers of Technology 


In 2006, Joel Dinerstein wrote an essay titled Technology and Its Discontents: On the Verge of the Posthuman. It is almost 20 years old, but it remains essential for understanding the technology we use now. Dinerstein is cautious about the future that technology will bring and encourages us to think about the history that brought us here, as well as the core cultural ideas behind terms like digital and technology. The digital is a loaded term, bound up with claims of the future and progress. It is easy to think of the digital as a product of today, as just the electronic technology that makes, stores, and processes data, but the digital must be understood as part of past and present social, political, and economic structures. The digital is a long-standing tradition, one that began well before the devices we so readily associate with the term. And beyond being a lineage, the digital is deeply cultural and saturated with ideology. These ideas matter as we continue to shape our research practices around new digital technologies.

I pay close attention to how Silicon Valley prioritizes innovation and forward progress (Leon and Rosen, 2021). In many ways, this kind of thinking has propelled society forward and produced large jumps of progress that make our research easier. We can more readily develop questions, analyses, and even hypothetical data sets to help our clients better understand people and the ways they think about branding and products. But the question of collateral damage always remains – does someone get the short end of the stick? This is a valid concern, and something all researchers need to be aware of. At the same time, my understanding of the digital is not purely antagonistic, not a top-down, deterministic imbalance that only produces inequity (though that dynamic is real and does happen). I still leave room for the digital to be open and free, a space where people and researchers like you and me can push back and create in ways that are new and inventive.

 

Centering People, Not Technology


Last year I worked on a project with a tech company that was interested in how its users conceptualized AI. I had the chance to speak with our client about my research. We discussed the complicated relationship that people of color have historically had with technology, and specifically how Black Americans have long been caught in a back-and-forth between innovation and surveillance, liberation and oppression. We adjusted the participant sample and altered the scope of the project and the discussion guide to reflect these ideas, and the research proved far more fruitful as a result.

My point is this – as we think about the role of digital technologies in our lives and work as qualitative researchers, we must always remember that they are not simply tools to make life easier. Technology is simultaneously a belief system, a space of mediation, an industry, and a combination of user choices and habits. In other words, to truly understand how technology works and how it can be used to advance our field, we must understand people. And people are complicated. But it is our duty as researchers to take this task seriously: to interrogate how identity is shifting in this advanced digital era, to understand the fears and anxieties that the people we research hold around technology, and to think about the ways that technology shapes our identities, as well as the ways we can use technology to fashion ourselves. When we center people and the complicated positions they hold, the research becomes that much richer. So come work with us at The Sound as we use technology to further our understanding of people, not replace them.

 

Written By:
Daniel Meyerend

I was born and raised in Brooklyn, NY, and if you didn't know, it's a rule that it has to be the first (and maybe only) thing I tell you. As a Brooklyn native, I've always had a good eye for streetwear fashion, and outside of work, you can catch me performing at comedy shows all over the Seattle/Tacoma area.
