Gretchen Gavett
At this point, AI has touched pretty much every workplace and every job, to some degree—from customer service to medicine to the little automated pop-up that appears whenever I cut and paste something into a document. This can sometimes be handy, but at other times it can degrade what Johns Hopkins University sociologist Allison J. Pugh calls “connective labor” in her latest book, The Last Human Job. This type of labor involves “seeing the other and reflecting understanding back,” an action critical to “millions of jobs, including people working not just in health care, counseling, or education, but also in the legal, advertising, and entertainment industries, in management, in real estate, in tourism, even in security.”
The humanity of this work is threatened when automation prevents us from recognizing and truly being present with other people, or when we have to collect data about every interaction instead of focusing on the human being in front of us. (The first chapter of the book features a hospital chaplain who must record her interactions with patients “in no fewer than three different tracking systems,” gathering information that probably isn’t even necessary for her to do her job successfully.)
Attempts to use technology to “solve” our perceived inefficiencies, or to supposedly free us up for work that matters, also raise a broader question. To paraphrase Pugh: who is going to be freed, and what counts as meaningful?
I recently posed a few questions to Pugh over email to better understand her research on society’s rush to automate work, what we’ll lose if we go too far (and who will bear the brunt of these losses), and what advice she has for leaders who are considering integrating AI and other technologies into their organizations. This is an edited version of our conversation.
HBR: One commonly argued benefit of incorporating things like generative AI into work tasks is that busy work in front of screens or with devices will be reduced or eliminated, allowing for more human connection with colleagues, clients, patients, or customers. In other words, the technology is beneficial in part because it will bring people closer together, which will in turn make them happier and more productive.
In your research, does this rationale ring true? And if not, what does this thinking miss?
Pugh: That kind of a change would indeed be a huge benefit. So many of us are beset with mind-numbing tasks in our jobs that the idea that we could slough some of them off to machines sounds positively utopian. Perhaps that is why “AI will free us up for meaningful work” is such a common refrain, promulgated by AI researchers and economic analysts alike. Yet this argument relies on a certain naiveté about capitalism, I think, at least as it is currently practiced in the United States: If AI takes work off our hands, is it likely that employers will then seek to fill our days with more meaningful tasks, or that they will instead take the opportunity to eliminate jobs and reduce their workforce when they can?
I love the vision of people closer together, happier and more productive. But that vision also relies on the notion that human connections with colleagues, clients, patients, or customers are highly valued, and I’m afraid that is not currently evident. If they were prized, then perhaps researchers would not find that the more a job involves face-to-face communication with clients or customers, the less it is paid—controlling for skill and other characteristics. If we treasured human connection, we would not ask the same people who are in charge of forging it—the teachers, primary care physicians, and others—to spend their time collecting data, fitting in their connective work on the side as they can. Instead, our current management practices suggest we don’t really value this work, and if that is true, then AI is unlikely to be deployed so that we can accomplish it better.
Indeed, I wrote The Last Human Job in part to spell out what this connective labor is, how people do it, and why it is so valuable. I’m afraid we are automating this work without really understanding it, and thus what is at stake.
What are some examples of how technology (be it gen AI or something else) promises to create connections or improve workers’ experiences, but instead does the opposite?
There are many examples of technology gone awry: e.g., chatbots that respond to people confessing their depression with “maybe the weather is affecting you,” or that offer weight-loss tips to clients with eating disorders. Some organizations are racing to embrace AI before it is reliable enough to put in unsupervised contact with people.
But even when the technology performs as promised, it affects human relations. I talked to one woman who worked as a “coach” at a startup that offered cognitive behavioral therapy in an app to people with social anxiety. The firm expressly forbade her from doing therapy, and did not pay her or expect her to be a credentialed counselor, but the clients themselves ended up treating it like an inexpensive talking cure, with some working with her for months at a time. She told me how hard it was to hear about her clients’ trauma, and how she had had no training in how to handle it.
The impact of AI on her job was threefold. First, she was a classic example of what Harry Braverman once called “deskilling”: when a firm breaks down the component parts of a particular job and then hires cheaper labor to complete much of the work. While Frederick Taylor might have done this to bricklaying a century ago, today’s AI does this to socio-emotional work: the startup had divided a therapist’s job into the app software and a team of untrained, uncredentialed “coaches.” Second, her work was made invisible, as the firm denied that her regular encounters with clients counted as any sort of counseling. This kind of invisibilization is a common-enough finding for other examples of AI “automation,” where unseen armies labor behind the scenes to train models about whether something is a bagel or a dog, for example, but as it turns out, it applies to “automated” connecting work also. Finally, she faced the existential problem of having to prove that she was human to customers used to working with machines. The technology incontrovertibly shaped her experience, making her feel somewhat automated herself.
Are there things executives and managers can do better in thinking about what technologies they want to introduce in their organizations, and why? Are there key questions they should be asking—about the technology, what their organization really needs, and what their employees might experience or feel while using it?
The key issue is that technology is not a neutral force that might simply solve a problem, but that it reflects the culture of the firm where it is introduced. And business leaders have a lot of influence on what that culture is—particularly how it may or may not encourage human connections between workers and their clients. I analyzed firms where workers managed to forge strong connections, and others where they did not. The difference comes down not just to material factors like adequate time, money, and space—although of course those factors are important. It also matters whether leaders articulate a vision with commitment to human connection, whether they foster mentorship and sounding boards for workers to process what they are hearing, and whether they help to enact the norms and rituals that prioritize relationships.
Any leader seeking to introduce new technologies into their organization should ask themselves how the tech would affect those factors. We might call this a “connection criterion”—how a given technology affects the way humans relate to each other—and applying it clarifies the potential impact of a new technology not just for the core services offered to patients or clients, but also for the interactions among a firm’s workers. Unless your organization is almost entirely automated, those interactions are vital for the processes that produce what you sell.
In my research, I met charismatic leaders—the head of a clinic, the pr
Source: PMideas (“I’m Afraid We Are Automating This Work Without Really Understanding It”).