Ahn explores AI, social work and inequality


Spot Eunhye Ahn on campus in deep thought, and you might catch her wrestling with a question she considers both urgent and underexplored: What does “human-centered” really mean from a social work perspective when it comes to artificial intelligence? 

“It’s interesting, literally every day there’s some new aspect of the AI field,” said Ahn, an assistant professor at the Brown School at Washington University in St. Louis and an affiliated researcher with WashU’s new AI for Health Institute, which explores responsible uses of AI in health and social systems. 

But Ahn challenges the notion that AI is impartial.  

“AI is not neutral. It reflects the biases and structures of the society that created it,” she said. “I get really angry when people say, ‘Oh, it’s just a tool, it’s neutral.’ That’s not true. AI sounds very neutral, technical, or algorithmic, but it’s not. It’s about who has power and how they choose to use it.” 

Her research sits at the intersection of social work, data science, and ethics. She studies how AI affects inequality, governance, and fairness, and how social workers should respond. 

“I’m interested in how AI will change society and the landscape of inequality, and how social workers respond to social change driven by AI,” Ahn said. “Behind AI, there is the power of money and political decisions. We all need AI knowledge and literacy to advocate effectively and conduct research that examines how AI will affect different communities.” 

Ahn recently led a paper published in the Journal of the Society for Social Work and Research as part of a new series on AI and social work. The article, “Artificial Intelligence (AI) Literacy for Social Work: Implications for Core Competencies,” examines how AI is transforming society and what that transformation means for social work, particularly for marginalized populations who may face compounding disadvantages as technology reshapes both societal structures and the delivery of social services, often while policy lags behind or remains vague. 

The authors, including Patrick Fowler, professor at the School of Public Health and director of the Doctoral Program in Public Health Sciences, argue that social workers need AI literacy. That includes the ability to understand, use, and critically evaluate AI systems, even if they don’t directly use them in practice. They propose embedding AI literacy into core competencies to help professionals promote equity in an increasingly AI-influenced world. 

Ahn’s concerns aren’t just theoretical. She points to real-world consequences of AI. In Louisiana, a parole board denied release to a nearly blind, wheelchair-bound man after an algorithm deemed him a “moderate risk.” In April 2025, a teenager died by suicide after interacting with an AI chatbot. 

“That’s not just a technical problem,” Ahn said. “That’s a policy and justice problem. Social workers need to understand these technologies to intervene effectively. It shows how unequal access to AI literacy can have devastating effects.” 

She emphasizes that AI is both a resource and a risk, inseparably linked, and that thoughtful attention to ethics is essential if it is to serve as the former rather than the latter. 

“To use AI more critically, we have to decide what only humans should do. It’s not just about what AI can or can’t do. It has already surpassed human capacity,” she said. “But we must define the uniquely human roles we want to preserve and continue our legacy of integrity.” 

Ahn will be a panelist at “Can We Harness AI to Promote Healthier Lives?,” a symposium hosted by the School of Public Health and the McKelvey School of Engineering on Oct. 22.