Interdisciplinarity
What is the Role of Social Sciences Within the Field of AI?
What is the role of social sciences within the field of AI and, more broadly, technology?
It would be an understatement to say that technology is moving quickly; what seemed impossible just a few years ago is unfolding before our eyes.
In such a fast-paced tech landscape, is there room for non-traditional thinkers, people whose backgrounds don't fall neatly within STEM disciplines?
The answer is a resounding yes. Social sciences are not just relevant but essential in understanding AI’s broader societal impact. They raise critical concerns that might otherwise be overlooked: issues of access, unconscious bias, and the widening gap between theory and practice.
The Access Problem – Who Gets Left Behind?
Technology is not developed in a vacuum, and access to it is far from equal. Socio-economic status, geographic location, and differences in physical and mental abilities all shape who can engage with AI, and who is left out. Many AI models are trained primarily on English-language data, excluding vast portions of the world from meaningful interaction.
Similarly, technological infrastructure favors those in high-income regions, creating systems that reflect the priorities of the privileged few rather than the needs of the many.
The Bias Problem – AI is Not Neutral
AI is often portrayed as objective, but in reality, it reflects the biases ingrained in human society. These biases are embedded in datasets, reinforced in training, and coded into decision-making processes.
We’ve already seen facial recognition systems that perform markedly worse for people with darker skin tones, and AI-driven hiring tools that filter out applicants based on gender or race. If left unchecked, AI doesn’t challenge systemic inequities; it amplifies them. And this isn’t just about the data; it’s also about the developers themselves, whose unconscious biases inevitably shape the systems they build.
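To make the idea of "disproportionate failure" concrete, here is a minimal sketch of the kind of audit a team might run: comparing a model's error rate across demographic groups. Everything in it, the group labels, the records, and the numbers, is invented for illustration; it is not the methodology of any study mentioned above.

```python
# A minimal, hypothetical sketch of a disparate-error-rate audit.
# The group labels and records below are invented for illustration only.

from collections import defaultdict

def error_rate_by_group(records):
    """Compute a model's error rate separately for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, prediction, actual in records:
        totals[group] += 1
        if prediction != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Fabricated example records: (group, model_prediction, ground_truth)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]

print(error_rate_by_group(records))
# e.g. {'group_a': 0.25, 'group_b': 0.75} — an equally "accurate" model overall
# can still fail one group three times as often as another.
```

A gap like this is invisible if a team only reports a single aggregate accuracy number, which is precisely the kind of blind spot social scientists are trained to ask about.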
Bridging the Gap – Social Science as a Mediator
This is where social sciences come in. The corporate tech world prioritizes efficiency, optimization, and scalability—but what about transparency, accountability, and ethical responsibility?
Social scientists bring a human-centered lens, asking not just "Can we build this?" but "Should we?" Their work helps bridge the gap between corporate AI practices and real-world impact, making technological black boxes more transparent and challenging the idea that technology alone can solve complex human problems.
Why This Matters Now
AI is shaping the world at an unprecedented pace. If we fail to integrate social science perspectives into its development, we risk cementing existing inequalities into the very code that governs our lives. Now more than ever, social scientists need to be at the table, not as afterthoughts, but as essential voices in shaping AI’s future.
Written by Joseph Markman — exploring the human side of technology.
Curious to connect or collaborate?