EthicsBot

We present EthicsBot, an open-source large language model (LLM) project that is being iteratively designed and developed at Wellesley College to provide real-time, context-aware ethical guidance for researchers. Our work addresses the critical need for timely and accessible ethical support in modern research, particularly when confronting complex digital data on the social web. By investigating the potential of LLMs to serve as research ethics support tools, our goal is to foster deeper ethical reflection that effectively complements, rather than merely complies with, formal institutional review board (IRB) processes.

Papers

EthicsBot: Fine-Tuning Open-Source LLMs to Assist Scientific Investigators in Analyzing Ethical Issues in Research

Spencer Phillips Hey, Charles Weijer, Julie Walsh, Eni Mustafaraj

Venue: IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), Chicago (USA)
Year: 2025

Abstract

Ethical considerations are fundamental to responsible research, yet many investigators struggle to identify and analyze ethical concerns in their study designs. Institutional review boards (IRBs) and other regulatory bodies, while essential, are often perceived as bureaucratic obstacles rather than collaborative partners in ethical inquiry. Recent advances in large language models (LLMs) offer an opportunity to enhance the way researchers engage with ethical analysis. This paper introduces EthicsBot, an innovative project that proposes to leverage open-source LLMs to provide real-time, context-aware ethical guidance for researchers.

Comparing Human and LLM Ethical Analyses: A Case Study in Computational Social Science Research

Spencer Phillips Hey, Julie Walsh, Eni Mustafaraj

Venue: The 8th AAAI/ACM Conference on AI, Ethics, and Society (AIES), Madrid (Spain)
Year: 2025

Abstract

As researchers increasingly engage with ethically complex digital phenomena, timely and accessible support for ethical reflection is essential—yet often unavailable beyond formal institutional review processes, which are more focused on regulatory compliance than ethics. This paper investigates the potential of large language models (LLMs) to serve as research ethics support tools by providing immediate, context-sensitive feedback on draft research protocols. We analyze a draft research protocol proposing to scrape digital platforms for data on "Sephora Kids" (a trend in which minors promote beauty products on platforms like YouTube and TikTok) as a case study to explore this possibility. Two human ethicists and two LLMs (GPT-4o and Claude 3.7 Sonnet) independently reviewed the proposal and produced ethical evaluations. We then compared the outputs to assess whether LLMs could meaningfully assist researchers in identifying and engaging with ethical issues. Our findings suggest that LLMs can already offer valuable support.

Our Team

The core research group at the Wellesley College Science Center Summer Research Poster Session (July 2025).

Dr. Eni Mustafaraj

Affiliation: Wellesley College

Dr. Mustafaraj is an Associate Professor of Computer Science at Wellesley College. She is the PI of the NSF grant "Pathways to Ethics of Technology in the Liberal Arts Curriculum" that supports this research.

Dr. Julie Walsh

Affiliation: Wellesley College

Dr. Walsh is the Whitehead Associate Professor of Critical Thought and Associate Professor of Philosophy at Wellesley College. She is the co-PI of the NSF grant "Pathways to Ethics of Technology in the Liberal Arts Curriculum" that supports this research.

Dr. Spencer Phillips Hey

Affiliation: Hey Research & Innovation

Dr. Hey is the founder of Hey Research & Innovation. He has deep research expertise in the ethics of human subjects research. His publications can be found here.

Crystal Zhao

Affiliation: Wellesley College

Crystal is an undergraduate student at Wellesley College studying Data Science and Peace & Justice.

Jessica Chen

Affiliation: Wellesley College

Jessica is an undergraduate student at Wellesley College studying Computer Science and Political Science.

Contact Us

For research inquiries, collaborations, or questions about the papers, please contact us at: .

Funding Acknowledgment

This research is generously supported by the National Science Foundation (NSF) under Grant No. 2220772. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.