Collaborative research on morality, civic reasoning, and AI alignment at the University of Southern California
Quick Snapshot
- Role: Research Assistant, Lab of Social and Moral Cognition
- Timeline: 2017–2020
- Tools: Qualitative Coding, Data Structuring, Corpus Development, Data Preparation for NLP Models
The Challenge
Artificial intelligence systems increasingly shape how we access information, interact socially, and even understand justice. For these systems to behave ethically, they must be trained on data that reflects real human values. At USC's Lab of Social and Moral Cognition, our challenge was to understand how morality shows up in everyday language, from Twitter posts to local news coverage, and how we could transform that raw text into annotated corpora capable of training more value-aligned AI systems. We explored how moral concepts such as fairness, harm, authority, and loyalty manifest in civic discourse, and whether those patterns could be reliably detected and modeled.
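To make the idea of an "annotated corpus" concrete, here is a minimal sketch in Python of what one labeled record might look like. The record structure, the label set, and the majority-vote aggregation rule are my own simplified assumptions for this write-up, not the lab's actual annotation schema.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical label set, loosely inspired by Moral Foundations Theory.
FOUNDATIONS = {
    "care", "harm", "fairness", "cheating", "loyalty", "betrayal",
    "authority", "subversion", "purity", "degradation", "non-moral",
}

@dataclass
class AnnotatedTweet:
    """One tweet with moral-foundation labels from several annotators."""
    tweet_id: str
    text: str
    annotations: dict = field(default_factory=dict)  # annotator name -> set of labels

    def add_annotation(self, annotator, labels):
        """Record one annotator's labels, rejecting anything outside the schema."""
        unknown = set(labels) - FOUNDATIONS
        if unknown:
            raise ValueError(f"Unknown labels: {unknown}")
        self.annotations[annotator] = set(labels)

    def majority_labels(self):
        """Keep labels chosen by at least half of the annotators (a simplified aggregation rule)."""
        counts = Counter(label for labels in self.annotations.values() for label in labels)
        threshold = len(self.annotations) / 2
        return {label for label, n in counts.items() if n >= threshold}

# Example usage with made-up data:
tweet = AnnotatedTweet("001", "Everyone deserves an equal chance at the ballot box.")
tweet.add_annotation("coder_a", {"fairness"})
tweet.add_annotation("coder_b", {"fairness", "care"})
tweet.add_annotation("coder_c", {"fairness"})
print(tweet.majority_labels())  # {'fairness'}
```

Aggregating across multiple annotators in some fashion matters because moral judgments are subjective; a single coder's reading is rarely treated as ground truth on its own.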
My Role & Contributions
- Annotated and structured large qualitative datasets for use in AI training and social psychology studies (a simplified sketch of this kind of data preparation follows this list)
- Coded participant responses for moral framing, emotional cues, and civic themes using NVivo and custom coding schemas
- Co-authored multiple published papers, including studies on how human speech encodes values and how AI can be trained to detect these ethical signals
- Supported interdisciplinary dialogue between computer scientists and social psychologists on model relevance and real-world application
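To show how coded data of this kind can feed a model, here is a hedged sketch of a simple multi-label text classification baseline in scikit-learn. The example texts, labels, and model choices below are illustrative assumptions for this page only, not the actual data or models used in the published studies.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy, made-up examples of coded civic-discourse snippets.
texts = [
    "Everyone deserves an equal chance at the ballot box.",
    "We must protect our community from outsiders who threaten it.",
    "Respect the court's ruling, even if you disagree with it.",
    "The new policy hurts the families who can least afford it.",
]
labels = [
    {"fairness"},
    {"loyalty", "harm"},
    {"authority"},
    {"harm"},
]

# Turn the label sets into a binary indicator matrix for multi-label classification.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

# A simple TF-IDF + one-vs-rest logistic regression baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

pred = model.predict(["That verdict was deeply unfair to the defendant."])
print(mlb.inverse_transform(pred))
```

A one-vs-rest setup is one natural fit here because a single utterance can invoke several moral foundations at once, so each label gets its own yes/no decision rather than competing in a single multi-class choice.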
Broader Impact & Relevance
- Informed early frameworks for building value-aligned AI systems rooted in human communication patterns
- Provided ethically annotated data that can be used to fine-tune natural language processing (NLP) models for public-facing technologies
- Helped bridge gaps between qualitative social research and scalable computational models
- Supported research that later contributed to ongoing national and international AI ethics discourse
What I Learned
Working across the fields of psychology and machine learning taught me how vital context is in building technology that serves society. I learned to grapple with linguistic ambiguity, emotional nuance, and sociopolitical tension—all while constructing data that algorithms could learn from. This experience reinforced my belief that behavioral scientists play a critical role in shaping ethical AI—not just through critique, but through collaboration.
Selected Publications
Hoover, J., Portillo-Wightman, G., Yeh, L., Havaldar, S., Mostafazadeh Davani, A., Lin, Y., … Dehghani, M. (2020). Moral Foundations Twitter Corpus: A collection of 35k tweets annotated for moral sentiment. Social Psychological and Personality Science. https://doi.org/10.1177/1948550619876629
Mostafazadeh Davani, A., Yeh, L., Atari, M., Kennedy, B. J., Portillo Wightman, G., Acosta González, E., Delong, N., Bhatia, R., Mirinjian, A., Ren, X., & Dehghani, M. (2019). Reporting the unreported: Event extraction for analyzing the local representation of hate crimes. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://doi.org/10.18653/v1/d19-1580
Atari, M., Mehl, M. R., Graham, J., Doris, J. M., Schwarz, N., Davani, A. M., Omrani, A., Kennedy, B., Gonzalez, E., Jafarzadeh, N., Hussain, A., Mirinjian, A., Madden, A., Bhatia, R., Burch, A., Harlan, A., Sbarra, D. A., Raison, C. L., Moseley, S. A., … Dehghani, M. (2023). The paucity of morality in everyday talk. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-32711-4
Kennedy, B., Gonzalez, E., Jafarzadeh, N., Hussain, A., Mirinjian, A., Madden, A., Bhatia, R., Burch, A., Harlan, A., Sbarra, D. A., Raison, C. L., Moseley, S. A., … Dehghani, M. (2023). The paucity of morality in everyday talk. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-32711-4