Ethical Framing in AI and Social Psychology


Client: USC
Tools: Qualitative coding, data structuring, NVivo, Python-adjacent annotation workflows
Timeline: 2017–2020
Tags: AI Ethics, Moral Psychology, NLP Training, Civic Reasoning, Cross-Disciplinary Research, Responsible AI, Dataset Curation, Research for Impact, Annotated Language Models

Collaborative research on morality, civic reasoning, and AI alignment at the University of Southern California


The Challenge

Artificial intelligence systems increasingly shape how we access information, interact socially, and even understand justice. But for these systems to behave ethically, they must be trained on data that reflects real human values. At USC’s Lab of Social and Moral Cognition, our challenge was to understand how morality shows up in everyday language—on platforms like Twitter or news media—and how we could transform that raw data into annotated corpora that train more value-aligned AI systems. We explored how moral concepts like fairness, harm, authority, and loyalty manifest in civic discourse and whether those patterns could be reliably detected and modeled.
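One way to picture the annotated corpora described above is as units of civic discourse tagged with moral-foundation labels. The sketch below is purely illustrative (the class and label names are assumptions, not the lab's actual schema), assuming a simple set-of-labels coding scheme over the four foundations named in the text:

```python
from dataclasses import dataclass, field

# The four moral foundations named in the research description.
FOUNDATIONS = {"fairness", "harm", "authority", "loyalty"}

@dataclass
class AnnotatedUtterance:
    """One unit of civic discourse (e.g., a tweet) plus its moral-foundation labels."""
    text: str
    foundations: set = field(default_factory=set)

    def add_label(self, foundation: str) -> None:
        # Reject labels outside the agreed coding scheme to keep the corpus consistent.
        if foundation not in FOUNDATIONS:
            raise ValueError(f"Unknown foundation: {foundation}")
        self.foundations.add(foundation)

# Hypothetical example: a tweet coded as invoking both fairness and harm.
utterance = AnnotatedUtterance("The new policy punishes the poorest families.")
utterance.add_label("fairness")
utterance.add_label("harm")
```

A record like this can then be aggregated across annotators to measure agreement before any model training begins.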


My Role & Contributions

  • Annotated and structured large qualitative datasets for use in AI training and social psychology studies.
  • Coded participant responses for moral framing, emotional cues, and civic themes using NVivo and custom schemas.
  • Co-authored and contributed to multiple published papers, including studies on how human speech encodes values and how AI can be trained to detect these ethical signals.
  • Supported interdisciplinary dialogue between computer scientists and social psychologists on model relevance and real-world application.
 
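Structuring qualitatively coded responses for AI training, as in the first two bullets, typically means exporting them into a machine-readable format. A minimal sketch, assuming a JSON Lines target and invented code names (not the lab's actual codebook or NVivo export format):

```python
import json

# Hypothetical coded responses: each pairs the raw text with its qualitative codes.
coded_responses = [
    {"text": "Everyone deserves a fair hearing.", "codes": ["fairness", "civic_duty"]},
    {"text": "You have to stand by your community.", "codes": ["loyalty"]},
]

def to_jsonl(records: list[dict]) -> str:
    """Serialize coded responses to JSON Lines: one training example per line."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

jsonl = to_jsonl(coded_responses)
```

JSON Lines is a common choice here because each line is independently parseable, which makes large annotation sets easy to stream into NLP training pipelines.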

Broader Impact & Relevance

  • Informed early frameworks for building value-aligned AI systems rooted in human communication patterns
  • Provided ethically annotated data that can be used to fine-tune natural language processing (NLP) models for public-facing technologies
  • Helped bridge gaps between qualitative social research and scalable computational models
  • Supported research that later contributed to ongoing national and international AI ethics discourse

What I Learned

This work deepened my understanding of how to bridge human insight with the technical demands of AI systems. I learned to annotate and structure large qualitative datasets using a custom schema—balancing rigor and empathy—to train NLP models that could detect real-world social signals, such as approximate crime rates in underreported areas. It also showed me how moral concepts like fairness and harm can be translated into computational patterns, expanding what AI can detect and understand.

But I also learned how complex and subjective human morality really is—like how some saw a simple “Happy birthday” as a moral comment, while others viewed it as an obligatory social norm. Grappling with that ambiguity taught me that responsible AI demands both technical precision and deep social and cultural awareness. Ultimately, this experience reinforced my conviction that behavioral scientists are critical to developing AI that is not just technically advanced, but also socially meaningful and ethically grounded.

Arineh Mirinjian
