SSP Forum [12:05-1:20pm]: James Brown, Anjali Ragupathi, and Naomi Eigbe (M.S. Candidates)

Monday, May 20, 2024
Margaret Jacks Hall (Bldg. 460)
Room 126
(See description for Notes on Entry)

Symbolic Systems Forum
(community sessions of SYMSYS 280 - Symbolic Systems Research Seminar)

Remaking Reality: Deception and Misinformation in Virtual Environments
James Brown (M.S. Candidate)
Symbolic Systems Program

Exploring Cross-Lingual Idiom Interpretation in Large Language Models
Anjali Ragupathi (M.S. Candidate)
Symbolic Systems Program

Quantifying and Evaluating Racial Bias in TV Dialogue
Naomi Eigbe (M.S. Candidate)
Symbolic Systems Program

Monday, May 20, 2024
12:05-1:20 pm PT
(note earlier than usual starting time)
Margaret Jacks Hall (Bldg. 460), Room 126
In-person event, not recorded
(see below for entry instructions if you are not an active Stanford affiliate)

Note: Lunch is provided, if pre-ordered, only for members of SYMSYS 280, but others are welcome to bring a lunch and eat during the presentation.


12:05pm James Brown, "Remaking Reality: Deception and Misinformation in Virtual Environments" (Primary Advisor: Jeremy Bailenson, Communication; Second Reader: Jeffrey Hancock, Communication)
     Humans are spending an increasing amount of time inside virtual environments, with approximately 50 million headsets sold in the last five years. At the same time, the medium is becoming more immersive, creating perceptual experiences that come closer to the fidelity of reality with each new iteration. Misinformation and disinformation, longtime societal issues, have been exacerbated by the rise of the internet and social media platforms. Some of those same social media companies are also primarily driving the adoption of Mixed Reality (MR): 90% of all MR headsets sold in 2022 were made by either Meta or ByteDance. The inherent nature of MR, designed to experientially simulate "reality," raises concerns about the potential impact of intentionally engineered or altered virtual realities on user beliefs and behaviors. This presentation will review prior research in the field, present an affordance-based account of why MR may be more influential than traditional social media in shaping user behavior, and describe ongoing research efforts to measure and mitigate deception and misinformation in virtual environments.

12:30pm Anjali Ragupathi, "Exploring Cross-Lingual Idiom Interpretation in Large Language Models" (Primary Advisor: Chris Potts, Linguistics; Second Reader: Judith Degen, Linguistics)
     Idioms are figures of speech whose meaning cannot be deciphered from their constituent words. In the absence of cultural, social, and conversational context, it is challenging for humans and large language models (LLMs) alike to interpret an unfamiliar idiom. This study aims to compare these idiom interpretation abilities in humans and LLMs through a series of tasks, including free-form responses and choice-based prompts, on a novel dataset of idioms from different languages. Our work aims to answer three main questions: 1. How do humans and LLMs compare on a pragmatic interpretation task when no context is given? 2. In the absence of context cues, what linguistic features contribute to the interpretation of idioms? 3. What role does an underlying conceptual metaphor play in understanding novel idioms?

12:55pm Naomi Eigbe, "Quantifying and Evaluating Racial Bias in TV Dialogue" (Primary Advisor: Dan Jurafsky, Linguistics and Computer Science)
     Over the past few decades, the topic of diverse representation in media has grown in prominence, as marginalized communities have fought for more truthful, numerous, and multifaceted depictions on-screen. Prior psychological and sociological research has identified positive impacts of high-quality representation, underscoring the need to understand the progress we’ve made and areas we still need to improve. However, while the quantity of on-screen representation can be measured rather straightforwardly, assessing the quality of these depictions poses a more complex challenge. In this study, we aim to develop natural language processing methods for screenplay dialogue with which we can reliably measure character traits associated with the Stereotype Content Model, a psychological framework that posits that all interpersonal impressions exist along the dimensions of warmth and competence. We plan to computationally explore linguistic indicators of warmth, competence, and other individual characteristics, observe how they vary in the language assigned to characters of different demographic backgrounds, and investigate the best approaches to improving representation with this data. We hope that such analysis will provide a more nuanced examination of on-screen diversity.


Entry to the building is open to anyone with an active Stanford ID via the card readers next to each door. If you do not have a Stanford ID, you can gain entry between 12:15 and 12:30pm ONLY by knocking on the exterior windows of room 126. These windows are to the left of the west side exterior door on the first floor of Margaret Jacks Hall, which faces the back east side of Building 420. Please do not knock on these windows after 12:30pm when the talk has started. We will not be able to come out and open the door for you at that point.