May 21, 2018
Building 460, Room 126 (Greenberg Seminar Room)
Symbolic Systems Forum
Public Presentations of M.S. Projects
Devangi Vivrekar (M.S. Candidate)
Symbolic Systems Program
George Pakapol Supaniratisai (M.S. Candidate)
Symbolic Systems Program
Monday, May 21, 2018
Building 460, Room 126 (Margaret Jacks Hall)
(1) Devangi Vivrekar (M.S. Candidate), Symbolic Systems Program, "Persuasive Design Techniques in the Attention Economy: Taxonomy and User Awareness" (Advisor: James Landay, Computer Science; Second Reader: Alia Crum, Psychology)
The systematic study of persuasion has captured researchers’ interest since the advent of mass influence mechanisms such as radio, television, and advertising. With the unprecedented growth of massively popular social media applications that are ubiquitously accessible on smart devices, consumers’ attention, attitudes, and behaviors are constantly influenced by persuasive design techniques on platforms that profit by maximizing users’ time spent on site. Although there exists a rich social psychology literature on methods of persuasion and exploitable cognitive biases, we lack a mapping of the specific persuasive design techniques used by products like Facebook or LinkedIn onto this persuasive space. In this talk, I will discuss our efforts to contribute to and update the taxonomies of persuasion. I will also present our design of a system that annotates online newsfeeds to point out the use of persuasive design techniques in real time, and discuss its effects on user awareness of these techniques.
(2) George Pakapol Supaniratisai (M.S. Candidate), Symbolic Systems Program (Primary Advisor: Surya Ganguli, Applied Physics; Second Reader: Johan Ugander, Management Science and Engineering)
With an ever larger amount of data stored across the internet, data privacy has become a major concern for many parties. Recently, several unintentional "data breaches" have arisen from publicly available data, where the released information can be relinked through machine learning exploits to deanonymize users. At the same time, as many companies and organizations move toward transparency, more usage data are being published, especially data related to government officials' activities.
In this talk, we will explore the idea of differential privacy: trading away data precision to prevent reidentification of users while preserving performance in machine learning applications across different types of models. We will walk through preliminary findings and discuss the implications and future work.
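As a concrete illustration of the precision-for-privacy trade-off the abstract describes, here is a minimal sketch of the standard Laplace mechanism from the differential privacy literature. This is not the speaker's actual method, and the function name and parameters are illustrative:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy version of `true_value` satisfying epsilon-differential privacy.

    sensitivity: max change in the query result from adding or removing
                 one user's record.
    epsilon:     privacy budget; smaller epsilon means more noise and
                 stronger privacy, at the cost of precision.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    # Add Laplace noise calibrated to sensitivity / epsilon.
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a counting query over a dataset.
# Counting queries have sensitivity 1: one user changes the count by at most 1.
true_count = 1234
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
```

The released value is unbiased, so aggregate statistics remain usable (the "preserving performance" side of the trade-off), while any single user's contribution is masked by the noise.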