The Privacy Lab is led by Prof. Apu Kapadia in the School of Informatics, Computing, and Engineering at Indiana University. Our goal is to advance research in online privacy, mobile security, and peer-to-peer systems. For an overview of our research, see our Research Projects and Publications pages. Visit the People page to see the faces behind our research.
Pervasive photo sharing in online social media platforms can cause unintended privacy violations when elements of an image reveal sensitive information. Prior studies have identified image obfuscation methods (e.g., blurring) to enhance privacy, but many of these methods adversely affect viewers’ satisfaction with the photo, which may cause people to avoid using them. We study the novel hypothesis that it may be possible to restore viewers’ satisfaction by ‘boosting’ or enhancing the aesthetics of an obscured image, thereby compensating for the negative effects of a privacy transform. Using a between-subjects online experiment, we studied the effects of three artistic transformations on images that had objects obscured using three popular obfuscation methods validated by prior research. Our findings suggest that using artistic transformations can mitigate some negative effects of obfuscation methods, but more exploration is needed to retain viewer satisfaction.
Read more about it in our CHI 2019 paper.
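To give a concrete sense of what an obfuscation method does to an image, the sketch below illustrates pixelation, one popular obfuscation technique of the kind studied in this line of work. This is a minimal, dependency-free illustration (not the paper’s implementation): a grayscale image is modeled as a 2D list of intensities, and each block of pixels is replaced by the block’s average, destroying fine detail in the obscured region.

```python
# Minimal pixelation sketch (illustrative only, not the studied pipeline).
# img is a 2D list of grayscale intensities; block is the pixelation size.
def pixelate(img, block=2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            vals = [img[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)  # average intensity of the block
            for y in ys:
                for x in xs:
                    out[y][x] = avg  # every pixel in the block becomes the average
    return out

# A 2x2 image collapses to a single averaged block:
print(pixelate([[0, 100], [50, 150]], block=2))  # → [[75, 75], [75, 75]]
```

In the study, an artistic transformation would then be applied on top of the obscured image to “boost” its aesthetics; that step is highly dependent on the specific transform and is not sketched here.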
High-fidelity, and often privacy-invasive, sensors are now becoming pervasive in our everyday environments. At home, digital assistants can constantly listen for instructions and security cameras can be on the lookout for unusual activity. Whereas an individual’s physical actions in their own home were once private, networked cameras and microphones can now give rise to electronic privacy concerns for one’s physical behaviors. Casual conversations and encounters, once thought to be private and ephemeral, may now be captured and disseminated or archived digitally. While these sensing devices benefit their users in many ways (hence their popularity), they can also expose users to serious privacy violations. A major problem with current sensing devices is that occupants of a space often cannot tell by visual inspection whether an audio or video sensor is indeed off. For example, sensors that have been hacked may record people without their consent even when their interfaces (e.g., small indicator lights) claim that they are off. The goal of this project is to explore privacy-enhanced sensor designs that provide people with the knowledge and assurance of when they are being recorded and what data is being captured and disseminated. Whereas purely software mechanisms may not inspire trust, physical mechanisms (e.g., a camera’s physical lens cap) can provide a more tangible privacy guarantee. This project explores novel physical designs of sensors that convey a clear and definite sense of assurance to people about their physical privacy.
Through a collaboration with University of Pittsburgh professors Rosta Farzan and Adam J. Lee, this project brings together expertise in computer security and privacy, access control, human computer interaction, and social computing. Through this interdisciplinary team, the project makes socio-technical contributions to both theory and practice by: (1) understanding the privacy concerns, needs, and behaviors of people in the face of increased sensing in physical environments; (2) exploring the design space for hardware sensing platforms to convey meaningful (‘tangible’) assurances of privacy to people by their physical appearance and function; (3) exploring visual indicators of what information is being sent over the network; and (4) exploring alternative sensor designs that trade off sensing fidelity for higher privacy. Together these designs combine hardware and software techniques to tangibly and visually convey a sense of privacy to people impacted by the sensors.
With the rise of digital photography and social networking, people are capturing and sharing photos on social media at an unprecedented rate. Such sharing may raise privacy concerns for the people captured in these photos, e.g., when embarrassing photos go “viral” and are shared widely. At worst, online photo sharing can result in cyber-bullying that greatly affects the subjects of such photos. This research builds on the observation that viewers of a photo are mindful of the privacy of other people, and could be influenced to protect their privacy when sharing photos online. First, the research will study how people think and feel about sharing photos of themselves and others. This will involve measuring their behavioral and physiological responses as they make their decisions. Second, the research will identify to what degree these decisions can be altered through technical mechanisms that are designed to encourage responsible image sharing activity that respects the privacy of people captured in the photo. The investigators will involve graduate and undergraduate students in this research.
This project brings together expertise in the psychological and brain sciences (Bennett Bertenthal) and computer security and privacy (Apu Kapadia) to explore socio-technical solutions for privacy in the context of photo sharing. In particular, the research focuses on first developing an understanding of people’s cognitive and affective dynamics while sharing photos on social media. The research seeks to 1) determine the effects of attention, depth of processing, and decisional uncertainty on image sharing decisions; and 2) identify the relationship between affective responses to images and decisions to share images on social media. Building on the knowledge gained from these experiments, the research seeks to develop and test a series of socio-technical intervention strategies such as face-highlighting and identity-priming, which are informed by a novel theoretical and methodological framework called objectification theory, or the idea that people are often motivated to see other people in terms of particular features or as objects of entertainment. These interventions will counteract objectification by encouraging viewers to consider the personal identity or privacy of the people depicted in each image before making image-sharing decisions. Thus, the mechanisms for addressing bystander privacy will be grounded in a psychological study that understands and manipulates the elements of decision making while sharing photos.
A critical element for a successful transition is the ability to disclose, or make known, one’s struggles. In our paper ‘Challenges in Transitioning from Civil to Military Culture: Hyper-Selective Disclosure through ICTs’ we explore the transition disclosure practices of Reserve Officers’ Training Corps (ROTC) students who are transitioning from an individualistic culture to one that is highly collective. As ROTC students routinely evaluate their peers through a ranking system, the act of disclosure may impact a student’s ability to secure limited opportunities within the military upon graduation. We interviewed 14 ROTC students to study how they use information and communication technologies (ICTs) to disclose their struggles in a hyper-competitive environment. We find that they engage in a process of highly selective disclosure, choosing different groups to disclose to based on the types of issues they face. We share implications for designing ICTs that better facilitate how ROTC students cope with personal challenges during their formative transition into the military.
Read more in our CSCW 2018 paper.
Millions of apps available to smartphone owners request various permissions to resources on the devices, including sensitive data such as location and contact information. Disabling permissions for sensitive resources could improve privacy but can also impact the usability of apps in ways users may not be able to predict. In our paper ‘To Permit or Not to Permit, That is the Usability Question: Crowdsourcing Mobile Apps’ Privacy Permissions Settings’ we study an efficient approach that ascertains the impact of disabling permissions on app usability through large-scale, crowdsourced user testing. The ultimate goal is to make recommendations to users about which permissions can be disabled for improved privacy without sacrificing usability.
We replicate and significantly extend previous analysis that showed the promise of a crowdsourcing approach where crowd workers test and report back on various configurations of an app. Through a large, between-subjects user experiment, our work provides insight into the impact of removing permissions within and across different apps. We had 218 users test Facebook Messenger, 227 test Instagram, and 110 test Twitter. We study the impact of removing various permissions within and across apps, and we discover that it is possible to increase user privacy by disabling app permissions while also maintaining app usability.
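A key ingredient of this crowdsourcing approach is enumerating the app configurations that workers test, where each configuration disables some subset of the app’s permissions. The sketch below is a hypothetical illustration of that enumeration (the permission names and helper are ours, not from the paper):

```python
# Hypothetical sketch: enumerate the permission subsets a crowd of workers
# could test, each subset representing one app configuration with those
# permissions disabled. Permission names below are illustrative.
from itertools import combinations

def permission_configs(permissions):
    """Yield every subset of permissions to disable, smallest subsets first."""
    for r in range(len(permissions) + 1):
        for subset in combinations(permissions, r):
            yield frozenset(subset)

configs = list(permission_configs(["LOCATION", "CONTACTS", "CAMERA"]))
# 2^3 = 8 configurations, from disabling nothing to disabling all three.
print(len(configs))  # → 8
```

In a between-subjects design, each worker would be assigned one configuration, and their usability reports would be aggregated per configuration to find permissions that can be disabled without hurting usability.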
In the paper ‘Cartooning for Enhanced Privacy in Lifelogging and Streaming Videos’, we describe an object replacement approach whereby privacy-sensitive objects in videos are replaced by abstract cartoons taken from clip art. We used a combination of computer vision, deep learning, and image processing techniques to detect objects, abstract details, and replace them with cartoon clip art. We conducted a user study with 85 users to discern the utility and effectiveness of our cartoon replacement technique. The results suggest that our object replacement approach preserves a video’s semantic content while improving its privacy by obscuring details of objects.
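The replacement step of such a pipeline can be sketched simply: once an object detector has produced a bounding box for a sensitive object, the box is overwritten with a scaled clip-art cartoon. The code below is a dependency-free, hypothetical illustration (the paper’s system uses computer vision and deep learning for detection, which is not shown); images are modeled as 2D lists of grayscale intensities.

```python
# Hypothetical sketch of cartoon object replacement (not the paper's system).
# img and cartoon are 2D lists of grayscale intensities; box is a detected
# bounding box (x0, y0, x1, y1) from some object detector.
def replace_with_cartoon(img, box, cartoon):
    x0, y0, x1, y1 = box
    ch, cw = len(cartoon), len(cartoon[0])
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            # Nearest-neighbor scale the cartoon to fill the box.
            cy = (y - y0) * ch // (y1 - y0)
            cx = (x - x0) * cw // (x1 - x0)
            out[y][x] = cartoon[cy][cx]
    return out

# Replace a 2x2 region of a blank 4x4 image with a one-pixel "cartoon":
result = replace_with_cartoon([[0] * 4 for _ in range(4)], (1, 1, 3, 3), [[9]])
```

The pixels inside the box now carry only the cartoon’s abstract content, which is the source of the privacy gain: details of the original object are no longer recoverable from the replaced region.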
Major online messaging services such as Facebook Messenger and WhatsApp are starting to provide users with real-time information about when recipients read their messages. This useful feature has the potential to negatively impact privacy as well as cause concern over access to self. In the paper ‘Was My Message Read?: Privacy and Signaling on Facebook Messenger’ we surveyed 402 senders and 316 recipients on Mechanical Turk. We looked at senders’ use of and reactions to the ‘message seen’ feature, and recipients’ privacy and signaling behaviors in the face of such visibility. Our findings indicate that senders experience a range of emotions when their message is not read, or is read but not answered immediately. Recipients also engage in various signaling behaviors in the face of visibility, both by replying and by not replying immediately.