The Privacy Lab is led by Prof. Apu Kapadia in the School of Informatics, Computing, and Engineering at Indiana University. Our goal is to advance research in online privacy, mobile security, and peer-to-peer systems. For an overview of our research, see our Research Projects and Publications pages. Visit the People page to see the faces behind our research.

Can Privacy be ‘Satisfying’ too? Paper accepted at CHI 2019

Pervasive photo sharing in online social media platforms can cause unintended privacy violations when elements of an image reveal sensitive information. Prior studies have identified image obfuscation methods (e.g., blurring) to enhance privacy, but many of these methods adversely affect viewers’ satisfaction with the photo, which may cause people to avoid using them. We study the novel hypothesis that it may be possible to restore viewers’ satisfaction by ‘boosting’ or enhancing the aesthetics of an obscured image, thereby compensating for the negative effects of a privacy transform. Using a between-subjects online experiment, we studied the effects of three artistic transformations on images that had objects obscured using three popular obfuscation methods validated by prior research. Our findings suggest that using artistic transformations can mitigate some negative effects of obfuscation methods, but more exploration is needed to retain viewer satisfaction.
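
To give a flavor of the kind of pipeline the study evaluates, the sketch below first obscures a region of an image with a Gaussian blur and then applies a simple ‘artistic’ boost (posterization plus edge enhancement) to the whole frame. This is only an illustration using Pillow: the region coordinates, blur radius, and posterize depth are our own placeholder choices, not the transformations or parameters used in the paper.

```python
from PIL import Image, ImageFilter, ImageOps

def obscure_region(img: Image.Image, box: tuple, radius: int = 12) -> Image.Image:
    """Blur only the privacy-sensitive region box=(left, top, right, bottom)."""
    out = img.copy()
    region = out.crop(box).filter(ImageFilter.GaussianBlur(radius))
    out.paste(region, box)
    return out

def artistic_boost(img: Image.Image, bits: int = 3) -> Image.Image:
    """A toy 'artistic' transform: posterize the palette, then enhance edges."""
    return ImageOps.posterize(img.convert("RGB"), bits).filter(ImageFilter.EDGE_ENHANCE)

# Demo on a synthetic image (no file I/O needed).
photo = Image.new("RGB", (320, 240), color=(180, 200, 220))
obscured = obscure_region(photo, (80, 60, 240, 180))
boosted = artistic_boost(obscured)
print(boosted.size, boosted.mode)
```

The key idea mirrors the hypothesis above: the obfuscation step removes sensitive detail, and the whole-image stylization step attempts to compensate for the aesthetic damage the obfuscation causes.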

Read more about it in our CHI 2019 paper.

NSF $450K (IU: $160K) Award to Study “Tangible Privacy: User-Centric Sensor Designs for Assured Privacy”

High-fidelity, and often privacy-invasive, sensors are becoming pervasive in our everyday environments. At home, digital assistants constantly listen for instructions and security cameras watch for unusual activity. Whereas an individual’s physical actions in their own home were once private, networked cameras and microphones now give rise to electronic privacy concerns for one’s physical behaviors. Casual conversations and encounters, once thought to be private and ephemeral, may now be captured and disseminated or archived digitally. While these sensing devices benefit their users in many ways, hence their popularity, they can also expose those users to serious privacy violations.

A major problem with current sensing devices is that visual inspection often cannot tell the occupants of a space whether an audio or video sensor is indeed off. For example, sensors that have been hacked may record people without their consent, even when their interfaces (e.g., small indicator lights) claim that they are off. The goal of this project is to explore privacy-enhanced sensor designs that provide people with the knowledge and assurance of when they are being recorded and what data is being captured and disseminated. Whereas purely software mechanisms may not inspire trust, physical mechanisms (e.g., a camera’s physical lens cap) can provide a more tangible privacy guarantee. This project explores novel, physical designs of sensors that convey a clear and definite sense of assurance to people about their physical privacy.

Through a collaboration with University of Pittsburgh professors Rosta Farzan and Adam J. Lee, this project brings together expertise in computer security and privacy, access control, human-computer interaction, and social computing. Through this interdisciplinary team, the project makes socio-technical contributions to both theory and practice by: (1) understanding the privacy concerns, needs, and behaviors of people in the face of increased sensing in physical environments; (2) exploring the design space for hardware sensing platforms to convey meaningful (‘tangible’) assurances of privacy to people by their physical appearance and function; (3) exploring visual indicators of what information is being sent over the network; and (4) exploring alternative sensor designs that trade off sensing fidelity for higher privacy. Together these designs combine hardware and software techniques to tangibly and visually convey a sense of privacy to people impacted by the sensors.

NSF $500K Award to Study “Socio-Technical Strategies for Enhancing Privacy in Photo Sharing”

With the rise of digital photography and social networking, people are capturing and sharing photos on social media at an unprecedented rate. Such sharing may lead to privacy concerns for the people captured in those photos, e.g., in the context of embarrassing photos that go “viral” and are shared widely. At worst, online photo sharing can result in cyber-bullying that can greatly affect the subjects of such photos. This research builds on the observation that viewers of a photo are mindful of the privacy of other people, and could be influenced to protect their privacy when sharing photos online. First, the research will study how people think and feel about sharing photos of themselves and others. This will involve measuring their behavioral and physiological responses as they make their decisions. Second, the research will identify to what degree these decisions can be altered through technical mechanisms that are designed to encourage responsible image sharing activity that respects the privacy of people captured in the photo. The investigators will involve graduate and undergraduate students in this research.

This project brings together expertise in the psychological and brain sciences (Bennett Bertenthal) and computer security and privacy (Apu Kapadia) to explore socio-technical solutions for privacy in the context of photo sharing. In particular, the research focuses on first developing an understanding of people’s cognitive and affective dynamics while sharing photos on social media. The research seeks to 1) determine the effects of attention, depth of processing, and decisional uncertainty on image sharing decisions; and 2) identify the relationship between affective responses to images and decisions to share images on social media. Building on the knowledge gained from these experiments, the research seeks to develop and test a series of socio-technical intervention strategies such as face-highlighting and identity-priming, which are informed by a novel theoretical and methodological framework called objectification theory, or the idea that people are often motivated to see other people in terms of particular features or as objects of entertainment. These interventions will counteract objectification by encouraging viewers to consider the personal identity or privacy of the people depicted in each image before making image-sharing decisions. Thus, the mechanisms for addressing bystander privacy will be grounded in a psychological study that understands and manipulates the elements of decision making while sharing photos.

Paper on Privacy and Assistive Technologies for the Visually Impaired to be Presented at UbiComp 2018

The emergence of augmented reality and computer-vision-based tools offers new opportunities to visually impaired persons (VIPs). Solutions that help VIPs in social interactions by providing information (age, gender, attire, expressions, etc.) about people in the vicinity are becoming available. Although such assistive technologies are already collecting and sharing this information with VIPs, the views, perceptions, and preferences of sighted bystanders about such information sharing remain unexplored. Bystanders may be willing to share more information for assistive uses, but it remains to be explored to what degree they will share various kinds of information and what might encourage additional sharing based on the contextual needs of VIPs. We describe the first empirical study of the information sharing preferences of sighted bystanders of assistive devices. We conducted a survey-based study using a contextual method of inquiry with 62 participants, followed by nine semi-structured interviews to shed more light on our key quantitative findings. We find that bystanders are more willing to share some kinds of personal information with VIPs, and are willing to share additional information if stronger security assurances can be made by improving their control over how their information is shared.

Read about it in our UbiComp 2018 paper.

Paper “You don’t want to be the next meme” Presented at SOUPS 2018

Pervasive photography and the sharing of photos on social media pose a significant challenge to undergraduates’ ability to manage their privacy. Drawing from an interview-based study, we find undergraduates feel a heightened state of being surveilled by their peers and rely on innovative workarounds – negotiating the terms and ways in which they will and will not be recorded by technology-wielding others – to address these challenges. We present our findings through an experience model of the life span of a photo, including an analysis of college students’ workarounds to deal with the technological challenges they encounter as they manage potential threats to privacy at each of our proposed four stages. We further propose a set of design directions that address our users’ current workarounds at each stage. We argue for a holistic perspective on privacy management that considers workarounds across all these stages. In particular, designs for privacy need to more equitably distribute the technical power of determining what happens with and to a photo among all the stakeholders of the photo, including subjects and bystanders, rather than the photographer alone.

Read more in our SOUPS 2018 paper.

Paper ‘Viewer Experience of Obscuring Scene Elements in Photos to Enhance Privacy’ to be Presented at ACM CHI ’18

With the rise of digital photography and social networking, people are sharing personal photos online at an unprecedented rate. In addition to their main subject matter, photographs often capture various incidental information that could harm people’s privacy. In our paper, ‘Viewer Experience of Obscuring Scene Elements in Photos to Enhance Privacy’, we explore methods to keep this sensitive information private while preserving image utility. While common image filters, such as blurring, may help obscure private content, they affect the utility and aesthetics of the photos, which is important since the photos are mainly shared on social media for human consumption. Existing research on privacy-enhancing image filters predominantly focuses on obscuring faces or lacks a systematic study of how filters affect image utility. To understand the trade-offs when obscuring various sensitive aspects of images, we study eleven filters applied to obfuscate twenty different objects and attributes, and evaluate how effectively they protect privacy and preserve image utility and aesthetics for human viewers.
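
For intuition, two commonly studied obfuscation styles (blurring and pixelation; the paper evaluates eleven filters in total) can be sketched as follows. The region and parameters here are illustrative placeholders, not the paper’s experimental settings.

```python
from PIL import Image, ImageFilter

def blur_region(img: Image.Image, box: tuple, radius: int = 10) -> Image.Image:
    """Obfuscate box=(left, top, right, bottom) with a Gaussian blur."""
    out = img.copy()
    out.paste(out.crop(box).filter(ImageFilter.GaussianBlur(radius)), box)
    return out

def pixelate_region(img: Image.Image, box: tuple, block: int = 12) -> Image.Image:
    """Obfuscate the same region by downsampling then re-upsampling (mosaic)."""
    out = img.copy()
    region = out.crop(box)
    w, h = region.size
    small = region.resize((max(1, w // block), max(1, h // block)), Image.NEAREST)
    out.paste(small.resize((w, h), Image.NEAREST), box)
    return out

# Demo on a synthetic image.
photo = Image.new("RGB", (400, 300), (200, 180, 160))
blurred = blur_region(photo, (100, 75, 300, 225))
mosaic = pixelate_region(photo, (100, 75, 300, 225))
print(blurred.size, mosaic.size)
```

Both filters destroy detail inside the target region, but they differ in how much residual structure a viewer can recover and how jarring the result looks, which is exactly the privacy/utility/aesthetics trade-off the study measures.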

Read more in our CHI ’18 paper.

Paper ‘Challenges in Transitioning from Civil to Military Culture: Hyper-Selective Disclosure through ICTs’ to be Presented at CSCW ’18

A critical element of a successful transition is the ability to disclose, or make known, one’s struggles. In our paper ‘Challenges in Transitioning from Civil to Military Culture: Hyper-Selective Disclosure through ICTs’ we explore the transition disclosure practices of Reserve Officers’ Training Corps (ROTC) students who are moving from an individualistic culture to one that is highly collective. As ROTC students routinely evaluate their peers through a ranking system, the act of disclosure may impact a student’s ability to secure limited opportunities within the military upon graduation. Through interviews with 14 ROTC students, we study how they use information and communication technologies (ICTs) to disclose their struggles in a hyper-competitive environment, and we find they engage in a process of highly selective disclosure, choosing different groups with which to disclose based on the types of issues they face. We share implications for designing ICTs that better facilitate how ROTC students cope with personal challenges during their formative transition into the military.

Read more in our CSCW 2018 paper.

Paper ‘To Permit or Not to Permit, That is the Usability Question’ Presented at PETS ’17

Millions of apps available to smartphone owners request various permissions to resources on the devices including sensitive data such as location and contact information. Disabling permissions for sensitive resources could improve privacy but can also impact the usability of apps in ways users may not be able to predict. In our paper ‘To Permit or Not to Permit, That is the Usability Question: Crowdsourcing Mobile Apps’ Privacy Permissions Settings’ we study an efficient approach that ascertains the impact of disabling permissions on the usability of apps through large-scale, crowdsourced user testing with the ultimate goal of making recommendations to users about which permissions can be disabled for improved privacy without sacrificing usability.

We replicate and significantly extend previous analysis that showed the promise of a crowdsourcing approach where crowd workers test and report back on various configurations of an app. Through a large, between-subjects user experiment, our work provides insight into the impact of removing permissions within and across different apps. We had 218 users test Facebook Messenger, 227 test Instagram, and 110 test Twitter. We study the impact of removing various permissions within and across apps, and we discover that it is possible to increase user privacy by disabling app permissions while also maintaining app usability.

Paper ‘Cartooning for Enhanced Privacy in Lifelogging and Streaming Videos’ Presented at CV-COPS ’17

In the paper ‘Cartooning for Enhanced Privacy in Lifelogging and Streaming Videos’, we describe an object replacement approach whereby privacy-sensitive objects in videos are replaced by abstract cartoons taken from clip art. We used a combination of computer vision, deep learning, and image processing techniques to detect objects, abstract details, and replace them with cartoon clip art. We conducted a user study with 85 users to discern the utility and effectiveness of our cartoon replacement technique. The results suggest that our object replacement approach preserves a video’s semantic content while improving its privacy by obscuring details of objects.
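
Setting the detection step aside, the replacement step of such a pipeline can be sketched as below: given a bounding box for a sensitive object, the object is covered with cartoon clip art scaled to fit. The bounding box and the clip art here are synthetic stand-ins; in the paper, computer-vision and deep-learning models supply the detections and the matching clip art.

```python
from PIL import Image

def replace_with_cartoon(frame: Image.Image, box: tuple, clipart: Image.Image) -> Image.Image:
    """Cover the region box=(left, top, right, bottom) with clip art scaled to fit."""
    out = frame.copy()
    w, h = box[2] - box[0], box[3] - box[1]
    art = clipart.convert("RGBA").resize((w, h))
    out.paste(art, (box[0], box[1]), art)  # the alpha channel acts as the paste mask
    return out

# Demo with synthetic images standing in for a video frame and a clip-art asset.
frame = Image.new("RGB", (640, 360), (90, 120, 90))
clipart = Image.new("RGBA", (64, 64), (255, 0, 0, 255))
result = replace_with_cartoon(frame, (200, 100, 360, 260), clipart)
print(result.size)
```

Because the cartoon preserves the object’s category and location while discarding its identifying detail, the frame’s semantic content survives even though the sensitive specifics do not.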

Paper ‘Was My Message Read?’ Presented at CHI ’17

Major online messaging services such as Facebook Messenger and WhatsApp are starting to provide users with real-time information about when recipients read their messages. This useful feature has the potential to negatively impact privacy as well as cause concern over access to self. In the paper ‘Was My Message Read?: Privacy and Signaling on Facebook Messenger’ we surveyed 402 senders and 316 recipients on Mechanical Turk. We looked at senders’ use of and reactions to the ‘message seen’ feature, and at recipients’ privacy and signaling behaviors in the face of such visibility. Our findings indicate that senders experience a range of emotions when their message is not read, or is read but not answered immediately. Recipients also engage in various signaling behaviors in the face of visibility, by either replying immediately or deliberately not replying.