Research Projects & Mentors
Advancements in sensing, computation, and communication technologies allow us to create new pervasive systems that can help address challenges in public health and safety. For example, wearable devices, like smartwatches, are widely available and have the potential to serve as platforms for unobtrusive systems that continuously monitor health behaviors and conditions, allowing users and medical professionals to understand and potentially improve health and well-being. Spaces instrumented with smart camera networks can help monitor airports for suspicious behavior and detect falls in common areas of assisted living centers. In this REU Site, students will explore the use of pervasive computing to improve health, safety, and well-being, solving core research problems in the areas of wireless networking, computer vision, and data analytics. Example projects are listed below.
Summer 2022 Projects
Project HATCH: Healthy After Childbirth with Community-Based Sensing and Behavior Change Techniques
Faculty mentor: Dr. Jamie Payton
Although pregnancy motivates many women to quit smoking, the vast majority return to smoking within 12 months of childbirth. Reducing postpartum smoking relapse can significantly improve both women's and children's health, and is directly linked to cancer prevention. Although many existing smoking cessation programs consider smoking in isolation from other behaviors, evidence shows that smoking abstinence is one of several cancer-preventative behaviors that interact with and impact one another for postpartum women; in particular, smoking abstinence, breastfeeding, and physical activity are interrelated behaviors.
Through Project Healthy After Childbirth (HATCH), a collaborative research effort supported with seed funding by the National Cancer Institute and Cancer Research U.K., we are developing a wearable computing solution to promote physical activity, breastfeeding, and sustained abstinence from smoking among postpartum women. Key to our solution is the observation that communities contribute to the larger social, cultural and environmental conditions that influence behavior. As such, our approach uses embedded and wearable sensors and social media data to monitor, detect, and influence not just the activities of postpartum women, but also of their local, in-person support networks (e.g., household members), remote, in-person communities (e.g., neighbors and friends), and remote, virtual communities (e.g., social networks). Such an approach builds on social contagion theory, which posits that an individual’s health is interconnected with the behaviors of people within their social network, and on the findings for the effective use of behavior change techniques to promote health and wellness.
Extending preliminary work on Project Hatch, REU Site students will work with mentors to explore the use of behavior change techniques that use location-based prompts to address some of the physical barriers that postpartum women face for smoking abstinence, exercise, and breastfeeding. Students will help to apply and extend activity recognition techniques using data from the sensors on a smartwatch (e.g., accelerometer and gyroscope for human motion, photoplethysmography (PPG) for heartrate, GPS for location) and will design and implement a wearable application that embodies context-aware prompts for behavior change techniques and the associated mechanisms of action (e.g., goal setting for cancer-preventative behaviors, feedback and monitoring, rewards, identity-building as a former smoker). As the project advances, students may also explore how existing activity recognition, stress detection, and context recognition algorithms for detecting cancer-preventative behaviors performed by an individual postpartum woman can be extended to apply to their local, remote, and virtual communities.
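As a concrete illustration of the activity recognition pipeline described above, the sketch below windows a stream of tri-axial accelerometer samples and extracts simple statistical features per window, a common first step before classification. The function name, window size, and feature set are illustrative assumptions, not the project's actual implementation.

```python
import math

def window_features(samples, window_size=50):
    """Split a stream of (x, y, z) accelerometer samples into fixed-size,
    non-overlapping windows and compute simple per-window statistics.
    Hypothetical sketch: real systems typically use overlapping windows
    and a much richer feature set."""
    features = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        window = samples[start:start + window_size]
        # Signal magnitude of each sample -- an orientation-invariant cue.
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        features.append({"mean": mean, "std": math.sqrt(var)})
    return features
```

Feature vectors like these would then be fed to a classifier trained to distinguish activities such as walking, exercise, or smoking gestures.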
TraffickCam: Computer Vision + Crowdsourced Image Acquisition to Combat Human Trafficking
Faculty mentor: Dr. Richard Souvenir
Images of human trafficking victims are shared among criminal networks and used to advertise sex services. A variety of efforts apply computer vision and machine learning approaches to these images in order to identify the location and support the investigation of criminal activity. Much of this activity takes place in hotels; our work focuses on identifying hotel rooms from images. We have created a large-scale database of hotel room images collected from both Internet sites (e.g., Expedia, TripAdvisor) and crowdsourcing using a mobile app that allows users to capture images of hotel rooms while traveling. This project, called TraffickCam, is part of a larger effort spearheaded by the Exchange Initiative and includes researchers from multiple academic institutions. Previous REU students have contributed to the project, using deep convolutional neural networks to develop an initial filter that determines whether contributed images depict hotel rooms or bathrooms, rather than other areas (e.g., pool, restaurant) or irrelevant images.
Modern machine learning methods rely on large-scale image sets to train models for recognition and identification tasks. To date, our database of hotel room images contains millions of Internet images and hundreds of thousands of images crowdsourced from the mobile app. While the Internet images are abundant and useful for discriminating between hotel rooms, these scenes are often lit artificially and professionally captured, and the resulting images are visually dissimilar from the types of images used in human trafficking (i.e., captured by amateurs). The opposite is true for the images from the mobile app users; they share visual characteristics with the query images, but are less abundant and often uninformative due to the lighting or portion of the scene captured.
REU students will investigate approaches for providing feedback to mobile app users during image capture to maximize the utility of crowdsourced images without overwhelming the volunteer. In addition to the design and implementation of newer versions of the mobile application, students will carry out formal user studies to assess the tradeoffs between user engagement and the amount and type of instructions. Students with previous experience in image analysis can work on adapting image quality assessment algorithms for real-time use on mobile platforms. Students with previous experience or interest in machine learning can collaborate to extract usable information from approaches in visual domain adaptation to understand what features of user-supplied images best complement the Internet image and determine if these insights can be translated into specific instructions (or qualitative assessments) provided to the mobile app users.
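The kind of real-time capture feedback described above could start from very cheap image statistics. The sketch below checks the mean brightness and contrast of a grayscale image and returns a hint for the user; the function name, thresholds, and messages are illustrative assumptions, not the app's actual quality-assessment algorithm.

```python
def capture_feedback(pixels, dark_thresh=40, contrast_thresh=20):
    """Given a flat list of grayscale pixel intensities (0-255), return a
    hint string for the mobile app user, or None if the image looks usable.
    Thresholds are illustrative, not tuned values from the project."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    std = var ** 0.5
    if mean < dark_thresh:
        # Underexposed images are unlikely to match query images well.
        return "Image looks too dark -- try turning on the room lights."
    if std < contrast_thresh:
        # Low contrast often means a blank wall or a too-close shot.
        return "Not enough detail visible -- step back to capture more of the room."
    return None
```

A production version would run on downsampled camera preview frames so that feedback appears while the user is still framing the shot.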
Healthy-AR: Healthier with Augmented Reality Smart Health Systems
Faculty mentors: Dr. Jie Wu, Dr. Chiu Tan
Augmented reality (AR), in which a user's visual perception of the real world is overlaid with digital information, is a compelling tool to support a multitude of future smart health and well-being applications. For example, AR can be used to foster diagnosis and treatment of children with autism, to provide increasingly complex tasks and activities of daily living as part of rehabilitation therapy for traumatic brain injury, and to provide exposure therapy for post-traumatic stress disorder. As with any technology used to support healthcare and collection of health data, security and privacy are key concerns. To enable adoption of AR-enabled smart health approaches, authentication of users to the AR devices is needed. Voice-based authentication is promising, as it offers a hands-free, non-intrusive, continuous approach to authenticating users to a device. However, since voices can easily be recorded in public settings, voice-based authentication is vulnerable to replay attacks. To support voice-based authentication for augmented reality applications for smart health, an approach that verifies the liveness of voices is needed.
A key insight in our approach to verifying voice liveness for AR systems is that the internal body voice, i.e., the vibration signal captured as the voice travels through the speaker's body, has a strong correlation with the ambient signal of the voice. Since this vibration signal can only be recorded by attaching a contact microphone firmly to the body, replay attackers cannot obtain the vibration signals. In our approach, a low-cost contact microphone is mounted on the AR headset to capture the vibration signal as a human voice propagates within the speaker's body, and a standard headset is used to capture the ambient propagation of the speaker's voice. We first transform the voice and vibration signals, extracting a feature vector that represents the correlation and the amount of shared information between signals. These features are then used to train a classification model for liveness detection. We also consider a strong attack model where the replay attackers can collect enough pairs of voice and vibration signals themselves and learn a model to generate synthetic vibration signals. The results on eight volunteers show that our system identifies authorized users with average accuracy of 97% and defends against two basic types of attacks with a success rate of at least 98%.
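The correlation feature at the heart of the liveness check above can be illustrated with a plain Pearson correlation between the two channels over one voice frame. This is a simplified sketch: the system described above first transforms the signals and extracts a richer feature vector, which this toy function does not attempt to reproduce.

```python
def correlation_feature(ambient, vibration):
    """Pearson correlation between an ambient (air) microphone frame and
    the contact-microphone vibration frame for the same utterance. A live
    speaker should yield a high correlation; a replayed recording has no
    matching vibration channel, so the correlation collapses."""
    n = len(ambient)
    ma = sum(ambient) / n
    mv = sum(vibration) / n
    cov = sum((a - ma) * (v - mv) for a, v in zip(ambient, vibration))
    sa = sum((a - ma) ** 2 for a in ambient) ** 0.5
    sv = sum((v - mv) ** 2 for v in vibration) ** 0.5
    if sa == 0 or sv == 0:
        return 0.0  # a constant signal carries no correlation information
    return cov / (sa * sv)
```

In practice such per-frame correlations would be computed across many frames and combined with other features before being passed to the liveness classifier.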
The REU students will extend this project to (a) recognize unauthorized users within a noisy and dynamic environment, (b) defend against sophisticated attackers who try to generate the victim's vibration signals, and (c) explore tradeoffs between accuracy, privacy, and usability. Through this project, the REU students will develop knowledge and skills about the state of the art in augmented reality technology, applications of machine learning, applications of sensor technology, and the role of voice authentication solutions to address security and privacy concerns in smarthealth applications.
Reasoning and Rationalization: How Users Interpret Personal and Population Health Data
Faculty mentor: Dr. Stephen MacNeil
As mobile phones, wearable devices, and smart sensing technology continue to make their way into the home, people have unprecedented access to information about themselves. Previous research has demonstrated how personal informatics devices can increase self-awareness, facilitate positive behavioral changes, and foster engagement. At the same time, population health data is increasingly being shared with the general public through government data portals, news articles, and public health announcements. While recent research is developing an understanding of how cognitive biases affect how individuals interpret data and make decisions with data, it is not yet clear how individuals interpret this aggregate data when it conflicts or conforms with personal data and lived experiences, particularly with respect to health and safety.
In our work, we develop theory about how an individual's relationship to data and their positionality within that data affects their reasoning. A series of experiments will progressively increase the participant's proximity to population-level data. Participants will reflect on their own lived experiences, then on local data about themselves and their region, and finally on large-scale nationwide data that shows a clear and generalizable trend. We hypothesize that known cognitive biases, such as confirmation bias and anchoring effects, may negatively impact how people "see themselves" in the data. We further hypothesize that lived experiences, confirmed by personal data, may trump rational interpretation of trends in data. We are particularly interested in understanding how people rationalize about being an outlier in a dataset.
REU students will work as part of a team to design and develop a series of web-based human subject experiments where participants compare and contrast data about themselves with large datasets that demonstrate clear population-level trends. The analysis will employ mixed methods to investigate participant behaviors, perspectives, and attitudes. Students will also develop technical skills in full-stack web development, data visualization, and data analysis. This project offers a unique opportunity for students to engage in interdisciplinary research at the intersection of cognitive science, public health policy, and computer science, which is especially beneficial for undergraduate students that are still exploring computing career paths. Finally, the real-world applicability of this research should be engaging for students who are interested in the growing trends in scientific disinformation and misinterpretation as it relates to health and safety, particularly in light of the COVID-19 pandemic.
BikeBalance: Incentive Algorithms for Rebalancing Shared Bike Systems to Promote Active Transportation
Faculty mentor: Dr. Jie Wu
Bike-sharing systems (BSSs) are widely used in cities worldwide as they offer an affordable, eco-friendly method of transport. However, the rate of renting and returning bikes from stations is not always equal. Stations with imbalanced demand can become out of service by having all docks filled or emptied. These out-of-service stations can lead to a worse user experience and fewer people using BSSs. Researchers are trying to solve the rebalancing problem by developing algorithms that incentivize workers to pick up and drop off bicycles from different stations to balance the rent and return rates. Many of these algorithms focus on creating incentive and pricing models to encourage workers to go to imbalanced stations. Since they do not consider all of the placements of workers, this strategy may lead to inefficiencies where workers travel farther than they need to. We can instead treat rebalancing as a Worker Assignment Problem by assigning workers to stations so as to minimize the total distance traveled. REU Site students have previously proposed an algorithm that approximates the optimal assignment significantly faster than existing techniques. The rapid speed allows for real-time use in augmenting pricing models and as a stand-alone method for worker assignment. REU Site students will experimentally compare this approach against existing algorithms on real-world data to evaluate computational speed and effectiveness, and will seek to develop improvements to the algorithm to address the problem with real-world constraints.
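To make the Worker Assignment formulation concrete, the sketch below implements a naive greedy baseline that repeatedly matches the closest remaining (worker, station) pair. This is not the students' approximation algorithm described above; it is only a simple point of comparison under the same objective of minimizing total travel distance.

```python
def greedy_assignment(workers, stations):
    """Greedy baseline for worker-to-station assignment: repeatedly match
    the closest remaining (worker, station) pair. Positions are (x, y)
    tuples; distance is Euclidean. An optimal solver (e.g., the Hungarian
    algorithm) would minimize total distance exactly, at higher cost."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # Enumerate all pairs, cheapest first.
    pairs = sorted(
        (dist(w, s), wi, si)
        for wi, w in enumerate(workers)
        for si, s in enumerate(stations)
    )
    assigned_w, assigned_s, result = set(), set(), {}
    for d, wi, si in pairs:
        if wi not in assigned_w and si not in assigned_s:
            result[wi] = si  # worker index -> station index
            assigned_w.add(wi)
            assigned_s.add(si)
    return result
```

Comparing such a baseline against both an exact solver and the proposed fast approximation on real trip data is one natural shape for the evaluation described above.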
Decentralized Learning at the Edge for Smarthealth and Well-being Applications
Faculty mentors: Dr. Hongchang Gao, Dr. Yan Wang, Dr. Yu Wang
Devices at the edge of the network, such as smartphones, sensors, and Internet of Things devices, have been widely used to collect users' personal data to support a wide variety of health and well-being applications. However, the data collected by edge devices is often sensitive and private in the context of smarthealth applications, and can be exposed when uploaded to a central server. In order to promote data privacy, a promising approach is to train machine learning models using data on the edge devices and use them to make decisions locally at the edge of the network. With a decentralized approach, the edge devices only need to communicate with neighboring devices. As a result, in addition to preserving the privacy of data, the decentralized training method reduces communication overhead and eliminates potential bottleneck issues that occur in systems with a centralized server.
Extending our prior work on large-scale optimization, the REU students will join our team to investigate a decentralized approach to train machine learning models on edge devices. A key challenge to training machine learning models at the network's edge is the heterogeneity of edge devices used to support smarthealth applications. These devices have different computation and communication capabilities, resulting in different computational speeds and communication latencies, which challenge traditional optimization algorithms for machine learning models on edge devices. The research team has recently developed a decentralized optimization algorithm. REU students will conduct simulation experiments with real-world data, such as the healthcare data collected by smartwatches and room sensors, to validate the convergence and generalization performance of the developed decentralized optimization algorithm. Additionally, other REU students can help to develop a toolbox for the developed decentralized training method that is compatible with existing machine learning packages to more broadly disseminate the results of the work. Through these projects, REU students will be exposed to knowledge in machine learning, optimization, and distributed computing problems and solutions in the context of smarthealth applications.
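The neighbor-only communication pattern described above can be illustrated with a single round of gossip averaging, the building block that decentralized training methods interleave with local gradient steps. This is a generic sketch of the technique, not the team's specific algorithm; the scalar parameters and graph representation are simplifying assumptions.

```python
def gossip_round(params, neighbors):
    """One synchronous round of decentralized (gossip) averaging: each node
    replaces its parameter with the mean of its own value and its neighbors'
    values. `params` maps node id -> scalar parameter; `neighbors` maps
    node id -> list of adjacent node ids. In decentralized SGD, rounds like
    this are interleaved with local gradient updates, so models agree
    without ever passing through a central server."""
    new_params = {}
    for node, value in params.items():
        group = [value] + [params[n] for n in neighbors[node]]
        new_params[node] = sum(group) / len(group)
    return new_params
```

Repeating such rounds drives all nodes toward the network-wide average; on sparse graphs, convergence speed depends on the graph's connectivity, which is one reason device heterogeneity and topology matter for the optimization algorithms discussed above.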
Previous REU Site Projects (2019-2021):
Crowdsensing Task Assignment to Support Community-Driven Health Applications
Faculty mentor: Dr. Jamie Payton
Mobile crowdsensing via smartphones enables mobile data collection on a massive scale and has been widely used to investigate scientific questions or address civic issues, such as traffic planning and environmental monitoring. Compared with static sensor networks, mobile crowdsensing leverages existing sensing and communication infrastructure without additional costs; provides unprecedented spatio-temporal coverage, especially for observing unpredictable events; and integrates human intelligence into the sensing and data processing. Research challenges in crowdsensing include incentivizing volunteers to participate and assigning tasks to volunteers in a way that meets data quality requirements. In this project, students will explore the design of new incentive and task assignment algorithms for collaborative, team-driven crowdsensing that consider the intrinsic motivations of volunteers for community health applications.
In Summer 2020, students explored the use of wearable computing devices and behavior change techniques to promote three interrelated cancer-preventative behaviors among postpartum women: smoking abstinence, exercise, and breastfeeding.
Sign Language Recognition System Using WiFi Signals
Faculty mentors: Dr. Jie Wu, Dr. Chiu Tan
Creating pervasive systems that support the automated translation of sign language can expand opportunities for communication and help to increase accessibility for the deaf and hearing impaired. Several approaches for automated sign language translation have been developed; however, most rely on video or data from specialized wearable sensors. In contrast, our work applies human activity recognition techniques to WiFi signals to identify American Sign Language (ASL) signs in real-time. Such WiFi-based activity recognition systems eliminate the requirement for users to wear or carry a device and avoid privacy issues associated with cameras. However, several research challenges remain. In this project, REU students will focus on designing new feature extraction and classification methods for Channel State Information signals that can be used to support the identification of more gestures with higher recognition accuracies.
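One simple starting point for the CSI feature extraction mentioned above is per-subcarrier temporal variance: hand and arm motion perturbs the multipath channel, raising amplitude variance on affected subcarriers. The function below is an illustrative sketch under that assumption, not the project's actual feature pipeline.

```python
def csi_window_features(csi_frames):
    """csi_frames: a window of CSI frames, each a list of per-subcarrier
    amplitudes. Returns the temporal variance of each subcarrier over the
    window -- a crude motion indicator, since gestures perturb the wireless
    multipath channel and raise variance on affected subcarriers."""
    n_sub = len(csi_frames[0])
    feats = []
    for k in range(n_sub):
        series = [frame[k] for frame in csi_frames]
        mean = sum(series) / len(series)
        feats.append(sum((v - mean) ** 2 for v in series) / len(series))
    return feats
```

Real systems would denoise the CSI first and add frequency-domain and cross-subcarrier features before classification, which is exactly the design space the project asks students to explore.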
Supporting Pervasive Health Applications with Worst-Case Age-of-Information Guarantees
Faculty mentors: Dr. Bo Ji, Dr. Anduo Wang
Real-time pervasive health monitoring has the potential to revolutionize patient care, allowing patient health characteristics to be transmitted to remote medical professionals. Given the sensitivity of the information and the critical nature of such systems, rigorous evaluation of new protocols to support pervasive health care is required. Commonly used performance metrics for distributed systems, such as throughput and delay, do not capture some of the key concerns associated with delivery of remotely sensed health information; in particular, understanding the timeliness of the data is of the utmost importance to support analysis and decision-making by medical care professionals. Recently, the Age-of-Information (AoI) metric has been proposed as an important metric for investigating timeliness performance. In this project, REU students will explore ways to specify and guarantee the worst-case Age-of-Information performance for pervasive health applications.
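To make the AoI metric concrete, the sketch below computes the time-average age for a monitor that receives status updates at known times. It makes two simplifying assumptions, stated in the docstring: updates are delivered instantly (so age resets to zero on each arrival) and an initial update occurs at time zero. Age grows linearly between updates, so the integral reduces to a sum of triangular areas.

```python
def average_aoi(update_times, horizon):
    """Time-average Age-of-Information over [0, horizon], assuming each
    update is generated and delivered at the same instant (age resets to 0)
    and that an update occurs at time 0. Between updates, age grows
    linearly, so each inter-update gap contributes a triangular area of
    gap^2 / 2 to the age integral."""
    times = sorted(update_times) + [horizon]
    area, last = 0.0, 0.0
    for t in times:
        gap = t - last
        area += gap * gap / 2.0  # age rises linearly from 0 to `gap`
        last = t
    return area / horizon
```

Worst-case AoI guarantees, the focus of the project above, instead bound the peak of this sawtooth curve rather than its average; network and processing delays would shift each reset upward by the delivery latency.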
Lightweight Supervised Learning for Human Activity Recognition
Faculty mentor: Dr. Slobodan Vucetic
A common aim in pervasive health systems is to use human activity recognition techniques applied to sensor data from wearable devices to monitor health-related behaviors (e.g., cigarette smoking) and activities of daily living (e.g., walking, preparing a meal). The objective of this project is to develop computationally lightweight algorithms that allow learning of activity recognition predictive models on resource-constrained wearable devices. In our previous work, we developed BudgetedSVM, a C++ toolbox containing highly optimized memory-light implementations for scalable training of Support Vector Machines (SVMs). While BudgetedSVM allows efficient learning of memory-light SVMs that can be used for human activity recognition, it cannot be implemented on extremely resource-constrained devices. In this project, REU students will extend BudgetedSVM for use on wearable devices to support pervasive health applications.
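The budget-maintenance idea behind tools like BudgetedSVM can be illustrated with a kernel perceptron that caps its number of stored support vectors, keeping memory constant regardless of stream length. This is a generic sketch of budgeted online learning, not BudgetedSVM's actual algorithms; the function name, Gaussian kernel, and drop-oldest policy are illustrative choices.

```python
import math

def budgeted_kernel_perceptron(stream, budget, gamma=1.0):
    """Online kernel perceptron with a hard cap on stored support vectors,
    in the spirit of budgeted SVM training for memory-constrained devices.
    `stream` yields (features, label) pairs with label in {-1, +1}.
    Returns the learned predictor and the number of online mistakes."""
    sv = []  # list of (features, label) support vectors, bounded by budget

    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-gamma * d2)  # Gaussian (RBF) kernel

    def predict(x):
        score = sum(y * kernel(x, v) for v, y in sv)
        return 1 if score >= 0 else -1

    mistakes = 0
    for x, y in stream:
        if predict(x) != y:
            mistakes += 1
            sv.append((x, y))
            if len(sv) > budget:
                sv.pop(0)  # simplest budget maintenance: drop the oldest
    return predict, mistakes
```

More sophisticated budget-maintenance strategies (merging or projecting support vectors rather than discarding them) trade extra computation for better accuracy, which is the kind of tradeoff students would confront when porting such models to wearables.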
Sentiment Analysis
Faculty mentor: Dr. Eduard Dragut
Accurate information from both sides of contemporary issues is known to be an "antidote to confirmation bias". While such information helps readers develop vital skills, including critical thinking and open-mindedness, it is relatively rare and hard to find online. With the well-researched argumentative opinions (arguments) on controversial issues shared by Procon.org in a nonpartisan format, detecting the stance of arguments is a crucial step to automate organizing such resources. We use a universal pretrained language model with a weight-dropped LSTM neural network to leverage the context of an argument for stance detection on the proposed dataset. Experimental results show that the dataset is challenging; however, utilizing the pretrained language model fine-tuned on context information yields a general model that beats the competitive baselines. We also provide analysis to find the segments of an argument that are informative to our stance detection model and investigate the relationship between the sentiment of an argument and its stance. In this project, REU Site students will explore alternative approaches to further improve the utilization of context for detecting the stance of arguments.
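For orientation, the sketch below shows the weakest possible stance-detection baseline: a lexicon of pro and con cue words. The cue lists are illustrative assumptions; the project itself uses a pretrained language model with a weight-dropped LSTM, which this heuristic is only meant to contrast with.

```python
def stance_baseline(argument,
                    pro_cues=("support", "benefit", "favor"),
                    con_cues=("oppose", "harm", "against")):
    """Toy lexicon baseline for argument stance detection ("pro" vs "con").
    Counts cue-word occurrences and picks the majority side (ties go to
    "pro"). Purely illustrative -- context-aware neural models exist
    precisely because stance rarely reduces to surface keywords."""
    text = argument.lower()
    pro = sum(text.count(c) for c in pro_cues)
    con = sum(text.count(c) for c in con_cues)
    return "pro" if pro >= con else "con"
```

Baselines like this fail on negation, sarcasm, and implicit stances, which is what motivates leveraging the full context of an argument as described above.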
Apply Now!
Important Dates
Applications Open:
Nov. 15, 2023
Application Deadline:
Feb. 12, 2024
Acceptance Notification:
Feb. 19, 2024
REU Program:
June 3, 2024 to Aug. 2, 2024