Browsing by Author "Gurbuz, Sevgi Z."
Now showing 1 - 5 of 5
Item  Cross-Frequency Training with Adversarial Learning for Radar Micro-Doppler Signature Classification (2020)
Gurbuz, Sevgi Z.; Rahman, M. Mahbubur; Kurtoglu, Emre; Macks, Trevor; Fioranelli, Francesco; University of Alabama Tuscaloosa

Deep neural networks have become increasingly popular in radar micro-Doppler classification; yet a key challenge, which has limited potential gains, is the lack of large amounts of measured data that can facilitate the design of deeper networks with greater robustness and performance. Several approaches have been proposed in the literature to address this problem, such as unsupervised pre-training and transfer learning from optical imagery or synthetic RF data. This work investigates an alternative approach to training that exploits “datasets of opportunity” – micro-Doppler datasets collected using other RF sensors, which may differ in frequency, bandwidth, or waveform – for training purposes. Specifically, this work compares in detail the cross-frequency training degradation incurred by several different training approaches and deep neural network (DNN) architectures. Results show a 70% drop in classification accuracy when the RF sensors for pre-training, fine-tuning, and testing are all different, and a 15% degradation when only the pre-training data is different but the fine-tuning and test data are from the same sensor. By using generative adversarial networks (GANs), a large amount of synthetic data is generated for pre-training.
Results show that cross-frequency performance degradation is reduced by 50% when kinematically sifted, GAN-synthesized signatures are used in pre-training.

Item  A CubeSat Train for Radar Sounding and Imaging of Antarctic Ice Sheet
Gogineni, Prasad; Simpson, Christopher R.; Yan, Jie-Bang; O'Neill, Charles R.; Sood, Rohan; Gurbuz, Sevgi Z.; Gurbuz, Ali C.; University of Alabama Tuscaloosa

Item  Monitoring of Eating Behavior Using Sensor-Based Methods (University of Alabama Libraries, 2024)
Hossain, Delwar; Sazonov, Edward

The essential physiological functions of the human body, including respiration, circulation, physical exertion, and protein synthesis, rely on energy from dietary constituents. Understanding eating behavior is crucial for overall health, as deviations in energy intake can lead to malnutrition-induced weight loss or obesity-related weight gain. Traditionally, dietary intake assessment has relied on self-reporting methods, such as dietary records, 24-hour recalls, and food frequency questionnaires. While these methods help in understanding relationships between eating behavior and dietary intake, they lack the granularity needed to explore detailed food consumption processes. Therefore, there is a need for innovative solutions that enable objective, precise, and automated monitoring of eating behavior, especially in free-living conditions.

This dissertation investigates the application of wearable sensor systems for the automatic monitoring of eating behavior with minimal effort from subjects. First, a systematic review was conducted to identify available technology-driven methods for monitoring eating behavior. Next, a novel, contactless method for detecting and measuring eating behaviors such as chews and bites from eating videos was developed. An algorithm was then devised to evaluate and compare different sensor modalities for identifying eating behavior, specifically focusing on chewing and chewing-strength measurement.
Four sensor modalities – an Ear Canal Pressure Sensor, a Piezoresistive Bend Sensor, a Piezoelectric Strain Sensor, and an EMG Sensor – were assessed. Results indicated comparable efficacy across all four systems in identifying chewing and chewing strength. Next, a novel Ear Canal Pressure Sensor system was explored for monitoring eating behavior, particularly chewing, in free-living environments. The findings demonstrated accurate detection and estimation of chewing in both controlled and free-living settings. Finally, a machine learning model to estimate energy intake (EI) from sensor-captured eating behavior features was developed and evaluated in free-living settings. The results highlight the efficacy of the sensor-based EI model and the potential for improved accuracy by leveraging image assistance and automatic food item detection.

In conclusion, this research advances eating behavior monitoring using wearable sensor technologies. The findings hold promise for personalized nutrition interventions and mark a significant step forward in the objective assessment of eating habits.

Item  Multi-Frequency RF Sensor Data Adaptation for Motion Recognition with Multi-Modal Deep Learning (2021)
Rahman, M. Mahbubur; Gurbuz, Sevgi Z.; University of Alabama Tuscaloosa

The widespread availability of low-cost RF sensors has made it easier to construct RF sensor networks for motion recognition, and has increased the availability of RF data across a variety of frequencies, waveforms, and transmit parameters. However, it is not effective to directly use disparate RF sensor data for the training of deep neural networks, as the phenomenological differences in the data result in significant performance degradation.
In this paper, we consider two approaches for the exploitation of multi-frequency RF data: 1) a single-sensor case, where adversarial domain adaptation is used to transform the data from one RF sensor to resemble that of another, and 2) a multi-sensor case, where a multi-modal neural network is designed for joint target recognition using measurements from all sensors. Our results show that the developed approaches offer effective techniques for leveraging multi-frequency RF sensor data for target recognition.

Item  Multi-Frequency RF Sensor Fusion for Word-Level Fluent ASL Recognition (IEEE, 2021)
Gurbuz, Sevgi Z.; Rahman, M. Mahbubur; Kurtoglu, Emre; Malaia, Evie; Gurbuz, Ali Cafer; Griffin, Darrin J.; Crawford, Chris; University of Alabama Tuscaloosa; Mississippi State University

Deaf spaces are unique indoor environments designed to optimize visual communication and Deaf cultural expression. However, much of the technological research geared toward the deaf involves the use of video or wearables for American Sign Language (ASL) translation, with little consideration for Deaf perspectives on the privacy and usability of the technology. In contrast to video, RF sensors offer an avenue for ambient ASL recognition while also preserving privacy for Deaf signers. Methods: This paper investigates the RF transmit waveform parameters required for effective measurement of ASL signs and their effect on the word-level classification accuracy attained with transfer learning and convolutional autoencoders (CAE). A multi-frequency fusion network is proposed to exploit data from all sensors in an RF sensor network and improve the recognition accuracy of fluent ASL signing. Results: For fluent signers, CAEs yield a 20-sign classification accuracy of 76% at 77 GHz and 73% at 24 GHz, while at X-band (10 GHz) accuracy drops to 67%. For hearing imitation signers, signs are more separable, resulting in a 96% accuracy with CAEs.
Further, fluent ASL recognition accuracy is significantly increased with use of the multi-frequency fusion network, which boosts the 20-sign fluent ASL recognition accuracy to 95%, surpassing conventional feature-level fusion by 12%. Implications: Signing involves finer spatiotemporal dynamics than typical hand gestures, and thus requires interrogation with a transmit waveform that has a rapid succession of pulses and high bandwidth. Millimeter-wave RF frequencies also yield greater accuracy due to the increased Doppler spread of the radar backscatter. Comparative analysis of articulation dynamics also shows that imitation signing is not representative of fluent signing and is not effective for pre-training networks for fluent ASL classification. Deep neural networks employing multi-frequency fusion capture both shared and sensor-specific features, and thus offer significant performance gains in comparison to using a single sensor or feature-level fusion.
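Several of the items above rest on the same idea: learn a separate embedding per RF sensor (e.g., 77 GHz, 24 GHz, 10 GHz) and combine the embeddings before classification. The sketch below illustrates plain feature-level fusion (concatenation of per-sensor embeddings followed by a shared classifier head); the toy encoders, weights, and dimensions are illustrative assumptions for exposition, not the fusion architecture from these papers.

```python
# Minimal sketch of feature-level fusion across RF sensors at different
# carrier frequencies. Encoders and dimensions are toy stand-ins, not the
# authors' multi-frequency fusion network.

def encode(spectrogram, weights):
    """Toy per-sensor encoder: each output feature is a weighted sum of
    the input samples, standing in for a learned convolutional branch."""
    return [sum(w * x for w, x in zip(row, spectrogram)) for row in weights]

def feature_level_fusion(embeddings):
    """Concatenate per-sensor embeddings into one joint feature vector."""
    fused = []
    for emb in embeddings:
        fused.extend(emb)
    return fused

def classify(fused, class_weights):
    """Toy linear classifier head over the fused feature vector."""
    scores = [sum(w * f for w, f in zip(row, fused)) for row in class_weights]
    return scores.index(max(scores))

# Example: three sensors, 4-sample "spectrograms", 2 features per branch,
# and a 20-way head (mirroring the 20-sign vocabulary above).
sensors = {
    "77GHz": [0.9, 0.1, 0.4, 0.2],
    "24GHz": [0.7, 0.3, 0.5, 0.1],
    "10GHz": [0.2, 0.8, 0.1, 0.6],
}
branch_weights = [[1.0, 0.5, -0.5, 0.25], [0.0, 1.0, 0.5, -1.0]]
embeddings = [encode(s, branch_weights) for s in sensors.values()]
fused = feature_level_fusion(embeddings)  # 3 sensors x 2 features = 6 values
class_weights = [[(i + j) % 3 - 1 for j in range(6)] for i in range(20)]
predicted = classify(fused, class_weights)
```

The last paper's point is that this concatenation baseline is what the proposed multi-frequency fusion network improves upon by also learning which features are shared across sensors and which are sensor-specific.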