Browsing by Author "Gurbuz, Sevgi Zubeyde"
Now showing 1 - 4 of 4
Item: Cross-frequency training with adversarial learning for radar micro-Doppler signature classification
Kurtoglu, Emre; Rahman, Mahbubur (Mahbub); Macks, Trevor; Gurbuz, Sevgi Zubeyde; Fioranelli, Francesco

Item: Deep Neural Network Initialization Methods for Micro-Doppler Classification With Low Training Sample Support (IEEE, 2017)
Seyfioglu, Mehmet Saygin; Gurbuz, Sevgi Zubeyde; TOBB Ekonomi ve Teknoloji University; University of Alabama Tuscaloosa
Deep neural networks (DNNs) require large-scale labeled data sets to prevent overfitting while achieving good generalization. In radar applications, however, acquiring a measured data set on the order of thousands of samples is challenging due to constraints on manpower, cost, and other resources. In this letter, the efficacy of two neural network initialization techniques for training DNNs on small data sets, unsupervised pretraining and transfer learning, is compared. Unsupervised pretraining is implemented through the design of a convolutional autoencoder (CAE), while transfer learning from two popular convolutional neural network architectures (VGGNet and GoogleNet) is used to augment measured RF data for training. A 12-class problem for discriminating micro-Doppler signatures of indoor human activities is used to analyze activation maps, bottleneck features, class models, and classification accuracy with respect to training sample size. Results show that on meager data sets, transfer learning outperforms unsupervised pretraining and random initialization by 10% and 25%, respectively, but that when the sample size exceeds 650, unsupervised pretraining surpasses transfer learning and random initialization by 5% and 10%, respectively.
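The micro-Doppler signatures these works classify are time-frequency maps of the slow-time radar return. As a hedged illustration only (synthetic signal, illustrative parameters — none of this comes from the papers themselves), such a signature is conventionally computed with a short-time Fourier transform:

```python
import numpy as np

# Synthetic slow-time radar return: a body part oscillating at 2 Hz
# phase-modulates the return (illustrative parameters, not measured data).
fs = 1000                                   # slow-time sample rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.exp(1j * 2 * np.pi * (50 * t + 10 * np.sin(2 * np.pi * 2 * t)))

# Short-time Fourier transform: window each frame, hop, FFT along Doppler
nperseg, hop = 128, 32
win = np.hanning(nperseg)
starts = range(0, len(x) - nperseg + 1, hop)
frames = np.stack([win * x[s:s + nperseg] for s in starts], axis=1)
Z = np.fft.fftshift(np.fft.fft(frames, axis=0), axes=0)

# Micro-Doppler signature: dB magnitude, Doppler (rows) vs. time (columns)
signature = 20 * np.log10(np.abs(Z) + 1e-12)
```

Images like `signature` are what the CAE, VGGNet, and GoogleNet initializations in the abstract above are trained on.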
Visualization of activation layers and learned models reveals how the CAE succeeds in representing the micro-Doppler signature.

Item: Fully-Adaptive RF Sensing for Non-Intrusive ASL Recognition via Interactive Smart Environments (University of Alabama Libraries, 2024)
Kurtoglu, Emre; Gurbuz, Sevgi Zubeyde
The past decade has seen great advancements in speech recognition for the control of interactive devices, personal assistants, and computer interfaces. However, Deaf and hard-of-hearing people, whose primary mode of communication is sign language, cannot use voice-controlled interfaces. Although there has been significant work in video-based sign language recognition, video is not effective in the dark and has raised privacy concerns in the Deaf community when used in the context of human ambient intelligence. Radar has recently emerged as a new modality that can be effective under circumstances where video is not. This dissertation conducts a thorough exploration of the challenges in RF-enabled sign language recognition systems. Specifically, it proposes an end-to-end framework to acquire, temporally isolate, and recognize individual signs. A trigger-sign detection method with adaptive thresholding is also proposed. An angular subspace projection method is presented to separate multiple targets at the raw-data level. An interactive sign-language-controlled chess game is designed to enhance the user experience and automate the labor-intensive data collection and annotation process.
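The angular subspace projection mentioned above separates co-present targets in the raw array data by their angle of arrival. The dissertation's actual method is not reproduced here; the sketch below is a generic steering-vector (rank-1) projection on a uniform linear array, with all parameters illustrative:

```python
import numpy as np

def steering_vector(theta_deg, n_elems, d=0.5):
    # Uniform linear array response; element spacing d in wavelengths.
    n = np.arange(n_elems)
    return np.exp(1j * 2 * np.pi * d * n * np.sin(np.radians(theta_deg)))

def project_to_angle(X, theta_deg):
    # X: (n_elems, n_samples) raw array data.
    # Rank-1 projection a a^H X onto the subspace spanned by a(theta).
    a = steering_vector(theta_deg, X.shape[0])
    a = a / np.linalg.norm(a)
    return np.outer(a, a.conj() @ X)

# Two simulated point targets at -30 and +40 degrees
rng = np.random.default_rng(0)
n_elems, n_samp = 8, 256
s1 = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
s2 = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
X = (np.outer(steering_vector(-30, n_elems), s1)
     + np.outer(steering_vector(40, n_elems), s2))

X1 = project_to_angle(X, -30)   # retains mostly the -30 deg target
X2 = project_to_angle(X, 40)    # retains mostly the +40 deg target
```

Each projected data matrix can then be processed into a per-target micro-Doppler signature downstream.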
Finally, a framework is presented to dynamically adjust radar waveform parameters based on human presence and activity.

Item: Physics-Aware Deep Learning for Radar-Based Cyber Physical Human Systems (University of Alabama Libraries, 2023)
Rahman, Mohammad Mahbubur; Gurbuz, Sevgi Zubeyde
Radar, an acronym for Radio Detection and Ranging, is an emerging sensing modality in Cyber-Physical Systems (CPS), which blend physical processes with computational and communication components. Due to its ability to sense physical objects in challenging environments (such as fog, rain, and darkness), its cost-effectiveness, and its privacy-preserving nature, radar has found applications in health and safety CPS, CPS for gesture- and sign-language-driven smart environments, and automotive CPS. Health and safety CPS rely on radar for fall detection, gait analysis, vital sign detection, and human activity recognition. Gesture- and sign-language-driven smart environments use radar to interact with assistive devices and household appliances through gesture and sign language recognition. Automotive CPS use radar for in-cabin and external sensing, including pedestrian detection, self-driving-car-to-police communication, and off-road traffic gesture detection. In radar-enabled CPS, the efficacy of each sensing task hinges on the radar's perception of the human test subject; acquiring such perception data can be expensive and often results in a scarcity of data. Additionally, multiple RF sensors with different bandwidths, frequencies, and waveforms are required for CPS to capture the variety of motions performed across a wider physical space. Diverse human motions, such as fine-grain and gross body movements, exhibit varying degrees of sensitivity to sensors at different frequencies. As a result, achieving interoperability among multi-frequency sensors is crucial for delivering a seamless CPS experience.
Furthermore, when implementing CPS for remote health monitoring, it is frequently necessary to comprehend the intricacies of human movement. For example, assessing gait abnormalities and fall risk necessitates the estimation of human gait parameters and an understanding of human posture. As a result, a critical sensing challenge is how to extract valuable insights from the complex, high-dimensional information contained in RF measurements, including range-Doppler, range-azimuth, and range-elevation heatmaps. These sensing challenges lead to various deep learning challenges, such as training under insufficient support samples, learning sensor- and aspect-angle-invariant features for RF sensor interoperability, open-set recognition of diverse human motions without prior training data, and decoupling individual motions in multi-person scenarios. Conventional data-driven deep learning approaches require vast amounts of training data for each scenario, but gathering large volumes of RF data is a major challenge in itself. To address these issues, this dissertation proposes physics-aware solutions that leverage physical prior knowledge and operate effectively with limited measured data. To address data insufficiency, this dissertation introduces a novel Physics-aware Generative Adversarial Network (PhGAN), which synthesizes a large volume of kinematically accurate radar micro-Doppler data from a small set of measured radar data and knowledge of human motion kinematics. Because the envelopes of the micro-Doppler signature bound the maximum velocity incurred during motion and capture the differences between human gaits, it is essential that the process for generating synthetic samples consistently and realistically replicates the envelopes of each motion class.
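The envelopes referred to above are the extreme Doppler excursions of the signature over time. A common way to extract them (a hedged numpy sketch with an illustrative threshold and toy data, not the dissertation's exact procedure) is to threshold each time bin and take the outermost Doppler bins that exceed it:

```python
import numpy as np

def md_envelopes(spec_db, doppler_axis, thresh_db=-20.0):
    """Upper/lower micro-Doppler envelopes of a spectrogram.

    spec_db: (n_doppler, n_time) magnitude in dB, 0 dB = peak.
    doppler_axis: ascending Doppler values per row.
    Returns (upper, lower) Doppler values per time bin (NaN if empty).
    """
    upper = np.full(spec_db.shape[1], np.nan)
    lower = np.full(spec_db.shape[1], np.nan)
    for k in range(spec_db.shape[1]):
        idx = np.flatnonzero(spec_db[:, k] >= thresh_db)
        if idx.size:
            lower[k] = doppler_axis[idx[0]]    # lowest bin above threshold
            upper[k] = doppler_axis[idx[-1]]   # highest bin above threshold
    return upper, lower

# Toy spectrogram: all energy between Doppler bins 40-60 at every time step
spec = np.full((128, 50), -60.0)
spec[40:61, :] = 0.0
dop = np.linspace(-500, 500, 128)
up, lo = md_envelopes(spec, dop)
```

The maximum of `up` over time is a direct estimate of the peak velocity incurred during the motion, which is the kinematic quantity the PhGAN's envelope constraint preserves.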
Thus, the proposed method precludes gross kinematic errors in synthetic samples by supplying the signature envelopes as inputs to additional branches of the discriminator and by using the envelope distance between real and synthetic micro-Doppler signatures as an additional physics-based term in the discriminator loss function. Radar-enabled CPS require multiple RF sensors with different operating characteristics to capture a wide range of motions across a larger physical space. However, differences in the time-frequency representation of data acquired from different RF sensors lead to poor classification performance. This dissertation explores how to achieve interoperability between multi-frequency RF sensors to mitigate the training data deficiency. Two approaches, namely domain adaptation and cross-modal fusion, are proposed to address this problem. The domain adaptation approach uses adversarial image-to-image translation to adapt micro-Doppler signatures from one frequency sensor to another; the adapted signatures are then used to train the DNNs, while real signatures are used during inference. In cross-modal fusion, different RF sensors are treated as different modalities, and a proposed framework jointly exploits these disparate RF sensor data to improve target recognition. The efficacy of these solutions is demonstrated on gross-body human activities, such as ambulatory human motion recognition for health and safety CPS applications, and on fine-grained human motions, such as American Sign Language (ASL) recognition for smart Deaf-space design. Notably, this dissertation is the first to study RF-based ASL recognition, achieving state-of-the-art classification performance for 100-word fluent ASL recognition.
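The envelope-distance loss term described at the start of this passage can be paraphrased as a penalty added to the ordinary adversarial loss. The sketch below is a numpy stand-in for illustration only; the actual PhGAN architecture, envelope extractor, and weighting `lam` are not specified by the abstract:

```python
import numpy as np

def upper_envelope(spec_db, thresh_db=-20.0):
    # Highest Doppler bin index above threshold, per time step.
    mask = spec_db >= thresh_db
    last_true = mask.shape[0] - 1 - np.argmax(mask[::-1], axis=0)
    return np.where(mask.any(axis=0), last_true, 0).astype(float)

def envelope_loss(real_db, fake_db, lam=0.1):
    # Physics-based term: lam * mean |env(real) - env(fake)|,
    # added to the adversarial discriminator loss (not shown).
    diff = np.abs(upper_envelope(real_db) - upper_envelope(fake_db))
    return lam * diff.mean()

# Toy case: synthetic signature overshoots the real envelope by 10 bins
real = np.full((64, 32), -60.0); real[10:30, :] = 0.0
fake = np.full((64, 32), -60.0); fake[10:40, :] = 0.0
loss = envelope_loss(real, fake)
```

A generator whose samples overshoot or undershoot the class's velocity envelope is penalized, which is how gross kinematic errors are discouraged.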
Finally, this dissertation proposes an RF-based human pose estimation framework that recognizes diverse human motions and facilitates gait analysis and fall risk assessment in elderly care and nursing homes. The accuracy of this framework is validated against gold-standard Vicon motion capture measurements. In conclusion, this dissertation provides innovative and practical solutions to the challenges of radar-enabled CPS, leveraging physics-aware techniques to enable effective learning with limited data and to advance the field toward more accurate and reliable sensing.