Browsing by Author "Kurtoglu, Emre"
Now showing 1 - 4 of 4
Item: Cross-Frequency Training with Adversarial Learning for Radar Micro-Doppler Signature Classification (2020)
Gurbuz, Sevgi Z.; Rahman, M. Mahbubur; Kurtoglu, Emre; Macks, Trevor; Fioranelli, Francesco; University of Alabama Tuscaloosa

Deep neural networks have become increasingly popular in radar micro-Doppler classification; yet a key challenge, which has limited potential gains, is the lack of large amounts of measured data that can facilitate the design of deeper networks with greater robustness and performance. Several approaches have been proposed in the literature to address this problem, such as unsupervised pre-training and transfer learning from optical imagery or synthetic RF data. This work investigates an alternative approach to training that exploits “datasets of opportunity” – micro-Doppler datasets collected using other RF sensors, which may differ in frequency, bandwidth, or waveform. Specifically, this work compares in detail the cross-frequency training degradation incurred for several different training approaches and deep neural network (DNN) architectures. Results show a 70% drop in classification accuracy when the RF sensors for pre-training, fine-tuning, and testing are different, and a 15% degradation when only the pre-training data is different but the fine-tuning and test data are from the same sensor. By using generative adversarial networks (GANs), a large amount of synthetic data is generated for pre-training.
Results show that cross-frequency performance degradation is reduced by 50% when kinematically-sifted GAN-synthesized signatures are used in pre-training.

Item: Cross-frequency training with adversarial learning for radar micro-Doppler signature classification
Kurtoglu, Emre; Rahman, Mahbubur (Mahbub); Macks, Trevor; Gurbuz, Sevgi Zubeyde; Fioranelli, Francesco

Item: Fully-Adaptive RF Sensing for Non-Intrusive ASL Recognition via Interactive Smart Environments (University of Alabama Libraries, 2024)
Kurtoglu, Emre; Gurbuz, Sevgi Zubeyde

The past decade has seen great advancements in speech recognition for control of interactive devices, personal assistants, and computer interfaces. However, Deaf people and people who are hard of hearing, whose primary mode of communication is sign language, cannot use voice-controlled interfaces. Although there has been significant work in video-based sign language recognition, video is not effective in the dark and has raised privacy concerns in the Deaf community when used in the context of human ambient intelligence. Radar has recently emerged as a new modality that can be effective in circumstances where video is not. This dissertation conducts a thorough exploration of the challenges in RF-enabled sign language recognition systems. Specifically, it proposes an end-to-end framework to acquire, temporally isolate, and recognize individual signs. A trigger-sign detection method with adaptive thresholding is also proposed. An angular subspace projection method is presented to separate multiple targets at the raw-data level. An interactive sign language-controlled chess game is designed to enhance the user experience and automate the otherwise labor-intensive data collection and annotation process.
Finally, a framework is presented to dynamically adjust radar waveform parameters based on human presence and activity.

Item: Multi-Frequency RF Sensor Fusion for Word-Level Fluent ASL Recognition (IEEE, 2021)
Gurbuz, Sevgi Z.; Rahman, M. Mahbubur; Kurtoglu, Emre; Malaia, Evie; Gurbuz, Ali Cafer; Griffin, Darrin J.; Crawford, Chris; University of Alabama Tuscaloosa; Mississippi State University

Deaf spaces are unique indoor environments designed to optimize visual communication and Deaf cultural expression. However, much of the technological research geared towards the deaf involves the use of video or wearables for American Sign Language (ASL) translation, with little consideration for Deaf perspectives on the privacy and usability of the technology. In contrast to video, RF sensors offer an avenue for ambient ASL recognition while also preserving privacy for Deaf signers. Methods: This paper investigates the RF transmit waveform parameters required for effective measurement of ASL signs and their effect on word-level classification accuracy attained with transfer learning and convolutional autoencoders (CAE). A multi-frequency fusion network is proposed to exploit data from all sensors in an RF sensor network and improve the recognition accuracy of fluent ASL signing. Results: For fluent signers, CAEs yield a 20-sign classification accuracy of 76% at 77 GHz and 73% at 24 GHz, while at X-band (10 GHz) accuracy drops to 67%. For hearing imitation signers, signs are more separable, resulting in a 96% accuracy with CAEs. Further, fluent ASL recognition accuracy is significantly increased with use of the multi-frequency fusion network, which boosts the 20-sign fluent ASL recognition accuracy to 95%, surpassing conventional feature-level fusion by 12%. Implications: Signing involves finer spatiotemporal dynamics than typical hand gestures, and thus requires interrogation with a transmit waveform that has a rapid succession of pulses and high bandwidth.
Millimeter-wave RF frequencies also yield greater accuracy due to the increased Doppler spread of the radar backscatter. Comparative analysis of articulation dynamics also shows that imitation signing is not representative of fluent signing and is not effective for pre-training networks for fluent ASL classification. Deep neural networks employing multi-frequency fusion capture both shared and sensor-specific features, and thus offer significant performance gains in comparison to using a single sensor or feature-level fusion.
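The multi-frequency fusion idea summarized in this abstract (sensor-specific branches whose learned features are combined into a shared representation before classification) can be sketched as follows. This is a minimal illustration, not the paper's architecture: the layer sizes, sensor labels, ReLU branches, and random weights are assumptions chosen only to show the fuse-then-classify structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not values from the paper)
N_CLASSES = 20           # 20-sign vocabulary, as in the abstract
FEAT_DIM = 64            # per-sensor input feature length (assumed)
BRANCH_DIM = 16          # sensor-specific embedding size (assumed)
SENSORS = ["77GHz", "24GHz", "10GHz"]  # the three RF bands in the abstract

# One linear "branch" per sensor extracts sensor-specific features;
# weights are random here purely for illustration.
branch_W = {s: rng.standard_normal((FEAT_DIM, BRANCH_DIM)) for s in SENSORS}

# Fusion head: classify from the concatenated branch embeddings.
head_W = rng.standard_normal((BRANCH_DIM * len(SENSORS), N_CLASSES))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse_and_classify(obs):
    """obs: dict mapping sensor name -> feature vector of length FEAT_DIM."""
    # Sensor-specific projection with a ReLU nonlinearity
    embeddings = [np.maximum(obs[s] @ branch_W[s], 0.0) for s in SENSORS]
    fused = np.concatenate(embeddings)   # shared multi-frequency representation
    return softmax(fused @ head_W)       # class probabilities over the signs

# Toy input: one simulated signature feature vector per sensor
probs = fuse_and_classify({s: rng.standard_normal(FEAT_DIM) for s in SENSORS})
print(probs.shape)
```

The contrast with the "conventional feature-level fusion" mentioned above is that here each band gets its own projection before concatenation, so the classifier can weight shared and sensor-specific evidence separately.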