Deep Neural Network Initialization Methods for Micro-Doppler Classification With Low Training Sample Support

Abstract

Deep neural networks (DNNs) require large-scale labeled data sets to prevent overfitting and achieve good generalization. In radar applications, however, acquiring a measured data set on the order of thousands of samples is challenging due to constraints on manpower, cost, and other resources. In this letter, the efficacy of two neural network initialization techniques, unsupervised pretraining and transfer learning, for training DNNs on small data sets is compared. Unsupervised pretraining is implemented through the design of a convolutional autoencoder (CAE), while transfer learning from two popular convolutional neural network architectures (VGGNet and GoogleNet) is used to compensate for the limited amount of measured RF training data. A 12-class problem involving the discrimination of micro-Doppler signatures of indoor human activities is used to analyze activation maps, bottleneck features, class models, and classification accuracy as a function of training sample size. Results show that on meager data sets, transfer learning outperforms unsupervised pretraining and random initialization by 10% and 25%, respectively, but that when the sample size exceeds 650, unsupervised pretraining surpasses transfer learning and random initialization by 5% and 10%, respectively. Visualization of the activation layers and learned models reveals how the CAE succeeds in representing the micro-Doppler signature.
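The CAE-based initialization described above can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' exact architecture: the layer sizes, the 64 x 64 spectrogram input, and the random stand-in data are all assumptions. The idea it demonstrates is the one in the abstract: train an autoencoder on unlabeled micro-Doppler spectrograms to minimize reconstruction error, then reuse the trained encoder as the initialization of a 12-class classifier (the transfer-learning alternative would instead start from VGGNet or GoogleNet weights).

```python
# Sketch (assumed architecture, not the letter's exact network): unsupervised
# pretraining of a convolutional autoencoder (CAE), then reuse of its encoder
# to initialize a 12-class micro-Doppler classifier.
import torch
import torch.nn as nn


class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: spectrogram (1 x 64 x 64) -> bottleneck feature maps (32 x 16 x 16)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder mirrors the encoder to reconstruct the input spectrogram
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Step 1: pretrain the CAE on unlabeled data by minimizing reconstruction
# error (a single toy step on random stand-in spectrograms is shown here).
cae = CAE()
x = torch.randn(8, 1, 64, 64)            # stand-in for unlabeled spectrograms
recon = cae(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()

# Step 2: initialize the classifier from the pretrained encoder weights and
# attach a 12-way output head for the indoor-activity classes.
classifier = nn.Sequential(cae.encoder, nn.Flatten(), nn.Linear(32 * 16 * 16, 12))
logits = classifier(x)
print(tuple(logits.shape))  # -> (8, 12)
```

In a full experiment the encoder would be fine-tuned together with the new head on the small labeled set, which is what makes this an initialization method rather than a fixed feature extractor.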

Description
Keywords
Convolutional autoencoders (CAEs), convolutional neural networks (CNNs), gait classification, micro-Doppler, radar, transfer learning, VGGNet
Citation
Seyfioğlu, M., Gürbüz, S. (2017): Deep Neural Network Initialization Methods for Micro-Doppler Classification With Low Training Sample Support. IEEE Geoscience and Remote Sensing Letters, 14(12).