DopNet: A Deep Convolutional Neural Network to Recognize Armed and Unarmed Human Targets

Qingchao Chen, Yang Liu, Francesco Fioranelli, Matthiew Ritchie, Bo Tan, Kevin Chetty

Research output: Contribution to journal › Article


Abstract

The work presented in this paper aims to distinguish between armed and unarmed personnel using multi-static radar data and advanced Doppler processing. We propose two modified deep convolutional neural networks (DCNNs), termed single-channel DopNet (SC-DopNet) and multi-channel DopNet (MC-DopNet), for mono-static and multi-static micro-Doppler signature (μ-DS) classification. Differentiating armed and unarmed walking personnel is challenging due to the effect of the aspect angle and the channel diversity in real-world scenarios. In addition, a DCNN easily overfits the relatively small-scale μ-DS dataset. To address these problems, this paper makes three key contributions. First, two effective schemes, a data augmentation operation and a regularization term, are proposed to train the SC-DopNet from scratch. Second, a factor analysis of the SC-DopNet is conducted over various operating parameters in both the processing and the radar operation. Third, to address the aspect-angle diversity in μ-DS classification, we design the MC-DopNet for multi-static μ-DS, which embeds two new fusion schemes termed greedy importance reweighting (GIR) and the ℓ21-norm. The two schemes follow different strategies and have been evaluated experimentally: GIR takes a "win by sacrificing the worst case" approach, whereas the ℓ21-norm adopts a "win by sacrificing the best case" approach. The SC-DopNet outperforms non-deep methods by 12.5% on average, and the proposed MC-DopNet with the two fusion methods outperforms conventional binary voting by 1.2% on average. We also discuss how the statistics of the SC-DopNet results can inform the choice of fusion strategy for the MC-DopNet under different experimental scenarios.
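The ℓ21-norm referenced in the abstract is, in its standard form, a group-sparsity penalty: the sum of the Euclidean norms of a matrix's rows, which drives entire rows (e.g. per-channel weight groups) toward zero together. A minimal NumPy sketch of this standard definition follows; it is illustrative only and does not reproduce the paper's actual MC-DopNet fusion scheme:

```python
import numpy as np

def l21_norm(W: np.ndarray) -> float:
    """Sum of the l2 norms of the rows of W.

    Used as a regularizer, this promotes row-wise sparsity: rows whose
    l2 norm shrinks to zero correspond to entire groups (e.g. channels)
    being pruned together.
    """
    return float(np.sum(np.linalg.norm(W, axis=1)))

# Example: rows have l2 norms 5, 0, and 10, so the l21-norm is 15.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [6.0, 8.0]])
print(l21_norm(W))  # 15.0
```

Contrast with the plain Frobenius norm, which penalizes all entries uniformly and has no preference for zeroing out whole rows.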

Original language: English
Article number: 8626156
Pages (from-to): 4160-4172
Number of pages: 13
Journal: IEEE Sensors Journal
Volume: 19
Issue number: 11
Early online date: 25 Jan 2019
DOI: 10.1109/JSEN.2019.2895538
Publication status: Published - 1 Jun 2019


Keywords

  • DCNN
  • armed personnel
  • classification
  • multi-static μ-DS

ASJC Scopus subject areas

  • Instrumentation
  • Electrical and Electronic Engineering

Cite this

Chen, Q., Liu, Y., Fioranelli, F., Ritchie, M., Tan, B., & Chetty, K. (2019). DopNet: A Deep Convolutional Neural Network to Recognize Armed and Unarmed Human Targets. IEEE Sensors Journal, 19(11), 4160-4172. [8626156]. https://doi.org/10.1109/JSEN.2019.2895538
