DBGC: Dimension-Based Generic Convolution Block for Object Recognition

Chirag Patel, Dulari Bhatt, Urvashi Sharma, Radhika Patel, Sharnil Pandya, Kirit Modi, Nagaraj Cholli, Akash Patel, Urvi Bhatt, Muhammad Ahmed Khan, Shubhankar Majumdar, Mohd Zuhair, Khushi Patel, Syed Aziz Shah, Hemant Ghayvat

    Research output: Contribution to journal › Article › peer-review

    31 Citations (Scopus)
    63 Downloads (Pure)

    Abstract

    The object recognition concept is being widely used as a result of increasing CCTV surveillance and the need for automatic object or activity detection from images or video. Increases in the use of various sensor networks have also raised the need for lightweight processing frameworks. Much research has been carried out in this area, but the research scope is colossal as it deals with open-ended problems such as achieving high accuracy in little time using lightweight processing frameworks. Convolutional Neural Networks (CNNs) and their variants are widely used in various computer vision activities, but most CNN architectures are application-specific. There is always a need for generic architectures with better performance. This paper introduces the Dimension-Based Generic Convolution Block (DBGC), which can be used with any CNN to make the architecture generic and provide a dimension-wise selection of various height, width, and depth kernels. This single unit, which uses the separable convolution concept, provides multiple combinations using various dimension-based kernels. It can be used for height-based, width-based, or depth-based dimensions; the same unit can even be used for pairs of dimensions (height and width, width and depth, or depth and height), as well as for combinations involving all three dimensions of height, width, and depth. The main novelty of DBGC lies in the dimension selector block included in the proposed architecture. The proposed unoptimized kernel dimensions reduce FLOPs by around one third but also reduce accuracy by around one half; semi-optimized kernel dimensions yield almost the same or higher accuracy with half the FLOPs of the original architecture, while optimized kernel dimensions provide 5 to 6% higher accuracy with around a 10 M reduction in FLOPs.
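    The FLOP savings described in the abstract follow from the separable convolution concept that DBGC builds on. As a minimal sketch (not the paper's code, and the function names are hypothetical), the multiply-accumulate counts of a standard k × k convolution and a depthwise-separable one can be compared directly:

    ```python
    # Hypothetical sketch: FLOP (multiply-accumulate) counts for a standard
    # convolution vs. a depthwise-separable one, the building block that
    # separable-convolution approaches such as DBGC draw on.

    def conv_flops(h, w, c_in, c_out, k):
        """Standard k x k convolution over an h x w feature map (same padding)."""
        return h * w * c_in * c_out * k * k

    def separable_conv_flops(h, w, c_in, c_out, k):
        """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
        depthwise = h * w * c_in * k * k     # one k x k filter per input channel
        pointwise = h * w * c_in * c_out     # 1 x 1 convolution mixes channels
        return depthwise + pointwise

    if __name__ == "__main__":
        h, w, c_in, c_out, k = 56, 56, 128, 128, 3
        standard = conv_flops(h, w, c_in, c_out, k)
        separable = separable_conv_flops(h, w, c_in, c_out, k)
        print(f"standard:  {standard:,}")
        print(f"separable: {separable:,}")
        print(f"ratio:     {separable / standard:.3f}")  # roughly 1/c_out + 1/k^2
    ```

    For this example layer the separable form needs roughly 12% of the FLOPs of the standard convolution, which illustrates why dimension-wise kernel selection can trade FLOPs against accuracy as the abstract reports.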

    Original language: English
    Article number: 1780
    Number of pages: 25
    Journal: Sensors
    Volume: 22
    Issue number: 5
    DOIs
    Publication status: Published - 24 Feb 2022

    Bibliographical note

    Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

    Keywords

    • CNN
    • DBGC
    • Dimension-based kernels
    • Separable convolution

    ASJC Scopus subject areas

    • Analytical Chemistry
    • Information Systems
    • Atomic and Molecular Physics, and Optics
    • Biochemistry
    • Instrumentation
    • Electrical and Electronic Engineering
