GPU parallelization of the sequential matrix diagonalization algorithm and its application to high-dimensional data

M. Carcenac, S. Redif, S. Kasap

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

This paper presents the parallelization on a GPU of the sequential matrix diagonalization (SMD) algorithm, the most recent technique for polynomial eigenvalue decomposition, which works by diagonalizing polynomial covariance matrices. We first parallelize the calculation of the polynomial covariance matrix with CUDA. Then, following a formal transformation of the polynomial matrix multiplication code used extensively by SMD, we insert into this code the cublasDgemm function of the CUBLAS library. Furthermore, a specialized cache memory system is implemented in GPU memory to greatly limit PC-to-GPU transfers of slices of polynomial matrices. The resulting SMD code can be applied efficiently to high-dimensional data. The proposed method is verified on sequences of images of airplanes with varying spatial orientation. The performance of the parallel codes for polynomial covariance matrix generation and SMD is evaluated, revealing speedups of up to 161 and 67, respectively, relative to sequential execution on a PC.
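The abstract describes mapping SMD's polynomial matrix multiplications onto cublasDgemm. As a rough illustration of that idea only, not the authors' implementation, the sketch below assumes square N x N lag slices stored contiguously in GPU memory in column-major order and computes the product C(z) = A(z)B(z) as a lag-wise MIMO convolution, C_k = sum over i+j=k of A_i B_j, by accumulating ordinary matrix products with cublasDgemm (beta = 1). The function name polyMatMul and the storage layout are assumptions made here for illustration.

```cpp
// polymatmul_cublas.cu -- minimal sketch, not the paper's code.
// A polynomial matrix A(z) = sum_i A_i z^{-i} is assumed to be stored as
// lagsA contiguous N x N column-major slices, one per lag, already on the GPU.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstddef>

// Computes C(z) = A(z) B(z) as a MIMO convolution over the lag index:
//     C_k = sum_{i+j=k} A_i * B_j,
// so every coefficient is an accumulation of ordinary matrix products,
// which maps directly onto cublasDgemm with beta = 1.
void polyMatMul(cublasHandle_t handle, int N,
                const double* dA, int lagsA,   // device pointer, lagsA slices
                const double* dB, int lagsB,   // device pointer, lagsB slices
                double* dC)                    // device pointer, lagsA + lagsB - 1 slices,
                                               // assumed zero-initialized (e.g. cudaMemset)
{
    const double one = 1.0;
    const size_t slice = static_cast<size_t>(N) * N;  // doubles per lag slice

    for (int i = 0; i < lagsA; ++i) {
        for (int j = 0; j < lagsB; ++j) {
            // C_{i+j} += A_i * B_j  (accumulate into the output lag slice)
            cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                        N, N, N,
                        &one,
                        dA + i * slice, N,
                        dB + j * slice, N,
                        &one,
                        dC + (i + j) * slice, N);
        }
    }
}
```

Compiled with nvcc and linked against cuBLAS (-lcublas), this keeps all slices resident on the device between calls, which is the kind of access pattern the paper's GPU-side cache of polynomial matrix slices is designed to exploit; the details of that cache are not reproduced here.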
Original language: English
Pages (from-to): 3603–3634
Number of pages: 32
Journal: Journal of Supercomputing
Volume: 73
Early online date: 18 Jan 2017
DOIs
Publication status: Published - Aug 2017
Externally published: Yes

Keywords

  • Polynomial eigenvalue decomposition (PEVD)
  • Sequential matrix diagonalization (SMD)
  • MIMO convolution
  • GPU computing
  • CUDA
  • CUBLAS
