Deep Reinforcement Learning-Based Grant-Free NOMA Optimization for mURLLC

Yan Liu, Yansha Deng, Hui Zhou, Maged Elkashlan, Arumugam Nallanathan

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

Grant-free non-orthogonal multiple access (GF-NOMA) is a potential technique to support the massive Ultra-Reliable and Low-Latency Communication (mURLLC) service. However, dynamic resource configuration in GF-NOMA systems is challenging due to random traffic and collisions, which are unknown at the base station (BS). Meanwhile, jointly considering the latency and reliability requirements makes the resource configuration of GF-NOMA for mURLLC even more complex. To address this problem, we develop a novel learning framework for signature-based GF-NOMA in the mURLLC service that takes into account multiple access signature collisions, UE detection, and the data decoding procedures for the K-repetition GF and Proactive GF schemes. The goal of our learning framework is to maximize the long-term average number of successfully served users (UEs) under the latency constraint. We first perform real-time repetition value configuration based on a double deep Q-Network (DDQN) and then propose a Cooperative Multi-Agent DQN (CMA-DQN) technique to jointly optimize the configuration of the repetition values and the contention-transmission unit (CTU) numbers. Our results show the superior performance of CMA-DQN over the conventional load estimation-based uplink resource configuration approach (LE-URC) under heavy traffic and demonstrate its capability of long-term dynamic resource configuration for the mURLLC service. In addition, with our learning optimization, the Proactive scheme always outperforms the K-repetition scheme in terms of the number of successfully served UEs, especially under high backlog traffic.
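The DDQN update underlying the configuration step can be illustrated with a minimal sketch. This is not the paper's CMA-DQN implementation: the state/action spaces, reward, and network stand-ins below are hypothetical, and the two Q-networks are replaced by fixed value vectors purely to show the double-DQN target rule (select with the online network, evaluate with the target network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the actions could be candidate repetition values,
# and the reward the number of successfully served UEs in a slot.
n_actions = 4
gamma = 0.9      # discount factor (illustrative choice)
reward = 1.0

# Stand-ins for the two networks' Q-values at the next state s':
q_online_next = rng.normal(size=n_actions)   # Q_online(s', ·)
q_target_next = rng.normal(size=n_actions)   # Q_target(s', ·)

# Double DQN decouples action selection (online net) from
# action evaluation (target net), reducing overestimation bias:
best_action = int(np.argmax(q_online_next))
td_target = reward + gamma * q_target_next[best_action]
```

The online network would then be trained toward `td_target` for the taken action; in the paper's multi-agent setting, each agent configures one resource parameter with such an update.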

Original language: English
Pages (from-to): 1475-1490
Number of pages: 16
Journal: IEEE Transactions on Communications
Volume: 71
Issue number: 3
Early online date: 19 Jan 2023
DOIs
Publication status: Published - 1 Mar 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1972-2012 IEEE.

Funding

This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC), U.K., under Grant EP/R006466/1 and Grant EP/W004348/1, and in part by the Postgraduate Research and Practice Innovation Program of Jiangsu Province under Grant KYCX17_0785.

Funders and funder numbers:

  • Postgraduate Research and Practice Innovation Program of Jiangsu Province: KYCX17_0785
  • Engineering and Physical Sciences Research Council: EP/W004348/1, EP/R006466/1

Keywords

  • NOMA
  • deep reinforcement learning
  • grant free
  • mURLLC
  • resource configuration

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
