Abstract
Advancements in UAV technology have facilitated the development of lightweight airborne platforms capable of fulfilling a diverse range of tasks, owing to the varied array of mountable sensing and interaction modules available. To further advance UAVs and widen their application spectrum, providing them with fully autonomous operational capability is necessary. To address this challenge, we present Multiple Q-table Path Planning (MQTPP), a novel method specifically tailored for UAV path planning in urban environments. Unlike conventional Q-learning approaches, which necessitate relearning in response to dynamic changes in urban landscapes or targets, MQTPP is designed to adaptively re-plan UAV paths with notable efficiency, utilising a single learning phase executed prior to take-off. Results obtained through simulation demonstrate the exceptional capability of MQTPP to swiftly generate new paths or modify existing ones during flight. This performance significantly surpasses existing state-of-the-art methods in terms of computational efficiency, while still achieving near-optimal path planning results, demonstrating MQTPP's potential as a robust solution for real-time, adaptive in-flight UAV navigation in complex urban settings.
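To make the idea of learning once before take-off and re-planning by table lookup more concrete, the sketch below shows a hypothetical "multiple Q-table" setup on a small grid: one tabular Q-table is trained per candidate target offline, and an in-flight target change is handled by selecting the matching table and greedily extracting a path. The grid world, the per-goal table indexing, and the function names (`train_q_table`, `extract_path`) are illustrative assumptions, not the MQTPP algorithm described in the paper.

```python
# Minimal illustrative sketch (not the paper's MQTPP method): tabular
# Q-learning with one Q-table per candidate goal, trained before flight.
import random

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def train_q_table(size, goal, obstacles, episodes=2000,
                  alpha=0.1, gamma=0.95, eps=0.2):
    """Learn a Q-table (state -> action values) for a single goal."""
    q = {(r, c): [0.0] * len(ACTIONS)
         for r in range(size) for c in range(size) if (r, c) not in obstacles}
    for _ in range(episodes):
        state = random.choice(list(q))          # random start each episode
        for _ in range(4 * size * size):        # cap episode length
            if state == goal:
                break
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: q[state][i]))
            dr, dc = ACTIONS[a]
            nxt = (state[0] + dr, state[1] + dc)
            if nxt not in q:                    # off-grid or obstacle: stay put
                nxt, reward = state, -5.0
            else:
                reward = 10.0 if nxt == goal else -1.0
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

def extract_path(q, start, goal, max_steps=200):
    """Greedy rollout over a pre-learned table: fast in-flight re-planning."""
    path, state = [start], start
    for _ in range(max_steps):
        if state == goal:
            return path
        a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        dr, dc = ACTIONS[a]
        nxt = (state[0] + dr, state[1] + dc)
        if nxt not in q:
            break                               # greedy policy hit a dead end
        path.append(nxt)
        state = nxt
    return path

if __name__ == "__main__":
    obstacles = {(1, 1), (2, 1), (3, 1)}
    goals = [(4, 4), (0, 4)]                    # candidate targets known pre-flight
    tables = {g: train_q_table(5, g, obstacles) for g in goals}  # single learning phase
    print(extract_path(tables[(4, 4)], (0, 0), (4, 4)))          # re-plan by table lookup
```

The design point the sketch illustrates is that switching targets costs only a dictionary lookup and a greedy rollout, with no additional learning at flight time; how the actual paper indexes and reuses its Q-tables is specified in the full text.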
| Original language | English |
|---|---|
| Pages (from-to) | (In-Press) |
| Number of pages | 12 |
| Journal | IEEE Transactions on Intelligent Vehicles |
| Volume | (In-Press) |
| Early online date | 10 Apr 2024 |
| DOIs | |
| Publication status | E-pub ahead of print - 10 Apr 2024 |
Bibliographical note
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Copyright © and Moral Rights are retained by the author(s) and/or other copyright owners. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This item cannot be reproduced or quoted extensively from without first obtaining permission in writing from the copyright holder(s). The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the copyright holders.
This document is the author's post-print version, incorporating any revisions agreed during the peer-review process. Some differences between the published version and this version may remain, and you are advised to consult the published version if you wish to cite from it.
Keywords
- Autonomous aerial vehicles
- Heuristic algorithms
- Intelligent vehicles
- Path planning
- Planning
- Q-learning
- Vehicle dynamics
- multiple Q-table
- path planning
- reinforcement learning
- unmanned aerial vehicle (UAV)
- urban environment
ASJC Scopus subject areas
- Control and Optimization
- Artificial Intelligence
- Automotive Engineering