Manchester Metropolitan University's Research Repository

    Deep reinforcement learning based transmission policy enforcement and multi-hop routing in QoS aware LoRa IoT networks

    Muthanna, Mohammed Saleh Ali, Muthanna, Ammar, Rafiq, Ahsan, Hammoudeh, Mohammad, Alkanhel, Reem, Lynch, Stephen ORCID logoORCID: https://orcid.org/0000-0002-4183-5122 and Abd El-Latif, Ahmed A (2021) Deep reinforcement learning based transmission policy enforcement and multi-hop routing in QoS aware LoRa IoT networks. Computer Communications, 183. pp. 33-50. ISSN 0140-3664

    Accepted Version


    LoRa wireless connectivity has become a de facto technology for intelligent critical infrastructures such as transport systems. Achieving high Quality of Service (QoS) in cooperative systems remains a challenging task in LoRa. However, high QoS can be achieved by optimizing the transmission policy parameters, such as spreading factor, bandwidth, code rate and carrier frequency. Yet existing approaches have not optimized the complete set of LoRa parameters. Furthermore, the star-of-stars topology used by LoRa leads to higher energy consumption and a low packet reception ratio. Motivated by this, the paper presents transmission policy enforcement and multi-hop routing for QoS-aware LoRa networks (MQ-LoRa). A hybrid cluster root rotated tree topology is constructed in which gateways follow a tree topology and Internet of Things (IoT) nodes follow a cluster topology. Inspired by biological membranes, in which cell tissues form clusters to share information, a membrane-inspired clustering algorithm is developed to form clusters, and an optimal header node is selected using an influence score. Data QoS ranking is implemented for IoT nodes, where priority and non-priority information is identified by a new field (QRank) in the LoRa frame structure. Optimal transmission policy enforcement uses a fast deep reinforcement learning method, Soft Actor Critic (SAC), that utilizes environmental parameters including QRank, signal quality and the signal-to-interference-plus-noise ratio. The transmission policy is optimized with respect to the spreading factor, code rate, bandwidth and carrier frequency. A concurrent-optimization multi-hop routing algorithm then uses mayfly and shuffled shepherd optimization to rank routes based on fitness criteria. Finally, a weighted duty cycle is implemented using a multi-weighted sum model to reduce resource wastage and information loss in LoRa IoT networks. Performance evaluation is carried out using the NS-3.26 LoRaWAN module. 
The performance is examined for various metrics, including packet reception ratio, packet rejection ratio, energy consumption, delay and throughput. Experimental results show that the proposed MQ-LoRa outperforms well-known LoRa methods.
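The abstract describes the SAC agent's action space as the joint choice of spreading factor, bandwidth, code rate and carrier frequency. A minimal sketch of how such a discrete transmission-policy space could be enumerated and decoded is shown below; the parameter ranges follow common LoRaWAN EU868 conventions and are assumptions, as the paper's exact ranges are not given here, and `decode_action` is a hypothetical helper name.

```python
from itertools import product

# Assumed LoRa transmission-policy search space (typical EU868 values;
# the paper's exact ranges may differ).
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]
BANDWIDTHS_KHZ = [125, 250, 500]
CODE_RATES = ["4/5", "4/6", "4/7", "4/8"]
CARRIER_FREQS_MHZ = [868.1, 868.3, 868.5]

# Each discrete agent action indexes one joint parameter combination.
ACTIONS = list(product(SPREADING_FACTORS, BANDWIDTHS_KHZ,
                       CODE_RATES, CARRIER_FREQS_MHZ))

def decode_action(index):
    """Map a discrete action index to a complete transmission policy."""
    sf, bw, cr, freq = ACTIONS[index]
    return {"spreading_factor": sf,
            "bandwidth_khz": bw,
            "code_rate": cr,
            "carrier_mhz": freq}
```

With these assumed ranges the joint space has 6 × 3 × 4 × 3 = 216 actions, small enough for a discrete-action SAC variant to select from at each transmission decision.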
