e-space
Manchester Metropolitan University's Research Repository

    Reinforcement Learning-Based Dynamic Power Management for Energy Optimization in IoT-Enabled Consumer Electronics

    Khan, Muhammad Nawaz (ORCID: https://orcid.org/0000-0002-6682-7049), Ullah, Inam (ORCID: https://orcid.org/0000-0002-5879-569X), Bashir, Ali Kashif (ORCID: https://orcid.org/0000-0003-2601-9327), Al-Khasawneh, M. A., Arishi, Ali (ORCID: https://orcid.org/0009-0009-0586-3378), Alghamdi, Norah Saleh (ORCID: https://orcid.org/0000-0001-6421-6001) and Lee, Sokjoon (2025) Reinforcement Learning-Based Dynamic Power Management for Energy Optimization in IoT-Enabled Consumer Electronics. IEEE Transactions on Consumer Electronics. pp. 1-13. ISSN 0098-3063

    File not available for download.

    Abstract

    Cognitive sensors and reinforcement learning are integrated into home appliances and consumer electronics to make these devices more intelligent and interactive. They form a network of connected devices that gather data from their surroundings and collectively respond to provide services based on context and user mode. The Internet of Things (IoT) is one of the paradigms established in these smart spaces, providing greater comfort and ease of use of technology. With this abundance of IoT sensors and reinforcement learning, the environment becomes increasingly device-centric rather than user-centric. Sensors continuously generate data and transmit it to the controlling authorities in a timely and accurate manner to allow rapid response and immediate action. A massive volume of data is generated inside the network, with numerous redundant strings of bits and repeating patterns that not only cause congestion but also degrade system performance. Frequent broadcasting of repeated strings on radio links results in idle listening and sometimes overhearing. In this research work, we propose a novel scheme, "Reinforcement Learning-Based Dynamic Power Management for Energy Optimization in IoT-Enabled Consumer Electronics (CIoT-DPM)", to reduce idle listening and overhearing. The proposed solution uses reinforcement learning with a reward function designed to reduce network traffic by dropping redundant bits and repeating patterns. It implements different sensor states, keeping some modules in active mode and others in sleep mode, to adjust power management dynamically, using the least amount of energy while providing complete coverage so that no events are missed. In evaluation, CIoT-DPM consumes less energy as the traffic load increases; the BER ranges from 0.1 to 0.001 across different numbers of episodes, and the scheme exhibits a 20-25% better learning strategy than traditional learning approaches. In a comparative analysis, CIoT-DPM reduces computational complexity by 79.33%, achieves detection probabilities of approximately 0.10 to 0.12, and reaches an optimal successful learning rate of 1.00 within 50 episodes.
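    The core idea — an agent learning when to keep a sensor's radio active versus asleep, rewarded for capturing events cheaply and penalized for transmitting redundant data — can be sketched with tabular Q-learning. This is a minimal illustrative sketch, not the paper's actual formulation: the states, actions, event probabilities, and reward weights below are all assumptions.

    ```python
    import random

    # Hypothetical sketch of the CIoT-DPM idea: per time slot, an agent
    # chooses a radio mode; the reward trades energy spent against events
    # captured and penalizes sending redundant data. All numbers here are
    # illustrative assumptions.
    ACTIONS = ["sleep", "active"]
    ENERGY_COST = {"sleep": 0.1, "active": 1.0}  # assumed per-slot costs

    def reward(action, event_occurred, data_is_redundant):
        r = -ENERGY_COST[action]                      # penalize energy use
        if event_occurred:
            r += 5.0 if action == "active" else -5.0  # missing events hurts
        if action == "active" and data_is_redundant:
            r -= 2.0                                  # discourage redundant traffic
        return r

    def train(episodes=50, slots=100, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        rng = random.Random(seed)
        # State: (event in previous slot?, previous data redundant?)
        q = {(e, d, a): 0.0 for e in (0, 1) for d in (0, 1) for a in ACTIONS}
        for _ in range(episodes):
            state = (0, 0)
            for _ in range(slots):
                # epsilon-greedy action selection
                if rng.random() < eps:
                    a = rng.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda x: q[state + (x,)])
                event = int(rng.random() < 0.3)      # assumed event rate
                redundant = int(rng.random() < 0.5)  # assumed redundancy rate
                r = reward(a, event, redundant)
                nxt = (event, redundant)
                best_next = max(q[nxt + (x,)] for x in ACTIONS)
                # standard Q-learning update
                q[state + (a,)] += alpha * (r + gamma * best_next - q[state + (a,)])
                state = nxt
        return q

    q = train()
    ```

    With these assumed weights, staying active pays off only when it captures enough non-redundant events to offset its higher energy cost, which is the trade-off the abstract describes.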
