e-space
Manchester Metropolitan University's Research Repository

    Robot Shape and Location Retention in Video Generation Using Diffusion Models

    Wang, Peng ORCID logoORCID: https://orcid.org/0000-0001-9895-394X, Guo, Zhihao, Sait, Abdul Latheef and Pham, Minh Huy (2024) Robot Shape and Location Retention in Video Generation Using Diffusion Models. In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 14 October 2024 - 18 October 2024, Abu Dhabi, United Arab Emirates.

    Accepted Version (533 kB). Available under License In Copyright.

    Abstract

    Diffusion models have marked a significant milestone in the enhancement of image and video generation technologies. However, generating videos that precisely retain the shape and location of moving objects such as robots remains a challenge. This paper presents diffusion models specifically tailored to generate videos that accurately maintain the shape and location of mobile robots. The proposed models incorporate techniques such as embedding accessible robot pose information and applying semantic mask regulation within the scalable and efficient ConvNeXt backbone network. These techniques are designed to refine intermediate outputs, thereby improving the retention of shape and location. Through extensive experimentation, our models have demonstrated notable improvements in maintaining the shape and location of different robots, as well as enhancing overall video generation quality, compared to the benchmark diffusion model. Code will be open-sourced at: https://github.com/PengPaulWang/diffusion-robots.
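    To illustrate the conditioning idea described in the abstract, the sketch below shows one way a robot pose vector and a semantic mask could modulate features inside a ConvNeXt-style block of a diffusion backbone. This is a minimal, hypothetical example only: the class and parameter names (PoseConditionedConvNeXtBlock, pose_dim, the FiLM-style scale/shift) are assumptions for illustration and are not taken from the paper or its released code.

        import torch
        import torch.nn as nn

        class PoseConditionedConvNeXtBlock(nn.Module):
            """ConvNeXt-style block whose features are modulated by a robot
            pose embedding and gated by a semantic mask of the robot region.
            A sketch of the conditioning idea, not the paper's architecture."""

            def __init__(self, channels: int, pose_dim: int = 7):
                super().__init__()
                # Standard ConvNeXt ingredients: depthwise conv, norm, pointwise MLP.
                self.dwconv = nn.Conv2d(channels, channels, kernel_size=7,
                                        padding=3, groups=channels)
                self.norm = nn.GroupNorm(1, channels)  # layer-norm-like over channels
                self.pwconv1 = nn.Conv2d(channels, 4 * channels, kernel_size=1)
                self.act = nn.GELU()
                self.pwconv2 = nn.Conv2d(4 * channels, channels, kernel_size=1)
                # Project the pose vector (e.g. position + quaternion) to a
                # per-channel scale and shift (FiLM-style conditioning).
                self.pose_proj = nn.Linear(pose_dim, 2 * channels)

            def forward(self, x, pose, mask):
                # x:    (B, C, H, W) intermediate diffusion features
                # pose: (B, pose_dim) robot pose for the current frame
                # mask: (B, 1, H, W) semantic mask of the robot region, in [0, 1]
                residual = x
                h = self.norm(self.dwconv(x))
                scale, shift = self.pose_proj(pose).chunk(2, dim=-1)
                h = h * (1 + scale[..., None, None]) + shift[..., None, None]
                h = self.pwconv2(self.act(self.pwconv1(h)))
                # Emphasise the masked robot region so shape/location cues
                # are carried through to later layers.
                h = h * (1 + mask)
                return residual + h

        # Example usage with dummy tensors:
        block = PoseConditionedConvNeXtBlock(channels=64)
        x = torch.randn(2, 64, 32, 32)
        pose = torch.randn(2, 7)
        mask = torch.rand(2, 1, 32, 32)
        out = block(x, pose, mask)  # (2, 64, 32, 32)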
