e-space
Manchester Metropolitan University's Research Repository

    Automatic GUI Code Generation with Deep Learning

    Yao, Xulu (2022) Automatic GUI Code Generation with Deep Learning. Doctoral thesis (PhD), Manchester Metropolitan University.

    Available under License Creative Commons Attribution Non-commercial No Derivatives.


    Abstract

    Manually converting the design of a graphical user interface (GUI) into code is a time-consuming and error-prone process. A feasible solution is to generate the code automatically from GUI design images or textual descriptions. Recently, deep learning has shown promising results in detecting GUI elements, making this automation practical. This project develops new approaches to GUI development and evaluation. First, a code semantic metric (CSM) is developed. It uses n-gram sequence features and cosine similarity to judge the accuracy of translated code, and the results show that it performs better than the bilingual evaluation understudy (BLEU) metric. Second, a modified framework is proposed to address the loss of feature-vector information in the pix2code model, which generates GUI code from a screenshot; in an empirical study it outperforms state-of-the-art methods as measured by BLEU. Third, a UIGAN model that performs better than a conventional generative adversarial network (GAN) is proposed, together with a new data augmentation method that mitigates the shortage of training data for GUI generation. Fourth, to address the limitations of existing text-to-image generation models, a scene-graph-to-UI (SG2UI) model is proposed for GUI generation. In this approach, a graph convolutional network (GCN) extracts features from the input scene graph, and the Fréchet inception distance (FID) and a perceptual loss measure the difference between the generated GUI and the real GUI. The experimental results demonstrate that object details in the generated GUIs are clearer and that the model improves both the quality and the creativity of the generated interfaces. Future research is needed to extend the model so that it can directly generate complex scene layouts that capture the hypertext nature of GUI design.
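
    As a rough illustration of the CSM idea described in the abstract (not the thesis's exact implementation), the sketch below scores a generated code token sequence against a reference by building n-gram count vectors and taking their cosine similarity. The function names, the choice of n = 3, and the toy DSL tokens are assumptions made for illustration only.

        # Hypothetical sketch of an n-gram cosine-similarity metric for generated code.
        # The exact CSM formulation in the thesis may differ; names and n=3 are assumptions.
        from collections import Counter
        from math import sqrt

        def ngrams(tokens, n=3):
            """Return the multiset of n-gram tuples in a token sequence."""
            return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

        def cosine_similarity(counts_a, counts_b):
            """Cosine similarity between two sparse n-gram count vectors."""
            dot = sum(counts_a[g] * counts_b[g] for g in counts_a.keys() & counts_b.keys())
            norm_a = sqrt(sum(c * c for c in counts_a.values()))
            norm_b = sqrt(sum(c * c for c in counts_b.values()))
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

        def code_semantic_score(generated_tokens, reference_tokens, n=3):
            """Score generated GUI code against a reference via n-gram cosine similarity."""
            return cosine_similarity(ngrams(generated_tokens, n), ngrams(reference_tokens, n))

        # Example: compare two small GUI DSL token sequences.
        gen = "header btn-active row single text".split()
        ref = "header btn-active row single text text".split()
        print(code_semantic_score(gen, ref))

    A score of 1.0 means the two sequences share identical n-gram profiles; lower values indicate diverging structure, which is the intuition a semantic code metric exploits over exact string matching.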
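    The SG2UI evaluation mentioned above relies on the Fréchet inception distance. The sketch below shows the standard FID computation over feature embeddings (the usual published definition, not code from the thesis); the feature arrays here are random placeholders standing in for Inception-network embeddings of real and generated GUIs.

        # Standard FID over feature embeddings: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*sqrt(S1*S2)).
        # Feature extraction (e.g. from an Inception network) is assumed to happen elsewhere.
        import numpy as np
        from scipy.linalg import sqrtm

        def frechet_inception_distance(feats_real, feats_gen):
            """FID between two sets of feature vectors, each of shape (N, D)."""
            mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
            sigma1 = np.cov(feats_real, rowvar=False)
            sigma2 = np.cov(feats_gen, rowvar=False)
            covmean = sqrtm(sigma1 @ sigma2)
            if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
                covmean = covmean.real
            diff = mu1 - mu2
            return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

        # Example with placeholder embeddings instead of real GUI features.
        rng = np.random.default_rng(0)
        real = rng.normal(size=(64, 16))
        fake = rng.normal(loc=0.5, size=(64, 16))
        print(frechet_inception_distance(real, fake))

    Lower FID indicates that the distribution of generated GUI features is closer to that of real GUIs, which is why it is paired with a perceptual loss in the abstract's evaluation.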

    Impact and Reach

    Statistics

    Activity Overview (6-month trend): 1,059 downloads; 357 hits.

    Additional statistics for this dataset are available via IRStats2.
