Manchester Metropolitan University's Research Repository

    Towards a corpus for credibility assessment in software practitioner blog articles

    Williams, Ashley (ORCID: https://orcid.org/0000-0002-6888-0521), Shardlow, Matthew (ORCID: https://orcid.org/0000-0003-1129-2750) and Rainer, Austen (2021) Towards a corpus for credibility assessment in software practitioner blog articles. In: EASE 2021: Evaluation and Assessment in Software Engineering, 21 June 2021 - 23 June 2021, Trondheim, Norway.

    Accepted Version


    Background: Blogs are a source of grey literature that is widely adopted by software practitioners for disseminating opinion and experience. Analysing such articles can provide useful insights into the state of practice for software engineering research. However, there are challenges in identifying higher quality content within the large quantity of articles available. Credibility assessment can help in identifying quality content, though there is a lack of existing corpora. Credibility is typically measured through a series of conceptual criteria, with 'argumentation' and 'evidence' being two important criteria.

    Objective: We create a corpus labelled for argumentation and evidence that can aid the credibility community. The corpus consists of articles from the blog of a single software practitioner and is publicly available.

    Method: Three annotators label the corpus with a series of conceptual credibility criteria, reaching an agreement of 0.82 (Fleiss' Kappa). We present a preliminary analysis of the corpus by using it to investigate the identification of claim sentences (one of our ten labels).

    Results: We train four systems (BERT, KNN, Decision Tree and SVM) using three feature sets (Bag of Words, Topic Modelling and InferSent), achieving an F1 score of 0.64 using InferSent and a linear SVM.

    Conclusions: Our preliminary results are promising, indicating that the corpus can help future studies in detecting the credibility of grey literature. Future research will investigate the degree to which the sentence-level annotations can infer the credibility of the overall document.
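    The inter-annotator agreement of 0.82 reported in the Method is a Fleiss' Kappa score, which measures agreement among a fixed number of annotators over chance. As an illustration only (the paper's actual annotation tables are not reproduced here), the statistic can be computed from a table of per-item category counts like so:

    ```python
    def fleiss_kappa(ratings):
        """Fleiss' kappa for a table of per-item category counts.

        ratings[i][j] = number of annotators who assigned item i to
        category j; every row must sum to the same number of annotators.
        """
        n_items = len(ratings)
        n_raters = sum(ratings[0])
        # Observed per-item agreement P_i, averaged over items.
        p_items = [
            (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
            for row in ratings
        ]
        p_bar = sum(p_items) / n_items
        # Chance agreement P_e from the marginal category proportions.
        totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
        p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical example: three annotators labelling four sentences
    # as claim / not-claim (toy data, not from the paper's corpus).
    table = [[3, 0], [0, 3], [2, 1], [1, 2]]
    print(round(fleiss_kappa(table), 3))  # 0.333
    ```

    A kappa of 0.82, as reported for this corpus, indicates substantial agreement under the conventional interpretation scales for the statistic.
    
    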
