Cui, Xia ORCID: https://orcid.org/0000-0002-1726-3814, Kojaku, Sadamori, Masuda, Naoki and Bollegala, Danushka (2018) Solving feature sparseness in text classification using core-periphery decomposition. In: The Seventh Joint Conference on Lexical and Computational Semantics, 05 June 2018 - 06 June 2018, New Orleans, Louisiana, USA.
Published Version
Available under License Creative Commons Attribution.
Abstract
Feature sparseness is a problem common to cross-domain and short-text classification tasks. To overcome this feature sparseness problem, we propose a novel method based on graph decomposition to find candidate features for expanding feature vectors. Specifically, we first create a feature-relatedness graph, which is subsequently decomposed into core-periphery (CP) pairs, and we use the peripheries as the expansion candidates of the cores. We expand both training and test instances using the computed related features and use them to train a text classifier. We observe that prioritising features that are common to both training and test instances as cores during the CP decomposition further improves the accuracy of text classification. We evaluate the proposed CP-decomposition-based feature expansion method on benchmark datasets for cross-domain sentiment classification and short-text classification. Our experimental results show that the proposed method consistently outperforms all baselines on short-text classification tasks, and performs competitively with pivot-based cross-domain sentiment classification methods.
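To make the expansion idea concrete, the following is a minimal Python sketch, not the authors' implementation: the relatedness graph is approximated by document co-occurrence counts, the CP decomposition is replaced by a simple heuristic that attaches each non-core feature to the cores it co-occurs with, and all function names and the toy data are hypothetical.

# Illustrative sketch only: feature expansion via a core-periphery style
# grouping of a co-occurrence-based feature-relatedness graph.
from collections import defaultdict
from itertools import combinations

def build_relatedness_graph(documents):
    # Edge weight = number of documents in which two features co-occur.
    weights = defaultdict(int)
    for doc in documents:
        for u, v in combinations(sorted(set(doc)), 2):
            weights[(u, v)] += 1
    return weights

def split_core_periphery(weights, core_features):
    # Toy stand-in for CP decomposition: given a core set, collect the
    # non-core features linked to each core as its periphery.
    periphery = defaultdict(set)
    for (u, v), _ in weights.items():
        if u in core_features and v not in core_features:
            periphery[u].add(v)
        elif v in core_features and u not in core_features:
            periphery[v].add(u)
    return periphery

def expand(doc, periphery):
    # Append the periphery features of every core feature in the document,
    # removing duplicates while keeping the original features first.
    extra = set()
    for f in doc:
        extra.update(periphery.get(f, ()))
    return list(doc) + sorted(extra - set(doc))

# Hypothetical example: features shared by training and test data act as cores.
train = [["good", "battery", "life"], ["good", "screen"]]
test = [["good", "battery"], ["poor", "screen"]]
cores = {f for d in train for f in d} & {f for d in test for f in d}
graph = build_relatedness_graph(train + test)
periphery = split_core_periphery(graph, cores)
print(expand(["good", "battery"], periphery))  # adds the related feature "life"

Prioritising shared features as cores, as in the last few lines, mirrors the paper's observation that cores common to training and test instances give the most useful expansion candidates.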