Transactions on Machine Intelligence

A Model for Classifying Social Commerce Texts Using Deep Learning

Document Type: Original Article

Authors
1 Assistant Professor, Department of Computer Science, Zaran Higher Education Complex
2 Lecturer, Department of Engineering, Payame Noor University, Tehran, Iran
Abstract
With the rapid expansion of online commerce, a large volume of commerce-related data is generated and shared on social media platforms every day. Analyzing and processing these data supports many applications that strengthen social commerce. One such task is the classification of social commerce texts, which benefits areas such as customer experience management, online advertisement generation, and stimulating customer demand. In this paper, we propose a model for classifying social commerce texts using deep learning and pre-trained language models. The model first uses a pre-trained language model to extract feature vectors from the text and then classifies the text based on those vectors. Results on benchmark datasets show that the proposed classifier performs well on social commerce texts, achieving an average precision of 0.725 and an average recall of 0.708.
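The two-stage pipeline described above (a pre-trained language model for feature extraction, followed by a classifier over those vectors) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the choice of bert-base-uncased as the encoder, the [CLS] token embedding as the feature vector, logistic regression as the classifier, and the toy labels are all placeholders, since the abstract does not specify them.

```python
# Minimal sketch of the abstract's two-stage pipeline:
# (1) extract a feature vector per text with a pre-trained language model,
# (2) train a classifier on those vectors.
# Encoder, feature choice, classifier, and labels below are illustrative
# assumptions; the paper does not name its exact components here.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # encoder is frozen; only the classifier is trained

def embed(texts):
    """Return one feature vector per text (the [CLS] token embedding)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0, :].numpy()

# Hypothetical toy data standing in for labeled social commerce texts.
train_texts = ["Great phone, fast shipping!", "How do I return this item?"]
train_labels = [0, 1]  # e.g., 0 = product review, 1 = customer inquiry

clf = LogisticRegression(max_iter=1000)
clf.fit(embed(train_texts), train_labels)
print(clf.predict(embed(["Is this jacket available in blue?"])))
```

A common variant of this pipeline fine-tunes the encoder end-to-end with a classification head instead of freezing it, which typically improves accuracy at higher training cost.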
Keywords

Volume 3, Issue 3
Summer 2020
Pages 191-198

  • Received: 01 June 2020
  • Revised: 08 July 2020
  • Accepted: 28 September 2020