Statistical methods for assessing the quartiles of scientific conferences

Abstract

The paper presents the results of assessing the quartiles of scientific conferences assigned by leading rating agencies. The estimates were obtained by applying three methods of multivariate statistical analysis: linear regression, discriminant analysis, and neural networks. A training sample was used that included the following factors: the age and periodicity of a conference, the number of participants and the number of papers, the publication activity of the conference organizers, and the citation rate of the papers. As a result of the study, the linear regression model confirmed the assigned quartiles for 77% of the conferences, while the neural network and discriminant analysis methods gave close results, confirming the assigned quartiles for 81% and 85% of the conferences, respectively.

Full text

1. Introduction

As is known [1], a quartile (quarter) is a category of scientific publications determined by bibliometric indicators that reflect, first of all, the level of citation, i.e., the degree to which a publication is in demand by the scientific community. The procedure for assigning quartiles to scientific journals has long been developed and is successfully applied in practice [2-5]; in addition, many metrics have been introduced to assess the impact of journals, such as the impact factor, the 5-year impact factor, the immediacy index, the impact factor without self-citations, the median impact factor, the aggregate impact factor, and others [6]. For scientific conferences, however, this question remains a subject of research [7-11]. Some rating agencies have already begun to rank scientific conferences without disclosing the details of the procedure: for example, there are the CORE conference ranking [12], the CCF conference ranking [13], and the Microsoft Academic conference ranking (now discontinued) [14]. The disadvantages of the first two ratings are that they are expert-based and regional, do not fully disclose the conference ranking procedure, and cover only computer science conferences.

Researchers use various methods to compile new conference rankings, such as correlation analysis [7, 15], statistical analysis [15, 16], calculation of indicators similar to journal metrics [9], graph and tree analysis [8, 17], and regression analysis [16], [11]. Many of these studies combined several of the listed methods. There are also works devoted to methods for predicting the rating of a conference or the impact of the papers presented at a particular conference [18]; machine learning has been used for these purposes [19, 20]. This study is therefore devoted to comparing two popular methods for predicting conference rankings, to which we add discriminant analysis, a statistical method that is essentially a mathematical prerequisite for machine learning.

We were able to collect data on a number of conferences from the Internet, including their quartiles and several other indicators discussed below. As a result, we obtained a training sample of 23 conferences, on the basis of which we assess the adequacy of the assigned quartiles using three methods of multivariate statistical analysis: linear regression, discriminant analysis, and neural networks (a schematic sketch of such a comparison is given below).

2. Training sample

Let us introduce the notation: -
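A minimal sketch of how such a comparison could be set up, assuming a scikit-learn workflow; the file name, column names, and model settings are illustrative assumptions, not the paper's actual code or data.

```python
# Sketch: compare linear regression, discriminant analysis, and a neural
# network on a conference training sample. The CSV file and column names
# are hypothetical; the factors mirror those listed in the abstract
# (age, periodicity, participants, papers, organizers' publication
# activity, citation rate).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# One row per conference; the assigned quartile (1..4) is in "quartile".
df = pd.read_csv("conferences.csv")
features = ["age", "periodicity", "participants", "papers",
            "organizer_publications", "citations_per_paper"]
X = StandardScaler().fit_transform(df[features])
y = df["quartile"].to_numpy()

# 1) Linear regression: predict the quartile as a number, then round
#    to the nearest admissible value.
reg = LinearRegression().fit(X, y)
pred_reg = np.clip(np.rint(reg.predict(X)), 1, 4)

# 2) Discriminant analysis: treat the quartile as a class label.
lda = LinearDiscriminantAnalysis().fit(X, y)
pred_lda = lda.predict(X)

# 3) Neural network: a small multilayer perceptron classifier.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000,
                    random_state=0).fit(X, y)
pred_mlp = mlp.predict(X)

# Share of conferences whose assigned quartile is reproduced by each model.
for name, pred in [("linear regression", pred_reg),
                   ("discriminant analysis", pred_lda),
                   ("neural network", pred_mlp)]:
    print(f"{name}: {np.mean(pred == y):.0%} of quartiles confirmed")
```

With a sample of only 23 conferences, such in-sample agreement figures should be read as a consistency check of the assigned quartiles rather than as out-of-sample predictive accuracy.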

About the authors

A. M. Ermolayeva

Peoples' Friendship University of Russia (RUDN University)

Corresponding author.
Email: ermolaeva-am@rudn.ru
ORCID iD: 0000-0001-6107-6461

Assistant, Department of Probability Theory and Cyber Security

6 Miklukho-Maklaya St., Moscow, 117198, Russian Federation

References

  1. Journal Quartiles https://www.manuscriptedit.com/scholar-hangout/quartilesof-the-journals-and-the-secret-of-publishing/.
  2. Garfield, E. Citation indexes for science: A new dimension in documentation through association of ideas. Science 122, 108-111 (1955).
  3. Bergstrom, C. T., West, J. D. & Wiseman, M. A. The eigenfactor metrics. Journal of neuroscience 28, 11433-11434 (2008).
  4. Moed, H. F. Measuring contextual citation impact of scientific journals. Journal of informetrics 4, 265-277. doi: 10.1016/j.joi.2010.01.002 (2010).
  5. González-Pereira, B., Guerrero-Bote, V. P. & Moya-Anegón, F. A new approach to the metric of journals’ scientific prestige: The SJR indicator. Journal of informetrics 4, 379-391. doi: 10.1016/j.joi.2010.03.002 (2010).
  6. Kim, K. & Chung, Y. Overview of journal metrics. Science Editing 5, 16-20 (2018).
  7. Freyne, J., Coyle, L., Smyth, B. & Cunningham, P. Relative status of journal and conference publications in computer science. Communications of the ACM 53, 124-132. doi: 10.1145/1839676.1839701 (2010).
  8. Jahja, I., Effendy, S. & Yap, R. H. Experiments on rating conferences with CORE and DBLP. D-Lib Magazine 20. doi: 10.1045/november14-jahja (2014).
  9. Meho, L. I. Using Scopus’s CiteScore for assessing the quality of computer science conferences. Journal of Informetrics 13, 419-433. doi: 10.1016/j.joi.2019.02.006 (2019).
  10. Effendy, S. & Yap, R. H. C. Investigations on rating computer sciences conferences
  11. Lee, D. H. Predictive power of conference-related factors on citation rates of conference papers. Scientometrics 118, 281-304. doi: 10.1007/s11192-018-2943-z (2019).
  12. Core conference ranking http://portal.core.edu.au/conf-ranks/.
  13. CCF conference ranking https://www.ccf.org.cn/en/.
  14. Microsoft Academic’s field ratings for conferences https://www.microsoft.com/en-us/research/project/academic/articles/microsoft-academic-analytics/.
  15. Vrettas, G. & Sanderson, M. Conferences versus journals in computer science. Journal of the Association for Information Science and Technology 66, 2674-2684 (2015).
  16. Li, X., Rong, W., Shi, H., Tang, J. & Xiong, Z. The impact of conference ranking systems in computer science: A comparative regression analysis. Scientometrics 116, 879-907. doi: 10.1007/s11192-018-2763-1 (2018).
  17. Küngas, P., Karus, S., Vakulenko, S., Dumas, M., Parra, C. & Casati, F. Reverse-engineering conference rankings: what does it take to make a reputable conference? Scientometrics 96, 651-665. doi: 10.1007/s11192-012-0938-8 (2013).
  18. Steck, H. Evaluation of recommendations: rating-prediction and ranking in Proceedings of the 7th ACM conference on Recommender systems (2013), 213-220. doi: 10.1145/2507157.2507160.
  19. Chowdhury, G. R., Al Abid, F. B., Rahman, M. A., Masum, A. K. M. & Hassan, M. M. Prediction of upcoming conferences ranking in Bangladesh based on analytic network process and machine learning in 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET) (2018), 463-467. doi: 10.1109/ICISET.2018.8745590.
  20. Udupi, P. K., Dattana, V., Netravathi, P. & Pandey, J. Predicting global ranking of universities across the world using machine learning regression technique in SHS Web of Conferences 156 (2023), 04001.
  21. Scopus https://www.scopus.com.
  22. DBLP https://dblp.org/.
  23. Google Scholar https://scholar.google.com/.
  24. Kobzar, A. I. Applied mathematical statistics (Fizmatlit, 2006).
  25. Orlova, I. V., Kontsevaya, N. V., Turundaevsky, V. B., Urodovskikh, V. N. & Filonova, E. S. Multidimensional statistical analysis in economic problems: computer modeling in SPSS (textbook). International Journal of Applied and Fundamental Research, 248-250 (2014).
  26. Gafarov, F. M., Galimyanov, A. F., et al. Artificial neural networks and applications (2018).


© Ermolayeva A. M., 2024

This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.