<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE root>
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">Discrete and Continuous Models and Applied Computational Science</journal-id><journal-title-group><journal-title xml:lang="en">Discrete and Continuous Models and Applied Computational Science</journal-title><trans-title-group xml:lang="ru"><trans-title>Discrete and Continuous Models and Applied Computational Science</trans-title></trans-title-group></journal-title-group><issn publication-format="print">2658-4670</issn><issn publication-format="electronic">2658-7149</issn><publisher><publisher-name xml:lang="en">Peoples' Friendship University of Russia named after Patrice Lumumba (RUDN University)</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">24216</article-id><article-id pub-id-type="doi">10.22363/2658-4670-2020-28-2-105-119</article-id><article-categories><subj-group subj-group-type="toc-heading" xml:lang="en"><subject>Computer Science</subject></subj-group><subj-group subj-group-type="toc-heading" xml:lang="ru"><subject>Информатика и вычислительная техника</subject></subj-group><subj-group subj-group-type="article-type"><subject>Research Article</subject></subj-group></article-categories><title-group><article-title xml:lang="en">Comparative analysis of machine learning methods by the example of the problem of determining muon decay</article-title><trans-title-group xml:lang="ru"><trans-title>Сравнительный анализ методов машинного обучения на примере задачи определения мюонного распада</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="en"><surname>Gevorkyan</surname><given-names>Migran N.</given-names></name><name 
xml:lang="ru"><surname>Геворкян</surname><given-names>М. Н.</given-names></name></name-alternatives><bio xml:lang="en"><p>Candidate of Sciences in Physics and Mathematics, Assistant Professor at the Department of Applied Probability and Informatics</p></bio><bio xml:lang="ru"><p>Кафедра прикладной информатики и теории вероятностей</p></bio><email>gevorkyan-mn@rudn.ru</email><xref ref-type="aff" rid="aff1"/></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="en"><surname>Demidova</surname><given-names>Anastasia V.</given-names></name><name xml:lang="ru"><surname>Демидова</surname><given-names>А. В.</given-names></name></name-alternatives><bio xml:lang="en"><p>Candidate of Sciences in Physics and Mathematics, Assistant Professor at the Department of Applied Probability and Informatics</p></bio><bio xml:lang="ru"><p>Кафедра прикладной информатики и теории вероятностей</p></bio><email>demidova-av@rudn.ru</email><xref ref-type="aff" rid="aff1"/></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="en"><surname>Kulyabov</surname><given-names>Dmitry S.</given-names></name><name xml:lang="ru"><surname>Кулябов</surname><given-names>Д.
С.</given-names></name></name-alternatives><bio xml:lang="en"><p>Docent, Doctor of Sciences in Physics and Mathematics, Professor at the Department of Applied Probability and Informatics</p></bio><bio xml:lang="ru"><p>Кафедра прикладной информатики и теории вероятностей; Лаборатория информационных технологий</p></bio><email>kulyabov-ds@rudn.ru</email><xref ref-type="aff" rid="aff1"/><xref ref-type="aff" rid="aff2"/></contrib></contrib-group><aff-alternatives id="aff1"><aff><institution xml:lang="en">Peoples’ Friendship University of Russia (RUDN University)</institution></aff><aff><institution xml:lang="ru">Российский университет дружбы народов</institution></aff></aff-alternatives><aff-alternatives id="aff2"><aff><institution xml:lang="en">Joint Institute for Nuclear Research</institution></aff><aff><institution xml:lang="ru">Объединённый институт ядерных исследований</institution></aff></aff-alternatives><pub-date date-type="pub" iso-8601-date="2020-12-15" publication-format="electronic"><day>15</day><month>12</month><year>2020</year></pub-date><volume>28</volume><issue>2</issue><issue-title xml:lang="en">VOL 28, NO2 (2020)</issue-title><issue-title xml:lang="ru">ТОМ 28, №2 (2020)</issue-title><fpage>105</fpage><lpage>119</lpage><history><date date-type="received" iso-8601-date="2020-07-20"><day>20</day><month>07</month><year>2020</year></date></history><permissions><copyright-statement xml:lang="en">Copyright © 2020, Gevorkyan M.N., Demidova A.V., Kulyabov D.S.</copyright-statement><copyright-statement xml:lang="ru">Copyright © 2020, Геворкян М.Н., Демидова А.В., Кулябов Д.С.</copyright-statement><copyright-year>2020</copyright-year><copyright-holder xml:lang="en">Gevorkyan M.N., Demidova A.V., Kulyabov D.S.</copyright-holder><copyright-holder xml:lang="ru">Геворкян М.Н., Демидова А.В., Кулябов Д.С.</copyright-holder><ali:free_to_read xmlns:ali="http://www.niso.org/schemas/ali/1.0/"/><license><ali:license_ref 
xmlns:ali="http://www.niso.org/schemas/ali/1.0/">http://creativecommons.org/licenses/by/4.0</ali:license_ref></license></permissions><self-uri xlink:href="https://journals.rudn.ru/miph/article/view/24216">https://journals.rudn.ru/miph/article/view/24216</self-uri><abstract xml:lang="en"><p>The use of machine learning algorithms to analyze statistical models has a long history. The development of computer technology has given these algorithms a new lease of life. Nowadays deep learning is the mainstream and most popular area of machine learning. However, the authors believe that many researchers try to apply deep learning methods beyond their domain of applicability. This is encouraged both by the wide availability of software systems that implement deep learning algorithms and by the apparent simplicity of such research. All this motivated the authors to compare deep learning algorithms with classical machine learning algorithms. An experiment at the Large Hadron Collider was chosen for this task, because the authors are familiar with this scientific field and because the experimental data are publicly available. The article compares various machine learning algorithms applied to the problem of recognizing the decay reaction <italic>τ<sup>–</sup></italic><italic> → μ<sup>–</sup> + μ<sup>–</sup> + μ<sup>+</sup></italic> at the Large Hadron Collider. The authors use ready-made open-source implementations of the machine learning algorithms and compare the algorithms with each other on the basis of computed metrics. The study leads to the conclusion that, with respect to the selected metrics, all the considered machine learning methods perform quite comparably, while different methods have different areas of applicability.</p></abstract><trans-abstract xml:lang="ru"><p>Применение алгоритмов машинного обучения для анализа статистических моделей имеет достаточно длинную историю. Развитие компьютерной техники дало этим алгоритмам новое дыхание. 
Особенно громкую известность получило такое направление машинного обучения, как глубинное обучение. Однако авторы полагают, что многие исследователи пытаются использовать методы глубинного обучения за пределами их применимости. Этому способствуют как широкая распространённость программных комплексов, реализующих алгоритмы глубинного обучения, так и кажущаяся простота исследования. Всё это стало побудительным мотивом для проведения сравнения алгоритмов глубинного обучения и классических алгоритмов машинного обучения. В качестве задачи был выбран эксперимент на Большом адронном коллайдере, поскольку авторы знакомы с данной научной областью, а также потому, что данные эксперимента доступны публично. В статье проводится сравнение различных алгоритмов машинного обучения применительно к задаче распознания реакции распада <italic>τ<sup>–</sup></italic><italic> →μ<sup>–</sup> + μ<sup>–</sup> + μ<sup>+</sup></italic> на Большом адронном коллайдере. Используются готовые свободные реализации алгоритмов машинного обучения. Алгоритмы сравниваются друг с другом на основе вычисляемых метрик. В результате исследования можно сделать вывод, что все рассмотренные методы машинного обучения вполне сопоставимы друг с другом (с учётом выбранных метрик), при этом разные методы имеют разные области применимости.</p></trans-abstract><kwd-group xml:lang="en"><kwd>muon decay</kwd><kwd>machine learning</kwd><kwd>neural networks</kwd></kwd-group><kwd-group xml:lang="ru"><kwd>мюонный распад</kwd><kwd>машинное обучение</kwd><kwd>нейронные сети</kwd></kwd-group><funding-group/></article-meta></front><body></body><back><ref-list><ref id="B1"><label>1.</label><mixed-citation>M. N. Gevorkyan, A. V. Demidova, T. S. Demidova, and A. A. Sobolev, “Review and comparative analysis of machine learning libraries for machine learning,” Discrete and Continuous Models and Applied Computational Science, vol. 27, no. 4, pp. 305-315, Dec. 2019. 
DOI: 10.22363/2658-4670-2019-27-4-305-315.</mixed-citation></ref><ref id="B2"><label>2.</label><mixed-citation>L. A. Sevastianov, A. L. Sevastianov, E. A. Ayrjan, A. V. Korolkova, D. S. Kulyabov, and I. Pokorny, “Structural Approach to the Deep Learning Method,” in Proceedings of the 27th Symposium on Nuclear Electronics and Computing (NEC-2019), V. Korenkov, T. Strizh, A. Nechaevskiy, and T. Zaikina, Eds., ser. CEUR Workshop Proceedings, vol. 2507, Budva, Sep. 2019, pp. 272-275.</mixed-citation></ref><ref id="B3"><label>3.</label><mixed-citation>P. Langacker, The standard model and beyond, ser. Series in High Energy Physics, Cosmology and Gravitation. CRC Press, 2009.</mixed-citation></ref><ref id="B4"><label>4.</label><mixed-citation>I. Lakatos, “Falsification and the Methodology of Scientific Research Programmes,” in Criticism and the Growth of Knowledge, I. Lakatos and A. Musgrave, Eds., Cambridge University Press, 1970, pp. 91-195.</mixed-citation></ref><ref id="B5"><label>5.</label><citation-alternatives><mixed-citation xml:lang="en">R. Aaij et al., “Search for the lepton flavour violating decay τ– → μ– + μ+ + μ–,” Journal of High Energy Physics, vol. 2015, no. 2, p. 121, Feb. 2015. DOI: 10.1007/JHEP02(2015)121. arXiv: 1409.8548.</mixed-citation><mixed-citation xml:lang="ru">R. Aaij et al., “Search for the lepton flavour violating decay τ– → μ– + μ+ + μ–,” Journal of High Energy Physics, vol. 2015, no. 2, p. 121, Feb. 2015. DOI: 10.1007/JHEP02(2015)121. arXiv: 1409.8548.</mixed-citation></citation-alternatives></ref><ref id="B6"><label>6.</label><mixed-citation>(2018). “Flavours of Physics: Finding τ → μμμ (Kernels Only),” [Online]. Available: https://www.kaggle.com/c/flavours-of-physics-kernels-only.</mixed-citation></ref><ref id="B7"><label>7.</label><mixed-citation>F. Pedregosa et al., “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp.
2825-2830, 2011.</mixed-citation></ref><ref id="B8"><label>8.</label><mixed-citation>F. Chollet. (2020). “Keras,” [Online]. Available: https://keras.io/.</mixed-citation></ref><ref id="B9"><label>9.</label><mixed-citation>(2020). “XGBoost Documentation,” [Online]. Available: https://xgboost.readthedocs.io.</mixed-citation></ref><ref id="B10"><label>10.</label><mixed-citation>(2020). “Hep_ml,” [Online]. Available: https://arogozhnikov.github.io.</mixed-citation></ref><ref id="B11"><label>11.</label><mixed-citation>(2020). “CNTK official repository,” [Online]. Available: https://github.com/Microsoft/cntk.</mixed-citation></ref><ref id="B12"><label>12.</label><mixed-citation>Theano Development Team, “Theano: A Python framework for fast computation of mathematical expressions,” arXiv e-prints, vol. abs/1605.0, 2016.</mixed-citation></ref><ref id="B13"><label>13.</label><mixed-citation>I. H. Witten, E. Frank, M. A. Hall, and C. J. Pal, Data Mining: Practical Machine Learning Tools and Techniques, ser. The Morgan Kaufmann Series in Data Management Systems. Elsevier, 2011. DOI: 10.1016/C2009-0-19715-5.</mixed-citation></ref><ref id="B14"><label>14.</label><mixed-citation>A. Bruce and P. Bruce, Practical Statistics for Data Scientists: 50 Essential Concepts. O’Reilly Media, 2017.</mixed-citation></ref><ref id="B15"><label>15.</label><mixed-citation>J. VanderPlas, Python Data Science Handbook: Essential Tools for Working with Data. O’Reilly Media, 2016.</mixed-citation></ref><ref id="B16"><label>16.</label><mixed-citation>(2020). “Scikit-learn home site,” [Online]. Available: https://scikit-learn.org/stable/.</mixed-citation></ref><ref id="B17"><label>17.</label><mixed-citation>D. W. Hosmer, S. Lemeshow, and R. X. Sturdivant, Applied Logistic Regression, ser. Wiley Series in Probability and Statistics. Wiley, 2013.</mixed-citation></ref><ref id="B18"><label>18.</label><mixed-citation>J. M. Hilbe, Logistic Regression Models, ser.
Chapman &amp; Hall/CRC Texts in Statistical Science. Chapman and Hall/CRC, May 2009. DOI: 10.1201/9781420075779.</mixed-citation></ref><ref id="B19"><label>19.</label><mixed-citation>D. Ruppert, “The Elements of Statistical Learning: Data Mining, Inference, and Prediction,” Journal of the American Statistical Association, Springer Series in Statistics, vol. 99, no. 466, p. 567, 2004. DOI: 10.1198/jasa.2004.s339.</mixed-citation></ref><ref id="B20"><label>20.</label><mixed-citation>R. Collins, Machine Learning with Bagging and Boosting. Amazon Digital Services LLC - Kdp Print Us, 2018.</mixed-citation></ref><ref id="B21"><label>21.</label><mixed-citation>J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Annals of Statistics, vol. 29, no. 5, pp. 1189-1232, 2001. DOI: 10.2307/2699986.</mixed-citation></ref><ref id="B22"><label>22.</label><mixed-citation>A. W. Kemp and B. F. J. Manly, Randomization, Bootstrap and Monte Carlo Methods in Biology, ser. Chapman &amp; Hall/CRC Texts in Statistical Science 4. CRC Press, Dec. 1997, vol. 53. DOI: 10.2307/2533527.</mixed-citation></ref><ref id="B23"><label>23.</label><mixed-citation>O. Soranson, Python Data Science Handbook: The Ultimate Guide to Learn How to Use Python for Data Analysis and Data Science. Learn the Essential Tools for Beginners to Work with Data, ser. Artificial Intelligence Series. Amazon Digital Services LLC - KDP Print US, 2019.</mixed-citation></ref><ref id="B24"><label>24.</label><mixed-citation>M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, and J. Dean. (2015). “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems,” [Online]. Available: http://tensorflow.org/.</mixed-citation></ref><ref id="B25"><label>25.</label><mixed-citation>(2020). “TensorFlow home site,” [Online]. Available: https://www.tensorflow.org/.</mixed-citation></ref><ref id="B26"><label>26.</label><mixed-citation>A. Paszke, S. Gross, S.
Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in PyTorch,” in 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 2017.</mixed-citation></ref></ref-list></back></article>
