## Vol 28, No 2 (2020)

**Year:** 2020 **Articles:** 4 **URL:** http://journals.rudn.ru/miph/issue/view/1348 **DOI:** https://doi.org/10.22363/2658-4670-2020-28-2

###### Abstract

The history of using machine learning algorithms to analyze statistical models is quite long, and the development of computer technology has given these algorithms new life. Nowadays deep learning is the mainstream and most popular area of machine learning. However, the authors believe that many researchers try to apply deep learning methods beyond their domain of applicability. This happens because of the widespread availability of software systems that implement deep learning algorithms and the apparent simplicity of such research. All this motivates the authors to compare deep learning algorithms with classical machine learning algorithms. The Large Hadron Collider experiment is chosen for this task because the authors are familiar with this scientific field, and also because the experimental data are openly available. The article compares various machine learning algorithms applied to the problem of recognizing the decay reaction *τ*⁻ → *μ*⁻ + *μ*⁻ + *μ*⁺ at the Large Hadron Collider. The authors use open-source implementations of the machine learning algorithms and compare them with each other on the basis of computed metrics. As a result of the research, we can conclude that all the considered machine learning methods are quite comparable with each other (taking the selected metrics into account), while different methods have different areas of applicability.

**Discrete and Continuous Models and Applied Computational Science**. 2020;28(2):105-119
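The metric-based comparison described in the abstract above can be sketched in a few lines. The labels, predictions, and model names below are hypothetical toy values for illustration only, not the experiment's data or the authors' code:

```python
# Minimal sketch: compare two binary classifiers on the same task
# using simple metrics (accuracy, precision, recall).

def confusion(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    """Return a dict of standard classification metrics."""
    tp, fp, fn, tn = confusion(y_true, y_pred)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# toy ground truth (1 = signal decay, 0 = background) and two model outputs
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
model_a = [1, 0, 1, 0, 0, 1, 1, 0]  # e.g. a classical classifier
model_b = [1, 0, 1, 1, 0, 0, 0, 0]  # e.g. a deep-learning model

print("model A:", metrics(y_true, model_a))
print("model B:", metrics(y_true, model_b))
```

In practice the choice of metrics matters: here model B has perfect precision but the same recall as model A, which is exactly the kind of trade-off a multi-metric comparison exposes.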

###### Abstract

In recent years, thanks to modern and sophisticated technologies, astronomers and astrophysicists have been able to look deep into the Universe. This vast amount of data poses new problems for cosmologists. One problem is to develop an adequate theory; another is to fit the theoretical results to the observational ones. In this report, within the scope of the isotropic and homogeneous Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological model, we study the evolution of the Universe filled with dust or a cosmological constant. The reason to consider this model is that the present Universe is surprisingly homogeneous and isotropic on large scales. We also compare our results with the data from the SAI Supernovae Catalog. Since the observational data are given in terms of the Hubble constant (*H*) and redshift (*z*), we rewrite the corresponding equations as functions of *z*. The task is to find the set of parameters of the mathematical model of an isotropic and homogeneous Universe that best fits the astronomical data obtained from the study of supernovae: magnitude (*m*) and redshift (*z*).
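The rewriting in terms of redshift mentioned in the abstract can be sketched with the textbook relations for a spatially flat FLRW universe filled with dust and a cosmological constant; the paper's exact equations and parametrization may differ:

```latex
% Friedmann equation for dust plus cosmological constant, written via redshift z
H^2(z) = H_0^2 \left[ \Omega_m (1+z)^3 + \Omega_\Lambda \right],
\qquad \Omega_m + \Omega_\Lambda = 1,

% luminosity distance and distance modulus entering the supernova fit
d_L(z) = (1+z)\, c \int_0^z \frac{dz'}{H(z')},
\qquad \mu(z) = 5 \log_{10}\!\frac{d_L(z)}{10\,\mathrm{pc}} .
```

Fitting then amounts to choosing the density parameters (and $H_0$) that minimize the discrepancy between the predicted $\mu(z)$ and the observed magnitude–redshift pairs.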

**Discrete and Continuous Models and Applied Computational Science**. 2020;28(2):120-130

###### Abstract

In recent years the spinor field has been used by many authors to address some burning issues of modern cosmology. The motivation for using the spinor field as a source of the gravitational field lies in the fact that the spinor field can not only describe the different eras of the evolution but can also simulate different substances such as a perfect fluid and dark energy. Moreover, the spinor field is very sensitive to the gravitational field: depending on the gravitational field, the spinor field can react differently, changing both the spacetime geometry and the spinor field itself. This paper provides a brief description of the nonlinear spinor field in the Friedmann-Lemaître-Robertson-Walker (FLRW) model. The results are compared in Cartesian and spherical coordinates. It is shown that in the transition from Cartesian coordinates to spherical ones, the energy-momentum tensor acquires additional nonzero non-diagonal components that can impose restrictions either on the spinor functions or on the metric ones.
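The coordinate comparison described above concerns the spatially flat FLRW line element, which in the two coordinate systems takes the standard forms (a sketch following the common metric signature convention, not necessarily the paper's notation):

```latex
% spatially flat FLRW metric in Cartesian coordinates
ds^2 = dt^2 - a^2(t)\left(dx^2 + dy^2 + dz^2\right),

% the same metric in spherical coordinates
ds^2 = dt^2 - a^2(t)\left[dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)\right].
```

Although the two forms describe the same geometry, the tetrad adapted to spherical coordinates depends on the angular variables, which is why the spinor field's energy-momentum tensor can develop off-diagonal components absent in the Cartesian case.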

**Discrete and Continuous Models and Applied Computational Science**. 2020;28(2):131-140

###### Abstract

**Discrete and Continuous Models and Applied Computational Science**. 2020;28(2):141-153