Discrete and Continuous Models and Applied Computational Science
ISSN 2658-4670, 2658-7149
Peoples' Friendship University of Russia
DOI: 10.22363/2658-4670-2022-30-2-127-138

Research Article

Multistage pseudo-spectral method (method of collocations) for the approximate solution of an ordinary differential equation of the first order

Konstantin P. Lovetskiy, Candidate of Physical and Mathematical Sciences, Assistant Professor, Department of Applied Probability and Informatics, lovetskiy-kp@rudn.ru, https://orcid.org/0000-0002-3645-1060
Dmitry S. Kulyabov, Docent, Doctor of Sciences in Physics and Mathematics, Professor, Department of Applied Probability and Informatics, kulyabov-ds@rudn.ru, https://orcid.org/0000-0002-0877-7063
Ali Weddeye Hissein, student, Department of Applied Probability and Informatics, 1032209306@rudn.ru, https://orcid.org/0000-0003-1100-4966

Peoples' Friendship University of Russia (RUDN University); Joint Institute for Nuclear Research

Copyright © 2022, Lovetskiy K.P., Kulyabov D.S., Hissein A.W.

Abstract. The classical pseudospectral collocation method based on the expansion of the solution in a basis of Chebyshev polynomials is considered. A new approach to constructing systems of linear algebraic equations for solving ordinary differential equations with variable coefficients and with initial (and/or boundary) conditions makes possible a significant simplification of the structure of the matrices, reducing them to diagonal form. The solution of the system reduces to multiplying the matrix of Chebyshev polynomial values on the selected collocation grid by the vector of values, at the collocation points, of the function describing the given derivative. The subsequent multiplication of the resulting vector by the two-diagonal spectral matrix that is the inverse of the Chebyshev differentiation matrix yields all the expansion coefficients of the sought solution except the first one. This first coefficient is determined at the second stage from the given initial (and/or boundary) condition. The novelty of the approach is to first select a class (set) of functions satisfying the differential equation, using a stable and computationally simple method of interpolation (collocation) of the derivative of the future solution. The coefficients of the expansion of the future solution (all except the first) are then determined from the computed expansion coefficients of the derivative using the integration matrix. Finally, from this set of solutions only those satisfying the given initial conditions are selected.

Keywords: initial value problems, pseudospectral collocation method, Chebyshev polynomials, Gauss-Lobatto sets, numerical stability

1. Introduction

Spectral methods are a class of methods used in applied mathematics and scientific computing to solve many kinds of differential equations numerically [1]-[4]. The main idea is to represent the desired solution of a differential equation as a sum of certain 'basis functions' [5] (e.g., an expansion in power functions, i.e., a Taylor series, or a sum of sinusoids, i.e., a Fourier series) and then to calculate the coefficients of the sum so as to satisfy the differential equation as well as possible. Spectral and finite element methods are closely related and rest on the same ideas. The main difference between them is that spectral methods use basis functions that are nonzero over the entire domain, while finite element methods use basis functions that are nonzero only on small subdomains.
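As a concrete illustration of expanding a function in global basis functions, the following sketch (ours, not from the paper; the function name is illustrative) computes the Chebyshev expansion coefficients of a smooth function by collocation on the Chebyshev-Gauss-Lobatto grid and evaluates the resulting interpolant:

```python
import numpy as np

def chebyshev_coeffs(f, n):
    """Coefficients of the degree-n Chebyshev interpolant of f on [-1, 1],
    built by collocation at the Chebyshev-Gauss-Lobatto points."""
    # CGL points x_j = cos(pi*j/n), j = 0..n
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)
    fx = f(x)
    # Discrete Chebyshev transform (naive O(n^2) form for clarity;
    # endpoint terms carry half weight, and c_0, c_n are halved again)
    w = np.ones(n + 1)
    w[0] = w[-1] = 0.5
    c = np.array([(2.0 / n) * np.sum(w * fx * np.cos(np.pi * k * j / n))
                  for k in range(n + 1)])
    c[0] *= 0.5
    c[-1] *= 0.5
    return c, x

# Example: exp(x) on [-1, 1]; the coefficients decay exponentially fast
c, x = chebyshev_coeffs(np.exp, 16)
approx = np.polynomial.chebyshev.chebval(0.3, c)
print(abs(approx - np.exp(0.3)))  # error decays spectrally with n
```

For smooth functions this global expansion converges exponentially in n, which is exactly the 'spectral accuracy' that distinguishes these methods from local finite element bases.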
In other words, spectral methods take a global approach, while finite element methods take a local approach. It is for this reason that spectral methods exhibit excellent convergence: their 'exponential convergence' is the fastest possible when the solution is smooth. Spectral methods for the numerical solution of ordinary differential equations with given initial conditions are often reduced to solving a system of linear algebraic equations (SLAE) that incorporates both the initial conditions and the conditions ensuring that the differential relations hold [6]. However, a priori embedding of the initial (boundary) conditions into the system of linear equations leads to a significant increase in the fill-in of the matrices and, consequently, to a more complicated algorithm and solution method [7]. A more interesting approach is to select a basis that automatically takes the boundary conditions into account [1], [5], [6]. This is a frequently used trick when formulating the SLAE of the initial problem: the required initial/boundary conditions are taken into account in a natural way when constructing the basis (a set of well-behaved, e.g., orthogonal, basis functions), i.e., a basis is chosen in which every basis function satisfies the initial conditions. A solution obtained with this approach is automatically sought in the class of functions satisfying the initial conditions. However, in this case it becomes much more difficult to work with the new basis functions. The novelty of the approach proposed by the authors is that first a class (set) of functions satisfying the differential equation is selected, using a stable and computationally simple method of interpolation (collocation) of the derivative of the future solution. Then the coefficients of the expansion of the future solution (all except the first) are determined from the computed expansion coefficients of the derivative using the integration matrix.
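The two-stage procedure just described — interpolate the derivative on the collocation grid, recover the solution coefficients through the two-diagonal antiderivative relation, then fix the free constant from the initial condition — can be sketched for the model problem y'(x) = f(x), y(x0) = y0. This is our illustrative reconstruction under stated assumptions, not the authors' code; the function name is ours.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def solve_ivp_chebyshev(f, y0, n, x0=-1.0):
    """Two-stage pseudospectral solution of y'(x) = f(x) on [-1, 1], y(x0) = y0.

    Stage 1: interpolate f at the Chebyshev-Gauss-Lobatto points to get the
    coefficients b of y', then recover the coefficients a of y (all except a_0)
    from the standard two-diagonal antiderivative relation
        a_k = (c_{k-1} b_{k-1} - b_{k+1}) / (2k),  c_0 = 2, c_k = 1 for k >= 1.
    Stage 2: choose the free coefficient a_0 so that y(x0) = y0.
    """
    x = np.cos(np.pi * np.arange(n + 1) / n)   # Gauss-Lobatto collocation grid
    b = C.chebfit(x, f(x), n)                  # degree-n interpolant of y'
    b = np.concatenate([b, [0.0, 0.0]])        # pad b_{n+1} = b_{n+2} = 0
    a = np.zeros(n + 2)
    a[1] = (2.0 * b[0] - b[2]) / 2.0           # k = 1 case (c_0 = 2)
    for k in range(2, n + 2):
        a[k] = (b[k - 1] - b[k + 1]) / (2.0 * k)
    a[0] = y0 - C.chebval(x0, a)               # stage 2: initial condition
    return a

# Example: y' = cos(x), y(-1) = sin(-1); exact solution y = sin(x)
a = solve_ivp_chebyshev(np.cos, np.sin(-1.0), 16)
print(abs(C.chebval(0.5, a) - np.sin(0.5)))    # spectrally small error
```

Note that the two stages are fully decoupled, as in the text: stage 1 never touches the initial condition, and stage 2 is a single linear equation for a_0.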
Only after that are the solutions corresponding to the given initial conditions selected from this set. Here we propose to divide the main problem into independent subproblems and to compute the solution components in parts: separately those that determine the behavior of the derivative of the solution, and separately those that are determined by the boundary conditions. Thus, the problem splits into two independent subproblems, each admitting a stable and simple solution. In the simplest case, the solution of the first subproblem reduces to multiplying the vector of the right-hand side by the matrix of Chebyshev function values on the Gauss-Lobatto grid. At the next step, we solve the SLAE with a diagonal positive definite matrix and, multiplying the resulting vector on the left by the two-diagonal matrix that is the inverse (antiderivative) of the spectral Chebyshev differentiation matrix, we obtain all the expansion coefficients of the desired solution except for the first one. At the second, 'most difficult' stage, we determine the first coefficient of the expansion of the solution in the basis polynomials by solving a single linear algebraic equation for this coefficient.

2. Numerical solution of ordinary differential equations

Exact solution of a trivial ordinary differential equation for a given initial (boundary) condition

References

1. J. P. Boyd, Chebyshev and Fourier spectral methods, 2nd ed. Dover Books on Mathematics, 2013.
2. J. C. Mason and D. C. Handscomb, Chebyshev polynomials. London: Chapman and Hall/CRC Press, 2002.
3. B. Fornberg, A practical guide to pseudospectral methods. Cambridge, England: Cambridge University Press, 1996. DOI: 10.1017/cbo9780511626357.
4. M. Planitz et al., Numerical recipes: the art of scientific computing, 3rd ed. New York: Cambridge University Press, 2007.
5. J. Shen, T. Tang, and L.-L. Wang, Spectral methods. Berlin, Heidelberg: Springer, 2011, vol. 41.
6. S. Olver and A. Townsend, "A fast and well-conditioned spectral method", SIAM Rev., vol. 55, no. 3, pp. 462-489, 2013. DOI: 10.1137/120865458.
7. S. Chandrasekaran and M. Gu, "Fast and stable algorithms for banded plus semiseparable systems of linear equations", SIAM Journal on Matrix Analysis and Applications, vol. 25, no. 2, pp. 373-384, 2003. DOI: 10.1137/S0895479899353373.
8. A. Iserles, A first course in the numerical analysis of differential equations, 2nd ed. Cambridge: Cambridge University Press, 2008. DOI: 10.1017/CBO9780511995569.
9. X. Zhang and J. P. Boyd, "Asymptotic coefficients and errors for Chebyshev polynomial approximations with weak endpoint singularities: effects of different bases", https://arxiv.org/pdf/2103.11841.pdf, 2021.
10. J. P. Boyd and D. H. Gally, "Numerical experiments on the accuracy of the Chebyshev-Frobenius companion matrix method for finding the zeros of a truncated series of Chebyshev polynomials", Journal of Computational and Applied Mathematics, vol. 205, no. 1, pp. 281-295, 2007. DOI: 10.1016/j.cam.2006.05.006.
11. D. Dutykh, "A brief introduction to pseudo-spectral methods: application to diffusion problems", https://arxiv.org/pdf/1606.05432.pdf, 2016.
12. A. Amiraslani, R. M. Corless, and M. Gunasingam, "Differentiation matrices for univariate polynomials", Numer. Algorithms, vol. 83, no. 1, pp. 1-31, 2020. DOI: 10.1007/s11075-019-00668-z.
13. E. Hairer, G. Wanner, and S. P. Nørsett, Solving ordinary differential equations I. Berlin: Springer, 1993. DOI: 10.1007/978-3-540-78862-1.
14. L. N. Trefethen, Spectral methods in MATLAB. Philadelphia: SIAM, 2000.
15. K. P. Lovetskiy, L. A. Sevastianov, D. S. Kulyabov, and N. E. Nikolaev, "Regularized computation of oscillatory integrals with stationary points", Journal of Computational Science, vol. 26, pp. 22-27, 2018. DOI: 10.1016/j.jocs.2018.03.001.
16. L. A. Sevastianov, K. P. Lovetskiy, and D. S. Kulyabov, "An effective stable numerical method for integrating highly oscillating functions with a linear phase", Lecture Notes in Computer Science, vol. 12138, pp. 29-43, 2020. DOI: 10.1007/978-3-030-50417-5_3.