On the methods of minimizing the risks of implementing artificial intelligence in a company's financial business
- Authors: Shchetinin E. Yu. (1), Sevastianov L. A. (2, 3), Demidova A. V. (2), Velieva T. R. (2)
Affiliations:
- Financial University under the Government of the Russian Federation
- RUDN University
- Joint Institute for Nuclear Research
- Issue: Vol. 33, No. 1 (2025)
- Pages: 103-111
- Section: Letters
- URL: https://journals.rudn.ru/miph/article/view/44735
- DOI: https://doi.org/10.22363/2658-4670-2025-33-1-103-111
- EDN: https://elibrary.ru/AFJUOE
- ID: 44735
Abstract
The effective application of artificial intelligence (AI) models in various areas of financial risk makes it possible to speed up data processing, deepen data analysis, and reduce labor costs, thereby improving the effectiveness of financial risk control. The application of AI to financial risk management places new demands on the configuration and operation of financial supervision. The rapid growth of computer and network technologies, the increasing frequency of market transactions, the diversification of data sources, and the development and application of big data all create new problems for big-data-based financial risk management. This article analyzes the role of artificial intelligence in promoting the reform and growth of the financial industry and proposes countermeasures for the rational use of AI in financial risk management.
Full text
1. Introduction

The annual economic growth rate in most developed countries could double in the near future due to the widespread implementation of artificial intelligence. However, the spread of innovative technologies also brings new challenges, and insurance companies need new risk management strategies to maximize the benefits of introducing artificial intelligence into society and business. According to a 2018 Allianz survey of 1,911 risk experts worldwide, AI-based technologies are expected to increase corporate productivity by an average of 38% across 16 industries in 12 countries by 2035. The proliferation of AI-based technologies, from chatbots to autonomous vehicles, is inexorably transforming industries and society [1].

Artificial intelligence is already being used to increase productivity, both through unique insights gained from data analysis and through the automation of simple tasks. Expectations for AI-based technologies are growing, and private corporations have begun to invest heavily in order to be the first to capture their benefits. Experts estimate the economic impact of AI and other innovative technologies to be greater than, say, that of political risks or climate change. At the same time, many of them note possible negative effects of these innovations. For example, the penetration of artificial intelligence into manufacturing could make automated, autonomous, or self-learning machines more vulnerable to cyber threats and create the potential for large-scale disruptions and losses, especially where critical infrastructure is concerned. Artificial intelligence could reduce traffic accidents by up to 90%, but it also brings uncertainty about liability and ethics in the event of an accident. Uber Technologies recently halted testing of self-driving cars after one of them struck and killed a woman, the first fatal accident between a self-driving car and a pedestrian.
Immediately after the incident, Uber announced it was suspending all of its autonomous vehicle testing in Pittsburgh, San Francisco, Toronto, and much of Phoenix [2].

Healthcare is another sector of the economy where expectations for artificial intelligence are very high. There is a hypothesis that advanced data analysis will help overcome many diseases that are currently incurable and diagnose conditions that today require a large number of medical tests and cross-checks to detect. At the same time, the problem of protecting patients' personal data is obvious, for example when medical records are used at scale by artificial intelligence to study new diseases. This problem has already drawn attention to the need to change the legislative regulation of data protection and patient rights [3].

The threat landscape in digital security is also likely to change. New technologies will reduce cyber risks by detecting attacks better, but will also increase their likelihood if, for example, hackers gain control of them. Artificial intelligence will pave the way for more serious incidents, reducing the cost of organizing cyber attacks and allowing them to be carried out in a more targeted manner. Social issues will also become acute: according to a study by the consulting company McKinsey, more than 1.1 billion full-time jobs in the world today involve functions that can be automated, of which more than 100 million are in the United States and Europe [4].

In the financial sector, AI has also played a significant role, providing a number of benefits but also causing certain risks. The prospects and risks of using artificial intelligence in the financial sector, and the changes it is expected to bring to this industry, are the subject of this paper.

2. An overview of potential risks of implementing AI in financial business

Possibility of systematic errors.
It is important to note that AI operates on the basis of algorithms, and its operation may become unstable if the data used is distorted or incorrect, which may lead to serious financial losses.

Threat of replacing human resources. The introduction of AI may lead to job losses in the financial sector, which will cause social and economic problems.

Insufficient regulation. It is necessary to develop legislative norms and rules that would regulate the use of AI in the financial sector, minimizing risks and protecting the interests of clients.

The most important risks in the implementation of AI are the following:
1. Bias and unreliability of results due to inappropriate or unrepresentative data.
2. Inability to interpret or explain the results of an AI model.
3. Inappropriate use of data.
4. Vulnerability to cyber attacks aimed at obtaining and/or manipulating data.
5. Social consequences of the rapid transformation caused by the transition to AI technologies.

The practical consequences of these risks materializing can take many forms: damage to reputation, reduction in organizational value, fines, and legal costs. Organizations fear these risks; hence more than half (56%) of respondents admitted that in their case the implementation of AI technologies is slow. However, this argument cannot remain dominant for long, because otherwise organizations risk losing their competitive position. Rather than constantly putting on the brakes, it is better to use more thoughtful approaches to implementation that involve effective risk management [5].

Despite certain risks, the application of artificial intelligence in the financial sector is inevitable and is one of the key areas of development in this industry. It provides unique opportunities for growth, for improving the quality of financial institutions, and for better customer service.
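The first risk listed above, bias caused by unrepresentative data, can be detected with simple diagnostics well before a model reaches production. The sketch below is an illustrative example only (the groups, decisions, and the 0.8 threshold are hypothetical assumptions, not taken from the paper): it compares the approval rates of a scoring model across two client groups, a basic demographic-parity check.

```python
# Illustrative demographic-parity check for a credit-scoring model.
# All data below is synthetic; the 0.8 threshold follows the common
# "four-fifths rule" fairness heuristic (an assumption for this sketch,
# not a regulatory standard cited by the paper).

def approval_rate(decisions):
    """Share of approved applications (decision == 1)."""
    return sum(decisions) / len(decisions)

def parity_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model decisions for two client groups (1 = approved).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")          # 0.50
print("flag for review" if ratio < 0.8 else "ok")
```

A check like this does not prove or disprove discrimination, but a large approval-rate gap is exactly the kind of signal that should trigger a review of the training data before deployment.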
In any case, it is important to balance the prospects and risks to ensure safe and effective implementation of AI in the financial sector.

3. Basic methods for managing the risks of implementing AI in a company's financial business

To implement AI in the financial sector successfully and safely, companies must take proactive measures to minimize the associated risks. The following strategies may be proposed for this purpose.

Improving the company's cybersecurity. When AI is used in the financial sector, there is a risk of leaking sensitive information, which can have serious consequences for customers and financial institutions. Financial institutions should invest in modern cybersecurity systems, including data encryption, multi-factor authentication, and regular security audits. It is also important to train employees in the basics of cybersecurity so that they can recognize potential threats.

Testing and validation of algorithms. Before AI systems are deployed, their algorithms must be thoroughly tested and validated. This includes using historical data to assess the accuracy of predictions and to identify potential errors. Companies must also regularly update their models with new data.

Transparency and explainability of AI algorithms and models. Developing "explainable" AI models will help build trust with customers and regulators. Transparency in how decisions are made can reduce the risk of customer dissatisfaction and ensure regulatory compliance.

Ethical standards. With the rapid development of artificial intelligence, businesses have incorporated generative AI into their operations and are enjoying numerous benefits, such as workflow efficiency, cost reduction through task automation, fewer human errors, rapid development of products and services, and improved data-driven decision making.
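The testing-and-validation strategy above, assessing prediction accuracy on historical data, can be sketched as a chronological (out-of-time) split: the model is always scored on data from a later period than it was fitted on, so degradation over time becomes visible. Everything below is a hypothetical illustration, not the authors' methodology: a trivial threshold "model" predicts loan default from a single score.

```python
# Out-of-time validation sketch for a default-prediction rule.
# The data, the 0.5 cut-off, and the rule itself are hypothetical;
# a real institution would fit a proper model (e.g. logistic regression)
# and track the same train/holdout accuracy gap.

# (score, defaulted) pairs, ordered chronologically: oldest first.
history = [
    (0.9, 0), (0.2, 1), (0.8, 0), (0.3, 1),
    (0.7, 0), (0.4, 1), (0.85, 0), (0.25, 1),   # older period
    (0.6, 0), (0.45, 0), (0.9, 0), (0.55, 1),   # recent period
]

def predict(score, cutoff=0.5):
    """Predict default (1) when the score falls below the cut-off."""
    return 1 if score < cutoff else 0

def accuracy(sample):
    hits = sum(1 for score, label in sample if predict(score) == label)
    return hits / len(sample)

split = int(len(history) * 2 / 3)          # first 2/3 = "training" period
train, holdout = history[:split], history[split:]

gap = abs(accuracy(train) - accuracy(holdout))
print(f"train acc={accuracy(train):.2f}, holdout acc={accuracy(holdout):.2f}")
if gap > 0.1:                              # arbitrary alert threshold
    print("accuracy degrades out of time: investigate before deployment")
```

Here the rule is perfect on the older period but wrong on half of the recent one, which is the pattern (data drift) that regular revalidation on new data is meant to catch.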
Although numerous benefits can be achieved through generative AI, integrating it into the business strategies of different parts of an organization comes with serious ethical challenges. These contradictions can erode competitive advantages, hinder customer engagement, and lead to decreased loyalty, trust, and brand value. Ethical dilemmas in AI integration include data bias that tends to distort results and discriminate, misuse of customer data obtained through AI, the complexity of determining liability, and reliability and security risks during rapid deployment [6]. Financial firms should develop ethical guidelines for the use of AI, including mechanisms to prevent discrimination and ensure fairness. This could include creating ethics committees that monitor the use of AI and its impact on customers.

Combined approach to customer service. Although AI can automate many processes, it is important to maintain the human element in customer service. Combining AI with traditional customer service can provide a better customer experience and increase customer satisfaction.

Regulatory compliance. Companies need to stay abreast of changes in legislation and regulation around AI and fintech. This includes compliance with data protection requirements, as well as reporting and transparency standards.

Personnel training and development. Investing in employee training will help them better understand AI technologies and their application in business. It will also help create a culture of innovation within the company and increase trust in new technologies.

Monitoring and auditing of AI systems. Regular monitoring and auditing of AI systems will help identify potential problems and deviations in the operation of algorithms. This includes analyzing their performance, checking for bias, and assessing the impact on business processes.
Creating feedback mechanisms will allow prompt responses to emerging problems [7-9].

Data management. Effective data management is a key aspect of successful AI implementation. Companies must ensure that data is of high quality, up to date, and secure. This includes developing strategies for collecting, storing, and processing data while respecting privacy regulations [10, 11].

Collaboration with external experts. Engaging external consultants and AI experts can help companies avoid common pitfalls and adopt best practices. This may also include joint projects with universities or research organizations.

Adaptation to market changes. The financial sector is constantly changing, and companies must be prepared to adapt to new conditions. This may include flexibility in the use of AI technologies, as well as readiness for changes in legislation and consumer preferences.

Creating a culture of innovation. A culture that encourages innovation will help employees feel comfortable adopting new technologies. This can be achieved by encouraging experimentation, openly discussing ideas, and implementing training and development initiatives.

Strategic planning. Companies should develop long-term strategies for the implementation of AI that take into account both current needs and future trends. This will help avoid impulsive decisions and ensure a consistent approach to technology integration.

Employee training and development. One of the key aspects of successful AI implementation is employee training. Companies should invest in upskilling programs to ensure employees are able to work effectively with new technologies. This includes both technical skills and an understanding of the ethical aspects of using AI.

Integrating AI into business processes. To achieve maximum effectiveness, AI should be integrated into existing business processes.
This may include automating routine tasks, improving data analytics, and using predictive models to make more informed decisions [11, 12].

Change management. The introduction of AI may cause resistance from employees, especially if they fear job losses. It is important to develop a change management strategy that includes open communication, support from management, and employee involvement in the transformation process.

Cybersecurity of company information systems. As the use of AI increases, so does the risk of cyberattacks. Financial institutions must invest in cybersecurity to protect their data and systems from potential threats. This includes regular security audits and software updates [13].

Partnerships with technology companies. Collaboration with technology companies can accelerate the implementation of AI. Partnerships allow the use of ready-made solutions and technologies, as well as the exchange of experience and knowledge.

Data analysis and decision making. One of the main advantages of AI is its ability to process and analyze huge amounts of data. Financial institutions can use AI to identify patterns and trends, which allows for more informed decisions. This may include assessing customer creditworthiness, analyzing investment risks, and optimizing portfolios [14-16].

Process automation. Automating routine tasks with AI frees up employees' time for strategic work. This may include automating loan application processing, account management, and other administrative processes, which improves the overall efficiency of the organization.

Risk management. AI can significantly improve risk management in the financial sector. AI-based systems can predict potential financial crises, identify fraudulent transactions, and assess risks in real time, allowing a quick response to threats.

Key AI tools in risk management.
Machine learning and deep learning are used to identify risk factors and build models that predict the likelihood of adverse events. Time series analysis is used to forecast market movements and identify trends. Natural language processing (NLP) is used to analyze news, financial reports, and other text data that may affect market risks.

Examples of AI in risk management:
- Credit risk. Bank A uses machine learning models to assess credit risk. Algorithms analyze credit history, solvency, and other factors to predict the probability of loan default. Tools: logistic regression, decision trees.
- Market risk. Trading company B uses deep learning algorithms to analyze market data and predict short-term price movements in stocks and currencies. Tools: recurrent neural networks, long short-term memory (LSTM) networks.
- Liquidity risk. Financial institution C uses AI to predict liquidity risk by analyzing asset and liability turnover and market conditions. Tools: Bayesian networks, time series analysis.
- Operational risk. Company D uses AI-based systems to monitor and manage operational risks, including fraud, cyber attacks, and system failures. Tools: anomaly detection systems, data clustering.

The use of AI in risk management allows financial institutions to identify, assess, and mitigate various types of risk more effectively. These technologies provide deeper data analysis, which leads to more informed and accurate decisions. However, the limitations and challenges related to the accuracy of AI data and models, as well as the ethical aspects of their use, must be taken into account.

Innovative products and services. With the introduction of AI, financial institutions can develop new innovative products and services that meet customer needs. This may involve the creation of new investment instruments, personal finance management applications, or lending platforms.

Customer feedback.
Using AI to analyze customer feedback helps companies better understand their clients' needs and expectations. Companies can use this data to improve their services and products, as well as to develop new offers based on customer preferences.

Investment in research and development. For the successful implementation of AI, it is important to invest in research and development. This allows a company to stay at the forefront of technology and create competitive solutions that meet modern market requirements.

Monitoring and adapting strategies. Finally, it is important to constantly monitor the results of AI implementation and adapt strategies to the data received. Regularly analyzing the effectiveness of the technologies and their impact on the business will allow companies to remain competitive and develop successfully.

4. Research Results

The financial services sector worldwide is one of the leaders in the use and development of AI. However, AI poses numerous technical, ethical, and legal challenges that may undermine the data, cybersecurity, systemic risk, and ethics objectives of financial regulation, particularly regarding black-box issues. As the research in this paper shows, traditional externally focused financial supervision is unlikely to be able to adequately address the risks posed by AI due to: (1) increased information asymmetry; (2) data dependence; and (3) interdependence. Accordingly, even if supervisors have exceptional resources and expertise, overseeing the use of artificial intelligence in the financial sector with traditional methods is extremely challenging. To address this shortcoming, it is necessary to strengthen the internal governance of financial institutions and introduce requirements for personal human accountability.
This approach builds on the existing framework for executive accountability that was developed in the wake of the 2008 global financial crisis and the continuing stream of ethically questionable conduct in finance around the world. The framework should be consistent with broader approaches to data privacy and human-in-the-loop requirements outside finance. From a financial oversight perspective, internal governance could be strengthened through an increased focus on the personal accountability of senior management (or key function holders) for regulated areas and activities as defined for regulatory purposes. Such rules for key function holders, especially if complemented by specific requirements for AI due diligence and explainability, would help ensure that key personnel at financial firms verify that any AI operates in a manner consistent with the personal responsibilities of senior managers. This direct personal responsibility encourages due diligence in studying new technologies, their use, and their impact, and demands fairness and explainability as part of any AI system, with correspondingly severe personal consequences for failure. For a financial services professional with direct responsibility, demonstrating due diligence and explainability will be key to personal protection in the event of regulatory claims.

5. Conclusion

Implementing AI in the financial sector is a complex but necessary process for ensuring competitiveness in the modern world. Successful integration of these technologies requires a comprehensive approach, including staff training, change management, compliance with ethical standards, and cybersecurity. Companies that can adapt to new conditions and use the capabilities of AI effectively will have a significant advantage in the market. It is important to remember that technology is only a tool; success depends on how it is applied to create value for customers and the business as a whole.
Implementing AI in the financial sector opens up new horizons for increasing efficiency, improving customer experience, and optimizing business processes. However, companies must be prepared for the challenges and risks associated with this technology. Applying risk mitigation strategies such as strengthening cybersecurity, ensuring algorithm transparency, and complying with ethical standards will allow financial institutions to integrate AI into their operations successfully while maintaining customer trust and regulatory compliance. The approach to AI implementation must therefore be comprehensive and balanced, which will ensure long-term sustainability and competitiveness in the market.

About the Authors
E. Yu. Shchetinin
Financial University under the Government of the Russian Federation
Email: riviera-molto@mail.ru
ORCID iD: 0000-0003-3651-7629
Scopus Author ID: 16408533100
ResearcherId: O-8287-2017
Doctor of Physical and Mathematical Sciences, Lecturer at the Department of Mathematics
49 Leningradsky Prospekt, Moscow, 125993, Russian Federation

L. A. Sevastianov
RUDN University; Joint Institute for Nuclear Research
Email: sevastianov-la@rudn.ru
ORCID iD: 0000-0002-1856-4643
Doctor of Physical and Mathematical Sciences, Professor at the Department of Computational Mathematics and Artificial Intelligence of RUDN University; Leading Researcher, Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research
6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

A. V. Demidova
RUDN University
Email: demidova-av@rudn.ru
ORCID iD: 0000-0003-1000-9650
Candidate of Physical and Mathematical Sciences, Associate Professor at the Department of Probability Theory and Cyber Security
6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

T. R. Velieva
RUDN University
Corresponding author.
Email: velieva-tr@rudn.ru
ORCID iD: 0000-0003-4466-8531
Candidate of Physical and Mathematical Sciences, Assistant Professor at the Department of Probability Theory and Cyber Security
6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation; 6 Joliot-Curie St, Dubna, 141980, Russian Federation

References
- Hong, J. The Impact of Artificial Intelligence, Machine Learning, and Big Data on Finance Analysis. Advances in Economics, Management and Political Sciences 27, 39-43. doi: 10.54254/2754-1169/27/20231208 (2023).
- Agarwal, A., Singhal, C. & Thomas, R. AI-powered decision making for the bank of the future (McKinsey & Company, 2021).
- Guan, J. Artificial Intelligence in Healthcare and Medicine: Promises, Ethical Challenges and Governance. Chinese Medical Sciences Journal 34, 76-83. doi: 10.24920/003611 (2019).
- Boukherouaa, E. B., Shabsigh, M. G., AlAjmi, K., Deodoro, J., Farias, A., Iskender, E. S. & Ravikumar, R. Powering the digital economy: Opportunities and risks of artificial intelligence in finance 34 pp. (International Monetary Fund, 2021).
- Chan, L., Hogaboam, L. & Cao, R. Applied artificial intelligence in business: Concepts and cases 368 pp. doi: 10.1007/978-3-031-05740-3 (Springer Cham, 2022).
- Santosh, K. C. & Wall, C. AI, Ethical Issues and Explainability-Applied Biometrics doi: 10.1007/978-981-19-3935-8 (Springer Singapore, 2022).
- Charles, V., Rana, N. P. & Carter, L. Artificial Intelligence for data-driven decision-making and governance in public affairs. Government Information Quarterly 39, 101742. doi: 10.1016/j.giq.2022.101742 (2022).
- Duft, G. & Durana, P. Artificial Intelligence-based Decision-Making Algorithms, Automated Production Systems, and Big Data-driven Innovation in Sustainable Industry 4.0. Economics, Management, and Financial Markets 15, 9-18. doi: 10.22381/EMFM15420201 (2020).
- Lee, J. Access to finance for artificial intelligence regulation in the financial services industry. European Business Organization Law Review 21, 731-757. doi: 10.1007/s40804-020-00200-0 (2020).
- Mogaji, E. & Nguyen, N. P. Managers’ understanding of artificial intelligence in relation to marketing financial services: Insights from a cross-country study. International Journal of Bank Marketing 40, 1272-1298. doi: 10.1108/IJBM-09-2021-0440 (2021).
- Truby, J., Brown, R. & Dahdal, A. Banking on AI: Mandating a proactive approach to AI regulation in the financial sector. Law and Financial Markets Review 14, 110-120. doi: 10.1080/17521440.2020.1760454 (2020).
- Xie, M. Development of artificial intelligence and effects on financial system. Journal of Physics: Conference Series 1187, 032084. doi: 10.1088/1742-6596/1187/3/032084 (2019).
- Camacho, J., Couce-Vieira, A., Arroyo, D. & Rios Insua, D. A Cybersecurity Risk Analysis Framework for Systems with Artificial Intelligence Components (2024).
- Lee, J. Access to finance for artificial intelligence regulation in the financial services industry. European Business Organization Law Review 21, 731-757. doi: 10.1007/s40804-020-00200-0 (2020).
- Rajagopal, N. K., Qureshi, N. I., Durga, S., Ramirez Asis, E. H., Huerta Soto, R. M., Gupta, S. K. & Deepak, S. Future of business culture: An artificial intelligence-driven digital framework for organization decision-making process. Complexity, 1-14. doi: 10.1155/2022/7796507 (2022).
- Daiya, H. AI-Driven Risk Management Strategies in Financial Technology. Journal of Artificial Intelligence General Science 5, 194-216. doi: 10.60087/jaigs.v5i1.194 (2024).