Research Article
Open Access
CC BY

The Challenges and Risks Brought by AI to the Digital Economy

Xuehan Wang 1*, Wanqi Hu 2, Xuanle Pan 3
1 Chengdu Foreign Languages School International Department
2 North Cross School Shanghai
3 Living Word Shanghai
*Corresponding author: vicky0113007@163.com
Published on 28 August 2024
AEMPS Vol.106
ISSN (Print): 2754-1169
ISSN (Online): 2754-1177
ISBN (Print): 978-1-83558-541-2
ISBN (Online): 978-1-83558-542-9

Abstract

This article surveys the hazards posed by AI in healthcare and finance, with the goal of exploring the many varieties of risk in these fields. To uncover widespread AI dangers, twenty publications were extensively reviewed against explicit selection criteria. This study aims to provide future researchers in healthcare and in financial institutions with a valuable resource for gaining a general understanding of the complex challenges that artificial intelligence introduces in healthcare and financial environments. By identifying and analyzing these risk types, we encourage evidence-based techniques that can help other academics handle these difficulties successfully and address the obstacles and hazards connected with artificial intelligence.

Keywords:

Artificial intelligence, Digital economy, The healthcare industry, The financial industry


1. Introduction

Artificial intelligence brings humans a great number of opportunities and risks in the context of the digital economy. Artificial intelligence is a field that combines computer science with robust datasets, and the digital economy now plays an important role in the world's economic system. In Russia, for example, over 24 billion rubles have been invested in AI research in less than a decade, signifying the growing momentum in this field.

AI brings humans opportunities. Recent generations of AI-based technologies, characterized by their ability to perform cognitive functions associated with the human mind, have changed the nature of the symbiotic relationship between humans and technology: AI enables a higher degree of interactivity and intelligence than previous technologies, improving human task performance. Artificial intelligence also differs from previous technologies in its ability to act (semi-)autonomously. In addition, the fast-developing conversational AI systems of recent years have reshaped human-machine interaction, making communication with machines commonplace. Among the many artificial intelligence systems, OpenAI's ChatGPT has become a model, demonstrating excellent language generation capabilities. Moreover, the ability to take large amounts of data from the environment and process it using artificial intelligence and machine learning is changing the landscape of the financial industry. AI/ML contributes to the capability to forecast economic growth trends and the risks and challenges of future financial events; to improve the scale of the financial markets; to help governments and other political establishments govern society, improve governance, and strengthen control; and to strengthen prudential supervision, offering central banks new techniques to fulfill their monetary and macro-prudential missions.

Many scholars have researched how AI changes people's lives, but what risks does AI bring to us? In this paper, we discuss the risks and challenges that AI brings to humans in healthcare and finance.

2. Methodology

We reviewed twenty articles written by other scholars, searching for work on AI applications in different areas and connecting it with everyday life in order to identify not only the benefits that AI brings, but also the risks and challenges. We searched for articles mainly through ResearchGate, ScienceDirect, and the International Monetary Fund. We then refined and classified the existing scholarly literature. These articles demonstrate that AI has indeed brought many changes and improvements, and they help us think more deeply about the risks.

3. Risks of AI in Healthcare

3.1. Clinical Risks

First, when utilizing AI tools, noise in the input data can have a substantial impact on the accuracy of AI forecasts. Second, dataset skew, a known difficulty in AI learning, can lead to misclassification, since the statistical distribution of the data encountered in medicine deviates, even if only slightly, from the original dataset used to train the AI; group differences may be to blame for this shift. Finally, predictions are prone to inaccuracy because AI systems cannot adapt to rapid changes in the environment and application context. For example, frequent artifacts in medical imaging may be misinterpreted by AI models, resulting in false positives.
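To make the first two failure modes concrete, the following minimal sketch (our own illustration, not drawn from the reviewed studies; all data and parameters are synthetic) trains a simple classifier and shows how both input noise and a shifted label distribution erode its accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, input_noise=0.0, threshold=0.0):
    """Synthetic 'patients': labels follow a fixed rule on clean features."""
    X = rng.normal(size=(n, 5))
    y = (X.sum(axis=1) > threshold).astype(int)
    X = X + rng.normal(0.0, input_noise, X.shape)  # e.g., imaging artifacts
    return X, y

X_train, y_train = make_data(2000)
model = LogisticRegression().fit(X_train, y_train)

X_clean, y_clean = make_data(500)                   # matches training data
X_noisy, y_noisy = make_data(500, input_noise=1.5)  # noisy measurements
X_skew, y_skew = make_data(500, threshold=1.0)      # skewed label rule

print("clean accuracy:    ", model.score(X_clean, y_clean))
print("with input noise:  ", model.score(X_noisy, y_noisy))
print("under dataset skew:", model.score(X_skew, y_skew))
```

In a real clinical setting the labeling rule and the sources of noise are far more complex, but the mechanism is the same: a model's accuracy is only assured on data resembling its training data.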

3.2. Technical Risks

3.2.1. Abuse of AI in Medicine

Medical AI, like any other health technology, is prone to misapplication and human error. Even when AI algorithms are highly trustworthy and reliable, end users such as patients, healthcare professionals, and physicians must utilize them appropriately in the real world; incorrect use can result in incorrect medical assessments and judgments, potentially causing injury to the patient [1]. Therefore, doctors and the general public need to know when and how to use medical AI technologies; simply having access to them is not sufficient. Often, computer and data scientists develop these systems without much input from end users or clinical specialists. As a result, emerging AI systems require learning and adaptation on the part of users, including nurses, doctors, data managers, and patients, leading to complicated interactions and experiences. This complexity can hinder the effective application of AI algorithms in day-to-day clinical practice, reducing the potential for informed decision-making and increasing the likelihood of human error [1].

3.2.2. Risks of Bias

Prejudices and inequities in medical care underline the need to address structural issues and human biases in healthcare systems worldwide. Only if we recognize these disparities and actively endeavor to eliminate them can we work toward a healthcare system that is just and equitable regardless of one's background or characteristics. There is growing concern that if future AI solutions are not adequately adopted, appraised, and managed, they could embed and potentially amplify the structural gaps and human biases that contribute to healthcare disparities [1].

3.2.3. Issues with Security and Privacy

The ongoing development of AI solutions and technology in the healthcare sector may pose risks to patient and citizen security, data privacy, and confidentiality, especially in light of the COVID-19 pandemic. Among these concerns are sensitive data exposure and abuse, which can lead to rights violations and the exploitation of patient data for purposes unrelated to medicine. AI's use in healthcare also raises data security issues, including the risk of identity theft and privacy violations from data breaches. Cyberattacks on AI systems and on AI-controlled personal medical devices are also serious problems that expose technical vulnerabilities [1].

3.2.4. Diminished Transparency

Medical AI has come a long way, but many people, including professionals, still find current algorithms confusing and hard to grasp, which makes it difficult to fully trust and use these technologies. Since the health and well-being of the public are at stake in sensitive fields like medicine and healthcare, this lack of transparency is especially worrisome. As a consequence, there is a considerable trust deficit associated with AI, especially in the medical domain [1].

3.2.5. Gaps in AI Accountability

The term 'algorithmic accountability' has gained significance in addressing the legal implications of introducing and using AI algorithms in various aspects of human life [2]. Contrary to what the term might imply, it underlines that algorithms are the product of human design and machine learning, and that whoever developed, introduced, or used them bears responsibility for any mistakes or wrongful behavior; AI systems cannot be held ethically or legally responsible for their own conduct. The uptake, dependability, and potential uses of medical AI in healthcare depend on accountability. Even if they did not build the algorithms, doctors may be hesitant to adopt AI solutions if they can be held accountable for medical errors caused by AI. Similarly, if patients believe that no one can be held accountable for potential harm caused by AI, they may begin to mistrust the system.

4. Risks of AI in Finance

4.1. Embedded Bias

Talk of the possibility of inherent bias has been sparked by the growing application of AI/ML in the highly regulated banking industry, where public confidence is crucial. Friedman and Nissenbaum (1996) define embedded bias as the tendency of computer systems to unfairly and persistently favor some individuals or groups over others. AI/ML customer classification techniques may introduce such bias into the financial industry by differentiating prices or service standards. Bias in AI/ML decisions is often the result of biased training data drawn from existing biased processes and datasets, which teaches AI/ML models to be biased too [3]. Inaccurate and inadequate information, or "data bias," can exacerbate financial marginalization and foster mistrust of technology, particularly among the poorest. Bias can arise from data collection in two ways (see the sketch after this list):

a. The system may have been trained with inaccurate or insufficient data. Predictive algorithms that decide loan approvals, for instance, favor groups that do better in the training data, since forecasts for those groups carry less uncertainty.

b. The outputs may validate pervasive prejudice. For instance, because its internal recruitment tool had been trained on past hiring decisions that favored men over women, Amazon found that it was rejecting female candidates.
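The following minimal sketch (our own illustration; the feature names, thresholds, and group labels are invented) shows mechanism (b) end to end: a model trained on historically biased approval decisions reproduces the disparity on new applicants, which a simple demographic-parity check can surface:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)                # 0 = group A, 1 = group B
income = rng.normal(50 - 5 * group, 10, n)   # feature correlated with group
# Historical decisions were biased: group B faced a higher income bar.
approved = (income > 45 + 5 * group).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"predicted approval rate, group A: {rate_a:.2f}")
print(f"predicted approval rate, group B: {rate_b:.2f}")
print(f"demographic parity difference:    {rate_a - rate_b:.2f}")
```

Because the label itself encodes the historical bias, no amount of model tuning removes the disparity; it must be caught by auditing the data and the predictions.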

4.2. Unboxing the “Black Box”: Explainability and Complexity

An essential concern is the interpretability of AI/ML system outputs, particularly in the financial industry. As ML models are not interpretable by humans directly, they are sometimes referred to as "black boxes." This characteristic may make it difficult to assess if ML conclusions are appropriate, expose companies to risks such as skewed data, incorrect modeling techniques, or poor decisions, and reduce trust in the soundness of the decisions made. Explainability is a multifaceted, intricate concept. For several reasons, machine learning models are sometimes viewed as "black boxes":

a. They are difficult to understand and intricate.

b. Their input signals may not be fully known or observable.

c. Instead of being a single, stand-alone model, they are an ensemble of models.

Furthermore, greater explainability might make it easier for outsiders to manipulate the algorithm and jeopardize the stability of the financial system.

A model's ability to approximate various functions generally grows with the number of parameters, while its explainability decreases. Machine learning models are thus more flexible and accurate, but less explainable than, for example, linear models, which produce easily interpretable results but are less accurate [3].
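As a rough illustration of this trade-off (our own sketch, using scikit-learn and synthetic data rather than anything from [3]), a linear model exposes its reasoning directly through its coefficients, while a more flexible ensemble must be probed indirectly with post-hoc tools such as permutation importance:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

linear = LogisticRegression().fit(X, y)
# The linear model's reasoning is its coefficients: one weight per feature.
print("linear coefficients (directly interpretable):", linear.coef_.round(2))

ensemble = GradientBoostingClassifier(random_state=0).fit(X, y)
# The ensemble of trees is opaque; importance must be estimated post hoc by
# shuffling each feature and measuring the drop in accuracy.
imp = permutation_importance(ensemble, X, y, n_repeats=5, random_state=0)
print("permutation importances (post-hoc estimate):", imp.importances_mean.round(2))
```

The post-hoc estimate describes which inputs matter on average, but unlike the coefficients it does not reveal how the model combines them in any single decision.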

4.3. Cybersecurity

Adoption of AI/ML creates new and distinct cyber hazards and broadens the spectrum of cyber threats. AI/ML systems are susceptible to novel attacks in addition to the usual threats posed by people or by bugs in software. Such attacks exploit the intrinsic limitations of AI/ML algorithms by manipulating data at a specific point in the AI/ML lifecycle; attackers can use these manipulations to evade detection, induce false conclusions, or extract data. Owing to their intricacy and their possible influence on financial sector institutions, machine learning models require continuous monitoring to guarantee that these kinds of attacks are reliably identified and promptly addressed.
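A minimal sketch of one such attack follows (our own toy example, not taken from [3]; the "fraud detector" is a hand-built logistic model with invented weights). A small, targeted perturbation of the input, stepped against the sign of the model's weights in the style of fast-gradient methods, can flip the classifier's decision:

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=10)  # stand-in weights for a learned fraud detector
b = 0.0

def fraud_score(x):
    """Logistic model's probability that transaction x is fraudulent."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Draw candidate transactions until one is confidently flagged as fraud.
x = rng.normal(size=10)
while fraud_score(x) < 0.9:
    x = rng.normal(size=10) + 0.5 * w

# Evasion attack: nudge each feature against the sign of its weight.
eps = 0.8
x_adv = x - eps * np.sign(w)

print(f"original score:  {fraud_score(x):.3f}")    # flagged as fraud
print(f"perturbed score: {fraud_score(x_adv):.3f}")  # may now slip past
```

Real attacks target far more complex models and may also poison training data rather than inputs, but the principle, exploiting the model's own decision geometry, is the same.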

4.4. Data Privacy

AI/ML raises novel and distinct privacy concerns. Big data privacy issues were well known even before AI and ML became widely used, and many technologies have been created to help preserve data subjects' privacy and anonymity. Globally, legal data-policy frameworks are being developed to deal with these problems. Nevertheless, preventing leaks of information from the training set presents additional privacy issues. AI/ML may, for instance, make inferences (i.e., infer identities from behavioral patterns) that uncover anonymized data. Similarly, AI/ML may memorize information about individuals in the training set after the data is used, or its output may leak sensitive data directly or by inference [3].
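One widely used mitigation, shown below as a minimal sketch (our own illustration; the salary data and parameters are invented, and the text above does not prescribe this technique), is the Laplace mechanism from differential privacy: noise calibrated to a query's sensitivity bounds how much any single individual's record can influence, and thus leak through, a published result:

```python
import numpy as np

rng = np.random.default_rng(3)
salaries = rng.normal(60_000, 15_000, 1_000)  # hypothetical sensitive records

def dp_mean(values, lower, upper, epsilon):
    """Release a differentially private mean of values clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # max effect of one record
    noise = rng.laplace(scale=sensitivity / epsilon)
    return clipped.mean() + noise

print(f"true mean:              {salaries.mean():,.0f}")
print(f"private mean (eps=0.1): {dp_mean(salaries, 0, 200_000, 0.1):,.0f}")
print(f"private mean (eps=1.0): {dp_mean(salaries, 0, 200_000, 1.0):,.0f}")
```

Smaller epsilon gives stronger privacy at the cost of accuracy, which is precisely the trade-off financial institutions must weigh when releasing model outputs derived from customer data.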

4.5. Impact on Financial Stability

The banking industry will undergo a radical change as a result of the widespread use of AI/ML systems, and the full extent of this influence on financial stability is still unknown. As mentioned above, AI/ML systems can, on the one hand, increase efficiency through carefully designed and tested algorithms that adhere to strict controls to limit risks and performance issues; they can also enhance regulatory compliance; better assess, manage, and price risks; and provide new tools for prudential supervision and enforcement. All of these benefits support financial stability. On the other hand, because of their opacity, susceptibility to manipulation, robustness problems, and privacy concerns, AI/ML systems present novel and distinct hazards that might erode public confidence in the security and integrity of AI/ML-powered financial systems. Furthermore, AI/ML may introduce new sources and channels of systemic risk.

4.6. Fairness of AI-based Models

AI systems can reproduce and magnify biases present in the data used to train them. When training data are not carefully examined, anomalies and false patterns in the data can cause AI models to draw incorrect and prejudiced conclusions that reinforce existing discrimination and bias in society. Moreover, historical data, which is typically used for AI and ML training, has intrinsic limits when it comes to accurately projecting the future, particularly when significant extreme events are absent from the financial data currently available. This raises the possibility that an AI model will malfunction in an emergency.

4.7. Insufficient Responsibility for AI Results

The lack of responsibility for AI outputs is one of the primary dangers of utilizing AI systems within financial institutions. This becomes more troublesome when AI is used to make key judgments with serious consequences, such as assigning an inaccurate negative credit score, which might prevent access to a loan. It can be difficult to pinpoint the individual responsible for significant AI choices that are based on skewed or faulty data or incorrect training. Machine learning and other artificial intelligence techniques use past training data to learn how to behave in different scenarios, and they frequently update their training materials and databases to incorporate new information. Two primary challenges must be solved to build justified trust in these technologies. First, choices are made automatically, without human inspection, so errors may go undetected. Second, auditors might not always be able to see how decisions are made. A wide range of processes, including IT, HR, damage assessment, and legislative reform, employ artificial intelligence; AI systems can quickly identify petitions, policies, and the changes resulting from those policies, and they make decisions quickly as well. Concerns have been raised about security, social, economic, and political risks, as well as decision-making accountability. This further erodes trust in AI-based systems and reinforces the need for AI transparency and explainability [4].

4.8. Job Displacement

When AI is widely used in the financial sector, especially in commercial banks, jobs will almost certainly be lost. Financial organizations will need fewer workers as technology takes over routine tasks, which could mean fewer recruitment drives, early retirements, and possibly even layoffs. The resulting discontent among bank employees could produce productivity losses that offset some of the gains from technological advancement [5].

Artificial intelligence may be able to automate a large number of the non-routine jobs that people now undertake. This might result in inequality, polarization in the labor market, and large changes in the demand for labor. For example, depending on how AI automates non-routine activity, the relative job growth in high-wage and low-wage occupations may change. This trajectory has the potential to generate economic instability, worsen existing imbalances, and result in major changes to the labor force. Though AI capabilities are more in demand than ever in the UK banking sector, especially in financial trading, estimates for the next 5, 10, and 20 years show large predicted net employment losses. This prompts worries about job losses and the need for financial institutions and legislators to create plans to lessen the unfavorable effects [7].

4.9. Systemic Risk

Although there is currently limited evidence, researchers and practitioners have warned that using AI can increase systemic risk in the finance sector [6]. Algorithmic trading, for example, relies heavily on modern artificial intelligence technology, which enables computers to independently learn and adjust their trading strategies. This can heighten volatility in turbulent markets, which can have spillover effects and increase systemic risk as financial markets grow more interconnected. The researchers also warn that the similarity of AI outputs could make the trading behavior of market participants far more uniform when they rely on comparable AI-based models trained on large amounts of similar financial data, upsetting the financial system's stability.
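The herding mechanism can be illustrated with a toy simulation (our own sketch, not drawn from [6]; all parameters are invented). When agents trade on noisy copies of a shared model signal, raising the correlation between their signals sharply increases the volatility of the simulated price path:

```python
import numpy as np

rng = np.random.default_rng(4)
n_agents, n_steps = 100, 250

def simulate(signal_correlation):
    """Volatility of a price driven by agents trading on a shared signal."""
    price, path = 100.0, []
    for _ in range(n_steps):
        common = rng.normal()  # the shared model's output this period
        # Each agent sees a mix of the shared signal and private noise.
        signals = (signal_correlation * common
                   + (1 - signal_correlation) * rng.normal(size=n_agents))
        net_demand = np.sign(signals).sum() / n_agents  # net buy/sell pressure
        price *= 1 + 0.01 * net_demand
        path.append(price)
    return np.diff(np.log(path)).std()

print("volatility, mostly independent models:", round(simulate(0.1), 4))
print("volatility, highly similar models:    ", round(simulate(0.9), 4))
```

With independent signals, individual trades largely cancel out; with similar models, agents buy and sell in unison, and the aggregate order flow swings the price far more violently.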

5. Conclusion

In conclusion, this comprehensive literature review examined 20 articles on the risks of artificial intelligence in healthcare and finance. Through systematic analysis and summary of the literature, the outstanding risks and challenges of artificial intelligence are revealed. As artificial intelligence continues to advance and permeate the healthcare and financial sectors, the underlying dangers must be recognized and managed. Going forward, the conclusions of this analysis can serve as a springboard for future research, encouraging researchers to dive further into specific subcategories and investigate new concerns that may emerge as AI applications expand. As the technology evolves, we should remain aware of the potential risks and possibilities that artificial intelligence may bring to a variety of industries. Building on this detailed examination, researchers in AI for healthcare may promote a more enduring, patient- and consumer-centric AI-driven system.

Authors’ Contributions

Wanqi Hu and Xuanle Pan contributed equally to this work and should be considered co-second authors.

References

[1]. Muley, A. (2023). Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework. Retrieved from https://www.researchgate.net/publication/373443837_Risk_of_AI_in_Healthcare_A_Comprehensive_Literature_Review_and_Study_Framework

[2]. Y. (2021). Socio-ethical risk, lack of transparency, explainability. Data Fusion: A Mini-Review, Two Showcases and Beyond. Lecture.

[3]. International Monetary Fund, & AlAjmi, K. (2021). Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance. International Monetary Fund.

[4]. Maple, C., Szpruch, L., Epiphaniou, G., Staykova, K., & Singh, S. (2023). The ethics of Artificial Intelligence for the sustainable development goals. Springer.

[5]. P, J. (2021). MSG Management Study Guide. Retrieved from https://www.managementstudyguide.com/disadvantages-of-artificial-intelligence-in-commercial-banking.htm

[6]. Danielsson, J., Macrae, R., & Uthemann, A. (2022). Artificial intelligence and systemic risk. Journal of Banking & Finance. Retrieved from https://www.sciencedirect.com/

[7]. The Potential Impact of Artificial Intelligence on UK Employment and the Demand for Skills.

Cite this article

Wang, X.; Hu, W.; Pan, X. (2024). The Challenges and Risks Brought by AI to the Digital Economy. Advances in Economics, Management and Political Sciences, 106, 105-110.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

About volume

Volume title: Proceedings of the 3rd International Conference on Business and Policy Studies

ISBN: 978-1-83558-541-2(Print) / 978-1-83558-542-9(Online)
Editor: Arman Eshraghi
Conference website: https://www.confbps.org/
Conference date: 27 February 2024
Series: Advances in Economics, Management and Political Sciences
Volume number: Vol.106
ISSN: 2754-1169(Print) / 2754-1177(Online)