1. Introduction
In recent years, Artificial Intelligence (AI) has rapidly evolved from an auxiliary tool into a core element of decision-making in modern organisations. It is applied to front-line tasks such as screening job applications and generating performance feedback. For instance, ChatGPT has been integrated into customer service platforms to automate real-time responses, and AI-powered résumé screening systems use pre-trained algorithms to assess thousands of applications [1, 2]. These changes are expected to bring higher efficiency, but they have also raised urgent questions about trust and fairness in the workplace.
As AI’s involvement in decision-making grows, the boundary between human and machine responsibilities is beginning to blur. This transformation has also created organisational behaviour challenges, such as employees’ difficulty interpreting AI decisions, a lack of transparency, and resistance triggered by hierarchical cultures. These problems have already emerged in workplaces that introduce AI without adequate communication or design considerations, manifesting as employee resistance, ethical disputes, and a decline in morale [3, 4]. Against this backdrop, this study seeks to answer a core question: how is AI reshaping the operational logic of organisations, and what governance strategies are required to adapt to this transformation?
The remainder of this paper is organised as follows: Section 2 discusses typical scenarios of AI application in organisational management, including AI in recruitment, smart performance evaluation, and the role of AI in predicting employees’ emotions and turnover risks. Section 3 identifies the key challenges these applications face, namely difficulty of interpretation, lack of transparency, and employee resistance influenced by organisational culture. Section 4 proposes practical solution pathways, including explainable AI, managerial communication, emotional support mechanisms, and cultural transformation towards participatory governance. Section 5 concludes.
Understanding these dynamics is of utmost importance. AI is not merely a tool; it is also a participant in the human work system. If the integration strategy between humans and machines is not carefully formulated, the potential of AI may be undermined. Therefore, this research contributes to the literature on the interaction between AI and humans. It also provides actionable insights for managers who wish to deploy AI in an efficient, ethical, and people-oriented manner.
2. Typical scenarios of AI application in organisational management
Organisational management now features new characteristics such as data-driven approaches and automated decision-making. AI holds great potential in specific scenarios such as recruitment and performance evaluation [5].
2.1. AI in recruitment: from smart screening to virtual interviews
AI systems are now widely used in various recruitment processes, from résumé screening to interview management [6]. Firstly, Natural Language Processing (NLP) technology is used in the initial screening of résumés to extract matching information from candidates’ submissions. AI systems such as hireEZ can achieve precise matching between candidates and jobs at the semantic level. Moreover, in the pre-interview evaluation stage, platforms such as Pymetrics extract data through cognitive tests to evaluate candidates’ risk tolerance and emotional management abilities. The dimensions of AI evaluation are closely aligned with the specific requirements of positions, enabling more accurate recruitment results [7].
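To make the matching step concrete, the sketch below ranks invented résumé snippets against a job description using TF-IDF cosine similarity. This is a deliberately simple stand-in for the proprietary semantic models that platforms such as hireEZ actually use; the job description and candidate texts are fabricated for illustration.

```python
# Minimal sketch: ranking résumés against a job description with TF-IDF
# cosine similarity. Commercial platforms use far richer semantic models;
# this only illustrates the matching principle. All texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst with Python, SQL and dashboard experience"
resumes = {
    "cand_a": "Built Python ETL pipelines and SQL reports; created dashboards",
    "cand_b": "Managed retail store operations and staff scheduling",
    "cand_c": "Analyst experienced in SQL, statistics and data visualisation",
}

vectorizer = TfidfVectorizer(stop_words="english")
# Fit on the job description plus all résumés so they share one vocabulary.
matrix = vectorizer.fit_transform([job_description] + list(resumes.values()))

# Row 0 is the job description; the remaining rows are the candidates.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(resumes, scores), key=lambda x: -x[1]):
    print(f"{name}: match score {score:.2f}")
```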
HireVue employs multimodal AI technology during the interview process to analyse candidates’ behaviour. The system not only interprets applicants’ language but also captures subtle expressions and movements, such as the frequency of smiles and the duration of gaze pauses. The algorithm weights these multimodal signals to generate predictive interview scores. Unilever reported that after introducing HireVue, the average recruitment duration from job posting to final hire was shortened from four months to two weeks, indicating that AI management systems can substantially improve recruitment efficiency [8].
2.2. Smart performance evaluation: efficiency enhancement
AI application in performance evaluation has transformed previous subjective, manager-dominated evaluation methods into objective and data-driven approaches. AI systems can process large amounts of employee data in real time, such as task completion rates, timeliness, and participation in collaboration tools. These data inputs are used to generate dashboards and performance scores, thereby influencing decisions regarding employee promotions, bonuses, or training opportunities.
AI systems are typically integrated with enterprise platforms such as Slack, Microsoft Teams, Asana or Jira to automatically track metrics including message frequency, response latency, task completion rate, calendar activity, and version control logs [4]. These multi-dimensional data are fed into machine learning models, often using clustering or regression techniques, to generate performance indicators aligned with organisational benchmarks. Some platforms use NLP to assess tone and emotion in workplace communication, identifying potential signs of employee alienation or burnout [9]. These assessments are not one-off but are updated continuously or periodically (e.g. weekly or monthly) and visualised through dashboards to track trends over time at the individual and team levels.
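As a rough illustration of the regression approach described above, the following sketch fits a linear model on invented historical metrics and manager ratings, then scores a new review period. The metrics, data, and model choice are assumptions for illustration, not any vendor’s actual pipeline.

```python
# Hypothetical sketch: deriving a performance indicator from tracked
# collaboration metrics via regression. All metrics and data are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: task completion rate, mean response latency (hours),
# messages per day, on-time milestone ratio.
past_metrics = np.array([
    [0.92, 1.5, 34, 0.88],
    [0.75, 6.0, 12, 0.60],
    [0.85, 2.2, 25, 0.80],
    [0.60, 8.5, 9,  0.55],
])
past_ratings = np.array([4.5, 3.0, 4.0, 2.5])  # historical manager ratings

# Learn how the tracked metrics relate to past human ratings.
model = LinearRegression().fit(past_metrics, past_ratings)

# Score the current review period for one employee.
current = np.array([[0.88, 2.0, 28, 0.82]])
print(f"Predicted performance indicator: {model.predict(current)[0]:.2f}")
```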
2.3. Role of AI in predicting employees’ emotions and turnover risks
AI management systems can predict employees’ emotions and voluntary turnover risk by analysing absence records and system usage data. Some systems offer interventions such as mental health tools or work plan adjustments according to employees’ emotional status. AI can also track behavioural cues such as meeting load and the frequency of interactions with colleagues, while NLP extracts emotional content from text materials such as questionnaires and feedback forms [9]. The vectorised data are then fed into supervised learning models (e.g. random forest classifiers) trained on past turnover data, which estimate each employee’s risk of burnout or voluntary turnover.
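The pipeline just described can be sketched end to end: free-text feedback is vectorised, concatenated with behavioural features, and passed to a random forest trained on historical turnover labels. Everything below is synthetic and illustrative only.

```python
# Minimal sketch of the pipeline described above. All data are synthetic.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

feedback = [
    "I feel supported and enjoy my projects",
    "Workload is unsustainable and meetings never end",
    "Great team, learning a lot every sprint",
    "Exhausted, considering other options",
]
# Behavioural cues: absences last quarter, meetings/week, peer interactions.
behaviour = np.array([[1, 8, 30], [6, 22, 10], [0, 9, 28], [7, 25, 6]])
left = np.array([0, 1, 0, 1])  # historical voluntary-turnover labels

# Vectorise the free text and combine it with the behavioural features.
text_features = TfidfVectorizer().fit_transform(feedback)
X = hstack([text_features, csr_matrix(behaviour)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, left)
# predict_proba yields a turnover-risk score per employee (here, on the
# training rows purely for illustration).
print(clf.predict_proba(X)[:, 1])
```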
Such predictions have both diagnostic and guiding value. Advanced platforms such as Workday Peakon and Microsoft Viva Insights utilise these predictions to trigger personalised intervention paths. These may range from proactive reminders (e.g., suggesting a manager check in) to formal action plans (e.g., workload reallocation or career path re-planning). Recommendations are typically presented to HR staff or department managers through dashboards, often with confidence scores and interpretability layers explaining the key drivers of predictions [4]. Integrating AI into organisational management can reduce voluntary turnover rates by 20% to 30%. When AI is transparent and combined with human supervision, this effect is even more significant [10].
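How a risk score might be routed to an intervention tier can be sketched as a simple thresholding rule. The thresholds, tiers, and function below are hypothetical, not the actual logic of Workday Peakon or Microsoft Viva Insights.

```python
# Hypothetical sketch of threshold-based intervention routing, in the
# spirit of the dashboards described above (not any vendor's real logic).
def route_intervention(employee_id: str, risk: float, confidence: float) -> dict:
    """Map a predicted turnover risk to a recommended intervention tier."""
    if risk >= 0.7:
        action = "formal action plan: workload reallocation / career re-planning"
    elif risk >= 0.4:
        action = "proactive reminder: suggest a manager check-in"
    else:
        action = "no intervention; continue monitoring"
    return {"employee": employee_id, "risk": risk,
            "confidence": confidence, "recommendation": action}

print(route_intervention("emp_042", risk=0.55, confidence=0.81))
```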
3. Issues and challenges of AI application in organisational management
The extensive application of AI in organisational management has also brought about issues and challenges. Recent studies have identified difficulties in AI interpretability, a lack of transparency, and employee resistance triggered by hierarchical cultures.
3.1. Employees’ difficulty interpreting AI
One pressing issue is that employees find it difficult to understand AI. AI applied in organisational management often operates without sufficient explanation, resembling a “black box”. As a result, even systems developed with good intentions may be perceived by employees as biased, which in turn can suppress proactive behaviour. From a technical perspective, the issue originates in the complexity of advanced machine learning algorithms. Models such as deep neural networks and ensemble decision trees learn complex nonlinear patterns in high-dimensional data. They can ingest hundreds of features, ranging from numerical performance indicators to qualitative inputs, and produce a single output through multiple layers of transformation. In such structures, it is often difficult to determine how any single input affects a specific prediction. If employees have little understanding of a model’s internal workings, they will struggle to comprehend AI-generated evaluation results [11].
The issue of interpretability also raises concerns at both the ethical and practical levels. Scholars have noted that algorithmic decisions lacking an interpretable mechanism are particularly detrimental in high-stakes contexts, such as employee evaluation or promotion, because they erode employees’ trust. Without such interpretability, AI systems may misinterpret employee data unchecked, further reducing employees’ trust in the organisation [4]. Similarly, Binns et al. explained that when interpretability mechanisms are insufficient, employees cannot query AI results, leading to feelings of alienation from the organisation [12].
3.2. Lack of transparency
When the internal operation mechanism of AI tools is opaque, employees inevitably question the fairness of AI decisions. This not only undermines trust in these tools but also damages the reputation of the institutions using them.
The low transparency of AI tools is due to the proprietary nature of commercial platforms. Their underlying architecture, training data, and decision rules are usually inaccessible to employees and HR professionals. Even when organisations attempt to enhance transparency through techniques such as Shapley Additive Explanations (SHAP) or Local Interpretable Model-Agnostic Explanations (LIME), the output results are often abstract or technical. For instance, learning that “38% of your performance score is driven by the density of internal communication within the department” may be meaningless unless employees understand how this metric is defined, tracked, and compared.
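To see why such statements are hard to produce meaningfully, consider a minimal sketch of how a percentage attribution like the one quoted above could be derived for a linear scoring model, where each feature’s contribution is simply coefficient times value. The features, coefficients, and values are invented; note that SHAP for linear models additionally centres each feature on its mean, which this sketch omits for brevity.

```python
# Hypothetical sketch: turning a linear model's feature contributions into
# percentage statements like the one quoted above. All values are invented.
# (SHAP for linear models uses coefficient * (value - mean); the centring
# is dropped here for simplicity.)
feature_values = {"communication_density": 0.80, "task_completion": 0.90,
                  "response_latency": 0.40}
coefficients = {"communication_density": 1.2, "task_completion": 0.9,
                "response_latency": -0.5}

contributions = {f: coefficients[f] * v for f, v in feature_values.items()}
total = sum(abs(c) for c in contributions.values())

for feature, contrib in contributions.items():
    share = abs(contrib) / total * 100
    print(f"{share:.0f}% of your performance score is driven by {feature}")
```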
Furthermore, AI-generated evaluations often fail to cover the emotional aspects of human work. AI systems have difficulty quantifying performance in resolving interpersonal conflicts or team collaboration, but these elements are indispensable for achieving fair evaluation. If ignored, they weaken employees’ perception of the fairness of the AI evaluation. Colquitt et al. found that when employees do not understand the background of the AI evaluation or lack channels for expression, they are less likely to accept the results [13]. Supporting this, Binns et al. found that employees who did not receive adequate explanations were often reluctant to accept the assessment results [12]. Similarly, Shin argued that AI decision-making opacity was likely to reduce motivation [3]. Algorithmic inscrutability also undermines the legitimacy of digital systems by preventing effective communication between users and technology [14].
3.3. Employee resistance triggered by hierarchical culture
In organisational cultures where decision-making power is centralised, introducing AI for performance evaluation can make employees feel overly monitored. This sense of surveillance intensifies if staff are not involved in designing or rolling out the system. Burrell pointed out that in hierarchical organisational cultures, employees often accept algorithmic systems out of obedience, but this may lead to feelings of alienation from the organisation [11]. Such contradictions between obedience and unease suggest that hierarchical culture is not conducive to AI adoption.
Empirical research confirms that cultural background influences how employees internalise algorithmic authority. In a cross-national comparison of OECD countries and India, Agrawal et al. found that respondents from India tended to attribute moral responsibility to humans and machines simultaneously [15]. India is known for high-context communication and strict hierarchical norms, and these respondents also showed lower trust in AI decisions.
Hofstede’s cultural dimension research further confirmed these phenomena. In cultures with high uncertainty avoidance, employees demand clearer explanations of algorithmic results. Conversely, in environments with low power distance, AI may be seen as a tool that breaks down hierarchies and promotes fairness [3]. These findings indicate that resistance is closely linked to organisational and social cultural structure. Therefore, effective integration of artificial intelligence requires culturally sensitive and inclusive design practices.
4. Solutions for AI application in organisational management
4.1. Technique solution: enhancing AI interpretability
Improving the interpretability of AI systems can reduce employees’ resistance. Explainable AI (XAI) encompasses modelling techniques and interpretability tools that reveal the reasons behind decisions rather than merely presenting results. From a technical perspective, interpretability can be achieved through three strategies:
1. Using inherently interpretable models, such as decision trees or linear models, for organisational management predictions, because their structures are more amenable to human examination.
2. Feature attribution tools such as SHAP and LIME can assign importance scores to features, which help users understand the impact of changes in input variables (such as the task delay indicator) on the prediction results.
3. Counterfactual explanations, which answer “if this attribute had been different, would the outcome change?” This helps employees envision what actions could influence AI assessments (e.g. “if your communication frequency had been 10% higher, your retention risk would drop by X%”); a minimal sketch follows this list.
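Below is the promised sketch of a counterfactual explanation (strategy 3), using an invented logistic-regression risk model: it compares the predicted turnover risk before and after raising the communication-frequency feature by 10%. Data, features, and model are all assumptions.

```python
# Sketch of a counterfactual explanation on an invented risk model.
# It answers: "if communication frequency were 10% higher, how would the
# predicted turnover risk change?"
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: [communication frequency, task delay indicator].
X = np.array([[20, 0.1], [5, 0.6], [18, 0.2], [4, 0.7], [15, 0.3], [6, 0.5]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = employee left

model = LogisticRegression().fit(X, y)

employee = np.array([[10.0, 0.4]])
baseline_risk = model.predict_proba(employee)[0, 1]

counterfactual = employee.copy()
counterfactual[0, 0] *= 1.10  # raise communication frequency by 10%
cf_risk = model.predict_proba(counterfactual)[0, 1]

print(f"Risk now: {baseline_risk:.2f}; "
      f"with 10% more communication: {cf_risk:.2f}")
```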
Empirical findings show that when interpretability is improved, users report greater trust and willingness to accept AI‑driven decisions. Chaudhary et al. investigated the negative consequences of non-transparent AI and found that organisations which embed interpretability features in employee-facing analytics see higher levels of perceived fairness and lower resistance [16].
4.2. Communication solution: clear explanation, training, and role clarity
Beyond model-level transparency, organisational communication plays a pivotal role in shaping how employees perceive and interact with AI tools. Clear communication strategies—including transparent disclosure of AI’s role, function, and limitations—help reduce uncertainty and foster trust. When employees fully understand the working principle and reasons for the AI application, they are more likely to regard it as a beneficial resource. For example, when organisations emphasise training during the introduction and promotion of AI systems, employees show higher acceptance of AI management systems [17].
Organisations should enhance employee training to build a practical understanding of AI. Training should be repeated regularly, allowing employees to express their views and adjust their working methods accordingly. Research shows that effective communication improves employee performance in the context of AI management. Florea and Croitoru found that when managers clearly explain the purpose and decision-making logic of AI, employees display higher task commitment and acceptance [18]. Similarly, Behn et al. found that organisation-provided training on AI management systems helps increase trust in emotional AI analysis tools [19]. Communication and training must therefore be regarded as essential measures for successful AI integration.
4.3. Trust solution: enhancing emotional support of AI for employees
Employees’ resistance to AI often stems from feelings of being monitored and from the lack of emotional sensitivity in AI systems. To alleviate this resistance, organisations can design AI systems that incorporate elements of emotional support. For example, platforms such as Humu (used by Google and Intel) consider employees’ communication styles and interactions with leaders before providing performance feedback. Employees who prefer directness may receive straightforward messages such as, “You’ve increased your deadline efficiency this quarter by 8%. Keep it up,” while those who respond better to encouragement may receive motivating messages such as, “You really brought a burst of energy into our team this week.” Similarly, platforms like Lattice and Workday (Peakon) allow managers to create custom feedback templates and automate feedback based on communication style or level of engagement.
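A style-aware feedback mechanism of this kind can be sketched as simple template selection. The templates and the style field below are hypothetical and not drawn from Humu’s, Lattice’s, or Workday’s actual implementations.

```python
# Hypothetical sketch of style-aware feedback templating, in the spirit of
# the platforms above (not their actual implementation).
FEEDBACK_TEMPLATES = {
    "direct": ("You've increased your deadline efficiency this quarter "
               "by {pct}%. Keep it up."),
    "encouraging": ("You really brought a burst of energy into the team "
                    "this week! Efficiency up {pct}%."),
}

def render_feedback(employee: dict, pct: float) -> str:
    """Pick a template matching the employee's preferred communication style."""
    template = FEEDBACK_TEMPLATES.get(employee["style"],
                                      FEEDBACK_TEMPLATES["direct"])
    return template.format(pct=pct)

print(render_feedback({"name": "A. Chen", "style": "encouraging"}, 8))
```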
Including human-in-the-loop evaluation alongside AI outputs can promote employees’ acceptance of AI recommendations. Human judgment can also strengthen fairness in AI evaluations. Watanabe et al. found that combining AI-generated feedback with human emotional support improves employees’ self-efficacy and motivation [20]. Leadership support can further help mitigate feelings of isolation and job exhaustion caused by long-term AI monitoring [21]. These findings show that embedding emotional cognition in AI-human interaction design helps foster employees’ well-being and long-term acceptance of AI systems in organisational management.
4.4. Culture solution: cultivating participatory, trusting organisational culture
Improvements in AI system technology or communication can help employees accept AI management. However, for sustainable, long-term application, organisations must also cultivate an inclusive and participatory organisational culture [22]. Participation involves including employees in the selection, training, and practical application of AI tools. It also promotes values such as psychological safety, openness to feedback, and shared learning (mistakes allowed, questions encouraged).
Some practical initiatives include:
1. Employee Involvement in Decision-Making
Establishing cross-functional committees that include employee representatives during AI system selection and rollout can increase legitimacy and perceived fairness [23].
2. Feedback Loops for Continuous Improvement
Encouraging frontline workers to contribute insights or suggest refinements to AI tools creates a sense of ownership and adaptive learning [24].
3. Embedding Psychological Safety into Organisational Norms
Formally integrating values that support open dialogue about AI errors—such as admitting when AI “gets it wrong”—reinforces a collective learning mindset [25].
4. Celebrating Early Success Stories Together
Publicly recognising instances where AI tools have helped individuals or teams—not just improved KPIs—helps shift the narrative from surveillance to support [26].
Ultimately, when employees feel their voices are heard and their concerns are addressed, they are more likely to engage constructively with AI systems. This reinforces a virtuous cycle where trust and participation amplify the positive impacts of technological innovation.
5. Conclusion
AI reshapes organisational behaviour by increasing HRM efficiency, but it also introduces risks such as poor interpretability, low transparency, and hierarchy-driven resistance. To address these, organisations can enhance AI interpretability through XAI, strengthen communication and training, incorporate emotional support mechanisms, and foster participatory cultures. Future research could explore cross-cultural AI acceptance. Policy should support transparent AI governance and algorithmic accountability, ensuring that AI serves organisations sustainably.