A framework for accountable AI-Human hybrid analytical thinking and decision making
Alan Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator.
Russell & Norvig, 1995
The aim of this article is to develop a human-AI hybrid analytical thinking framework for strategic decisions under uncertainty. Strategic value is created not by handing decisions to AI, but by designing a disciplined hybrid workflow in which AI expands analytical reach while humans preserve causal reasoning, ethical judgement, contextual interpretation, and accountability.
Artificial intelligence (AI) is poised to significantly alter societal dynamics, and it is rapidly changing decision-making processes across diverse sectors. Nonetheless, there is no clear agreement among experts regarding the trajectory of AI’s future. Existing research suggests that, rather than replacing human judgment, businesses are using AI in hybrid form, where humans and machines interact to produce outcomes (Chen et al., 2023; H. Liu et al., 2021; Reverberi et al., 2022). Early expectations positioned AI as a substitute for human cognition, particularly in analytical tasks; in practice, one of the most compelling gains of thinking with AI is scale. Many believe AI will eventually outperform humans in reasoning and decision-making as data access grows, but current opinion also favours complementary roles for humans and AI (Jones, 2024; Trammell & Korinek, 2023).
AI-powered research tools have revolutionised the analysis of vast collections of literature that would otherwise be impossible to process. This rapid advancement of artificial intelligence has fed a widespread fear that machines will replace human intelligence. Yet human and machine intelligence are fundamentally different and complementary. While algorithms excel at processing and extracting relevant information at scale, human expertise remains indispensable for interpreting these results and providing meaningful context (Jarrahi et al., 2022). Machines excel at repetitive, routine tasks in closed systems where the rules are clear. Humans, in contrast, bring creativity, intuition, and the ability to handle unpredictable, open-ended situations. By combining these two types of intelligence, organisations can create “hybrid intelligence”, which performs better than either humans or machines could achieve alone.
The Need for Human Intelligence in Strategic Decisions
Many organisations today are deploying artificial intelligence to perform tasks such as screening job applications, forecasting demand, pricing products, managing risk, and surfacing strategic decision options. According to a McKinsey Global Survey (Cao et al., 2024), AI adoption has surged worldwide, reshaping how decisions are made. But according to Wu et al. (2023), strategic decisions are inherently uncertain, ambiguous, risky, and complex in ways that resist algorithmic encapsulation. They require reasoning not just about what has happened, but about what could happen in unseen circumstances.
Strategic decision making has three defining features:
- Managers do not merely predict outcomes; they intervene in systems. That means they must reason about what will happen if a chosen action or alternative is taken.
- Business decisions are value-laden: leaders must decide which metrics matter and how to balance profit against fairness or reputation.
- Businesses operate in dynamic environments where competitors react, customers learn, regulators intervene, supply chains break, fraudsters adapt, and macro conditions shift.
Felin & Holweg (2024) argued that the analogy between AI and human cognition breaks down sharply when it comes to the emergence of novelty and new knowledge under uncertainty. Large language models are fundamentally trained to predict the next word given a corpus of past text. An LLM gets at truth and knowledge via a statistical exercise: true claims tend to be mentioned more frequently in the corpus than false ones. Its outputs are probabilistically drawn from the statistical associations of words it encountered during training, a process that mirrors existing knowledge rather than generating new theory. This implies that when market conditions diverge from historical patterns, AI-augmented strategy may not simply underperform; it may confidently recommend paths that are structurally wrong. This is where human cognition offers capabilities that are fundamentally different in kind, not just in degree.
Humans may be limited in how much data they can process, but they excel at rare decisions where data is scarce. This is possible because human cognitive thinking combines theory-based logic and forward-looking causal reasoning with mental models of the problem (Morris et al., 2023). Causal reasoning is the ability to distinguish causes from correlations and then reason about the effects of interventions that have never been observed.
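To make the correlation-versus-intervention distinction concrete, the toy simulation below is a minimal sketch, not drawn from the cited studies: a hidden confounder (overall market demand, an invented variable) creates a strong correlation between ad spend and sales that vanishes once ad spend is set by intervention.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hidden confounder: overall market demand drives both ad spend and sales.
market_demand = rng.normal(0, 1, n)

# Observational world: ad spend follows demand; sales follow demand only.
ad_spend = 2.0 * market_demand + rng.normal(0, 1, n)
sales = 3.0 * market_demand + rng.normal(0, 1, n)

print("Observed correlation(ad_spend, sales):",
      round(np.corrcoef(ad_spend, sales)[0, 1], 2))  # strong, ~0.85

# Interventional world: ad spend is *set* externally (the do-operator),
# breaking its dependence on demand. Sales do not move with it, because
# ads have no causal effect in this toy model.
ad_spend_do = rng.normal(0, 1, n)
sales_do = 3.0 * market_demand + rng.normal(0, 1, n)

print("Correlation under intervention:",
      round(np.corrcoef(ad_spend_do, sales_do)[0, 1], 2))  # ~0.0
```

A model trained only on the observational data would conclude that changing ad spend changes sales; reasoning about the intervention shows it would not.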
This capacity shows itself in identifying and structuring a business problem from symptoms and complaints, asking a question that has never been asked, and reasoning about the importance of evidence that does not yet exist in any training set.
This is supported empirically by a systematic literature review by Merioumi et al. (2025), which surveyed the integration of AI in decision-making across business contexts and concluded that human judgment remains crucial precisely in situations of high uncertainty. Moreover, business environments do not obey the statistical regularities of training datasets. Markets face discontinuities, technological disruptions, geopolitical shocks, pandemics, and regulatory pivots that produce situations without historical precedent. Leadership roles therefore involve interpreting ambiguous information and navigating dynamic relational environments. AI’s fundamental reliance on historical data presents significant limitations when confronting such unprecedented events because, as Dattijo & Jo (2025) argue, AI pattern recognition cannot extrapolate beyond the boundaries of what has previously occurred. Relying on it alone also produces homogeneous strategies that lack competitive advantage. The human capacity for theory-based logic and causal reasoning, by contrast, allows for more targeted searching and strategic “hacking” of competitive markets (Morris et al., 2023).
Firms do not just want to know which customers churn, which workers quit, or which assets fail; they want to know what action will change those outcomes.
Widespread business adoption of AI does not make human intelligence less important; it makes better-defined human intelligence more important. Grace et al. (2024) further highlighted a structural limitation in how AI systems are currently built and governed: modern AI models operate through high-dimensional, non-linear transformations, so even when outputs are accurate it is difficult to trace the internal reasoning. This makes high-value strategic AI implementation not a question of “human versus AI” but of careful organisational design: who frames the problem, who receives the recommendation, when cases escalate, how drift is detected, and who owns the consequences. Thus, the strongest case for keeping humans central is not that AI is useless, but that AI cannot be held accountable for its choices precisely where it is weakest.
Building a Human-AI Hybrid Strategic Decision Making Framework
Strategic decision making is a form of decision making under uncertainty. Humans and machines cannot be directly compared in intelligence, as they have different strengths and weaknesses (Bolander, 2019). While contemporary AI tools are designed to save time and improve accuracy, they often fail to push human cognitive thinking further. This is primarily due to:
- hidden biases,
- struggles with novel situations, and
- prioritization of user engagement over helpfulness.
The debate over whether AI should replace human judgment in business has largely been settled by the evidence: neither humans nor AI alone consistently outperforms a well-designed partnership between the two (Vaccaro et al., 2024). The harder and more urgent question is not whether to combine them, but how. The practical implication is that businesses should pair AI-driven quantitative analytics with human qualitative judgement in a layered decision system, and to make this partnership work across sectors, human trust must be calibrated. This article therefore proposes a four-pillar framework for building a human-AI decision-making workflow that is durable, sector-sensitive, and grounded in evidence. It operationalises a pipeline that moves from AI-driven quantitative forecasting through structured qualitative validation, bias identification, and targeted model recalibration: a closed-loop system in which AI-driven quantitative forecasting and structured multi-stakeholder qualitative judgement continuously calibrate each other, with targeted node-level model updates rather than full retraining.

1. Decision Classification
The foundational error of most human-AI integration efforts is treating all decisions as the same kind of problem. Li & Tian (2026) identified four distinct paradigms of human-AI collaborative decision-making.

Each paradigm requires a different balance between AI and humans. Deploying the wrong mode in the wrong context is a structural failure that no amount of algorithmic sophistication can rescue. Therefore, the first practical step in building any human-AI framework is to classify each decision along two dimensions:
- Decision Structure: how well-defined the data model, inputs, rules, and success criteria are. Highly structured decisions with clear data requirements, such as fraud detection, inventory replenishment, and demand forecasting, are natural candidates for AI-led processing.
- Decision Stake: what the cost of an error is, and who bears it. Sauer & Burggräf (2025) found that a structured approach to classifying decisions by complexity and outcomes is essential before any automation is introduced.
For structured, high-volume, data-rich, low-stakes decisions, prioritise AI-led execution with human monitoring. For decisions with moderate complexity and stakes, prioritise a human-AI hybrid with sequential information presentation. For strategic decisions that are novel and ethically sensitive, prioritise a human-led process with AI in a scouting and synthesis role, never in a deciding role. A minimal sketch of this triage logic follows.
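As an illustration only, the sketch below encodes the triage as a simple lookup in Python. The mode labels follow the text above; the exact thresholds, and the choice to escalate high-stakes structured decisions to human-led review, are assumptions rather than findings from the cited studies.

```python
from enum import Enum

class Mode(Enum):
    AI_LED = "AI-led execution with human monitoring"
    HYBRID = "human-AI hybrid with sequential information presentation"
    HUMAN_LED = "human-led with AI in a scouting and synthesis role"

def classify_decision(structure: str, stake: str) -> Mode:
    """Map decision structure ('high'/'medium'/'low') and stake
    ('low'/'medium'/'high') to a collaboration mode."""
    if structure == "high" and stake == "low":
        return Mode.AI_LED
    # Novel (low-structure) or high-stakes decisions stay human-led.
    if structure == "low" or stake == "high":
        return Mode.HUMAN_LED
    return Mode.HYBRID

print(classify_decision("high", "low"))      # fraud screening  -> AI-led
print(classify_decision("medium", "medium")) # pricing change   -> hybrid
print(classify_decision("low", "high"))      # market entry     -> human-led
```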
2. Role Clarity
Once decisions are classified, assign specific roles that exploit the irreducible strengths of human cognitive thinking and AI’s analytical capability at scale. Mentzas et al. (2021) demonstrated that AI systems can identify non-linear patterns beyond human cognitive span, while Huang and Peissl (2023) identified tacit knowledge, contextual judgment, empathy, and social intelligence as capacities AI systems structurally lack. The human role in a well-designed framework is therefore not residual supervision but substantive and irreplaceable in specific cognitive territories. Novel business problems require the ability to map relational structures from one domain to another and apply lessons from prior experience; especially in strategic and leadership decisions, this is the mechanism by which new options are generated (Jarrahi et al., 2022). These capacities make the difference between a correct recommendation and a defensible, contextually appropriate decision. Thus, the role architecture must make these distinctions operationally concrete.
Neither AI nor the human counterpart should perform the other’s function. The boundary between AI and human roles is not a matter of organizational preference but of cognitive capability, and the framework must enforce it structurally.
Defining clear roles is a necessary but insufficient condition for effective human-AI collaboration. The sequence and form in which AI outputs are presented to human decision-makers also shape whether the assigned roles are actually exercised.
3. Interaction Design
This pillar is the most technically demanding and the most frequently neglected. It is not enough to assign roles; the sequence and form in which human and AI outputs are exchanged have a profound effect on decision quality. Gomez et al. (2025) found that the core failure happens when AI recommendations are presented to a human before they have processed the problem independently, prematurely anchoring the human’s reasoning. Anchoring is a well-known phenomenon in which a person’s judgment is disproportionately influenced by initial or partial information. Once anchored, humans either defer uncritically or react reflexively, but fail to engage in genuine independent reasoning, because the AI has already bounded the problem for them. Therefore, the interaction should require the human to first commit to a preliminary judgment and only then receive the AI’s output as a second opinion. P. Liu et al. (2025) found that in clinical settings, simultaneous independent review of AI and human outputs led to better results than a sequential review in which human judgement was influenced by AI results.
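One way to enforce this ordering in software is sketched below; the class and method names are invented for illustration, and a real system would also log both judgments for later calibration audits.

```python
class SequencedReview:
    """Withholds the AI recommendation until the human commits first,
    so the human's initial judgment cannot be anchored by the model."""

    def __init__(self, ai_recommendation: str):
        self._ai_recommendation = ai_recommendation
        self._human_judgment: str | None = None

    def commit_human_judgment(self, judgment: str) -> None:
        self._human_judgment = judgment

    def reveal_ai_recommendation(self) -> str:
        if self._human_judgment is None:
            raise PermissionError(
                "Commit an independent human judgment before viewing the AI output."
            )
        return self._ai_recommendation

review = SequencedReview(ai_recommendation="approve, 78% confidence")
review.commit_human_judgment("approve with conditions")
print(review.reveal_ai_recommendation())  # readable only after commitment
```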
The next imperative in interaction design is transparency. Von Zahn et al. (2025) demonstrated that Explainable AI (XAI) affects not only human trust in AI outputs but also the quality of human metacognitive monitoring, that is, the accuracy with which humans assess their own confidence in a decision. When AI explains its reasoning, humans become better calibrated: they are more likely to override the AI when it is wrong, and more likely to accept its recommendations when it is right. Furthermore, Wen et al. (2025) found that the interpretability of AI plays a key role in the formation of human trust. Effective human-AI decision-making requires knowing not just what the AI recommends, but under what conditions its recommendations are reliable and when they should be treated with scepticism. Zhang & Lee (2025) further found that change management lacking clear communication and trust leads to employees rejecting AI integration altogether. Workers need to understand what AI is, what it is not, and what applying critical thinking to its outputs means in practice.
An AI that says “approve, with 78% confidence based on payment history and income ratio” is more useful than one that simply says “approve”.
4. Drift Detection and Governance
The fourth pillar addresses the governance problem created when AI-assisted decisions continue to operate after deployment in changing business environments. A hybrid system should not assume that a model remains valid simply because it performed well during development. Markets shift, stakeholders react, regulations change, and operational patterns evolve. Governance is therefore the institutional discipline that keeps the human-AI workflow accountable, auditable, and responsive to new evidence. Batool et al. (2025) similarly emphasise that human involvement is necessary because AI systems must remain adaptable to changing ethical, social, and legal contexts.
The technical foundation for this governance layer can be a directed acyclic graph (DAG). In simple terms, a DAG is a causal map in which variables are represented as connected nodes, and the arrows show assumed directional relationships among them. In a business forecasting model, these nodes may represent demand volume, price sensitivity, income level, supply delay, customer churn risk, or fraud probability. Modeling the system in this way prevents the AI component from becoming a single opaque block. It allows the organisation to monitor specific variables and recalibrate only the affected node when evidence changes, rather than retraining the whole model every time performance falls.
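A minimal sketch of such a causal map, assuming the networkx library and using illustrative node names taken from the examples above:

```python
import networkx as nx

# Directed acyclic graph: arrows encode the assumed causal direction.
dag = nx.DiGraph()
dag.add_edges_from([
    ("income_level", "price_sensitivity"),
    ("price_sensitivity", "demand_volume"),
    ("supply_delay", "demand_volume"),
    ("demand_volume", "customer_churn_risk"),
    ("payment_history", "fraud_probability"),
])

# Reject cycles at design time: a causal map must be acyclic.
assert nx.is_directed_acyclic_graph(dag)

# Node-level governance: when evidence changes for one variable, list
# what sits directly downstream and therefore needs recalibration review.
print(list(dag.successors("demand_volume")))  # ['customer_churn_risk']
```

The design choice matters: because recalibration targets a node and its direct descendants, a flagged change in demand volume does not force retraining of, say, the fraud probability path.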

A second governance requirement is to distinguish two types of bias. The first is training-time bias, which enters the model when historical datasets, sampling procedures, feature choices, or labelling practices already contain distortions. This kind of bias must be addressed before model deployment through careful data curation, representational checks, fairness-aware machine learning, and pre-deployment bias audits (Bahangulu & Owusu-Berko, 2025).
The second is drift-related bias, commonly known as concept drift. It appears after deployment, when the real world changes and the relationship between inputs and outcomes no longer behaves as expected. Concept drift occurs when a model that was once reliable begins to make biased or inaccurate predictions because the underlying business environment has shifted (Tsymbal, 2004; Hinder et al., 2024).
A demand forecasting model trained on stable purchasing behaviour may become unreliable after a competitor changes prices, a regulator alters rules, or a customer segment changes its behaviour.
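As one minimal, hedged example of post-deployment monitoring, the sketch below compares a monitored node’s recent input distribution against its training-period baseline using a two-sample Kolmogorov-Smirnov test from scipy. This flags distributional drift in a single variable; full concept drift, where the input-outcome relationship itself changes, additionally requires tracking prediction errors as true outcomes arrive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Baseline: order sizes seen during training (stable behaviour).
baseline_orders = rng.lognormal(mean=3.0, sigma=0.4, size=5_000)

# Recent window: a competitor's price cut shifts purchasing behaviour.
recent_orders = rng.lognormal(mean=3.3, sigma=0.6, size=1_000)

statistic, p_value = stats.ks_2samp(baseline_orders, recent_orders)
if p_value < 0.01:
    print(f"Drift flagged on 'demand_volume' node (KS={statistic:.3f}).")
```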
Weak data slices provide one practical way to detect where this degradation begins. A weak data slice is a subset of cases where the model has historically made more errors than usual, such as low-income borrowers, unusually large orders, late-night transactions, or demand from a specific region. Ackerman et al. (2021) propose monitoring whether the proportion of new cases falling into such weak regions changes over time. In the proposed framework, every weak slice should then be mapped back to the DAG node that defines it. This makes the diagnosis actionable rather than descriptive.
If a weak slice is defined by sudden changes in demand volume, then the demand volume node should be flagged for expert review. If the weak slice is defined by payment history, then the payment reliability node should be examined. This clarification is important because a weak slice tells the organisation where the model is failing, but the DAG tells the organisation which part of the causal structure may need recalibration.
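A minimal sketch of this monitoring idea, using pandas with invented slice definitions and column names; the doubling threshold is an arbitrary illustration, not a value from Ackerman et al. (2021).

```python
import pandas as pd

# Weak slice identified historically: late-night, high-value transactions,
# mapped to the 'fraud_probability' node in the DAG.
def in_weak_slice(df: pd.DataFrame) -> pd.Series:
    return (df["hour"] >= 23) & (df["amount"] > 5_000)

def weak_slice_share(df: pd.DataFrame) -> float:
    """Proportion of cases that fall into the weak slice."""
    return in_weak_slice(df).mean()

training = pd.DataFrame({"hour": [10, 14, 23, 9], "amount": [200, 6000, 7000, 50]})
recent = pd.DataFrame({"hour": [23, 23, 12, 23], "amount": [8000, 9000, 100, 5500]})

baseline_share = weak_slice_share(training)  # 0.25
recent_share = weak_slice_share(recent)      # 0.75

# If traffic is shifting into a region where the model is known to err,
# flag the DAG node that defines the slice for expert review.
if recent_share > 2 * baseline_share:
    print("Flag 'fraud_probability' node for expert review.")
```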
Furthermore, explainability tools should be used to translate the model’s behaviour into a form that business experts can challenge. While LIME explains an individual prediction by approximating how nearby changes in input features affect the output, SHAP explains feature importance using Shapley values that estimate each feature’s contribution to the prediction (Lundberg & Lee, 2017). A high-priority recalibration can be further triggered when three signals converge:
- statistical drift,
- human-LIME conflict, and
- agreement between LIME and SHAP that a disputed feature is a major driver.
When a human reviewer disputes a feature that LIME identifies as important, SHAP can be used as a second explanation check. If both LIME and SHAP identify the same feature as important, the model’s reliance on that feature becomes more credible, although not automatically correct (Kumar et al., 2026). The reviewer should then examine whether their disagreement comes from missing information or contextual knowledge outside the model. If SHAP does not support LIME’s attribution, the LIME explanation may be unstable or too dependent on the local case. In such cases, the human objection should be treated seriously, and the decision should be escalated for further review before recalibration.
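The convergence logic described above can be made operational as a simple decision rule. The sketch below is illustrative; the function name and the encoding of the three signals are assumptions.

```python
def recalibration_priority(drift_flagged: bool,
                           human_disputes_lime: bool,
                           lime_top_features: set[str],
                           shap_top_features: set[str],
                           disputed_feature: str) -> str:
    """Combine the three signals into an action for the disputed node."""
    explainers_agree = disputed_feature in (lime_top_features & shap_top_features)
    if drift_flagged and human_disputes_lime and explainers_agree:
        return "high-priority recalibration"
    if human_disputes_lime and not explainers_agree:
        return "escalate: LIME attribution may be unstable"
    return "routine monitoring"

print(recalibration_priority(
    drift_flagged=True,
    human_disputes_lime=True,
    lime_top_features={"payment_history", "income_ratio"},
    shap_top_features={"payment_history", "region"},
    disputed_feature="payment_history",
))  # high-priority recalibration
```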
Consequently, a Structured Interrogative Reasoning (SIR) process is needed: a controlled questioning protocol in which an LLM converts feature attributions into targeted questions for the human reviewer. Following this approach, human qualitative insights can be transformed into specific recalibration updates to the relevant nodes within the model.
A practical SIR prompt can be: For node [N], the model predicted [X] with [C]% confidence. The three features weighted most heavily were [F1] at weight [w1], [F2] at weight [w2], and [F3] at weight [w3]. Features weighted least include [F4] at weight [w4]. Please indicate whether these weights are consistent with current business conditions, and identify any contextual factor the model may be missing.
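If the template is generated programmatically, a small helper such as the hypothetical function below could fill it from a node’s prediction and feature attributions:

```python
def build_sir_prompt(node: str, prediction: str, confidence: float,
                     top_features: list[tuple[str, float]],
                     weakest_feature: tuple[str, float]) -> str:
    """Fill the SIR template with a node's prediction and attributions."""
    f = top_features
    return (
        f"For node {node}, the model predicted {prediction} with "
        f"{confidence:.0f}% confidence. The three features weighted most "
        f"heavily were {f[0][0]} at weight {f[0][1]:.2f}, {f[1][0]} at "
        f"weight {f[1][1]:.2f}, and {f[2][0]} at weight {f[2][1]:.2f}. "
        f"Features weighted least include {weakest_feature[0]} at weight "
        f"{weakest_feature[1]:.2f}. Please indicate whether these weights "
        f"are consistent with current business conditions, and identify "
        f"any contextual factor the model may be missing."
    )

print(build_sir_prompt(
    node="demand_volume", prediction="a 12% increase", confidence=78,
    top_features=[("price_sensitivity", 0.41), ("seasonality", 0.27),
                  ("supply_delay", 0.18)],
    weakest_feature=("region", 0.02),
))
```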
The four pillars discussed above provide a universal architecture, but their weights and implementation should differ across business environments. The key operational variable is calibrated trust: humans should neither reject AI outputs reflexively nor accept them uncritically. The framework also operationalises a sequential explanatory mixed-methods logic within a recurring organisational decision cycle rather than a one-time research project (Creswell & Plano Clark, 2018). Its effectiveness will depend on the quality of the initial DAG specification, the reliability of the data pipeline, the maturity of human review protocols, and the organisation’s willingness to treat AI governance as an ongoing decision practice.
Creating Strategic Value With AI
Strategic decision making in AI-enabled organisations should be understood as a hybrid analytical process rather than a contest between human and machine intelligence. AI is powerful where decisions require scale, pattern recognition, forecasting, and rapid processing of complex data. Human intelligence remains indispensable where decisions require problem framing, causal reasoning, stakeholder interpretation, ethical judgment, and accountability for consequences.
The proposed framework addresses this challenge through four linked pillars. Decision classification prevents organisations from automating problems that require judgement. Role clarity ensures that AI and humans perform tasks aligned with their strengths. Interaction design reduces anchoring, overreliance, and misplaced trust. Bias monitoring and adaptive governance add the technical and institutional controls needed to detect drift, explain model behaviour, and recalibrate specific decision nodes when the environment changes.
The framework also has limitations. Its accuracy depends on the quality of the initial DAG, the relevance of historical data, the ability of experts to provide disciplined feedback, and the organisation’s capacity to maintain governance routines after deployment. Future research should test the framework empirically across sectors such as finance, healthcare, supply chain management, and human resource analytics. Such studies should examine whether node-level recalibration, structured expert interrogation, and calibrated trust improve decision quality more effectively than conventional human review or fully automated AI recommendations.
References
- Ackerman, S., Dube, P., Farchi, E., Raz, O., & Zalmanovici, M. (2021). Machine Learning Model Drift Detection Via Weak Data Slices. https://doi.org/10.48550/ARXIV.2108.05319
- Batool, A., Zowghi, D., & Bano, M. (2025). AI governance: A systematic literature review. AI and Ethics, 5(3), 3265–3279. https://doi.org/10.1007/s43681-024-00653-w
- Bolander, T. (2019). What do we loose when machines take the decisions? Journal of Management and Governance, 23(4), 849–867. https://doi.org/10.1007/s10997-019-09493-x
- Cao, Z., Li, M., & Pavlou, P. A. (2024). AI in business research. Decision Sciences, 55(6), 518–532. https://doi.org/10.1111/deci.12655
- Chen, V., Liao, Q. V., Wortman Vaughan, J., & Bansal, G. (2023). Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), 1–32. https://doi.org/10.1145/3610219
- Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (Third Edition). SAGE.
- Dattijo, A., & Jo, S. (2025). Human strategic innovation against AI systems—Analyzing how humans develop and implement novel strategies that exploit AI limitations. Discover Artificial Intelligence, 5(1), 321. https://doi.org/10.1007/s44163-025-00439-x
- Felin, T., & Holweg, M. (2024). Theory Is All You Need: AI, Human Cognition, and Causal Reasoning. Strategy Science, 9(4), 346–371. https://doi.org/10.1287/stsc.2024.0189
- Gomez, C., Cho, S. M., Ke, S., Huang, C.-M., & Unberath, M. (2025). Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science, 6, 1521066. https://doi.org/10.3389/fcomp.2024.1521066
- Grace, K., Stewart, H., Sandkühler, J. F., Thomas, S., Weinstein-Raun, B., Brauner, J., & Korzekwa, R. C. (2024). Thousands of AI Authors on the Future of AI (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2401.02843
- Hinder, F., Vaquet, V., & Hammer, B. (2024). One or two things we know about concept drift—a survey on monitoring in evolving environments. Part A: Detecting concept drift. Frontiers in Artificial Intelligence, 7, 1330257. https://doi.org/10.3389/frai.2024.1330257
- Huang, L., & Peissl, W. (2023). Artificial Intelligence—A New Knowledge and Decision-Making Paradigm? In L. Hennen, J. Hahn, M. Ladikas, R. Lindner, W. Peissl, & R. Van Est (Eds.), Technology Assessment in a Globalized World (pp. 175–201). Springer International Publishing. https://doi.org/10.1007/978-3-031-10617-0_9
- Jarrahi, M. H., Lutz, C., & Newlands, G. (2022). Artificial intelligence, human intelligence and hybrid intelligence based on mutual augmentation. Big Data & Society, 9(2), 20539517221142824. https://doi.org/10.1177/20539517221142824
- Jones, C. I. (2024). The AI Dilemma: Growth versus Existential Risk. American Economic Review: Insights, 6(4), 575–590. https://doi.org/10.1257/aeri.20230570
- Kumar, C., Khan, P. S., Samal, U., Srinivas, M., & Mishra, R. K. (2026). XAI-driven requirement risk classification using transformer-based models. International Journal of System Assurance Engineering and Management. https://doi.org/10.1007/s13198-026-03178-z
- Li, H., & Tian, F. (2026). Advancing Decision-Making through AI-Human Collaboration: A Systematic Review and Conceptual Framework. Group Decision and Negotiation, 35(2), 26. https://doi.org/10.1007/s10726-026-09980-1
- Liu, H., Lai, V., & Tan, C. (2021). Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–45. https://doi.org/10.1145/3479552
- Liu, P., Zhang, J., Chen, S., & Chen, S. (2025). Human-AI teaming in healthcare: 1 + 1 > 2? Npj Artificial Intelligence, 1(1), 47. https://doi.org/10.1038/s44387-025-00052-4
- Lundberg, S., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions (Version 2). arXiv. https://doi.org/10.48550/ARXIV.1705.07874
- Mentzas, G., Lepenioti, K., Bousdekis, A., & Apostolou, D. (2021). Data-Driven Collaborative Human-AI Decision Making. In D. Dennehy, A. Griva, N. Pouloudi, Y. K. Dwivedi, I. Pappas, & M. Mäntymäki (Eds.), Responsible AI and Analytics for an Ethical and Inclusive Digitized Society (Vol. 12896, pp. 120–131). Springer International Publishing. https://doi.org/10.1007/978-3-030-85447-8_11
- Merioumi, W., Ibrahimi, G., & Benchekroun, B. (2025). Exploration of the Integration of Artificial Intelligence in Decision-Making Process: A Thematic Literature Review. In H. Hagras, Y. Bennani, & M. Nemiche (Eds.), Intelligent Systems and Advanced Computing Sciences (Vol. 2255, pp. 346–357). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-93448-3_28
- Morris, M. R., Sohl-Dickstein, J., Fiedel, N., Warkentin, T., Dafoe, A., Faust, A., Farabet, C., & Legg, S. (2023). Levels of AGI for Operationalizing Progress on the Path to AGI (Version 5). arXiv. https://doi.org/10.48550/ARXIV.2311.02462
- Reverberi, C., Rigon, T., Solari, A., Hassan, C., Cherubini, P., GI Genius CADx Study Group, Antonelli, G., Awadie, H., Bernhofer, S., Carballal, S., Dinis-Ribeiro, M., Fernández-Clotett, A., Esparrach, G. F., Gralnek, I., Higasa, Y., Hirabayashi, T., Hirai, T., Iwatate, M., Kawano, M., … Cherubini, A. (2022). Experimental evidence of effective human–AI collaboration in medical decision-making. Scientific Reports, 12(1), 14952. https://doi.org/10.1038/s41598-022-18751-2
- Russell, S. J., & Norvig, P. (1995). Artificial intelligence: A modern approach. Prentice Hall.
- Sauer, C. R., & Burggräf, P. (2025). Hybrid intelligence – systematic approach and framework to determine the level of Human-AI collaboration for production management use cases. Production Engineering, 19(3–4), 525–541. https://doi.org/10.1007/s11740-024-01326-7
- Trammell, P., & Korinek, A. (2023). Economic Growth under Transformative AI (No. W31815; p. w31815). National Bureau of Economic Research. https://doi.org/10.3386/w31815
- Tsymbal, A. (2004). The problem of concept drift: Definitions and related work (Technical Report TCD-CS-2004-15). Computer Science Department, Trinity College Dublin.
- Vaccaro, M., Almaatouq, A., & Malone, T. (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 8(12), 2293–2303. https://doi.org/10.1038/s41562-024-02024-1
- Von Zahn, M., Liebich, L., Jussupow, E., Hinz, O., & Bauer, K. (2025). Knowing (Not) to Know: Explainable Artificial Intelligence and Human Metacognition. Information Systems Research, isre.2024.1431. https://doi.org/10.1287/isre.2024.1431
- Wen, Y., Wang, J., & Chen, X. (2025). Trust and AI weight: Human-AI collaboration in organizational management decision-making. Frontiers in Organizational Psychology, 3, 1419403. https://doi.org/10.3389/forgp.2025.1419403
- Wu, C., Zhang, R., Kotagiri, R., & Bouvry, P. (2023). Strategic Decisions: Survey, Taxonomy, and Future Directions from Artificial Intelligence Perspective. ACM Computing Surveys, 55(12), 1–30. https://doi.org/10.1145/3571807
- Zhang, A., & Lee, M. K. (2025). Knowledge Workers’ Perspectives on AI Training for Responsible AI Use. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3706598.3714100
I am an interdisciplinary educator, researcher, and technologist with over a decade of experience in applied coding, educational design, and research mentorship in fields spanning management, marketing, behavioral science, machine learning, and natural language processing. I specialize in simplifying complex topics such as sentiment analysis, adaptive assessments, and data visualization. My training approach emphasizes real-world application, clear interpretation of results, and the integration of data mining, processing, and modeling techniques to drive informed strategies across academic and industry domains.