Exploring Approaches to Keep an AI-Enabled Workplace Safe for Workers
Artificial intelligence (AI)—the field of computer science that designs machines to perform tasks that typically require human intelligence—has seen rapid advances leading to cutting-edge innovations in language, vision, reasoning, and human-machine collaboration across industries, economies, and labor markets.[1,2]
In the workplace, the adoption of AI technologies can result in a broad range of hazards and risks to workers, as illustrated by the recent growth in industrial robotics and algorithmic management.[3-8] Sources of risk from deployment of AI technologies across society and in the workplace have led to numerous government and private sector guidelines that propose principles governing the design and use of trustworthy and ethical AI. As AI capabilities become integrated in devices, machines, and systems across industry sectors, employers, workers, and occupational safety and health practitioners will be challenged to manage AI risks to worker health, safety, and wellbeing.
A new commentary in the American Journal of Industrial Medicine, "Managing workplace AI risks and the future of work," discusses these challenges and presents five risk management options to promote the use of trustworthy and ethical AI in workplace devices, machinery, and processes. Excerpts and synopses from the commentary are presented here.
Trustworthy and Ethical AI
The "black-box" nature of AI systems poses challenges to users in understanding their underlying decision-making processes, leaving users wary of trusting AI's operations and outcomes.[9] Developing and deploying AI that is "trustworthy" has become the conceptual basis of nearly all approaches aimed at promoting the benefits of AI to society while managing its hazards and risks.[10,11] In its AI Risk Management Framework, the National Institute of Standards and Technology lists seven key components of trustworthy AI. Conversely, the absence of these trustworthiness components can signal that an AI system may present hazards or risks that need to be managed.
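To illustrate how a practitioner might put those components to work, the minimal Python sketch below screens a proposed system against the seven trustworthiness characteristics named in the NIST AI Risk Management Framework. The example system and its ratings are hypothetical, and a real screening would rest on documented evidence rather than yes/no judgments.

```python
# Minimal sketch: pre-deployment screening against the seven trustworthiness
# characteristics in the NIST AI Risk Management Framework. The example
# system and its ratings are hypothetical.
NIST_TRUSTWORTHY_AI = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

def screen(ratings: dict[str, bool]) -> list[str]:
    """Return the trustworthiness components the system fails to demonstrate.

    Per the commentary, the absence of any component can signal a hazard
    or risk that needs to be managed before deployment.
    """
    return [c for c in NIST_TRUSTWORTHY_AI if not ratings.get(c, False)]

# Hypothetical screening of an AI-enabled inspection camera.
gaps = screen({
    "valid and reliable": True,
    "safe": True,
    "explainable and interpretable": False,  # vendor provides no XAI output
})
print(f"Components needing risk management: {gaps}")
```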
Ethical AI guidelines are aimed at ensuring that AI systems respect autonomy, safeguard fairness in AI system operations, promote the benefits of AI, and prevent harm.[12,13] As with trustworthy AI, multiple ethical AI guidelines have been released.[14] These guidelines emphasize a set of principles that mirror those of trustworthy AI, and several have addressed ethical AI systems for the workplace.[15-20] Ethical approaches are centered on the operation of AI systems in general. In the workplace, ethical AI systems could reflect the following principles:
(1) Autonomy, by promoting respect for a worker's right to make their own decisions and not be controlled by an AI system [21,15];
(2) Privacy, by ensuring a worker's right to control their own data and a right to be informed about how their personal data are being used by their employer [21];
(3) Freedom from data bias, by preventing discriminatory AI system outcomes that can result in direct or indirect harm to the worker [13]; and
(4) Transparency and accountability, by supporting a worker's ability to make sense of how the AI system operates and the meaning of its actions [15,13], and by ensuring that there are clear lines of responsibility for undesirable consequences even when the AI system's mathematically complex decision-making process may not be easily interpretable [22].
AI Risk Management
Many of the existing AI risk reduction guidelines are general in nature and lack the detailed methods needed to translate principles into actionable risk management practices tailored to specific industry use cases and worker protection.[23] Deepening the current high-level guidelines to align better with industry sectors would help assure the safety of AI systems in work settings.[24,25]
Who should assure that AI systems are safe for use in a workplace? How can a safety and health practitioner develop a risk-centered evidence base to justify that deployment of a workplace AI-enabled system does not present risks to worker safety, health, and wellbeing? The article presents the following five options as suggested ways to prioritize the use of trustworthy and ethical AI in workplace devices, machinery, and processes.[26]
First, while eliminating risks of an AI-enabled device or system in a workplace before its deployment is consistent with prevention-through-design principles,[27-29] independently verifying that an AI system deployment will be safe requires a set of computer science competencies that employers, workers, and safety and health practitioners would find challenging to acquire through reskilling or upskilling. These new skills would nonetheless give safety and health practitioners greater ability to manage AI systems. If developers' current efforts to provide users with only explainable AI (XAI) are successful, safety and health practitioners would be better able to interpret, trust, and more effectively manage AI systems in the workplace.[30] While AI-related technical proficiencies are important, it is also important to augment skills to ensure that worker interaction with an AI system reflects human-centered values.[31]
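To make the XAI idea concrete, the sketch below applies one simple, model-agnostic explanation technique, permutation importance from scikit-learn, to a hypothetical incident-prediction model. The feature names and data are invented for illustration, and real XAI tooling would go considerably further than a single importance ranking.

```python
# Minimal XAI sketch: permutation importance shows a practitioner which
# inputs a model actually relies on. Data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["shift_length_hr", "machine_age_yr", "ambient_temp_c", "task_pace"]
X = rng.normal(size=(500, len(features)))
# Synthetic labels: "incidents" driven mostly by task pace and shift length.
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A practitioner can check whether the model's reliance matches domain knowledge.
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:16s} importance={imp:.3f}")
```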
Second, AI developers and safety and health practitioners could conduct collaborative AI system evaluations assessing the safety, capabilities, and alignment of AI systems. An alignment evaluation focuses on ensuring that the operational outcomes of an AI system match those intended by a developer's design parameters.[32] Examples of evaluation methods have been recommended as concrete action items for each trustworthy AI principle.[33]
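As a toy illustration of what an alignment evaluation can look like in code, the sketch below compares a system's outputs against the outcomes the developer's design parameters intend across a set of labeled scenarios. The scenario set, pass threshold, and the `ai_system` stand-in are all hypothetical, not drawn from any particular evaluation suite.

```python
# Minimal alignment-evaluation sketch: does observed behavior match the
# outcomes intended by the developer's design parameters? All names here
# are hypothetical stand-ins for a real evaluation harness.

def ai_system(scenario: dict) -> str:
    """Stand-in for the deployed AI system under evaluation."""
    return "stop" if scenario.get("person_in_zone") else "run"

# Each case pairs an input scenario with the designer-intended outcome.
INTENDED = [
    ({"person_in_zone": True, "speed": 1.2}, "stop"),
    ({"person_in_zone": False, "speed": 1.2}, "run"),
    ({"person_in_zone": True, "speed": 0.0}, "stop"),
]

def alignment_score(system, cases) -> float:
    """Fraction of scenarios where the output matches the intended outcome."""
    return sum(system(s) == intended for s, intended in cases) / len(cases)

score = alignment_score(ai_system, INTENDED)
assert score >= 0.95, f"alignment evaluation failed: {score:.0%} agreement"
print(f"Alignment: {score:.0%} of scenarios matched design intent")
```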
Collaborative AI system evaluations between developers and deployers are important both for operational alignment and for AI safety. Being able to understand the inner workings of a neural network in an AI model would provide deployers with "mechanistic interpretability," a highly sought-after goal for society and the workplace.[34] For example, research efforts toward extracting interpretable features from a large language model (LLM) might lead in the future to converting an AI system from an incomprehensible "black box" into a machine with an understandable safety profile.[35,36]
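The feature-extraction research cited above trains sparse autoencoders on a model's internal activations. The PyTorch sketch below shows that core idea in miniature, with random tensors standing in for real LLM activations and all dimensions and coefficients chosen arbitrarily; it is an illustration of the technique, not the cited work's implementation.

```python
# Minimal sketch of the sparse-autoencoder idea behind interpretable-feature
# extraction: reconstruct a model's internal activations through an
# overcomplete, sparsity-penalized bottleneck so that individual learned
# features become human-inspectable. Dimensions and data are stand-ins.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activations -> features
        self.decoder = nn.Linear(d_features, d_model)  # features -> activations

    def forward(self, x: torch.Tensor):
        feats = torch.relu(self.encoder(x))  # nonnegative feature activations
        return self.decoder(feats), feats

d_model, d_features = 64, 512  # feature dictionary is deliberately overcomplete
sae = SparseAutoencoder(d_model, d_features)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

acts = torch.randn(256, d_model)  # stand-in for real model activations
recon, feats = sae(acts)
# Loss trades off faithful reconstruction against sparse (interpretable) features.
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()
opt.step()
print(f"reconstruction + sparsity loss: {loss.item():.4f}")
```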
Responsible deployment is best served not by technical alignment alone but by an emphasis consistent with the sociotechnical systems principle of joint optimization.[37] In addition to a focus on the technology, a similar focus needs to be placed on the workers who are interacting with AI systems.[38,39] A sociotechnical approach ensures that technical operations are ethically aligned with the human-centered values of worker safety, health, and well-being.[38]
Third, an independent audit could be used to assess the risks of AI system capabilities through tools like algorithmic transparency.[40] Until AI auditing methods and standards responsive to the rapidly evolving nature of AI technologies are developed,[41] AI auditing against established consensus risk assessment and management standards could be conducted.[42] Prospective risk assessments and audit results can then be used to (1) determine if system components permit transparent auditing or are proprietary and protected as a trade secret; (2) review any risks discovered in the audit trail; and (3) develop risk mitigation measures before deployment.[43]
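As a small illustration of the audit trail such a review depends on, the sketch below logs each AI decision with enough context to be re-examined later. The record fields are hypothetical choices for illustration, not drawn from any particular auditing standard.

```python
# Minimal audit-trail sketch: record each AI system decision with enough
# context for later independent review. Field choices are hypothetical and
# not drawn from any particular auditing standard.
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class DecisionRecord:
    system_id: str
    model_version: str
    inputs: dict
    output: str
    timestamp: float = field(default_factory=time.time)

def log_decision(record: DecisionRecord, path: str = "audit_trail.jsonl") -> None:
    """Append one decision to a JSON-lines audit trail for auditors to review."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical decision by an AI-enabled machine-vision safety system.
log_decision(DecisionRecord(
    system_id="conveyor-vision-01",
    model_version="2.3.1",
    inputs={"camera": "bay-4", "person_detected": True},
    output="halt_line",
))
```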
Fourth, AI system certification is a way to incentivize AI developers to adopt trustworthy AI principles in the design and development phase and to enable downstream users to validate the inclusion of trustworthy AI in a deployed system.[44] While private sector organizations are beginning to offer AI system certification, the ultimate success of system certification depends on the strength of customer demand or a government mandate.[44] Furthermore, auditing and certification as ways to develop the evidence base for safe workplace AI have their limitations. Design risks may remain dormant in an AI system until deployment, and even then, an emergent or unexpected issue may be difficult to trace back to its algorithmic source.[45]
Fifth, how does a safety practitioner develop the detailed evidence that a workplace system is indeed safe to operate, and what does that evidence base look like? Very little detailed guidance is available on the "how" of workplace risk evaluation of AI systems. Two approaches, the "safety system approach" and the "safety case approach" (explained in the article), bear consideration as methodologies for the identification, analysis, and evaluation of risks in high-risk AI systems.
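The safety case approach itself is explained in the commentary; as a rough illustration, safety cases are commonly organized as a top-level claim supported by subclaims, each backed by evidence. The hypothetical sketch below models that structure as a simple tree and flags claims still lacking support, which is one way a practitioner might track gaps in the evidence base.

```python
# Rough sketch of a safety case as a claims-and-evidence tree. The tree
# structure is a common way of organizing safety cases; the content below
# is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)   # documents, test results
    subclaims: list["Claim"] = field(default_factory=list)

    def unsupported(self) -> list[str]:
        """List claims that have neither evidence nor supporting subclaims."""
        gaps = [] if (self.evidence or self.subclaims) else [self.statement]
        for sub in self.subclaims:
            gaps.extend(sub.unsupported())
        return gaps

case = Claim(
    "The AI-enabled palletizer is acceptably safe to operate",
    subclaims=[
        Claim("Hazards were identified and assessed", evidence=["hazard-analysis.pdf"]),
        Claim("The perception model meets its detection-rate target"),  # no evidence yet
    ],
)
print("Unsupported claims:", case.unsupported())
```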
AI technologies will play a major role in the future of work. The occupational safety and health practice and research communities need to ensure that the promise of these new AI technologies results in benefit, not harm, to worker safety, health, and well-being.
If you are using AI in your workplace, please comment below to let us know whether you have taken any of the steps discussed in this article, or other steps, to ensure its safe implementation.
John Howard, MD, is the Director of the National Institute for Occupational Safety and Health.
Paul A. Schulte, PhD, is a NIOSH contractor with Advanced Technologies and Laboratories International, Inc.
References
[1] Czerwonko A, White J. Life after the hype: how AI is transforming industries and economies. World Econ Forum. 2023. Accessed July 25, 2024. https://www.weforum.org/agenda/2023/12/life-after-the-hype-how-ai-is-transforming-industries-and-economies/
[2] Cazzaniga M, Jaumotte F, Li L, et al. IMF Staff Discussion Note. Gen-AI: Artificial Intelligence and the Future of Work. SDN2024/001. International Monetary Fund; 2024. Accessed July 25, 2024. https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379
[3] Kellogg KC, Valentine MA, Christin A. Algorithms at work: the new contested terrain of control. Acad Manag Ann. 2020; 14(1): 366-410. doi:10.5465/annals.2018.0174
[4] Koeszegi ST. AI @ work: human empowerment or disempowerment? In: Werther H, Ghezzi C, Kramer J, et al., eds. Introduction to Digital Humanism: A Textbook. Springer Nature; 2024: 175-196. doi:10.1007/978-3-031-45304-5_12
[5] Mirbabaie M, Brünker F, Möllmann Frick NRJ, Stieglitz S. The rise of artificial intelligence—understanding the AI identity threat at the workplace. Electronic Markets. 2022; 32: 73-99. doi:10.1007/s12525-021-00496-x
[6] Tang PM, Koopman J, Mai KM, et al. No person is an island: unpacking the work and after work consequences of interacting with artificial intelligence. J Appl Psychol. 2023; 108(11): 1766-1789. doi:10.1037/apl0001103
[7] Patel PC, Devaraj S, Hicks MJ, Wornell EJ. County-level job automation risk and health: evidence from the United States. Soc Sci Med. 2018; 202: 54-60. doi:10.1016/j.socscimed.2018.02.025
[8] Gihleb R, Giuntella O, Stella L, Wang T. Industrial robots, workers’ safety, and health. Labour Economics. 2022; 78:102205. doi:10.1016/j.labeco.2022.102205
[9] Gillespie N, Lockey S, Curtis C, Pool J, Akbari A. Trust in Artificial Intelligence: A global study. The University of Queensland and KPMG Australia; 2023. doi:10.14264/00d3c94
[10] Thiebes S, Lins S, Sunyaev A. Trustworthy artificial intelligence. Electronic Markets. 2021; 31: 447-464. doi:10.1007/s12525-020-00441-4
[11] Reimagining secure infrastructure for advanced AI. OpenAI; May 3, 2024. Accessed July 25, 2024. https://openai.com/index/reimagining-secure-infrastructure-for-advanced-ai/
[12] Bernhardt A, Kresge L, Suleiman R. The data-driven workplace and the case for worker technology rights. ILR Review. 2023; 76(1): 3-29. doi:10.1177/00197939221131558
[13] Siau K, Wang W. Artificial intelligence (AI) ethics: ethics of AI and ethical AI. J Database Manag. 2020; 31(2): 74-87. doi:10.4018/JDM.2020040105
[14] Hagendorff T. The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 2020; 30: 99-120. doi:10.1007/s11023-020-09517-8
[15] Bankins S, Formosa P. The ethical implications of artificial intelligence (AI) for meaningful work. J Bus Ethics. 2023; 185: 725-740. doi:10.1007/s10551-023-05339-7
[16] Cebulla A, Szpak Z, Howell C, Knight G, Hussain S. Applying ethics to AI in the workplace: the design of a scorecard for Australian workplace health and safety. AI Soc. 2023a; 38: 919-935. doi:10.1007/s00146-022-01460-9
[17] Cebulla A, Szpak Z, Knight G. Preparing to work with AI: assessing WHS when using AI in the workplace. Int J Workplace Health Manage. 2023b; 16(4): 294-312. doi:10.1108/IJWHM-09-2022-0141
[18] United Nations Educational, Scientific and Cultural Organization (UNESCO). Recommendation on the Ethics of Artificial Intelligence. UNESCO Digital Library; November 23, 2021. Accessed July 25, 2024. https://unesdoc.unesco.org/ark:/48223/pf0000381137
[19] Ramos G. A.I.’s impact on jobs, skills, and the future of work: the UNESCO perspective on key policy issues and the ethical debate. New Eng J Public Policy. 2022; 34(1):3. https://scholarworks.umb.edu/nejpp/vol34/iss1/3
[20] Cole M, Cant C, Ustek Spilda F, Graham M. Politics by automatic means? A critique of artificial intelligence ethics at work. Front Artif Intell. 2022; 5:869114. doi:10.3389/frai.2022.869114
[21] Brey P, Dainow B. Ethics by design for artificial intelligence. AI Ethics. Published online September 21, 2023. doi:10.1007/s43681-023-00330-4
[22] Rodrigues R. Legal and human rights issues of AI: gaps, challenges and vulnerabilities. J Resp Tech. 2020; 4:100005. doi:10.1016/j.jrt.2020.100005
[23] Gardner C, Robinson K-M, Smith CJ, Steiner A. Contextualizing end-user needs: how to measure the trustworthiness of an AI system. Software Engineering Institute Blog. Carnegie Mellon University; July 17, 2023. Accessed July 6, 2024. https://insights.sei.cmu.edu/blog/contextualizing-end-user-needs-how-to-measure-the-trustworthiness-of-an-ai-system/
[24] Neto AVS, Camargo JB, Almeida JR, Cugnasca PS. Safety assurance of artificial intelligence-based systems: a systematic literature review on the state of the art and guidelines for future work. IEEE Access. 2022; 10: 130733-130770. doi:10.1109/ACCESS.2022.3229233
[25] Lu Q, Zhu L, Xu X, Whittle J. Responsible-AI-by-Design. A pattern collection for designing responsible artificial intelligence systems. IEEE Software. 2023; 40(3): 63-71. doi:10.1109/MS.2022.3233582
[26] Batarseh FA, Freeman L, Huang C-H. A survey on artificial intelligence assurance. J Big Data. 2021; 8: 60. doi:10.1186/s40537-021-00445-7
[27] Schulte PA, Rinehart R, Okun A, Geraci CL, Heidel DS. National prevention through design (PTD) initiative. J Saf Res. 2008; 39(2): 115-121. doi:10.1016/j.jsr.2008.02.021
[28] Manuele FA. Prevention through design (PtD): history and future. J Saf Res. 2008; 39: 127-130. doi:10.1016/j.jsr.2008.02.019
[29] American National Standards Institute/American Society of Safety Professionals (ANSI/ASSP) Z590.3-2011(R2016). Prevention Through Design: Guidelines for Addressing Occupational Hazards and Risks in Design and Redesign Processes. Park Ridge, IL: American Society of Safety Professionals; 2016. Accessed July 25, 2024. https://www.assp.org/standards/standards-topics/prevention-through-design-z590-3
[30] Nagahisarchoghaei M, Nur N, Cummins L, et al. An empirical survey on explainable AI technologies: recent trends, use-cases, and categories from technical and application perspectives. Electronics. 2023; 12: 1092-1151. doi:10.3390/electronics12051092
[31] Zirar A, Ali SI, Islam N. Worker and workplace: artificial intelligence (AI) coexistence: emerging themes and research agenda. Technovation. 2023; 124:102747. doi:10.1016/j.technovation.2023.102747
[32] Ji J, Qiu T, Chen B, et al. AI alignment: a comprehensive survey. arXiv preprint. 2024. doi:10.48550/arXiv.2310.19852
[33] Li B, Qi P, Liu B, et al. Trustworthy AI: from principles to practices. ACM Comput. Surv. 2023; 55(9): 1-46. doi:10.1145/3555803
[34] Bereska L, Gavves E. Mechanistic interpretability—a review. arXiv preprint. 2024. doi:10.48550/arXiv.2404.14082
[35] Templeton A, Conerly T, Marcus J, et al. Scaling monosemanticity: extracting interpretable features from Claude 3 Sonnet. Transformer Circuits Thread. 2024. Accessed July 25, 2024. https://transformer-circuits.pub/2024/scaling-monosemanticity/
[36] Inside the mind of an AI. Researchers are trying to figure out how large language models work. The Economist. July 13, 2024. https://www.economist.com/science-and-technology/2024/07/11/researchers-are-figuring-out-how-large-language-models-work
[37] Parker SK, Grote G. Automation, algorithms, and beyond: why work design matters more than ever in a digital world. Applied Psychology. 2022; 71: 1171-1204. doi:10.1111/apps.12241
[38] Bogen M, Winecoff A. Applying sociotechnical approaches to AI governance in practice. Center for Democracy and Technology; May 15, 2024. Accessed June 20, 2024. https://cdt.org/insights/applying-sociotechnical-approaches-to-ai-governance-in-practice/
[39] Schulte PA, Leso V, Iavicoli I. Responsible development of emerging technologies: extensions and lessons from nanotechnology for worker protection. J Occup Environ Med. 2024; 66: 528-535. doi:10.1097/JOM.0000000000003100
[40] National Telecommunications and Information Administration. AI Accountability Policy Report. U.S. Department of Commerce; March 27, 2024. Accessed July 25, 2024. https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/recommendations
[41] Manheim D, Martin S, Bailey M, Samin M, Greutzmacher R. The necessity of AI audit standards boards. arXiv preprint. 2024. Accessed July 25, 2024. doi:10.48550/arXiv.2404.13060
[42] International Electrotechnical Commission. IEC 31010:2019. Risk management—risk assessment techniques. International Organization for Standardization; June 2019. Accessed July 25, 2024. https://www.iso.org/standard/72140.html
[43] Falco G, Shneiderman B, Badger J, et al. Governing AI safety through independent audits. Nat Mach Intell. 2021; 3: 566-571. doi:10.1038/s42256-021-00370-7
[44] Cihon P, Kleinaltenkamp MJ, Schuett J, Baum SD. AI certification: advancing ethical practice by reducing information asymmetries. arXiv preprint. May 20, 2021. Accessed July 6, 2024. doi:10.48550/arXiv.2105.10356
[45] Raji ID, Smart A, White RN, et al. Closing the accountability gap: defining an end-to-end framework for internal algorithmic auditing. arXiv preprint. January 3, 2020. doi:10.48550/arXiv.2001.00973