ATD Blog

Ethical AI in Training to Shape an Inclusive Government Workforce

Thursday, March 21, 2024

There is much anxiety among educators and organizations around the current artificial intelligence (AI) hype cycle, in which claims of both hazards and benefits are often wildly exaggerated. As usual, the truth lies somewhere in the middle. Like any technology, new or old, AI requires government acquisition professionals to educate themselves and to take a balanced and informed approach to new systems. It also requires new policies for developing, maintaining, and evaluating training systems.

AI can potentially transform how we train and educate the government workforce. AI-based systems can make the workforce more agile, innovative, and responsive to the changing needs of government organizations. With the advent and adoption of AI, we have a unique opportunity to hit reset on how we train and educate. We can eliminate lackluster, click-through, passive learning experiences in favor of dynamic, engaging, and more effective methods.

Still, we must take a moment to reflect on where we are today and where we wish to go. Training and education can become less effective if AI is not developed and implemented carefully and ethically. AI systems have already been shown to increase workplace disparities when trained with biased or low-quality data. In addition to taking this moment to apply solid learning engineering principles carefully and intentionally, we must recognize that prioritizing underlying algorithms and data quality is critical and an immediate concern.

Data is the fuel that powers AI. If the data or algorithms are biased or low-quality, the results will be flawed and ineffective. Without proper design and oversight, AI systems may reflect and amplify biases and prejudices already present in the data, the algorithms, and prior human judgments. Bias at any of these levels may produce unequal outcomes for certain groups of learners, such as women, minorities, the neurodiverse, and people with disabilities. As we begin to fuel our AI applications with data, we must ensure that the data is high quality, reflects requirements, and supports an effective and fair work environment. Organizations must adopt a human-centered and ethical approach to AI and ensure that AI systems are transparent, explainable, accountable, and fair.
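To make the data-quality point concrete, a bias review often begins with something as simple as comparing each group's share of the training data against its share of the workforce the system will serve. The sketch below is illustrative only; the group labels, workforce proportions, and 5 percent tolerance are assumptions, not figures from any government dataset:

```python
from collections import Counter

def representation_gaps(training_records, workforce_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from
    their share of the workforce by more than `tolerance`.

    training_records: list of group labels, one per training example
    workforce_shares: dict mapping group label -> expected proportion
    """
    counts = Counter(training_records)
    total = len(training_records)
    gaps = {}
    for group, expected in workforce_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical data: the training set overrepresents group "A"
# and underrepresents groups "B" and "C".
records = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
shares = {"A": 0.5, "B": 0.3, "C": 0.2}
gaps = representation_gaps(records, shares)
```

A check like this is only a first screen; it says nothing about data quality within each group, but it surfaces the most obvious representation gaps before a system is trained.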


Specifically, government agencies must be smart consumers of AI training and education systems. First, we must apply effective learning engineering principles and data science in training systems to provide the workforce with engaging and effective experiences. The data used to train these systems must reflect the diversity of the workforce and the diversity of situations in which the training will be applied.


One way to combat AI bias is to continuously monitor and audit these AI systems and their outcomes. Education outcomes must be regularly measured to ensure that systems are effective, unbiased, and updated with fresh data and improved algorithms as they become available. Most importantly, we must solicit feedback from the learners, employees, and other relevant stakeholders, such as trainers, managers, or experts, to evaluate the effectiveness of the AI systems. Likewise, when selecting developers for new training platforms, we should be mindful of their development practices and ability to understand workforce diversity to avoid bias.
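Continuous monitoring of outcomes can be as straightforward as tracking a pass or completion rate per learner group and flagging any group that falls well below the best-performing group. The sketch below assumes a simple binary outcome and borrows the "four-fifths" rule of thumb from employment-selection practice as an alert threshold; the group names and data are hypothetical:

```python
def outcome_disparity(outcomes_by_group, min_ratio=0.8):
    """Compare each group's pass rate to the best-performing group.

    outcomes_by_group: dict mapping group -> list of 0/1 outcomes
    Returns groups whose pass rate falls below `min_ratio` times the
    best group's rate (the "four-fifths" rule of thumb).
    """
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items() if o}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < min_ratio * best}

# Hypothetical audit: group "B" passes far less often than group "A",
# so it would be flagged for closer review.
outcomes = {"A": [1, 1, 1, 1, 0], "B": [1, 0, 0, 0, 0]}
flagged = outcome_disparity(outcomes)
```

An alert from a check like this is a prompt to investigate, not a verdict; the follow-up is exactly the stakeholder feedback the paragraph above describes.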

As AI reshapes the workforce training and development landscape, organizations must proactively adapt and innovate, crafting new best practices and policies for acquiring training systems. The journey toward fully leveraging AI in education necessitates immediate action and strategic planning, laying the groundwork for a future where technology and learning engineering transform the workforce.

About the Author

Russell Shilling, Ph.D., is an accomplished social impact innovator whose career has been dedicated to using technology to improve education, training, and psychological health. He is a former Navy Captain, aerospace experimental psychologist, and program officer at the Defense Advanced Research Projects Agency (DARPA), where he led innovative programs on the use of AI and affective computing for psychological health and created new approaches to using games and analytics strategies for STEM education.

Prior to his time at DARPA, Shilling was a pioneer in serious games and Virtual Reality for education and psychological health. He developed award-winning programs with Sesame Workshop to help military children cope with deployments, injury, and grief. Later, he served as the Executive Director of STEM Initiatives at the U.S. Department of Education during the Obama Administration, where he worked on education policy issues and helped coordinate STEM activities with federal agencies and the White House Office of Science and Technology Policy.

Shilling is a strong advocate for a DARPA for Education (ARPA-ED) and has consulted with startups and major philanthropies on applying innovative processes to research and development. He continues to work on policy issues in education, focusing on the use of AI in education and health technologies, including serving as an advisor to the EdSafe AI Alliance.
