Many of us exploring the state of artificial intelligence (AI) in our work environments inevitably reach a moment of transformation: the moment when we stop saying “AI can’t do that” and begin hedging our bets a bit by saying “AI can’t do that—yet.” That shift raises a basic question we need to address: What are we doing now to prepare workers to survive and thrive in a world where substantial numbers of existing jobs are being automated, potentially leaving workers unemployed?
We have seen Google Translate evolve from the subject of well-deserved jokes into a serviceable tool that helps us communicate with colleagues who speak languages we do not. We have seen Google Duplex successfully complete customer-to-business interactions, scheduling a haircut appointment and discussing a reservation with a restaurant worker on behalf of a prospective customer. We have seen IBM’s Project Debater hold its own against two experienced human debaters.
If you’re still wondering how much of this remains in the realm of science fiction and how much is actively working its way into contemporary workplaces and talent development, Paul Daugherty and H. James Wilson have written the book you need: Human + Machine: Reimagining Work in the Age of AI.
This engaging, wonderfully narrative-driven book, built on a consistently applied framework for action, does three things well. It leads us through some of the basic questions we’re asking each other about AI: how “intelligent” it is, whether it “thinks,” and whether it “learns” (terms we may need to rethink as AI applications become increasingly sophisticated). It explores what the authors call “the missing middle,” that underexplored area in which machines driven by AI applications augment humans at work and humans benefit from what AI can provide. And it is full of examples of how innovative companies are applying five “crucial principles” (mindset, experimentation, leadership, data, and skills, an approach the authors call “MELDS”) to look toward a future of work and talent development that is already taking shape in many positive and interesting ways.
“In the missing middle,” they write, “humans and machines aren’t adversaries, fighting for each other’s jobs. Instead, they are symbiotic partners, each pushing the other to higher levels of performance. Moreover, in the missing middle, companies can reimagine their business processes to take advantage of collaborative teams of humans working alongside machines.”
We’re not in AI evil-overlord world-takeover mode here, nor are we in a fantasy world in which change doesn’t produce challenges—some of them potentially devastating to workers whose jobs are and will be disappearing with all-too-little notice. The authors consistently cite the need for training—and retraining—that helps employees prepare for the changing nature of the work they do, and for new jobs that develop in response to changes that AI applications are introducing into our workplaces: “In fact, investing in people must be a core part of any company’s AI strategy,” they remind us.
Daugherty and Wilson share plenty of interesting and encouraging examples of how some employers are taking a positive approach to talent development rather than simply laying off employees whose jobs are being automated. Warehouse workers are trained to fix robots. Sales representatives are trained to use the “huge amounts of data” that AI applications generate through analysis to better “assess if and when a customer might be ready to buy and even preempt objections in the sales process.” And those involved in training are developing new skills so they can work with machines-as-learners to develop the algorithms those machines need to deal more effectively with human responses, including empathy and sarcasm—positions the authors refer to as “interaction modelers.”
Our entire concept of talent development may, in fact, be on the verge of changing and expanding as we not only adapt to AI developments capable of taking over rudimentary (or even advanced) aspects of instructional design, but also adapt to learning environments in which our learners include the machines being taught how to better interact with the humans with whom they work. We clearly are going to have to rethink some of our assumptions about what AI really means in terms of “thinking,” “learning,” and responding to workplace situations and challenges, even if machines are not—yet—doing these things as humans do.
What remains important at this point, however, is recognizing the leadership roles we are playing, and will need to play, as we dive into and learn to thrive in Daugherty and Wilson’s missing middle:
“…to be successful at process reimagination, executives must first have the right mindset to envision novel ways of performing work in the missing middle, using AI and real-time data to observe and address major pain points. They should then focus on experimentation in order to test and refine that vision, all the while building, measuring, and learning. Throughout that process, though, they need to consider how to build trust in the algorithms deployed. That takes leadership—managers who promote responsible AI by fostering a culture of trust toward AI through the implementation of guardrails, the minimization of moral crumple zones, and other actions that address the legal, ethical, and moral issues that can arise when these types of systems are deployed. And last but certainly not least, process reimagination requires good data, and companies need to develop data supply chains that can provide a continuous supply of information from a wide variety of sources. All this, then, represents the MELD part of our MELDS framework.”
The authors provide that stimulating and thoughtful MELDS framework. It’s up to us to provide the intelligent approach that allows us to best serve our workplace learners, the organizations for which they work, and the customers who ultimately benefit from all they produce and do.