“If we try to compete with machines on their turf, we lose.”
Joseph Aoun (President, Northeastern University)
MIT hosted the AI & the Future of Work conference last week to explore how exponential innovations driven by artificial intelligence will fundamentally disrupt the nature of the workforce and force society to grapple with important questions about both our labor and our livelihood.
Eric Schmidt, former CEO of Google, reflected in his opening “Fireside Chat” on the importance of Google’s innovative decision in 2010 to go completely “mobile-first” (designing all of its products around the mobile experience, an approach that swarms of technology companies now emulate) and articulated his strong belief that the next wave of companies will coalesce around a different approach: “AI-first”. That is, while AI may currently be seen by some as a set of technologies applicable only to a narrow group of highly specialized products, the reality is that companies should, and likely will, start adopting some form of AI into every one of their products — and indeed start with that framework when designing new products. Such an approach would stem not only from the undeniable value AI can bring upfront, but also from its unique position over time to generate the training data needed to continually improve upon itself.
There was a noticeable consistency in the views conference speakers shared in arguing that AI and automation would not necessarily lead to mass unemployment — with individuals suggesting some combination of (a) the future of work will create more new jobs than the number of jobs that will go away, (b) AI will act as a supplement to human productivity, not a replacement for the human, and (c) societal challenges will be much more of a barrier to implementation than the progress of the technology itself (self-driving cars, for example).
Despite this fairly unanimous message, the overall tone of the day remained gloomy, highlighting challenges that a rapidly and continuously changing workforce will bring — particularly in finding ways to avoid accelerating income inequality and to empower people with the right set of skills they will need to succeed in the future of work. As one panelist summarized, “The robots aren’t going to take all of our jobs, but there’s plenty of other things that we need to worry about.”
The conference highlighted the overarching challenge that exists looking forward: we must ascertain how to harness the remarkable benefits of AI in a way that will minimize the potential negative side-effects of such technologies, as such side-effects certainly aren’t going to be minimized on their own. Below is my summary and perspective on three specific aspects of that challenge:
Income Inequality: Rising Tides, Sinking Boats
The U.S. economy is booming. Unemployment is hitting record lows. And productivity (in terms of output per hour) is hitting record highs. So what’s the problem? Rising tides lift all boats… right? Wrong — according to a plethora of measures.
Median real family income is stagnant. Labor’s share of total economic output (as compared to capital’s) has decreased. The MIT Living Wage Lab conducted a study in 2017 that determined the living wage in America to be $16/hour — yet 42% of Americans make less than that. It’s not an employment problem; it’s a wage problem.
Companies have long created economic value that is heavily driven by the number of employees they have (where adding more employees equals additional output), but this dynamic is changing with more value now being created in many companies by the customers themselves (where adding more customers equals more data or more advertising revenue, while keeping employees constant). One question that arises from this dynamic is — do customers deserve a portion of this additional economic value that their own data has generated?
Addressing this income inequality, which if not course-corrected will only be amplified by the increasing adoption of AI and automation in the workforce, should be among the most important issues at center stage in policy discussion. But it’s not. Until it is, we will continue to have a tax system that encourages companies to use capital over labor, oftentimes tipping companies’ investment decisions even further in that direction.
Education to Employment: A System Locked in Rigidity
There have never been more job openings that can’t be filled — a gap driven by a skills mismatch in the education-to-employment pipeline, and one that will only widen with further disruption in the workforce. The task at hand becomes how to modularize skills and make them widely accessible. Despite AI being at the center of this disruption (i.e. part of the issue), AI can also be part of the solution — with plenty of opportunity to leverage it to modularize skills, make those modules accessible, and make the learning more personalized.
There have been numerous studies about which skills will be most important in the age of automation (and their findings are typically consistent), but there is a chasm between the identification of those skills and any action on them. From K-12 systems, to higher education institutions, to government education policy, there is incredible rigidity in the education system, raising the question of whether we can afford to wait for the entire system to be reformed or whether we need innovation from the private sector to spur continuous agility and facilitate the necessary just-in-time delivery of skills.
One education-specific conversation at the conference was a Fireside Chat with Joseph Aoun, the President of Northeastern University, who articulated the importance of making students ‘Robot-Proof’. He argued that in order to achieve this aim, education providers must focus on integrating technological literacy, data literacy, and human literacy (what we as humans do that machines are not able to replicate).
Aoun admitted that higher education providers can be resistant to change, but remained confident that a “sense of competition and innovation” in the higher ed ecosystem would enable the sector to overcome the challenges presented by the future of work. There is, however, an argument against this claim of innovation: the bureaucratic higher ed accreditation system that currently exists (with an incredibly burdensome upfront process and a principle of self-oversight) acts as a massive barrier to new entrants and innovation.
No matter what the solution to the current rigidity in the education system, it’s clear that lifelong learning must be a central component. And as we develop a lifelong learning model, it’s important that it does not inherit a problem that plagues so many other parts of the education system: inaccessibility. It’s unclear who will fund the upskilling of employees as more and more people need to shift jobs — a question of heightened importance considering that employers are investing less and less in workforce training as average employee tenure declines.
Uniquely Human Skills: For Now or Forever?
Throughout the conference, there was a large emphasis on human partnership with Artificial Intelligence — about it being humans “plus” machines, not humans “versus” machines. There was a consistent notion that we must simply identify what AI will be better than us at and what uniquely human skills exist that we can reign supreme at, and then organize future labor around those parameters. And the narrative around what skills are “uniquely human” was largely consistent with other existing research: creativity, empathy, judgment, leadership, teamwork, etc.
However, there seemed to be some degree of inconsistency between this narrative of uniquely human skills existing and some of the innovations in Artificial Intelligence the conference was simultaneously highlighting. In one session in particular that focused on “The AI-Enabled Organization”, panelists Sophie Vandebroek from IBM and Gabi Zijderveld from Affectiva illustrated AI’s incredible recent advances in judgment and empathy.
Earlier this year, IBM rolled out Project Debater, the first AI system that can debate humans on complex topics, which was able to win an argument against a champion debater. Vandebroek emphasized the huge range of future applications of such technology, including having the system in the boardroom to enhance a company’s executive judgment and decision-making.
While IBM did allude to its technology still not being able to master human emotion, Affectiva seemed to plug that exact gap with its emotion measurement technology — allowing it to gauge what emotions individuals are feeling based on facial and vocal patterns. Zijderveld emphasized Affectiva’s ability to humanize technology, with possible future applications such as call center bots that are able to detect the tone of a customer’s voice (possibly as well as or even better than humans) and adapt responses accordingly.
So are we sure that it’s really robots “plus” humans? Are skills like creativity, empathy, and judgment really “uniquely human”? Or are they just concepts that are more challenging to replicate, but will soon be conquered as the capabilities of artificial intelligence increase at a rapid pace? At the very least, we can take solace in the fact that there are still humans in the CEO seat of companies… for now.