Prominent tech leaders have been sounding the alarm about the potential dangers of artificial intelligence (AI) for quite some time. Tesla’s Elon Musk, a vocal proponent of AI safeguards, views the technology as the “biggest risk we face as a civilization,” even calling on the government to regulate AI proactively before robots start “going down the street killing people.”1
Economist Anton Korinek foresees a less dramatic but equally consequential scenario for humanity: Super-intelligent entities, either machines or artificially enhanced humans, may command a growing share of scarce resources in the economy, pushing ordinary people below subsistence level and leading to a modern version of a Malthusian catastrophe.2 To illuminate this possible future, Korinek, who holds joint appointments at the University of Virginia’s Department of Economics and Darden School of Business, has introduced a novel economic framework that explains AI’s growing impact on the economy.
The New Stakeholders in the Economy
Computer systems were initially designed as support tools for decision-makers. Not anymore. “Machines and computer programs,” says Korinek, “are no longer just objects and tools. They increasingly take on agency and play an independent role in the economy. They behave like Artificially Intelligent Agents (AIAs).”
Artificially Intelligent Agents (AIAs)
To be considered an intelligent agent, a computer system must have the following properties:
- Autonomous behavior
- The ability to sense its environment and other agents
- The ability to act upon its environment
- Goal-driven behavior
Intelligent agents may also learn or use knowledge to achieve their goals.
See Michael J. Wooldridge, “Intelligent Agents,” in Gerhard Weiss (ed.), Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence (Cambridge, MA: MIT Press).
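The four properties above can be sketched as a minimal sense-decide-act loop. The thermostat scenario and all names below are purely illustrative assumptions, not drawn from Korinek’s work or Wooldridge’s text:

```python
# A minimal sketch of an intelligent agent exhibiting the four listed
# properties: autonomy, sensing, acting on the environment, and
# goal-driven behavior. The thermostat example is illustrative only.

class Environment:
    def __init__(self, temperature):
        self.temperature = temperature

    def apply_heating(self, on):
        # The agent's action changes the environment's state.
        self.temperature += 0.5 if on else -0.5


class ThermostatAgent:
    def __init__(self, goal_temperature):
        self.goal = goal_temperature  # goal-driven behavior

    def sense(self, env):
        # The agent senses its environment.
        return env.temperature

    def act(self, env):
        # Autonomous decision: no human intervenes in the loop.
        reading = self.sense(env)
        env.apply_heating(on=reading < self.goal)


env = Environment(temperature=18.0)
agent = ThermostatAgent(goal_temperature=21.0)
for _ in range(10):  # the agent runs step after step on its own
    agent.act(env)
print(round(env.temperature, 1))  # settles near the 21.0 goal
```

Even this toy loop shows why such systems count as agents rather than tools: the decision each step is made by the program, not by an operator.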
Today, machine-learning algorithms determine a growing number of corporate decisions. “In some sense, AI algorithms are one of the most important contributors to what a company does,” says Korinek. “They frequently are what makes or breaks a company and what gives it a competitive edge. So, a lot of the things that we used to say about employees are now true about AI algorithms.”
The investment firm Vanguard, for example, uses algorithms to provide customers with investment advice, including customized portfolio construction and tax-efficient investment selection.3 BP, a global producer of oil and gas, runs proprietary algorithms on its supercomputer to identify elusive oil reserves.4
As AIAs’ ability to act autonomously grows and they determine ever more corporate decisions, Korinek argues, they are on the verge of becoming significant stakeholders in the economy, with implications we have yet to appreciate.
The Limits of the Human-Centric Perspective
Korinek proposes that we move beyond the simplistic view of human-technology interactions in which humans have complete agency: they are always in control of the machines, and the machines exert no influence over their users.
Traditionally, “agency” has connoted a distinctive human ability and something that by definition only humans possess.5 This anthropocentric frame of reference, according to Korinek, is exactly what prevents us from understanding the scope and significance of AI’s impact on the economy. “We have this human-centric mindset,” says Korinek, “which blinds us to what is going on in AI nowadays and to how easily many distinctly human skills are in fact replaceable by machines.”
Because of this bias, most people see machines only as objects that humans own and control. According to Korinek, we’re overlooking AI’s growing influence over its users. “Algorithms are already manipulating humans,” says Korinek. “They tell us what to think, what to like, and even how to vote.”
Take Facebook, which for many people is a primary source of news and information. Its News Feed algorithm, designed to increase user engagement in order to sell more advertising, tends to show users content they already agree with. This can create a “filter bubble,” in which it appears to the user that most others share their views. This filtering is now believed to increase political polarization. As we have recently learned, the News Feed algorithm may have influenced the Brexit referendum in the U.K. and the 2016 election in the U.S.6
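The filter-bubble dynamic described above can be sketched in a few lines. This is not Facebook’s actual algorithm; it is a hypothetical ranker built on one stated assumption, that agreement predicts engagement:

```python
# Illustrative sketch of a filter bubble: a feed that ranks posts by
# predicted engagement, under the assumption that users engage more
# with content they agree with, ends up showing mostly agreeable posts.

posts = [
    {"view": "A", "text": "Post 1"},
    {"view": "B", "text": "Post 2"},
    {"view": "A", "text": "Post 3"},
    {"view": "B", "text": "Post 4"},
]

def predicted_engagement(post, user_view):
    # Assumption for illustration: agreeable content scores higher.
    return 0.9 if post["view"] == user_view else 0.2

def build_feed(posts, user_view, size=2):
    # Rank all posts by predicted engagement, then show only the top few.
    ranked = sorted(posts,
                    key=lambda p: predicted_engagement(p, user_view),
                    reverse=True)
    return ranked[:size]

feed = build_feed(posts, user_view="A")
print([p["text"] for p in feed])  # only view-"A" posts survive the cut
```

Nothing in the ranker mentions politics or opinion; the bubble emerges purely from optimizing a single engagement metric.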
Korinek suggests that we break free of the anthropocentric perspective. To study interactions between humans and AIAs on a symmetric footing, and to understand AI’s growing impact on the economy, he developed a novel economic framework that describes humans and AIAs in parallel as goal-oriented entities that each absorb scarce resources and contribute to the economy.
“What I’m proposing,” says Korinek, “is an evolutionary view of humanity and AI. We now have humans and machines as two different types of intelligent, agent-like entities living in symbiosis. They depend on each other and affect each other, co-evolving together.”
For more on Professor Korinek’s framework, please see "The Rise of Artificially Intelligent Agents: AI's Growing Effect on the Economy, Part 2," which discusses the mechanisms that enable Artificially Intelligent Agents (AIAs) to control resources, a development that may lead either to unprecedented prosperity for humans and AIAs or to an existential race between the two.
- 1. James Vincent, “Elon Musk Says We Need to Regulate AI Before It Becomes a Danger to Humanity,” The Verge, July 17, 2017, https://tinyurl.com/y9st9swo.
- 2. Anton Korinek and Joseph E. Stiglitz, “Artificial Intelligence and Its Implications for Income Distribution and Unemployment,” NBER Chapters (National Bureau of Economic Research, 2018).
- 3. Thomas Davenport and Rajeev Ronanki, “Artificial Intelligence for the Real World,” Harvard Business Review 96, no. 1 (2018): 108-116.
- 4. Tom Tyler, “The Surge in Oil & Gas Supercomputing,” EnterpriseTech.com, 2019, https://tinyurl.com/y2ljozcy.
- 5. Agency is generally viewed as a capacity to act, produce and anticipate a desired outcome within a particular context. Various theories connect agency to intentionality and to the ability to achieve one’s goals. See J. Caston,
- 6. Roger McNamee, “I Mentored Mark Zuckerberg. I Loved Facebook. But I Can't Stay Silent About What's Happening,” Time Magazine, 2019, http://time.com/author/roger-mcnamee/.