The hype has eclipsed the limitations of third-wave artificial intelligence

Nitzberg Mark, Seppälä Timo & Zysman John

The near-term potential of “artificial intelligence” is often overestimated.¹ While the potential may be exaggerated in discourse and company strategy, the risks and difficulties are real.

Today, AI refers colloquially to the combination of big data, increased computing capacity, and machine-learning algorithms, especially deep learning. Few AI researchers believe that this combination alone will lead to general artificial intelligence.² General AI (a system that can perform at human level in all cognitive tasks) is far beyond today’s narrow, third-generation AI.

Digital devices in our pockets give the appearance of human behavior: names like “Siri” and “Alexa” build on the anthropomorphic sway of the term AI itself.³ All of this feeds the hype that often eclipses the fundamental problems of third-generation AI. Relying heavily on past data and opaque algorithms, it has important failings. Algorithms that decide who gets a loan, a job, a medical treatment, or release from jail⁴ have by turns exhibited the biases of the past and yielded decisions whose logic cannot be explained, largely because they are based on millions of parameters adjusted over billions of training cycles.

Whatever the potential of these tools, the failings of the algorithms, along with the societal and economic disruption that has followed their widespread adoption,⁵ have usefully led to a growing number of initiatives around the globe to develop “responsible” and “trustworthy” AI, including the EU principles.⁶

Today’s AI software is reliably suited only to certain narrow applications. While its potential is therefore bounded, the risks it generates, though not our focus here, must be addressed. Governments and industry are devoting vast resources to applying data-driven AI to new areas, but the question remains whether it is even possible to make it trustworthy.

Some third-generation AI software has been designed, tested and deployed in settings with a high tolerance for error, such as suggesting products to customers in retail settings. But can today’s AI be applied more widely, in nuclear safety systems, automated securities trading, or central bank operations? Are we betting on the wrong horse for high-risk, general applications? It is becoming apparent that third-generation narrow AI software and applications are precarious, cryptic and often unreliable, which creates as much difficulty as potential.

In discussing the current development of AI, we advocate a fresh start: clarify what third-generation AI software is realistically suitable for and capable of; assess how, and to what extent, it relates to the development of the next generation of AI; and consider whether certain problems can be solved by applying alternative, more reliable technologies and software solutions.

What will the fourth generation of AI consist of? Whether or not it involves deep learning, it must have a deeper understanding of context and language, as well as of time, space and causality. It must adapt broadly to changes in the prevailing environment. Questions about the development of a useful third-generation AI, and its chances of success, have been raised for the last three years. We believe a critical review is in order.

 

[1] Typical of the press overestimating the current achievements with anthropomorphism: https://www.forbes.com/sites/bernardmarr/2019/11/11/13-mind-blowing-things-artificial-intelligence-can-already-do-today/, accessed on December 1, 2019

[2] The broader use of the term artificial intelligence covers far more than machine learning, and allows that additional breakthroughs could eventually lead to powerful intelligent systems capable of making better decisions than people in a broad range of settings. AI researchers project this occurring somewhere between 80 years from now and never.

[3] How different the debate might have been if the labels had been less exotic.

[4] https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/, accessed on December 2, 2019

[5] https://www.wired.com/story/ai-algorithms-need-drug-trials/, accessed on December 2, 2019

[6] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

 

More on AI in the new publication Diffusion of Artificial Intelligence Technologies through S&P500 Companies

 

For more information on artificial intelligence

Ali-Yrkkö Jyrki, Koski Heli, Mattila Juri & Seppälä Timo; (2019), BRIE-ETLA: Shaping the Future in the Era of Intelligent Tools: AI and Beyond, https://www.etla.fi/tutkimukset/brie-etla-2019-2022/

Agrawal Ajay, Gans Joshua & Goldfarb Avi; (2018), Prediction Machines: The Simple Economics of Artificial Intelligence, Harvard Business Review Press, Boston, MA

Hutson Matthew; (2018), Artificial intelligence faces reproducibility crisis, Science, Vol. 359, Issue 6377, pp. 725-726, DOI: 10.1126/science.359.6377.725

Marcus Gary & Davis Ernest; (2019), Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books, New York, NY