Mihai Nadin, Emeritus Ashbel Smith University Professor, University of Texas at Dallas. His career combines engineering, mathematics, digital technology, philosophy, semiotics, theory of mind, and anticipatory systems. He holds advanced degrees in Electrical Engineering and Computer Science and a post-doctoral degree in Philosophy, Logic, and the Theory of Science. His forthcoming book is Disrupt Science: The Future Matters (Springer, December 12, 2023).
In this article he puts into perspective the spectacular accomplishments of AI models and the resource consumption needed to achieve them. This leads him to formulate the following definition: Artificial entities could justifiably claim intelligence if, in executing a task, they would use as much energy or less, and as much data or less, than a living entity performing the same task.
Given that you can now talk to ChatGPT (that is, use a more intuitive interface to GPT technology), the so-called generative AI, some will ask: how promising, or how dangerous, is this? What is missing is the larger question: how consequential is it for our civilization? This is actually the question I have asked since my own involvement in AI began, going back to its first wave of success (and surviving the subsequent winters). I do not claim a specific technical contribution; but given my involvement with computer science and semiotics, I can claim that my broader perspective on machine intelligence and learning is my contribution. Let me put it in context.
Extinction or paradise?
The current infatuation with artificial intelligence is indicative of the level of competence of those caught in the headlights of a fast-moving, still unidentified, flying object. The headlines range from "Mitigating the risk of extinction from AI"[1] to promises of a world free of disease (cancer[2], in particular) and of unlimited prosperity. No more need for lawyers (thank God!), no more need for doctors, not to mention truck drivers and Hollywood screenwriters. AI is everywhere, most of the time in stealth mode, and quite successful in every form of surveillance (of which there are many).
MIT Sloan Executive Education bluntly declares: "The hype surrounding innovative AI technologies is here to stay. Make sure you are able to capitalize on it."[3] Contrast this with: "Bizarre AI-generated products are in stores. Here is how to avoid them."[4] There are already experts on generative AI, as well as on deepfakes. But copying original art or plagiarizing a book is not the same as impersonating someone, let alone playing God.
Capitalizing on something that might extinguish humankind (67% of those active in AI believe that[5]) or, as demagogues posing as AI experts promise, might deliver paradise on Earth, goes beyond petty thievery. The new AI ventures are capitalized in the range of trillions of dollars, unprecedented in every respect. And this despite the fact that the wonder is a large-scale rehash of the Emperor's New Clothes. The "weavers" of the suit, supposedly invisible to anyone stupid or unfit for their position, are astute computer geeks riding the wave of large, very large, extremely large, hyper-large data processing. Their view of intelligence, which they are supposed to deliver in artificial form, is devoid of knowledge but drowning in data. In fact, science has been replaced by measurement without understanding of the data it generates.
The Chimera of AGI
The focus is on quantifying, i.e., attaching numbers to everything. This is the obsession with data, to the detriment of understanding the meaning of what is measured. The absence of a scientific foundation explains why the aim of the AI geeks is what is called "artificial general intelligence" (AGI). The attractive illusion of an intelligence that can do everything (describe protein folding, cure toenail fungus, speculate in the stock market, replace or advise governments, perform surgery, and, most importantly, save humankind) explains the unlimited funding the field enjoys. With the magic of AGI ("we are so close to it!" goes the claim), there would be no more need for task-specific programs for winning at chess or Go, for interpreting X-rays, for autonomous driving or piloting, for writing poetry, or for solving math problems. Humankind saved at the cost of extinguishing life on planet Earth? The Faustian bargain comes to mind.
In 1983, only 40 years ago, Howard Gardner documented a variety of types of "intelligence" in his Frames of Mind: The Theory of Multiple Intelligences. Chances are that his book was "sucked in" as training data for GPT or some variation of the transformer model. The intelligence of a football player, a singer or violinist, a painter, an investor, a cobbler (or whoever makes shoes today employing robots) is different from that of a programmer. The Chief AI Scientist at Meta, Yann LeCun, observed that "humans don't need to learn from a trillion words to reach intelligence"[6]. If no one else comes to mind, think of babies and children. Actually, the science whose absence from AI we should deplore, instead of wishing to regulate it, is to a large extent available. In short: two mathematicians, Hilbert and Ackermann, formulated the so-called Entscheidungsproblem: Is there a machine that can decide whether any given mathematical statement can be proved right or wrong? Two of the scientific geniuses of the last century came up with answers. Turing demonstrated the impossibility of building such a machine: no general mechanical procedure can decide, for every statement, whether a proof of it exists. This in itself should inform those who focus on GENERAL intelligence, the goal of AGI, that it is, by its nature, a chimera. If even one well-defined task, deciding a mathematical question, is not achievable, forget the goal of doing everything intelligently. But there is also Gödel: there are undecidable entities. This means that we cannot describe them completely and consistently (i.e., without contradictions). A general intelligence would have to be decidable. This is as impossible as squaring the circle, trisecting an angle, doubling a cube, or representing the square root of 2 as a rational fraction a/b. No matter what new technologies are developed.
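To make Turing's argument concrete, here is a minimal sketch, in Python, of the diagonalization at its core. It is an illustration under assumptions, not anyone's published code: the names `would_halt` and `contrarian` are invented for this sketch, and the point is precisely that the hypothetical universal decider contradicts itself.

```python
# A minimal sketch of Turing's diagonalization argument.
# Assume, for the sake of contradiction, that a universal decider exists.
# The function `would_halt` is hypothetical: no correct, always-terminating
# implementation of it can exist. That is the theorem.

def would_halt(program_source: str, argument: str) -> bool:
    """Hypothetical oracle: True iff `program_source`, run on `argument`, halts."""
    raise NotImplementedError("No such general decider can exist")

def contrarian(program_source: str) -> None:
    """Does the opposite of what the decider predicts about a program run on itself."""
    if would_halt(program_source, program_source):
        while True:   # decider says "halts" -> loop forever
            pass
    # decider says "loops forever" -> halt immediately

# Feeding `contrarian` its own source code yields a contradiction:
# if the decider says it halts, it loops; if it says it loops, it halts.
# Hence no general decider exists.
```

The same self-reference that defeats the hypothetical decider is what makes a fully general, always-correct problem solver a logical impossibility, not merely an engineering challenge.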
Machine learning is doomed to consume more and more energy
Are partial artificial intelligence applications possible? Of course, and some are convincing. We live with them. Elon Musk announced a trial of Neuralink for treating those affected by quadriplegia (as the late Stephen Hawking famously was). Plus: his Grok, an answer to the GPT frenzy, trained on data from what used to be Twitter (now X), introduces the style of The Hitchhiker's Guide to the Galaxy. But even in popular success there is a lot to be concerned about. Within the brute-force algorithmic computation model through which AI is carried out, ever greater amounts of data are processed. It takes not only a lot of data but also a lot of energy to do it. So far, the makers of more powerful computation engines, such as Nvidia, are the ones who capitalize big on AI.
The impressive technological performance of machine learning is, in the absence of knowledge, doomed to consume more and more energy. To win a game of chess at the expense of the energy a small town consumes in a week is unsustainable. A ChatGPT inquiry (or, for that matter, one to Google's Bard or Microsoft's Bing) takes ridiculously high amounts of resources, as Sajjad Moazeni of the University of Washington recently calculated[7]. No living being, from the smallest organism to the largest animal, consumes more energy than it takes to acquire what it needs to survive. The energy consumed by the minuscule bar-tailed godwit during migration is acquired through metabolism[8]. Intelligence, in the form of anticipatory action, guides the living, in all its known forms of existence, in acquiring what it takes to prosper. Human beings go beyond survival: our goal is to prosper. Unfortunately, sometimes at the expense of others. Or by borrowing from the future.
Figure – Energy consumption comparison of AI models. Source: Towards Data Science
With all this in mind, I formulated a precise criterion for defining intelligence: Artificial entities could justifiably claim intelligence if, in executing a task, they would use as much energy or less, and as much data or less, than a living entity performing the same task.
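The criterion can be written down as a simple predicate. The following Python sketch is illustrative only: the class and function names are invented here, and the sample figures (a human brain running on roughly 20 W and thinking for ten seconds, about one watt-hour assumed per chatbot query, placeholder data volumes) are assumptions made for the sake of the comparison, not measurements.

```python
# A minimal sketch of the criterion stated above.
# All names and the sample figures below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TaskCost:
    energy_joules: float   # energy spent performing the task
    data_bits: float       # data consumed to perform the task

def qualifies_as_intelligent(artificial: TaskCost, living: TaskCost) -> bool:
    """True only if the artificial entity uses no more energy AND no more data
    than a living entity performing the same task."""
    return (artificial.energy_joules <= living.energy_joules
            and artificial.data_bits <= living.data_bits)

# Hypothetical example: answering one everyday question.
# A brain running on about 20 W for ten seconds spends roughly 200 J.
# The chatbot figures are placeholders (about 1 Wh = 3,600 J assumed per query).
human = TaskCost(energy_joules=200, data_bits=1e6)
chatbot = TaskCost(energy_joules=3_600, data_bits=1e12)

print(qualifies_as_intelligent(chatbot, human))   # False under these assumptions
```

Under such assumed numbers, the artificial entity fails the test by orders of magnitude.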
As spectacular as the accomplishments described as AI are, none qualifies as intelligent; they qualify as high-performance data processing, sometimes called brute-force computation. A start-up that tries not to process ever more data at whatever price, but rather to define the minimum of data necessary to achieve a desired goal, would reflect an awareness of sustainability. Such awareness is sorely missing in the hype of our days.
Also join the GEAB Community on LinkedIn for more discussions on this topic.
________________
[1] Source: As expressed by Ursula von der Leyen in her State of the Union speech. European Commission, 13/09/2023
[2] Source: ABC News, 21/07/2023
[3] Source: Central Nebraska Today, 02/10/2023
[4] Source: Washington Post, 18/09/2023
[5] Source: Central Nebraska Today, 02/10/2023
[6] Source: Twitter – Yann LeCun, 28/03/2023
[7] Source: University of Washington, 27/07/2023
[8] Gould JL, Grant Gould C (2012) Nature’s Compass: The Mystery of Animal Navigation. Princeton NJ: Princeton University Press.