{“Do you want to play chess?”
 “No, I'm bored with chess.
 Let's talk about poetry.”
 Gödel, Escher, Bach,
 Douglas Hofstadter.}

Humanity beware: conscious artificial intelligence is imminent and could subdue us in the foreseeable future! That, at least, is what AI doomers take to be the essence of the warnings issued by leaders in the AI industry, who plead for regulation before it is too late. Could they be right? Could machines really overtake us and create a science-fiction dystopia?

Of course, prophesying conscious AI does not make it come to pass. In this post I argue that when a leading figure warns of the dangers of AI, the warning likely concerns malevolent applications or accidental software bugs, rather than AI becoming self-aware and perceiving us as a threat. Whether a machine could be consciously intelligent like a person is a fascinating question, addressed by many great minds. Needless to say, they do not agree.

Are we anywhere near developing self-aware AI? The question has become topical with the popularity of large language models (LLMs) like ChatGPT. Allegedly, an LLM can already perform well on a verbal IQ test. That would be quite an achievement, considering that such a test is designed on the assumption that its subjects are consciously intelligent. An LLM is a particular kind of deep learning neural network, synthesizing short to medium-sized narratives that a person perceives as coherent. Not long ago I read a commentary that ChatGPT prose tends to be dull, repetitive and not at all appealing or enticing. Creating a well-written, catchy story is a human ability that this LLM apparently cannot mimic.

When the neural network formalism was introduced in the 1960s, somebody thought it might model how the human brain works, and the name took hold. Although it remains an open question whether this idea is correct, modern literature frequently presents it as fact, without criticism. From a mathematical perspective, a neural network such as an LLM trained on an immense corpus of text is an advanced automaton that mechanically computes the most probable next words in a sentence. Nothing more, nothing less.
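
To make that "most probable next words" claim concrete, here is a minimal sketch. It is emphatically not how a production LLM works (that involves a deep transformer network with billions of fitted parameters), but the same principle reduced to a toy bigram model over a made-up corpus: count which word follows which, and always emit the most frequent continuation.

```python
from collections import Counter, defaultdict

# Toy 'language model': count which word follows which in a tiny, made-up corpus,
# then always continue with the most frequent successor. A real LLM estimates
# such probabilities with a deep network trained on a vast corpus, but the
# principle is the same: compute a probability distribution over the next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def continue_text(word, length=5):
    words = [word]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # most probable next word
    return " ".join(words)

print(continue_text("the"))  # 'the cat sat on the cat'
```

The output is grammatical yet vacuous, which is precisely the point: the automaton strings together probable words without any notion of meaning.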

Generative AI such as an LLM is presented by certain individuals as the place to go for information. But can we trust LLM output, or must we scrutinize it? For the sake of the argument, assume that we have a science question and consult an LLM trained on a large body of scientific literature. We should consider whether the question we pose relates to material that was in the LLM's training set. Should the LLM interpolate, we need to realize that, according to research, only about one-third of scientific literature stands the test of time. When a neural network extrapolates, it may hallucinate: confidently produce quasi-sensible fallacies. Curiously, AI is not unlike an oracle from classical antiquity, in the sense that human intelligence is needed to validate its output.
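
The interpolation versus extrapolation distinction can be illustrated with a deliberately simple stand-in for a neural network: a polynomial fitted to made-up noisy samples of a known function. Inside the calibrated range the fit is reasonable; outside it, the model confidently reports values with no basis in the data, which is the flavour of hallucination meant here.

```python
import numpy as np

# Fit a cubic polynomial to noisy samples of sin(x) on [0, 3] (made-up data).
# Inside the sampled range the fit interpolates reasonably; outside it, the
# polynomial 'hallucinates' values that look just as confident but are nonsense.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 3.0, 20)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=3)   # calibration ('training')
model = np.poly1d(coeffs)

for x in (1.5, 3.0, 6.0, 9.0):                 # inside, at the edge, far outside
    print(f"x = {x:4.1f}   model = {model(x):8.2f}   truth = {np.sin(x):6.2f}")
# Near x = 1.5 the model tracks sin(x); at x = 9 it is wildly off,
# yet it reports a value just as confidently.
```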

Another example of a neural network application is translation software. Research into machine translation dates back more than seventy years, yet today many would agree that understanding context is still not a strong point of such software. Coming back to the verbal IQ test mentioned above: should an LLM really score better than the average person, a valid question would be what such a test actually probes, given that the test subject apparently needs to understand neither the questions nor the answers.

Part of the confidence in AI comes from its terminology, which is often anthropomorphic. A neural network is a mathematical formalism for fitting any input to any output. Training is calibration with samples for which the results are known. Hallucinations are spurious results, for instance from extrapolation outside a calibration. An implementation of an algorithm is a piece of software. An important workhorse in machine learning is multivariate regression, which in turn is applied linear algebra. Deep learning is the construction of a neural network more complex than its elementary form, multivariate regression. Considering these nuts and bolts of AI, expectations for the development of a sentient algorithm may be tempered somewhat.
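
As a small illustration of that last point, the elementary form, multivariate regression, is indeed nothing but applied linear algebra: "training" amounts to solving a least-squares system. The numbers below are made up for the sketch.

```python
import numpy as np

# Multivariate linear regression as plain linear algebra: given calibration
# samples X (inputs) and y (known results), find weights w minimizing |Xw - y|.
# Deep learning generalizes this idea to nonlinear, layered fits.
X = np.array([[1.0, 2.0],               # made-up calibration samples
              [2.0, 0.5],
              [3.0, 1.5],
              [4.0, 3.0]])
y = np.array([8.0, 5.5, 10.5, 17.0])    # known outcomes for those samples

X1 = np.column_stack([X, np.ones(len(X))])   # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)   # least-squares 'training'

print("fitted weights:", np.round(w, 3))                             # close to [2, 3, 0]
print("prediction for [2.5, 2.0]:", np.array([2.5, 2.0, 1.0]) @ w)   # about 11
```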

Interestingly, it is not only AI terminology that is anthropomorphic. We seem to presume that sentient AI would be a lot like us. From that perspective, the fear of AI should make one reflect on what this fear teaches us about ourselves. Whether a sentient AI would really be like us is, however, a fascinating subject. For instance, how would a sentient AI answer an apparently straightforward question such as 'what is your purpose'? Would it understand individualism, and that 'your' could refer to either a drone or a hive? If it were an instinctive entity, could it even understand what 'purpose' means? What communication would sentient AIs evolve, and would the Sapir-Whorf hypothesis apply to them? In fact, sentient AI could be completely alien to us.

A central question is whether AI could become conscious at all. The hypothetical existence of self-aware machines poses important philosophical questions. Are we machines, and do we have free will? Could we upload ourselves to the cloud? Some, like Nobel laureate Roger Penrose, are convinced that AI could never reach our level, based on quantum-mechanical arguments implying that the brain is not algorithmic and can therefore solve problems a machine cannot. Moreover, consciousness would be the result of quantum vibrations in neurons. According to Penrose, neither a conventional nor a quantum computer could emulate a human brain; at best it could simulate one. Others use different arguments to defend the position that the brain may work in an algorithmic way after all. The matter is clearly far from settled.

For machines to subdue us they would need to be sentient; otherwise it would be humanity shooting itself in the foot. Whether AI could outwit us is another question. Debatably, AI does not need to emulate human intelligence in order to take us in. A modern chess program defeats most human players; in that sense AI arguably is smarter than most of us, at least at playing chess. However, such software is not actually playing chess, since the next move in a given position is computed by an electronic abacus. There is no shame in losing to a good chess program, just as there is no shame in being unable to outrun a racing bike. If software could autonomously publish an original work on endgame studies, we should perhaps reconsider, but do not expect such a publication anytime soon.
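
To show what "computed by an electronic abacus" amounts to, here is a minimal game-tree search (negamax) for a far simpler game than chess: a single-heap variant of Nim, assumed here purely for the sake of the example. A real chess engine applies the same mechanical principle with vastly more elaborate search and evaluation; the "chosen" move is simply the one with the best computed score.

```python
from functools import lru_cache

# Toy game for the sketch: a heap of sticks, each turn a player removes 1-3,
# whoever takes the last stick wins. Negamax assigns each position a value
# for the player to move (+1 win, -1 loss) by exhaustive search.
@lru_cache(maxsize=None)
def negamax(sticks):
    if sticks == 0:
        return -1   # the previous player took the last stick: we have lost
    return max(-negamax(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    # Pick the move with the highest score: pure computation, no 'insight'.
    moves = [take for take in (1, 2, 3) if take <= sticks]
    return max(moves, key=lambda take: -negamax(sticks - take))

print(best_move(10))   # 2: leaves 8 sticks, a multiple of 4, a lost position
```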

In conclusion, as with many important questions, we cannot yet confidently answer whether a machine could be conscious or sentient like us. Hofstadter's quote at the top of this post can be interpreted either as an expectation for the future or as an intellectual jest. You should evaluate the arguments yourself to determine your position in the discussion, and not blindly follow someone else's convictions. Gathering information to form a well-thought-out opinion is, obviously, a characteristic of a truly intelligent, self-aware entity.



