{"Do you want to play chess?"
 "No, I'm bored with chess.
 Let's talk about poetry."
 Gödel, Escher, Bach,
 Douglas Hofstadter.}

Humanity beware - conscious artificial intelligence is imminent and could subdue us in the foreseeable future! This, at least, is how AI doomers perceive the warnings voiced by leaders in the AI industry, who plead for regulation before it is too late. Could they be right? Could machines really overtake us and create a science-fiction dystopia?

Of course, prophesying conscious AI does not make it come to pass. In this post I want to argue that when a leading figure warns of the dangers of AI, the warning likely concerns malevolent application or accidental software bugs, rather than AI becoming self-aware and perceiving us as a threat. Whether a machine could be consciously intelligent like a person is a fascinating question that is being addressed by many great minds. Needless to say, they do not agree.

Are we anywhere near developing self-aware AI? The question has gained urgency with the popularity of LLMs, Large Language Models. Allegedly an LLM can already perform well in a verbal IQ test. That would be quite an achievement, considering that such a test is designed assuming conscious intelligence in its subjects. An LLM is a particular deep-learning neural network that synthesizes short-to-medium-sized narratives a person perceives as coherent. Not long ago I read a commentary that LLM-generated prose tends to be dull, repetitive and not at all appealing or enticing. Creating a well-written, catchy story is a human ability that an LLM apparently cannot yet mimic. Also, since an LLM does not understand the text it processes, it is unlikely to handle the creative use of ambiguity inherent in human language - for example, do not expect an LLM to have a sense of humour.

When the formalism was introduced in the 1960s, a neural network was thought to model the inner workings of the human brain. Modern literature frequently presents this as fact, although whether the analogy holds is as yet an open question. From a mathematical perspective, a neural network such as an LLM trained on an immense corpus of text is an advanced automaton that mechanically computes a probable continuation of a piece of text, nothing more and nothing less. Is this what we have reduced ourselves to - machines executing algorithms? We will return to this question further on in this post.
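
The idea of mechanically computing a probable continuation can be illustrated in miniature. The sketch below uses a bigram model - counting which word follows which in a toy corpus, then greedily appending the most frequent successor. The corpus and function names are hypothetical illustrations; a real LLM works on sub-word tokens, billions of parameters and vastly more data, but the principle of continuing text by probability is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real LLMs train on billions of tokens.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word: a bigram model,
# the simplest possible "probable continuation" machine.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps):
    """Greedily append the most frequent next word, step by step."""
    out = [word]
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no known successor: stop generating
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("mat", 2))  # → mat and the
```

The automaton produces locally plausible text without any notion of meaning - which is precisely the point of the mathematical perspective above.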

Generative AI such as an LLM is presented by some as the place to go for information. But can we take LLM output at face value, or must we scrutinize it? For the sake of argument, assume we have a science question and consult an advanced LLM trained on a large body of scientific literature. Consider whether the question we pose - our prompt - relates to material that was in the LLM's training set. If it does and the LLM interpolates, we need to realize that, research suggests, only one-third of scientific literature will stand the test of time. Thus the LLM output may be flawed because of the inherent quality of the training data. When a neural network extrapolates it may hallucinate - confidently produce quasi-sensible fallacies. In either case human intelligence is needed to interpret and validate AI output. Curiously, from this perspective AI is not unlike an oracle from classical antiquity.

Another example of a neural-network application is translation software. Research into machine translation dates back more than seventy years; today, however, many would agree that understanding context is still not a strong point of such software. Coming back to the verbal IQ test above: should an LLM really score better than the average person, a valid question would be what such a test actually probes, given that the test subject apparently need understand neither the questions nor the answers.

Part of the confidence in AI comes from its terminology, which is oftentimes anthropomorphic. A neural network is a mathematical formalism for fitting any input to any output. Training is calibration with samples for which the results are known. Hallucinations are spurious results, for instance from extrapolation outside a calibration range. An implementation of an algorithm is a piece of software. An important workhorse in machine learning is multivariate regression, which in turn is applied linear algebra. Deep learning is constructing a neural network more complex than its elementary form, which is multivariate regression. When one considers these nuts and bolts, expectations for the development of a sentient algorithm may be tempered somewhat.
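
The claim that the elementary neural network is multivariate regression can be made concrete. A single linear neuron computes y = X·w + b, and "training" it on exact data is nothing but solving a least-squares problem. The synthetic data below is a hypothetical illustration: the fit recovers the true weights by plain linear algebra, no learning mystique required.

```python
import numpy as np

# Hypothetical synthetic data: outputs are an exact linear map
# of the inputs, y = X @ w_true + b_true.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([2.0, -1.0, 0.5])
b_true = 0.3
y = X @ w_true + b_true

# "Training" one linear neuron (no activation function) is exactly
# multivariate regression: append a bias column and solve least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
params, *_ = np.linalg.lstsq(A, y, rcond=None)
w_fit, b_fit = params[:3], float(params[3])

print("recovered weights:", np.round(w_fit, 3), "bias:", round(b_fit, 3))
```

Stacking many such neurons with nonlinear activations between them is what turns this calibration exercise into deep learning - more complex, but built from the same linear-algebra workhorse.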

Interestingly, not only AI terminology is anthropomorphic. We seem to presume that sentient AI would be a lot like us. From that perspective, the fear of AI should make one reflect on what this fear teaches us about ourselves. Whether a sentient AI would really be like us is a fascinating question, however. For instance, how would a sentient AI answer an apparently straightforward question such as: what is your purpose? Would it understand individualism, and that "your" could refer to either a drone or a hive? And if it were an instinctive entity, could it even understand what purpose means? What communication would sentient AIs evolve, and would the Sapir-Whorf hypothesis apply to them? In the end a sentient AI, if it would one day exist, could be completely alien to us.

Let us now come back to the earlier question whether we ourselves are machines, and the related topic whether AI could be conscious at all. The hypothetical existence of self-aware machines poses important philosophical questions. Are we machines, and do we have free will? Could we upload ourselves to the cloud? Some, like Nobel laureate Roger Penrose, are convinced that AI could never reach our level, based on quantum-mechanical arguments implying that the brain is not algorithmic and can therefore solve problems a machine cannot. According to Penrose, a conventional or quantum computer could not emulate a human brain but at best simulate it, which explains his preference for the term Simulated Intelligence over Artificial Intelligence. Others use different arguments to defend that the brain may work in an algorithmic way after all. The matter is clearly not at all settled.

For machines to subdue us they would need to be sentient; otherwise it would be humanity shooting itself in the foot. Whether AI could outwit us at particular tasks is another question. For example, a modern chess program defeats most human players. Hence we could debate whether mechanically computing chess moves makes a machine smarter than most of us - at playing chess. There is of course no shame in being beaten by a chess program, just as there is no shame in being outrun by a racing bike. If software would one day autonomously author an original work on endgame studies, we should reconsider, but do not expect such a publication anytime soon.
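
"Mechanically computing chess moves" has a precise core: minimax search, where one player maximizes a score and the opponent minimizes it. The hand-made game tree below is a hypothetical toy, and real engines add alpha-beta pruning, evaluation heuristics and enormous depth - but the mechanical principle is this simple recursion.

```python
# Minimal minimax over a hypothetical hand-made game tree:
# inner nodes are lists of child positions, leaves are scores
# from the maximizing player's point of view.
def minimax(node, maximizing):
    if isinstance(node, int):
        return node  # leaf: the position's score
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for us, two replies each; the opponent minimizes.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # → 3
```

Note the result: the tempting branch containing 9 is avoided, because a minimizing opponent would steer it to 2. The machine "outwits" us here by exhaustive calculation, not by understanding chess.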

In conclusion, as with many important questions, we cannot yet confidently answer whether a machine could be conscious or sentient like us. Hofstadter's quote at the top of this post can be read either as an expectation for the future or as an intellectual jest. You should weigh the arguments yourself to determine your position in the discussion, and not blindly follow someone else's convictions. Gathering information to form a well-thought-out opinion is, after all, a characteristic of a truly intelligent, self-aware entity.

Published in Essays

© 2002-2026 J.M. van der Veer (jmvdveer@xs4all.nl)