“What astonishes me is how easily we humans are persuaded by content that is generated by a machine.”

Many of those following the current media discourse are left with a sense of alarm: the message seems to be that advances in the development of artificial intelligence could cause these systems to spiral out of control. At the same time, experts are voicing concern about the potential influence of these technologies on the stability of democracies, given that images can now be manipulated in a matter of seconds. We asked Wolfgang Faber, professor at the Department of Artificial Intelligence and Cybersecurity, for his take on the new AI tools and the state of research.

AI technologies are currently omnipresent in the media. Why do you think this is the case?

The term has been in use since the 1950s, and the basic notion that machines might think or act in a manner resembling human beings has been around for much longer. What we are witnessing at present is a breakthrough in availability: these technologies have proliferated and are far more accessible today. Anyone can download image manipulation software, chat with a bot or use AI to complete their homework. Much of this was possible in the past, but it wasn’t widely known.

More than 1,000 people, including Elon Musk, recently called for a freeze on the development of AI technologies. What is your view on this?

I find the move difficult to understand. Even if you wanted to introduce this kind of pause for reflection, it is unclear how it would be enforced. Of course, you can stop funding, but I really don’t see the point in doing that either.

Thinking about recent AI technology developments, what surprises you most?

I am amazed that the natural language used by a bot like ChatGPT feels so authentic. It’s truly remarkable. Natural language processing has long been a major weakness in the field; now we can see it working well. What astonishes me even more is how easily we humans are persuaded by content that is generated by a machine.

What is the alleged threat of artificial intelligence – the machine that is capable of something that seems uncanny or the human being who is unable to fully comprehend it?

I think the danger lies in the way the technology is used, in the pragmatics of it. Let’s consider the example of image manipulation: this is not new. In the past, manipulating an image took longer and required a fair amount of skill and patience. Today, the process is significantly faster. The technology is more readily available and therefore more likely to present a danger when used irresponsibly. This is where we need ethically grounded systems of regulation.

Time and again, it is also reported that the underlying mechanisms of artificial intelligence are poorly understood.

I don’t believe that is a danger in itself. After all, many motorists have no technical understanding of what is going on under the bonnet of their car, and I’m sure that’s not an issue.

But at least the people who build cars understand how they work. I find that reassuring.  

The engineers who build AI also know what is going on in terms of the technology. And things haven’t yet taken on a life of their own. The idea that at some point in the future AI will design other, even more advanced AI seems inconceivable at present. I see no immediate danger there.

Your own research lies in the field of artificial intelligence. What questions are you particularly interested in?

We are attempting to determine what knowledge is and how we as humans develop intelligence. A closer look reveals that the very definition of intelligence is fraught with difficulty. Our assumption is that it is a collection of various skills. In the field of AI, we are discovering that it is a tough challenge to reconcile these different aspects.

And what is knowledge?

Knowledge is also a skill. When we know something, we can weigh up our actions in advance and adjust them in line with what we know. My work deals with understanding how knowledge is represented and how conclusions are drawn from it.

Knowledge is also linked to learning.  

Indeed, but that is not among my core areas of focus. Learning in AI systems does, however, feature very prominently in the media at the moment. We can observe interesting phenomena here, because learning works in very different ways: How do children learn to ride a bicycle, and how do we learn to tell edible mushrooms from other fungi? The former is learned through experience, but the latter is best not learned in the same way. So how do AI systems learn? Consider the much-cited example of the chihuahua and the blueberry muffin: in some photos it is difficult to distinguish the dog from the sweet treat, even for the human brain. We do eventually get it right, but we don’t understand how we manage it. It is all the more difficult for a technology to learn this distinction.
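To make the “learning from experience” route Faber describes concrete, here is a minimal, purely illustrative sketch in Python with PyTorch. It is not code from Faber’s group: the two classes, the ResNet-18 backbone and all names are assumptions chosen for illustration. A pretrained network is given a fresh two-way output and nudged to fit a batch of labelled photos; it acquires the distinction only from examples, with no explicit rule for what makes a dog a dog.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a fresh two-class head: 0 = chihuahua, 1 = muffin.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a batch of labelled photos (real training would load images).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

# One gradient step: the weights move to better fit the examples seen so far.
# The model learns the distinction purely from experience, which is why
# genuinely ambiguous photos can still fool it.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```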

Is it possible to measure whether a system (or a person) has understood something?

It is very difficult. We are all familiar with that feeling when things suddenly become clear to us, when the proverbial penny drops. Even with my own students, I cannot always tell for certain whether someone has really understood something. Applying this to AI technologies, many problems will remain unresolved until we fully comprehend what understanding is.

Do you feel that ChatGPT understands you?

Not really. (Wolfgang Faber shares a chat log with ChatGPT on the following question: “Jimmy lives in a house with his two brothers and his parents. How many people are in Jimmy’s family?” See insert.)


Faber: Jimmy lives in a house with his two brothers and his parents. How many people are in Jimmy’s family?

ChatGPT: There are a total of six people in Jimmy’s family, including himself, his two brothers, and his parents.

Faber: Why six people?

ChatGPT: There are six people in Jimmy’s family because Jimmy has two brothers and his parents. Including Jimmy, that makes a total of six people in the family.

Faber: I don’t understand why 6.

ChatGPT: I apologize for the confusion earlier. I assumed that Jimmy had two brothers and his parents, but since the prompt does not provide that information, it’s best to be more general.
In general, a family can have any number of members, depending on their individual circumstances. For example, a family may consist of a single parent and a child, two parents and multiple children, grandparents living with their adult children and grandchildren, and so on.
Therefore, without additional information about the size of Jimmy’s family, I cannot give you a specific number. However, I hope this explanation helps.

Faber: The prompt does say two brothers and parents, and you can assume that there are two parents. So how many people are there?

ChatGPT: I apologize for my earlier confusion. Based on the information provided in the prompt, Jimmy lives in a house with his two brothers and his parents. Therefore, there are a total of five people in Jimmy’s family, including himself, his two brothers, and his parents.

Faber: Why did you say 6 people before?

ChatGPT: I apologize for my earlier mistake. I misunderstood the prompt and assumed that Jimmy had two brothers, in addition to his parents. However, the prompt clearly states that he lives with his two brothers and his parents. Therefore, there are five people in Jimmy’s family, including himself, his two brothers, and his parents.

Faber: I do not see a contradiction to your assumption.

ChatGPT: You are correct that there is no explicit contradiction in assuming that Jimmy has two brothers and two parents, as this is a common family structure. However, it is important to clarify the information provided in the prompt to avoid any misunderstandings.
Given that the prompt states that Jimmy lives with his two brothers and his parents, it is safest to assume that there are only two parents in the family, as this is the most common family structure. Therefore, there are five people in Jimmy’s family, including himself, his two brothers, and his two parents.
I hope this clears up any confusion, and I apologize for any earlier mistakes.

Some years ago, there was news coverage of an AI programme that was supposed to be able to pass school tests set for 8th graders in the US. I tried a simple question that even 5-year-olds can solve. Here, too, we can see that the programme expresses itself impressively and is grammatically accurate. As far as the content is concerned, however, the answers do not always make sense. It is also a fascinating experiment to ask this kind of programme about oneself. When I ask the question “Who is Wolfgang Faber?”, the result is a CV that resembles mine but contains numerous errors. In other words, one of the weaknesses inherent in such systems lies in the underlying data and in the notion of time: What is present? What is past?

Can you distinguish a term paper written using ChatGPT from self-penned work?

I do not presume to claim that I would get this right every time. Obvious inconsistencies are easily identified, of course, but I am not sure that coherent texts would be identified with similar ease. Besides, there is a chance that much of this will improve considerably in the near future.

A study commissioned by a bank recently found that journalists and writers, among others, belong to professions that are under threat from emerging AI technologies. Yet both require originality at the very least, and maybe even a sense of humour. Is this something AI can excel at?

Determining whether a joke is funny or not, and what type of humour appeals to whom, is very complicated. I don’t know whether AI can tell good jokes; I’ve never tested that. In journalism, I can imagine that a piece of AI technology might do a good job of condensing lengthy texts into brief summaries. But, to be honest, that is the boring part of journalism. When it comes to the exciting tasks, for example carrying out an investigation, I have no idea how this might work from a technical point of view. I don’t think that earning a livelihood as an author is in jeopardy.

What approaches is AI research taking in the development of new technologies?

One approach is based on the assumption that we need to build something resembling a brain, which then also operates in a similar way to the human brain. The corresponding projects are highly complex, and much of what we expect from the result is still rather obscure: Do we really know all we need to know about the human brain in order to build something that is brain-like? Then, what happens once we have built a technical brain? Will it think? And if so, how do we ascertain that? The other view takes a more bottom-up approach and considers specific problems for which researchers hope to find better solutions with the aid of artificial intelligence. We believe that it would be expedient to link the two strategies more tightly to each other.

What skills do you teach your children to prepare them for a world where AI features more prominently?

It is important that they get to experience these technologies in practice. Shielding people only makes the ‘forbidden’ thing seem even more appealing. It is vital to have a conversation about how these technologies work. At the same time, we can jointly examine the advantages and disadvantages of using them. And we mustn’t forget to ask: What is it that a technology cannot do? In research in particular, this is often the most fascinating question.

About the person


Wolfgang Faber is Professor of Semantic Systems and Head of the Department of Artificial Intelligence and Cybersecurity at the University of Klagenfurt. From 2013 to 2018, he held the position of Professor of Artificial Intelligence at the School of Computing and Engineering at the University of Huddersfield. Between 2006 and 2013, Faber worked at the Department of Mathematics at the University of Calabria as an associate professor. At present, Wolfgang Faber is a board member of the Austrian Society for Artificial Intelligence (ASAI) and the Web Reasoning and Rule Systems Association (RRA).