A Paradigm Shift
As if life were not already complex enough, since November 2022 we have been able to enter into a new, unknown, promising but daring relationship. Generative Pre-trained Transformers (GPTs) have arrived on the market and are waiting for conversation partners. Within two months of ChatGPT’s launch, 100 million people had accepted the offer, a pace no other tool has matched; Instagram, for example, needed two and a half years to reach the same mark. Free access to AI certainly had its appeal, but the new, never-before-experienced way of interacting with machines probably played the bigger role.
Jakob Nielsen, who has worked on user interfaces between humans and computers for over 30 years, describes an exciting difference between interacting with Large Language Models (LLMs), the basis for all GPTs, and interacting with conventional systems. Command-based input is giving way to what he calls intent-based outcome specification: people tell the computer what they want and what they don’t want, how it should proceed, what it should pay attention to, whether it has any questions, and so on.
Algorithms have been part of our lives for some time now, working in the shadows, built into many software tools. Since the end of 2022, however, we have been able to communicate with machines in a way that goes far beyond issuing commands. Large language models can automatically generate spoken and written language by recognising statistical relationships between the smallest pieces of text. This allows them to answer, to summarise and to give the impression of understanding their counterpart like a conversation partner.
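The principle behind this is easier to grasp in miniature. The following toy sketch is not how an LLM is actually built, but it illustrates the underlying idea of learning statistical relationships between pieces of text and generating language from them; real models work on sub-word tokens, enormous corpora and far longer contexts.

```python
import random
from collections import defaultdict

# A tiny illustrative corpus; any text would do.
corpus = "the machine answers the human and the human answers the machine".split()

# Learn the statistics: count which word tends to follow which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no known continuation
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Even this crude word-pair model produces plausible-looking phrases without any notion of what they mean, which is precisely the point: fluency emerges from statistics, not from understanding.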
This is a paradigm shift in the relationship between humans and machines. The most powerful media to date, such as language itself, writing, printing, the photocopier and the smartphone, have transformed communication enormously. Large language models and other algorithm-based systems are now joining their ranks. They open up new ways of experiencing, acting and processing our lives meaningfully, as sociologist Dirk Baecker puts it. We have to learn to deal with this, and we have no idea what exact impact it will have on us and our society, just as people in the 15th century probably had no idea that the invention of the printing press would mark the beginning of the knowledge society.
“Now we, who are unpredictable by nature, are confronted with unpredictable machines. Who still believes that machines will simplify our lives?”
Now you might think that this is simply the perpetual technological transformation. Yet there is something special about the current situation. We humans have always tried to see through things, to break them down into causes and effects. While Chester Carlson, whose invention of xerography led to the first plain-paper photocopier in 1959, still had his machine completely figured out, the developers of language models no longer make that claim. These machines, which process unimaginably large amounts of data, recombine them with the help of recognised patterns and continue to learn (partially) autonomously, are no longer fully transparent.
Now we, who are unpredictable by nature, are confronted with unpredictable machines. A point at which, according to Dirk Baecker, complexity replaces reason. Who still believes that machines will simplify our lives?
And it gets worse
So far, LLMs have simulated human reality. The more we work out what makes us special and different from machines, the faster LLMs will learn these skills and become more and more human. At the same time, autonomous learning processes will continue to change the description of reality and create a reality of their own. It will then be exciting to see who will mirror or replicate whom in the future. ‘We shape our tools and thereafter our tools shape us,’ said John Culkin, Professor of Communication at Fordham University in New York, aptly back in 1967.
We are heading towards a world of polyrealities: a network of many parallel, interdependent realities that influence one another, with the boundaries between them becoming increasingly blurred. In this growing complexity, I fear human overload, feelings of being overwhelmed and powerless, and the reflexive call for simple solutions with trivial explanations. To cope, many will retreat into their own little realities and create a reasonably manageable world within the world, in which they can gather information, cultivate their relationships, and sense themselves and the meaning of their lives.
Two Theses that can also be Opportunities
In addition to these prospects, which may seem gloomy to some, I would like to describe two working hypotheses that also point to opportunities for overcoming complexity in the current situation.
Thesis 1: ‘By engaging with machines, we learn essential things about ourselves as humans and have the chance to become human again.’
When machines, such as LLMs, compete with us on this scale for the first time, we are forced to come to terms with our own being. In doing so, machines hold a mirror up to us. We find surprising similarities as well as differences.

The more we watch machines think, the more we discover the machine-like aspects of our own thinking. In the past, when machines were still transparent, people were subject to norms and forms that made their behaviour just as predictable; workers on the assembly line are the classic example. The more people behaved like machines, the more satisfied traditional management systems were. Large language models, in turn, exhibit plenty of processes that closely resemble human behaviour. Both draw on what has been learnt in the past in order to assess or predict the present, which can sometimes be helpful and sometimes dangerous. And both are prone to systematic biases when making decisions.
However, the mutual confrontation between the two systems also highlights their respective special features, strengths and shortcomings. The greatest synergies probably lie in the clarity of what distinguishes the two from each other. We can learn a lot from each other.
If you have to solve yet another annoying captcha because a program is checking whether you’re a human or a bot, you can easily see what humans are still better at: distinguishing between chihuahua heads and blueberry muffins, for example. LLMs, on the other hand, can analyse data and its patterns faster, more accurately and more comprehensively, but they are context-blind, which leads to inappropriate decisions in many situations. When diagnosing melanoma (malignant skin cancer), however, that very context blindness becomes an advantage, because machine diagnosis has been shown to be more accurate there.
LLMs simulate understanding without the ability to grasp the deeper meaning of their actions. Human understanding means putting something into a describable context and thus generating meaning. LLMs therefore do not understand, or rather, do not know the meaning of what they do. They react to their counterpart and simulate understanding, or recognition of what the other person wants, which, admittedly, reminds me of some people.
One of my Self-Experiments
In a self-experiment, I used ChatGPT to try out a mutual clarification of expectations for a future collaboration, something I always do with new colleagues for the sake of role clarity. Going into depth here would exceed the scope of this article. I was amused by statements about our future collaboration, such as ‘I will never get tired, bored, or distracted.’ And what does ChatGPT expect from me? ‘Clearly articulate your thoughts, questions, and prompts so that I can understand what you’re looking for, a willingness to consider alternative viewpoints for more enriching discussions, treat our interaction with respect, patience allows for a smoother flow of conversation, feel free to provide feedback,…’ I could only answer ‘vice versa’.
Typically Human
By communicating with LLMs, we learn to re-evaluate conversations with people. By immersing ourselves in virtual realities, we learn to experience analogue realities in a new way. There are qualities that are deeply rooted in being human – love, sensuality, passion – whose meanings must now be redefined in the context of work. Attributions such as playfulness or the courage to refuse, which were traditionally not highly valued in the world of work, can now be re-evaluated as key differentiating features of the human versus the machine. Even the (un)intentional disregard of rules has already produced one or two innovations. It is this clear differentiation that harbours the potential for synergies that go far beyond what either side could achieve on its own.

Which brings us back to the interface, or rather the interaction. When communicating with LLMs, we realise that talking to them is just as much an art as interpersonal communication; both are often underestimated, and responsibility for mistakes is usually attributed to the other party. Those who are annoyed by an inadequate answer from ChatGPT rarely consider how inadequate their question was.
After all, findings from this dialogue with machines are only ever snapshots. As soon as one side learns something new, the cards are reshuffled. The more we work out what makes us human, for example, the quicker LLMs will acquire these seemingly unique abilities themselves.
Thesis 2: ‘The crucial aspect lies in the interaction when making decisions. It is not either human or machine, but a finely tuned both/and that will determine whether humans remain at the centre.’
Another way to overcome this complexity is to create clarity regarding the decision-making framework. This requires an understanding of how we ourselves and machines make decisions and who can contribute what to ensure that a good decision is made.
Cognitive decision-making processes are based on rational and emotional logic, whereby the role of reason is usually overestimated. Emotions usually have the stronger steering influence, which is neither good nor bad in itself but definitely sets us apart from machines. It may come as a surprise, but human decisions are also largely automated. Our brains are prone to systematic bias, and machines such as LLMs, which learn from the products of our brains, adopt these biases; another example of how we are both perpetrators and victims. If we demand transparency about the basis on which machines make decisions, we should also be aware of the criteria and dynamics on which we humans base our own.

We always make decisions in social, ethical and other contexts – influenced by beliefs, values, principles and narratives – both consciously and subconsciously. Due to the context blindness of machines, humans are needed to include the relevant context in the decision-making process. At least as long as the machine has not yet learnt this skill.
A Three-Way Relationship
In the context of organisations, this two-way relationship becomes a three-way relationship: human – machine – organisation. What is needed here is clarity about the role of machines in decision-making structures and processes. To this end, people must be equipped with new skills, such as prompt writing, so that they can work in new roles with specifically selected algorithm-based technologies, responsibly place the output of this cooperation in the appropriate organisational context, and then decide, or let the decision be made.
In doing so, they must constantly reflect on the impact, sometimes again with the help of technology. A challenging task for organisations in the future. At the same time, machine-based data processing and pattern recognition, combined with human insight and contextualisation, have great potential to increase the quality of decisions impressively.
Another one of my Self-Experiments
As a self-experiment, I constantly observe my interaction in decision-making processes with autonomous systems that want to make decisions independently using algorithms. Take a journey in my car, which has various (semi-)autonomous systems installed, as a simple example of who decides what. I decide where to go; my sat nav decides which route to take, since it clearly has more (traffic) data to base this on. If I know the area well, I sometimes ignore it. When my route turns out faster, I’m proud; most of the time, I regret it. I have learnt that my car is better at keeping in lane in many situations, dips the high beam more reliably, maintains a more constant speed and often recognises hazards more quickly.
“We shape our tools and thereafter our tools shape us.”
John Culkin
Our cooperation in steering keeps improving as it becomes clearer who is allowed to decide what: when steering is handed over, when I only ask for advice, when I hand over the decision but want to be kept informed, and which things are better decided ‘together’. We still have bigger arguments when, on longer journeys, it claims that I’m tired and should take a break. I deny it. But if I’m honest with myself, the system recognises my fatigue better and faster than I do.
For Clarification
The attempt to have this article written by an LLM failed spectacularly: the result was too boring and lacked focus. The DALL·E image generator, however, was able to provide the visual support.
Thomas Schöller has been researching the future of work for years and published his first article on the relationship between humans and machines in 2019. He is a driving force and organisational designer for the realignment and redesign of meaningful organisations.
