“Data are stubborn and difficult.”

Katharina Kinder-Kurlanda examines the impact of digitalisation on our everyday lives. We talked to her about how we are responding to the new opportunities and what we are learning about the old as we reflect on the new.

You head the Digital Age Research Center (D!ARC) where you work in the area of “Human Science of the Digital”. Are there still people out there whose lives are completely “non-digitalised”?
Hardly. Even the data of individuals who are ostensibly not represented on the internet are digitally recorded and accessible. This is a global phenomenon that has affected virtually every aspect of our lives. My particular interest here is to see how people react to it in specific everyday situations.

But in order to be able to react, it is important to be aware that data is being collected.
My guess is that more and more people are very well aware that they are creating data trails. As early as ten years ago, we first interviewed researchers from a wide range of disciplines who were working with digital data. But even beyond science, many people consciously try to pay attention to the data they leave behind. The question is: what should be visible, what should be invisible, and how do we seek to use the algorithms for our own purposes?

The users are well aware that they are leaving data traces. At the same time, however, we don’t know exactly who collects what and where and what they use it for.

Can you give us an example?
Let’s take Netflix or Amazon: Based on the films and series you enjoy and watch regularly, or the books you buy, an algorithm will provide you with further suggestions. Then, many people individually try to “optimise” these algorithms in order to receive better recommendations – for example, by making their preferences more visible or less visible. An option allowing users to do this is often provided directly: On Amazon, for individual products ordered, I can specify that they are not to be used as the basis for future purchase recommendations. In this context, we also examined social casual games such as Candy Crush and Farmville. These simple games, commonly played in the course of everyday life, are often linked to social media platforms such as Facebook. The users are well aware that they are leaving data traces. At the same time, however, we don’t know exactly who collects what and where and what they use it for. What I find interesting is how people imagine the algorithms working and how they interact with them, often in unexpected ways not anticipated by the developers.
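The opt-out described above can be illustrated with a toy sketch. This is purely hypothetical code, not how Amazon or Netflix actually work: it scores items by how often they are co-purchased with my visible purchases, and an `excluded` set models the "do not use for recommendations" switch by hiding those purchases as evidence.

```python
# Toy co-purchase recommender with a per-item opt-out.
# All names and data are hypothetical illustrations.
from collections import Counter

# Hypothetical purchase histories of other users.
histories = [
    {"sci-fi novel", "space atlas", "telescope"},
    {"sci-fi novel", "space atlas"},
    {"cookbook", "apron"},
]

def recommend(my_items, excluded, histories, n=3):
    """Suggest items co-purchased with my *visible* items.

    `excluded` models the 'don't use this for recommendations'
    switch: those purchases are ignored as evidence.
    """
    visible = my_items - excluded
    scores = Counter()
    for h in histories:
        if visible & h:                # shares a visible item with me
            for item in h - my_items:  # suggest what I don't own yet
                scores[item] += 1
    return [item for item, _ in scores.most_common(n)]

mine = {"sci-fi novel", "cookbook"}
print(recommend(mine, excluded=set(), histories=histories))
print(recommend(mine, excluded={"cookbook"}, histories=histories))
```

Hiding the cookbook removes the cooking-related history from the evidence, so the apron disappears from the suggestions. This is the kind of "making preferences less visible" the interview describes.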

These are examples of how to optimise entertainment services. If my attempts fail, the worst that can happen is that I don’t find a product I’m interested in.
Yes, when using these services, I play the roles of both user and data donor. In a sense, I am helping the platform to learn. However, if we look at other contexts of use, we can see that people are affected by other structures and even by constraints: For example, an Uber driver is locked into an algorithm designed to optimise time and profits. It provides a configuration indicating how he should behave in his day-to-day driving work. If he is not skilled at this, he will earn less money.

Many are concerned that the conclusions drawn from the data trails could have an adverse effect on their lives. If I regularly buy unhealthy food in the supermarket and this information is stored on my loyalty card, my health insurance provider might be able to gain insight into my lifestyle, which could then be to my detriment. Is this a likely scenario?
Several aspects come into play in this example. First and foremost: If you have been shopping at the same supermarket for years, then the cashier will already have noticed what you buy over and over again. However, the difference compared to digitalisation is that the digitally stored information about shopping behaviour is easier to evaluate and much easier to share. As you say, there is concern that the data could also be linked to completely different areas of life. In any case, we need to be watchful of those who are trying to make a profit from digital data about our behaviour. We need to ask more questions about who is allowed to use what data for what purpose. But I think it is unlikely that the exact example you mentioned will come to pass.

From the literature we are familiar with the phenomenon whereby the potential of a new technology is often overestimated, either too negatively or too positively.

Why?
Firstly, the ominous scenario implies that very different entities – such as the supermarket and the insurance company – would collaborate. That cannot happen at the moment; it is not permitted. Secondly, from the literature we are familiar with the phenomenon whereby the potential of a new technology is often overestimated, either too negatively or too positively. The actual changes are far more subtle, or simply quite different than expected. We have the phenomena of myth-making and ‘messiness’. So we ask: What promises does a new technology hold for both developers and users? At the same time, we also need to look at the enormous fractures and inconsistencies we face in data collections. In my work, I have studied data infrastructures in great detail, and in the process I have learnt: Data are incredibly stubborn and difficult, especially when it comes to merging them.

So much of what is assumed to be threatening is not even technically feasible?
Yes, in order to merge data, you often have to harmonise them on different levels so that they are compatible with each other and you don’t end up comparing or linking the proverbial ‘apples and oranges’. When you understand what a challenge it is, it is easier to assess the potential risks. It’s true, large data collections could, in certain circumstances, allow the wrong people to do the wrong thing. And people are certainly trying to profit from the ‘myth’ of Big Data right now. However, these very pessimistic debates often obscure the real problems – for example, who actually has access to this data and who doesn’t, even though this could be of interest to society. Or, to take your example from earlier, whether there is a shift towards abandoning solidarity-based models for health insurance.
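The harmonisation step described here can be made concrete with a minimal sketch. The data are invented: two hypothetical sources record the same people with different identifier formats (prefixed strings vs. bare numbers) and different units (cents vs. euros), so a naive join would link nothing at all.

```python
# Minimal illustration of harmonising two data sources before linking.
# Both sources and all field names are hypothetical.

shop = [  # loyalty-card data: prefixed ID, spend in cents
    {"customer": "C-001", "spend_cents": 1250},
    {"customer": "C-002", "spend_cents": 300},
]
survey = [  # survey data: bare numeric ID, age in years
    {"id": 1, "age": 34},
    {"id": 2, "age": 51},
]

def harmonise_and_join(shop, survey):
    """Bring both sources onto a shared key and unit, then link them."""
    by_id = {}
    for row in shop:
        key = int(row["customer"].removeprefix("C-"))  # "C-001" -> 1
        by_id[key] = {"id": key, "spend_eur": row["spend_cents"] / 100}
    for row in survey:
        if row["id"] in by_id:          # link only where both sources match
            by_id[row["id"]]["age"] = row["age"]
    return list(by_id.values())

print(harmonise_and_join(shop, survey))
```

Even in this tiny example, two deliberate decisions (key normalisation and unit conversion) are needed before any linking can happen; real data infrastructures multiply such decisions enormously, which is the "stubbornness" the interview points to.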

We need machine learning or artificial intelligence in order to learn something from large volumes of data, or Big Data. However, the general public is not widely aware of these methods. Is science fully aware of what it is doing?
Explainable Artificial Intelligence addresses these questions. Many people either don’t know what’s going on or they simply don’t understand. The systems are often so complex that even researchers cannot always follow how a particular conclusion or decision is reached using these algorithms. Our EU project ‘NoBIAS – Artificial Intelligence without Bias’ is trying to get to the bottom of the question of which factors influence an AI-assisted decision. Another question on many people’s minds is who will be empowered to do something faster or better thanks to artificial intelligence. These are very complex issues. To answer them, we urgently need cooperation between different disciplines. That’s our focus at the Digital Age Research Center (D!ARC).

My own guiding question has always been: What do people do with machines and what do machines do with people?

What subjects are necessary for this? And what disciplines can you bring to the table?
D!ARC brings together various discipline-specific and interdisciplinary digitalisation researchers. My own guiding question has always been: What do people do with machines and what do machines do with people? I studied cultural anthropology and also minored in computer science and history. Overall, one needs a comprehensive social science approach to grasp the phenomena of digitalisation. At the same time, it is also necessary to have a technical understanding in order to be able to assess technological developments well. One focus of my work is on a meta-level: How do we conduct research on digitalisation? And what do digital data contribute to science?

At first glance, we might think that having a lot of easily accessible data is a blessing for science, wouldn’t we?
From my point of view, Big Data is neither a blessing nor a curse for science. The fact that much of the data is under the sovereignty of large, private internet companies and their profit interests is a problem for science as well as for society. We also need to be aware of the shifts in the scientific communities. New disciplines are emerging that work with this data and develop new methods for this purpose. The new often prompts reflection on what already exists, and so it also has an impact on the areas that operate traditionally. Big Data arrived with the promise that it was superior to traditional methods and that it could be used to gain more comprehensive insights. How Big Data is changing science has not yet been sufficiently explored and remains an ongoing process.

What new skills do researchers need to be able to work with big data and artificial intelligence?
We are experiencing a wide-ranging discussion on skills: Today, mathematics and information technology have gained importance across all disciplines. Such data are only accessible to those who also have the ability to download and analyse them. This requires profound knowledge of methods, tools, platforms and interfaces.

Critical thinking, the principles of scholarly work and research ethics are sub-areas where the social sciences and humanities have considerable expertise.

Does this mean that the technical experts are the only beneficiaries of this change?
No, I believe that a lot of expertise needs to be transferred into the technology sphere from the social sciences and humanities as well. Critical thinking, the principles of scholarly work and research ethics are sub-areas where the social sciences and humanities have considerable expertise. In contrast, those in technical subjects are often not used to conducting research on people and social matters. Ultimately, we can benefit greatly from one another.

In closing, could you name one area in which you have particularly benefited from digitalisation?
I would like to speak up on behalf of online communication in its broadest sense – from Zoom meetings to co-authoring a text on Google Docs. Now, more than ever, these tools allow us to work together in international research collaborations. This is simply incredible. Much the same applies to online teaching, in my view: We have phenomenal opportunities here as well.

But we do not always use them to their full effect.
Indeed, at the moment we mainly use tools that try to recreate the lecture hall or seminar room situation online. I would like to call for a closer look: What happens in the lecture hall under what circumstances? Why are we even sitting in the seminar room? I suspect that there are underlying social constructs that are important to those involved, such as the mapping of power structures or concepts of knowledge hierarchies. If we critically reflect on the entire setting, we can arrive at new forms of teaching and exchange in the online lecture halls as well, which can move us forward as a whole.

for ad astra: Romy Müller

About Katharina Kinder-Kurlanda


Katharina Kinder-Kurlanda joined the Digital Age Research Center (D!ARC) as a university professor in February 2021 and has headed it since October 2021. She studied cultural anthropology, computer science and history at the Eberhard Karls University in Tübingen and at the Goethe University in Frankfurt am Main. From 2005, she was a PhD student in an EPSRC (Engineering and Physical Sciences Research Council)-funded project on the Internet of Things at Lancaster University in the UK. In 2009, she completed her doctorate at Lancaster University with a dissertation on “Ubiquitous Computing in Industrial Workplaces”. After her PhD, she worked as a postdoctoral researcher and visiting lecturer at Lancaster University Management School. From 2012 to 2016, Katharina Kinder-Kurlanda worked at GESIS – Leibniz Institute for the Social Sciences in Cologne as a senior researcher and between 2016 and 2021 she was team leader for “Data Linking & Data Security”. At the same time, she held a fellowship at the Center for Advanced Internet Studies (CAIS) in Bochum between 2019 and 2020.