Bernhard Rinner pleads for greater levels of awareness when it comes to the security and privacy of camera and sensor networks. In numerous projects, together with doctoral student Ihtesham Haider among others, his team is working on new technologies.
Where are intelligent cameras used?
Rinner: When we talk about camera networks, most of us start to think about surveillance cameras, such as those at airports, train stations or high-security facilities. Camera networks have been used in these areas for a long time now. They are typically very expensive in terms of installation and operation. But what is new is the use of more cost-effective cameras in smaller spaces, such as in the home. For example, people are looking into the viability of using cameras or other sensors to help elderly people stay in their own homes for longer.
But what about privacy?
Rinner: On the one hand we have security requirements; on the other hand we have to think about protecting people’s privacy, which is also very important. At the same time, the camera network also needs to be relatively cheap to implement. However, protective technologies require processing capacity and this increases the system costs.
What is the difference between data security and privacy?
Rinner: When it comes to data security, we’re talking about the confidentiality, integrity and availability of data: how can I encrypt the data in such a way that only authorised persons are able to see the content? How can I protect the data from intentional or even unintentional changes? And how can persistent access to the data be ensured? When it comes to privacy, we are talking about identity protection. Data is anonymised, i.e. the information is reduced, so that individual persons are no longer identifiable.
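The integrity requirement Rinner mentions can be illustrated with a few lines of standard-library Python: the camera attaches an authentication tag to each frame, and any later change to the data invalidates the tag. This is only a minimal sketch of the concept, not the team’s implementation; the key and frame contents below are purely illustrative.

```python
import hashlib
import hmac

def sign(frame: bytes, key: bytes) -> bytes:
    """Compute an authentication tag over a captured frame."""
    return hmac.new(key, frame, hashlib.sha256).digest()

def verify(frame: bytes, key: bytes, tag: bytes) -> bool:
    """Check the tag in constant time; fails for any modified frame."""
    return hmac.compare_digest(sign(frame, key), tag)

key = b"per-camera-secret"   # illustrative key, not a real key-management scheme
frame = bytes(range(16))     # stand-in for captured image data

tag = sign(frame, key)
print(verify(frame, key, tag))            # True: data unchanged
print(verify(frame + b"\x00", key, tag))  # False: data was modified
```

Confidentiality would additionally require encrypting the frame; the tag alone only makes tampering detectable.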
You have recently been working on two projects, ProSecCo and TrustEYE, which deal with aspects of the security and privacy of cameras. What were your findings?
Rinner: In TrustEYE we developed a trustworthy camera platform, modelled on the eye as a well-protected organ of perception. Our platform protects all captured images and prevents them from being accessed by unauthorised parties. In ProSecCo we studied how we can implement this functionality in the most resource-efficient way possible.
Why is resource efficiency important?
Haider: We’re dealing with very small cameras that on the whole have a low processing capacity. That’s why it’s important to develop efficient protection measures.
What have you developed?
Haider: We are taking advantage of the inherent manufacturing variations of integrated circuits: due to tiny fluctuations in the semiconductor production process, no two circuits are exactly alike. These differences are like a unique fingerprint of the circuit, which we are able to use for authentication or encryption. As the fingerprint is based on the physical properties of the chip, it cannot be copied onto another chip.
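The idea can be sketched in a few lines of Python. This is a toy simulation, not the team’s design: the seeded random generator stands in for one chip’s fixed manufacturing variation, the unseeded one for read-out noise, which real systems correct with error-correcting codes rather than a simple threshold.

```python
import random

def read_startup_bits(chip_seed: int, noise: float = 0.05, n: int = 256) -> list:
    """Simulate reading a chip fingerprint: each cell has a stable preferred
    value fixed by manufacturing variation (chip_seed), plus occasional noise."""
    bias = random.Random(chip_seed)  # chip-specific, fixed at manufacture
    flips = random.Random()          # per-readout instability
    bits = []
    for _ in range(n):
        b = bias.random() < 0.5      # the cell's preferred value
        if flips.random() < noise:   # a noisy cell flips this readout
            b = not b
        bits.append(int(b))
    return bits

def hamming(a, b):
    """Number of differing bit positions."""
    return sum(x != y for x, y in zip(a, b))

# Enrolment: record a noise-free reference fingerprint for "chip 42"
reference = read_startup_bits(chip_seed=42, noise=0.0)

# Authentication: re-read the same chip (noisy) and read an impostor chip
same_chip = read_startup_bits(chip_seed=42)
other_chip = read_startup_bits(chip_seed=99)

THRESHOLD = 64  # a quarter of the bits
print(hamming(reference, same_chip) < THRESHOLD)   # True: fingerprint matches
print(hamming(reference, other_chip) < THRESHOLD)  # False: different chip
```

A genuine re-read differs only in the few noisy cells, while an unrelated chip disagrees on roughly half of all bits, so the two cases separate cleanly.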
To what extent is this a new aspect?
Haider: Research into Physical Unclonable Functions, as this is known, has been carried out for around ten years now. What’s new about our work is that we want to use this function in cameras. The advantage is clear: the fingerprint is already intrinsic to the chip, meaning we don’t need any additional components in order to achieve a greater level of (hardware supported) data security. The application is still in its infancy, and there is still a long way to go.
Technology such as this can offer support when it comes to the security of camera networks. What contribution can science make to protecting privacy?
Rinner: We are trying to ensure that people are not recognisable in the images. We are changing the image so you are still able to see what is going on, just as before, but not who is doing it. We are aiming to modify the content of the image, e.g. through automatic pixelation. The resulting images are, however, not always visually appealing. That’s why we’re working on other methods, such as cartooning.
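Automatic pixelation of the kind described can be sketched as simple block averaging. The following is a toy version operating on a 2-D list of grey values as a stand-in for a real camera frame:

```python
def pixelate(img, block=4):
    """Replace each block x block region by its mean grey value, so coarse
    activity stays visible while fine detail such as faces is removed."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            mean = sum(img[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                out[y][x] = mean
    return out

# An 8x8 gradient: after pixelation, every 4x4 block is one flat value.
frame = [[16 * y + x for x in range(8)] for y in range(8)]
blurred = pixelate(frame, block=4)
print(blurred[0][0] == blurred[3][3])  # True: same block, same value
```

The block size is the knob Rinner alludes to with adaptive methods: larger blocks remove more identity information but also more of the activity one wants to keep.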
If the cameras are observing an elderly person so as to be able to let relatives know if anything is wrong, how can you be sure that enough of the image will remain for what’s going on to still be recognisable?
Rinner: Privacy is subjective. What one person may see as a massive invasion of privacy may well be no problem for someone else. The person being observed also needs to know what is happening to the recorded data. We have to admit there is no perfect solution, and that actually a compromise between the various requirements is always necessary. The ‘best’ solution for the protection of an individual’s privacy would be a black image, but that’s not going to be of any use. That’s why we’re looking into adaptive methods such as cartooning, where it is possible to change the extent of the modification.
When you’re older, would you like to have a camera system like this set up in your living room?
Haider: (laughing) Only if it’s one we’ve developed.
How intelligent are today’s cameras? Could a camera, for example, identify unusual behaviour by an individual at an airport?
Rinner: To be able to recognise unusual behaviour the technology has to know what usual behaviour is. Typically this means analysing movement and behaviour patterns over time, then using this data to describe ‘normality’. If there is then a significant change from what the camera has learned, the alarm is raised.
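In its simplest conceivable form, learning normality might look like the sketch below: collect a motion feature (say, walking speed) over time, then raise the alarm when a new observation deviates by several standard deviations. Real systems use far richer behaviour models; the feature, values and threshold here are purely illustrative.

```python
from statistics import mean, stdev

class NormalityModel:
    """Learn the usual range of a motion feature, then flag strong outliers."""

    def __init__(self, threshold=3.0):
        self.samples = []
        self.threshold = threshold  # alarm beyond this many standard deviations

    def observe(self, value):
        """Training phase: record what usual behaviour looks like."""
        self.samples.append(value)

    def is_unusual(self, value):
        """Detection phase: significant change from the learned normality."""
        mu, sigma = mean(self.samples), stdev(self.samples)
        return abs(value - mu) > self.threshold * sigma

model = NormalityModel()
for speed in [1.2, 1.3, 1.4, 1.5, 1.3, 1.6]:  # typical walking speeds (m/s)
    model.observe(speed)

print(model.is_unusual(1.4))  # False: within the learned range
print(model.is_unusual(8.0))  # True: far outside it, raise the alarm
```

Note that nothing in this model identifies who is moving; it only describes how movement usually looks, which is what allows detection and identity protection to coexist.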
What does the camera actually learn?
Rinner: Whether we’re talking about an individual camera or a system: an algorithm is the learning entity, not the person being observed. There is currently extensive research taking place, trying to ‘understand’ images and videos with the aid of learning algorithms. Using this, it is hoped it will be possible to recognise unusual behaviour.
How is the camera supposed to achieve that without recognising the individual?
Rinner: We have to ensure here that everything the camera relays is already protected. Encryption must take place before the data reaches the cloud, as otherwise this could lead to security loopholes. In the airport example, I should only be able to identify the one individual I may need to follow up on because of some abnormality, and nobody else.
Are there any visions of the future you are fearful of?
Rinner: We are surrounded by cameras, whether we like it or not. At the moment there are a lot of cameras being used with no one making the effort to think about privacy or security. Just think about selfie drones. For me it’s important to draw attention to aspects of privacy and security, but also to encourage dialogue about the need for legal regulation. There are already good technologies available that can make cameras more secure.
for ad astra: Romy Müller
About the people
Bernhard Rinner is a Professor of Pervasive Computing at the NES [Institut für Vernetzte und Eingebettete Systeme (Institute for Networked and Embedded Systems)].
Ihtesham Haider has worked at the NES since 2014. He is writing his dissertation on Physical Unclonable Functions.
TrustEYE was financed by the Kärntner Wirtschaftsförderungsfonds [the Carinthian Economic Promotion Fund]. ProSecCo received financial support from the Austrian Research Promotion Agency (FFG).