D!ARC Lecture: Automated and Datafied Welfare Futures

June 18th 2024

Dr. Doris Allhutter

Abstract:

In many countries across the globe, the public sector is expanding its efforts to introduce data-driven and intelligent systems in the administration of core welfare services such as social benefits provision, unemployment services and healthcare. Critical data studies and adjacent fields have raised concerns that datafication as a social and political process mainly caters to commercial and governance interests. The risks of automation in welfare have primarily been discussed as “black-boxing” (i.e. algorithmic assessments of welfare applicants remain largely inaccessible and unexplainable to case workers and citizens alike), as fundamentally problematic practices of profiling and categorizing citizens (which reduce their life experience to what is observable and quantifiable, omitting the complexity of their life situations), and in terms of biased and discriminatory outcomes for protected groups in vulnerable life situations.


Low Degree Polynomial Models In Cryptography

March 5th 2024             11:45 am – 1:15 pm            HS N.2.57

Matthias Steiner, M.Sc.

(Institute for Artificial Intelligence and Cybersecurity)

Miriam Fahimi: “We shouldn’t need machines to make the world fairer.”

Artificial intelligence already makes our everyday lives easier. But these systems can also be highly problematic. Discriminatory statements and the spread of stereotypes, as seen with the AMS’s new “Berufsinfomat”, make visible where the major challenges lie.

 

Mira Dolleschka from Moment Magazin spoke with Miriam Fahimi, our doctoral researcher at the Digital Age Research Center, about artificial intelligence, its future and its areas of conflict.

https://www.moment.at/herausforderungen-k%C3%BCnstliche-intelligenz

Democracy, AI and Privacy

January 25th 2024     2:00–4:00 pm     will be held ONLINE only

https://classroom.aau.at/b/mat-6mu-tza

PD Dr. Carsten Ochs

This lecture is organized jointly by the Faculty of Social Sciences and the D!ARC.

Abstract:

Determining the relationship between democracy, AI and privacy raises the question of whether contemporary society in general is to be conceptualized in terms of an emerging digital condition: to what extent do digital transformations establish novel modes of sociotechnical structuration, thus bringing about a new type of society proper? Are digital networking, platforms, self-tracking, algorithmization, datafication etc. to be considered surface effects of society as we know it, or is there structural modification as regards the foundational logic of the constitution of society? Empirical and theoretical answers to these questions vary greatly. Whereas some have suggested that what we call “digitization” in fact induces a new anthropo-logic, a new evolutionary phase in human becoming-with (M. Faßler), or at least novel modes of societal structuration (D. Baecker), others have regarded digitization simply as a continuation of genuinely modern structural principles (A. Nassehi).

Having said this, my presentation starts from the assumption that the only way to gain an analytical understanding of the constitution of digital society is to determine, in theoretical terms and on the basis of thorough empirical research, novel structural principles, with the notion of “structural principle” referring to modes of structuration that pervade societies at large. Is digitization thus to be understood as a (Durkheimian) “total social fact”, as N. Marres has claimed? Answering in the affirmative presupposes that we identify genuinely digital structural contradictions. In my presentation, I will make two such contradictions a subject of discussion:

  • First, there is the contradiction between what I call digital optionality (the increase of options for action brought about by digital infrastructures) on the one hand, and digital predictivity (the narrowing down of options via predictive analysis) on the other. As subjectification is faced with this contradiction, 21st-century privacy is bound to take the form of a right to unpredictability.
  • Second, there is a contradiction between the societal disorder caused by machine learning-based AI systems’ structuration of communicative practices (see, e.g., the role of Facebook’s algorithms in the US Capitol attack of January 6, 2021) on the one hand, and what I call “hypernomy” on the other: the hypertrophic growth of non-negotiable normative ordering mechanisms put into operation by those same systems.

In the last part of my talk, I will speculate about the potential impact that the transformations of AI and privacy have on democracy. As predictivity and hypernomy tend to narrow down the contingency of possible futures by way of sociotechnically fostering the reproduction of the past, a democratic politics of the digital is bound to come to terms with both of these structural principles to safeguard the openness of social futures.