Hate speech in the digital sphere can silence voices and thereby threaten democracy. But hate is not always expressed through swear words online; implicit insults are also ubiquitous. Tracking these down efficiently by technical means, however, is extremely challenging. Michael Wiegand is currently addressing this challenge in the FWF-funded project “Recognition of Implicit Insults”.
News published by the University of Klagenfurt concerning the university centre D!ARC
Public service broadcasting has a statutory mandate, which includes ensuring a certain level of diversity in its programming. The shift to digital formats, combined with the use of recommender systems, can jeopardise this diversity. Can a recommender system tuned for diversity step in and recommend alternatives? Nikolaus Poechhacker, a researcher and lecturer in the research group “Digital Culture” at the Digital Age Research Center (D!ARC), focuses on the interface between society, law and technology.
We are still a long way from developing artificial life forms that can understand and communicate with humankind on all subjects, according to Michael Wiegand, computational linguist at the Digital Age Research Center (D!ARC) at the University of Klagenfurt. He specialises in the detection of hate speech and tells us what algorithms need to learn in order to reliably recognise insults.