KISTRA - Use of Artificial Intelligence for Early Detection of Crimes

On July 1, 2020, the research project KISTRA – Use of AI for the Early Detection of Crimes was launched. Its aim is to research the possibilities and framework conditions for the ethically and legally justifiable use of artificial intelligence by security authorities for the early detection and prevention of hate crime. A consortium of nine partners, led by the Central Office for Information Technology in Security, or ZITiS for short, is participating in the three-year project. KISTRA is characterized by the interdisciplinary integration of science, industry and end users. In addition to ZITiS and the Federal Criminal Police Office, or BKA for short, which is both partner and end user, seven other partners are involved in the project: Johannes Gutenberg University Mainz, Ludwig Maximilian University Munich, Munich Innovation Labs GmbH, RWTH Aachen University, Technical University Berlin, Technical University Darmstadt and University Duisburg-Essen. The research project has a total budget of 2.98 million euros.

KISTRA's results will include social-scientific, ethical and legal expert reports as well as technical solutions, for example software demonstrators. Beyond the direct application at the BKA, other authorities with security tasks can also benefit from the results: on the one hand through ZITiS's role as a central office with the mandate to support the German security authorities by researching and developing tools for the digital space, and on the other hand through the BKA's central office function for the federal and state police forces.


The project is based on the “Protection against Crime and Terrorism” initiative of the national security research programme of the Federal Ministry of Education and Research, or BMBF for short, and aims to promote the development of legally compliant AI methods and procedures for the collection and evaluation of security-relevant content on the Internet.

Background and Objectives

Police crime statistics have shown a significant increase in politically motivated crimes with an Internet connection in recent years. The Federal Government is countering this development, among other things, with a legislative initiative to combat right-wing extremism and hate crime, which was adopted by the Bundestag on 18 June 2020. It is intended to counteract the brutalization of communication increasingly observed in social networks in particular, where the content is criminally relevant. Social networks with at least two million registered users in the Federal Republic of Germany are to be obliged to report certain criminal content that they become aware of through user complaints to the Federal Criminal Police Office, so that the BKA can initiate criminal prosecution together with the competent security authorities. To implement this plan, the BKA is currently setting up, among other things, a new Central Internet Reporting Office for hate crime. Here, too, ZITiS will provide support with information technology capabilities, for example by researching and developing trustworthy AI.

Artificial intelligence will be a key technology in dealing with the challenges of hate crime, and KISTRA is investigating its application to support investigators at security authorities. Hate crime refers to politically motivated crimes directed against people because of their actual or ascribed membership of a social group, for example in the form of death threats, sedition or insults. The amount of data exchanged daily on the Internet is growing continuously and can hardly be processed without the support of AI methods. Deep learning in particular has proven effective for classifying large amounts of data of different data types.
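The kind of classification task the project targets can be illustrated with a minimal sketch: a bag-of-words perceptron that flags short texts as potentially relevant or benign. This is a toy baseline, not KISTRA's actual method; the project works with deep neural networks trained on large expert-labelled corpora, and all training examples, tokens and labels below are invented for illustration.

```python
# Toy baseline for flagging text: a bag-of-words perceptron.
# Invented data; real systems rely on deep learning and expert-labelled corpora.
from collections import defaultdict

def tokenize(text):
    return text.lower().split()

def train_perceptron(samples, epochs=20):
    """samples: list of (text, label), label 1 = flag, 0 = benign."""
    weights = defaultdict(float)
    bias = 0.0
    for _ in range(epochs):
        for text, label in samples:
            score = bias + sum(weights[t] for t in tokenize(text))
            pred = 1 if score > 0 else 0
            if pred != label:
                delta = label - pred  # +1 or -1
                bias += delta
                for t in tokenize(text):
                    weights[t] += delta
    return weights, bias

def classify(weights, bias, text):
    score = bias + sum(weights[t] for t in tokenize(text))
    return 1 if score > 0 else 0

# Invented examples, for illustration only.
train = [
    ("we will hurt you", 1),
    ("they deserve violence", 1),
    ("nice weather today", 0),
    ("see you at the meeting", 0),
]
w, b = train_perceptron(train)
print(classify(w, b, "we will hurt them"))       # → 1 (flagged)
print(classify(w, b, "weather at the meeting"))  # → 0 (benign)
```

The sketch shows why human oversight remains essential: the model only counts word weights and has no notion of context, irony or legal thresholds, which is precisely the gap the project's ethical and legal work packages address.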

KISTRA is investigating the possible application of AI in security authorities in a holistic approach: technical, social scientific, ethical and legal considerations are all taken into account in the project. The overriding goals include:

  • the consideration of the legality and the ethical justifiability of the intended AI solutions and the resulting methods for security authorities,
  • the identification and sociological examination of politically motivated hate speech and “hate crime” on the Internet,
  • the development and implementation of adaptive AI methods to support the police criminal law assessment of hate crime incidents, and
  • the holistic consideration of the individual technical components and scientific results and their transfer into a technical overall solution, a so-called framework.
André Calero Valdez
Professor of Human-Computer Interaction and Usable Safety Engineering

I am interested in studying the effects of human-algorithm interaction and their impact on safety.