Together with University of Cambridge criminologist Lawrence Sherman, Durham Constabulary in the UK developed and tested HART (Harm Assessment Risk Tool), a program that predicts how likely a person already in custody is to cause harm in the future. The algorithm was developed from 2008 to 2012 and then underwent several more years of practical trials. Law-enforcement representatives are confident that artificial intelligence will make it easier for police to decide how to handle suspects: keep them in a detention centre or release them on bail. In trials, HART's forecasts proved accurate in 88–98% of cases. HART classifies detainees by the degree of risk their future actions pose and helps determine the appropriate preventive measure for the offender. However, experts who study such programs warn that artificial intelligence absorbs the thinking, and even the social prejudices, of its creators and is therefore not objective. Cary Coglianese, a professor at the University of Pennsylvania who has studied algorithmic decision-making for many years, strongly recommends using HART in an advisory role only: the final decision should remain with a human.
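At its core, a tool like HART (which is reported to use a random-forest model trained on historical custody records) maps features of an offender's record to one of a few risk tiers that then inform the custody decision. The sketch below illustrates that idea only; every feature name, weight, and threshold here is invented for illustration and has nothing to do with HART's actual model:

```python
# Purely illustrative three-tier risk classifier in the spirit of HART.
# All features, weights, and thresholds are invented for this example;
# the real tool is reported to be a random forest trained on custody data.

def risk_tier(prior_offences: int, age: int, offence_severity: int) -> str:
    """Map hypothetical custody features to a low/moderate/high risk tier."""
    score = 0
    score += min(prior_offences, 10) * 2   # more prior offences -> higher risk
    score += 3 if age < 25 else 0          # youth as a crude proxy factor
    score += offence_severity              # e.g. 0 (minor) .. 10 (serious)

    if score >= 12:
        return "high"      # e.g. recommend keeping in custody
    if score >= 6:
        return "moderate"  # e.g. eligible for a supervision programme
    return "low"           # e.g. candidate for release on bail

print(risk_tier(prior_offences=0, age=40, offence_severity=2))  # -> low
print(risk_tier(prior_offences=8, age=22, offence_severity=9))  # -> high
```

Even in this toy form, the classifier's output is only a recommendation; consistent with the advisory-only use the article describes, the custody decision itself stays with a human officer.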