We have all seen a considerable increase in technology-focused firms, or firms with a significant technology angle to their operations. Technology increasingly shapes everyone’s lives, so it is no surprise that policing has followed suit, with forces adopting an ever greater number of new systems. Most recently, this has extended to the use of artificially intelligent algorithms (i.e. algorithms that refine and improve themselves as further data is provided) for risk profiling, preventative policing measures, and crime investigation.
Success has been mixed, and the police hierarchy would be wise to take a step back and consider how these new technologies should be used, especially when public confidence in policing is at an all-time low.
This is a complicated topic: on the one hand, budget cuts have put incredible strain on police forces, and, in theory, new technologies should free up officers’ time to do what the public most want them to do, which is to prevent and investigate crime.
As an example, one UK police force is using an algorithm to help decide which crimes are solvable and should be investigated by officers. As a result, the force trialling the solution now investigates roughly half as many reported assaults and public order offences. This saves time and money, but some have voiced concerns that the algorithm could bake in human biases and lead to certain solvable cases being ignored. The tool is currently used only for assessing assaults and public order offences, but may be extended to other crimes in the future.
The possibility of a “computer says no” response from the police when reporting offences or requesting updates on investigations is not appealing. Needless to say, if the public believed that police officers had abdicated responsibility for investigating crime to computers, confidence in the police could collapse completely.
On the other hand, the concern is that these programs encourage racial or other profiling and discrimination, and that they threaten privacy and freedom of expression. Given that the data used to drive such algorithms is based on existing arrest records, the system is likely already imbued with the discrimination and bias of past policing, now entrenched by the algorithms themselves.
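To make the mechanism concrete, here is a deliberately simplified sketch of how a "solvability" score learned from historical outcomes can entrench past bias. Everything in it is invented for illustration: the data, the field names, and the scoring method (a simple per-area solve rate, far cruder than any real deployed system) are assumptions, not a description of the actual tool.

```python
# Hypothetical illustration: a solvability score trained on historical
# case outcomes. All data and field names below are invented.
from collections import defaultdict

def train_solvability_rates(history):
    """Estimate P(solved | area) from past case records."""
    solved = defaultdict(int)
    total = defaultdict(int)
    for case in history:
        total[case["area"]] += 1
        solved[case["area"]] += case["solved"]  # True counts as 1
    return {area: solved[area] / total[area] for area in total}

# Suppose area "B" was historically under-investigated, so fewer of its
# cases were ever marked solved -- regardless of the actual evidence.
history = (
    [{"area": "A", "solved": True}] * 6 + [{"area": "A", "solved": False}] * 4 +
    [{"area": "B", "solved": True}] * 2 + [{"area": "B", "solved": False}] * 8
)

rates = train_solvability_rates(history)
# Two new cases with identical evidence now receive different priorities
# purely because of where they were reported:
print(rates["A"])  # 0.6
print(rates["B"])  # 0.2
```

The model has not learned which crimes are solvable; it has learned which crimes were solved, and the two differ wherever past resourcing decisions, not evidence, drove the outcome. If the score then steers future investigations, the gap widens with each retraining cycle.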
One area where technology should certainly be put to better use is reducing the amount of time taken up by administrative duties (e.g. reporting, data cleansing). There are many technology solutions available that could save time and cost by assisting with operational tasks, including voice recognition software, big data analytics engines (e.g. Alteryx), and other more bespoke search and reporting tools. This is an area of technology where we at Polestar have considerable experience, and we can verify that there are significant potential benefits for clients who deal with very large amounts of data on a regular basis, such as the NHS and police forces.
Overall, technology and software advances present significant opportunities for substantial improvements in all aspects of business as well as policing. However, we should not be dazzled or beguiled into shifting responsibility for decision making to systems that bear no responsibility for mistakes and may even exacerbate flaws already in place.