How artificial intelligence is changing investigations, policing and law enforcement

By Christine Duhaime | February 13th, 2019

Artificial intelligence is having a significant positive impact on the ability of law enforcement to identify criminals and to detect and investigate crime.  In the process, it is changing the face of policing.

Early AI in Money Laundering

AI in law enforcement is not new. Two areas where AI was used early on in law enforcement were border control and anti-money laundering in the US.

The US Customs and Border Protection Agency created an AI system in the mid-1980s that used rule-based reasoning to identify suspicious cases for immigration purposes.

And in 1993, FinCEN developed the FinCEN Artificial Intelligence System, which links and evaluates financial transactions for indicators of money laundering or terrorist financing in order to identify unknown, high-value leads for investigation and, if warranted, prosecution. In its first two years, FinCEN's AI system identified over US$1 billion in potentially laundered funds that humans alone could not detect.

AI systems are changing investigations. They allow investigators to detect criminality in ways not previously possible by processing and linking transactional data to identify patterns and connections. AI systems can process big data rapidly, reducing the amount of time investigators would otherwise spend manually combing through large data sets for leads and patterns in financial crime. In financial crime law especially, investigations are often hampered by manpower shortages. Mining and processing data eases those shortages and accelerates the detection of anomalous behaviour and the identification of criminal actors.
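
To make the pattern-detection idea concrete, here is a minimal, hypothetical sketch in Python. The field names, the reporting threshold and the cut-offs are all invented for illustration and do not reflect any agency's actual data model; it simply flags accounts that repeatedly transact just under a reporting threshold, a classic structuring indicator.

```python
from collections import defaultdict

# Hypothetical transaction records; field names and the US$10,000 threshold
# are illustrative assumptions, not any regulator's actual data model.
transactions = [
    {"account": "A-100", "amount": 9800},
    {"account": "A-100", "amount": 9500},
    {"account": "A-100", "amount": 9900},
    {"account": "B-200", "amount": 1200},
    {"account": "B-200", "amount": 15000},
]

REPORTING_THRESHOLD = 10_000
NEAR_THRESHOLD_BAND = 0.90      # within 10% of the threshold
MIN_REPEATS = 3                 # repeated behaviour, not a one-off

def flag_possible_structuring(txns):
    """Return accounts that repeatedly transact just under the threshold."""
    near_threshold_counts = defaultdict(int)
    for t in txns:
        if REPORTING_THRESHOLD * NEAR_THRESHOLD_BAND <= t["amount"] < REPORTING_THRESHOLD:
            near_threshold_counts[t["account"]] += 1
    return [acct for acct, n in near_threshold_counts.items() if n >= MIN_REPEATS]

print(flag_possible_structuring(transactions))   # ['A-100']
```

A human analyst could run the same check by hand; the point is that software can run it across millions of records at once.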

This is especially useful for investigating transnational criminal organizations. They usually involve repetitive, patterned behaviour in select areas of criminality, such as drug trafficking, extortion, cybercrime and money laundering. They also involve multiple offenders connected through relationships such as family, friendship or business association; in terms of behaviour, they typically travel and dine together. Learning and linking the associations between members of criminal organizations and their business enterprises is a critical part of how anti-money laundering legal experts and law enforcement uncover criminal activities and criminal networks. Combining AI with traditional link analysis in investigations gives the public and private sectors deeper intelligence, as the sketch below illustrates.
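
The following is a minimal illustration of the link analysis idea, using invented names and a plain Python adjacency structure rather than any real investigative tool: given known associations (family, business, travel), it finds everyone within two hops of a known member of a criminal organization.

```python
from collections import deque

# Hypothetical association graph: all names and relationships are invented.
associations = {
    "Known Member": ["Brother", "Business Partner"],
    "Brother": ["Known Member", "Friend A"],
    "Business Partner": ["Known Member", "Shell Co. Director"],
    "Friend A": ["Brother"],
    "Shell Co. Director": ["Business Partner"],
}

def linked_within(graph, start, max_hops=2):
    """Return everyone reachable from `start` within `max_hops` associations."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        if seen[person] == max_hops:
            continue
        for contact in graph.get(person, []):
            if contact not in seen:
                seen[contact] = seen[person] + 1
                queue.append(contact)
    return {p: hops for p, hops in seen.items() if p != start}

print(linked_within(associations, "Known Member"))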

At FinCEN, because specialized money laundering and terrorist financing expertise is distributed among FinCEN agents, the system incorporates a wide range of shared knowledge. The suspiciousness evaluation modules are designed with individual rule sets addressing specific money laundering indicators, which makes it easy to incorporate additional indicators and improves the accuracy of findings. Using AI technology, an organization like FinCEN can identify multiple businesses linked to particular financial transactions, detect money laundering activity and criminal associations, and support enforcement.
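
FinCEN's actual rule sets are not public; the sketch below only illustrates the modular design described above, with each hypothetical rule addressing one indicator and the evaluation combining their scores. The indicators, weights and threshold are assumptions made up for the example.

```python
# Each rule addresses one indicator and returns a score contribution.
# The indicators, weights and threshold here are illustrative assumptions.

def rule_high_risk_jurisdiction(subject):
    return 2 if subject.get("wires_to_high_risk_jurisdictions", 0) > 0 else 0

def rule_rapid_movement_of_funds(subject):
    return 3 if subject.get("in_out_same_day_ratio", 0.0) > 0.8 else 0

def rule_cash_intensive_business(subject):
    return 1 if subject.get("cash_deposit_share", 0.0) > 0.5 else 0

RULES = [
    rule_high_risk_jurisdiction,
    rule_rapid_movement_of_funds,
    rule_cash_intensive_business,
]

SUSPICION_THRESHOLD = 4

def evaluate(subject):
    """Combine the individual indicator rules into an overall score."""
    score = sum(rule(subject) for rule in RULES)
    return score, score >= SUSPICION_THRESHOLD

subject = {
    "wires_to_high_risk_jurisdictions": 2,
    "in_out_same_day_ratio": 0.9,
    "cash_deposit_share": 0.1,
}
print(evaluate(subject))   # (5, True)
```

Adding a new indicator then amounts to adding a new rule function, which is the flexibility the modular design is meant to provide.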

Facial Recognition

Facial recognition is undergoing a renaissance with AI and is changing policing. Facial recognition was developed in the late 1980s by the Central Intelligence Agency. Back then, the CIA's facial recognition system combined image analysis technology with collateral information tied to a database that was used to identify people. Facial recognition technology has played a role in law enforcement since the mid-1990s. For example, border agencies at airports in China and Japan have deployed facial recognition systems for years to control immigration and, in the process, have built two of the world's largest facial recognition databases. China's federal facial recognition database is tied to national identity cards and intelligence agencies.

Jiao Tong University has built a facial recognition system that can identify criminals with an 89.5% accuracy rate, using machine vision algorithms trained on photographic records of known criminals and non-criminals.

The FBI also has facial recognition systems, which can access and scan over 411 million photos in state and federal databases.

US Customs and Border Protection is developing drones equipped with sensors, cameras and facial recognition capabilities, allowing it to film persons near borders and check whether they match entries in law enforcement databases, including the IDENT database, which holds more than 170 million facial images collected from foreign nationals as they enter the US.
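
Modern facial recognition systems generally work by converting a face image into a numeric embedding and comparing it against embeddings in a database. Here is a minimal sketch of that matching step only, with made-up three-dimensional vectors and an invented threshold standing in for the output of a real face-embedding model.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical database of face embeddings (real systems use vectors with
# hundreds of dimensions produced by a trained neural network).
database = {
    "subject_001": [0.12, 0.80, 0.45],
    "subject_002": [0.90, 0.10, 0.30],
}

MATCH_THRESHOLD = 0.95   # illustrative; real thresholds are tuned to error rates

def best_match(probe):
    """Return the closest database entry if it clears the match threshold."""
    candidate = max(database, key=lambda k: cosine_similarity(probe, database[k]))
    score = cosine_similarity(probe, database[candidate])
    return (candidate, score) if score >= MATCH_THRESHOLD else (None, score)

print(best_match([0.11, 0.82, 0.44]))   # ('subject_001', ...)
```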

The UAE has built police robots whose primary function is to scan faces using facial recognition programs for enforcement.

In the private sector, Google and Facebook apply facial recognition to photographs voluntarily uploaded by users on their platforms, grouping photographs of people together and training their AI systems to associate photographs of the same person automatically. However, such automatic linking of a person by tech companies raises privacy law issues in respect of the collection, retention and use of a person's likeness, and issues of informed consent.

Also in the private sector, organizations such as casinos use facial recognition programs to capture images of the public and extract information for compliance and enforcement purposes, for example to detect whether a person is prohibited from gambling for any number of reasons, including membership in organized crime.

CCTV cameras in public places operate much as they do in casinos: facial recognition systems work with machine learning systems to scan faces and alert law enforcement to suspicious activity, such as when the same people appear at the same locations more often than is statistically probable.

For example, facial recognition and machine learning technology can detect that the same person frequently interacts with others on the same street corner and predict that the person is selling illegal drugs, or record a person entering a high-end hotel at the same time frequently and predict that they are selling prostitution services.
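
The "more often than statistically probable" test can be sketched as a simple frequency comparison: count how often a face is seen at a location and flag counts far above what would be expected by chance. The sighting data, expected rate and multiplier below are all invented.

```python
from collections import Counter

# Hypothetical (person, location) sightings from CCTV facial recognition.
sightings = [
    ("person_A", "corner_5th_and_Main"),
    ("person_A", "corner_5th_and_Main"),
    ("person_A", "corner_5th_and_Main"),
    ("person_A", "corner_5th_and_Main"),
    ("person_B", "corner_5th_and_Main"),
    ("person_B", "hotel_lobby"),
]

def flag_repeat_presence(data, expected_per_pair=1, factor=3):
    """Flag person/location pairs seen far more often than expected."""
    counts = Counter(data)
    return [pair for pair, n in counts.items() if n >= expected_per_pair * factor]

print(flag_repeat_presence(sightings))   # [('person_A', 'corner_5th_and_Main')]
```

Note that a frequency count alone says nothing about why the person keeps appearing, which is precisely the concern raised next.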

There are obvious concerns with such judgement calls.  In the example above, the person who interacts at the street corner frequently could be a girl selling Girl Guide cookies and not a drug trafficker.

For a response to be sufficient to justify reasonable grounds to suspect, and to justify a search, seizure or report, it must be accurate and based on an understanding of the law. Systems are only as good as their coding, and if that coding feeds criminal prosecutions but was not developed with lawyers, it may fail constitutional thresholds.

Predictive AI

Another area where AI is changing investigations and law enforcement is predictive AI.

Predictive AI is expected to become embedded in policing to predict and stop crimes before they happen.  In the future, it is highly probable that a machine will identify criminals all on its own and alert law enforcement on how and where to locate a suspect, with the evidence detailing the crime packaged by systems for arrest and prosecution.

Chicago is already evaluating predictive AI with the use of public data, such as social media data, and other sources to identify people likely to be criminals before they commit crimes.  The research is controversial because it assumes criminality can be predicted.

Automating the process cuts down on the time a human would take to identify the data and come to the same conclusion. The advantage of using data vacuumed from a social media account is that it can pick up repeat information (such as the hashtag #drugs), correlations among social media posts, repeat locations, connections and references to other people that a human could not detect without years of analysis.

Moreover, using social media as the collateral information allows financial crime investigators to detect, within seconds, things that are out of pattern – for example, if a person has geo-tagged frequent visits to expensive resorts or restaurants that are inconsistent with their salary, that may be indicative of possession of proceeds of crime.
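
As a rough sketch of the kind of check described, here is hypothetical Python that compares the estimated cost of geo-tagged venue visits against declared income; the posts, venue costs, income figure and ratio threshold are all invented for illustration and are not drawn from any real investigative system.

```python
# Hypothetical geo-tagged posts and an invented affordability heuristic.
posts = [
    {"venue": "luxury_resort", "est_cost": 4000},
    {"venue": "fine_dining", "est_cost": 600},
    {"venue": "fine_dining", "est_cost": 800},
    {"venue": "luxury_resort", "est_cost": 5000},
]

declared_monthly_income = 3000
MONTHS_OBSERVED = 3
SPEND_TO_INCOME_LIMIT = 0.5   # illustrative threshold

def out_of_pattern(posts, income, months):
    """Flag when estimated discretionary spending dwarfs declared income."""
    est_spend = sum(p["est_cost"] for p in posts)
    ratio = est_spend / (income * months)
    return ratio > SPEND_TO_INCOME_LIMIT, round(ratio, 2)

print(out_of_pattern(posts, declared_monthly_income, MONTHS_OBSERVED))  # (True, 1.16)
```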

Today, we can identify criminal actors in organized crime before there is sufficient evidence to prove criminal conduct, but that is markedly different from predicting the criminality of an individual. With respect to the former, identification is based on the fact that members of criminal organizations and gangs are part of the same circles and networks. Statistically speaking, they are likely to "infect" each other with criminal interests.
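
One simple way to express the "same circles and networks" point is an exposure score: the share of a person's known associates who are confirmed members of a criminal organization. The contact lists and names below are invented.

```python
# Hypothetical contact lists and a set of confirmed members.
contacts = {
    "person_X": ["member_1", "member_2", "cousin", "coworker"],
    "person_Y": ["coworker", "neighbour", "gym_friend"],
}
confirmed_members = {"member_1", "member_2"}

def exposure_score(person):
    """Fraction of a person's known associates who are confirmed members."""
    known = contacts.get(person, [])
    if not known:
        return 0.0
    return sum(1 for c in known if c in confirmed_members) / len(known)

print(exposure_score("person_X"))   # 0.5
print(exposure_score("person_Y"))   # 0.0
```

A high score identifies someone worth a closer look; it plainly does not predict that any individual will commit a crime, which is the distinction drawn above.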

It sounds great in theory that we can predict criminality, but there are risks because machines are not infallible, and neither are humans. Humans often make bad judgment calls in their life decisions, or lack the maturity, intellect or education to comprehend the consequences of what they post online for the world to see and how it is being used. People who are unaware of data vacuuming may be harmed when their social media posts are stored in a permanent database and later used to predict criminality.

Borg Collective?

Accessing big data, undertaking data vacuuming and then applying machine learning to that data may lead to a form of constructive knowledge, legally speaking, allowing law enforcement to have constructive awareness of predictions of criminality while ignoring individual reasonable suspicion requirements. One scholar has suggested that this would turn our police agencies into something like a "Hive Mind" that collects and processes data from millions of data sources, CCTVs and drones, similar to the Borg Collective in Star Trek, allowing police agencies of the future to rely on global, real-time updated databases on individuals for law enforcement.

Other AI in Law Enforcement

In other contexts, securities commissions, including the US Securities and Exchange Commission and the Australian Securities and Investments Commission, use AI to detect rogue market behaviour among traders and brokers. Nasdaq is looking at developing AI software that uses machine intelligence to understand the language used by traders and identify key indicators of fraud or criminal activity as it happens.
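
A toy sketch of what scanning trader communications for "key indicators" might look like: matching messages against a small list of phrases associated with collusion or front-running. The phrase list is invented and far cruder than anything a real market-surveillance system would use, which would be trained on labelled communications rather than hand-written keywords.

```python
import re

# Invented indicator phrases for illustration only.
INDICATOR_PHRASES = [
    r"keep (this|it) between us",
    r"before the announcement",
    r"move the price",
]

def flag_messages(messages):
    """Return messages containing any indicator phrase, with the phrase matched."""
    hits = []
    for msg in messages:
        for pattern in INDICATOR_PHRASES:
            if re.search(pattern, msg, flags=re.IGNORECASE):
                hits.append((msg, pattern))
    return hits

chat = [
    "Lunch at noon?",
    "Let's do this before the announcement on Friday.",
]
print(flag_messages(chat))
```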

In the same vein, autonomous boats equipped with sonar and AI capabilities are used to detect and report illegal fishing in the oceans and other illegal activities, such as drug trafficking, in coastal waters.

AI is also being used to create safer cities. Students at UC Berkeley developed an app that brings real-time crime incident information to users, using historic and location data to identify safe navigation paths and issue alerts. The app features a dynamic crime map, notifications about nearby crimes and automated reporting of incidents, drawing on data sources including police dispatch data, crowd-sourced information and historic data.
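
The app's internals are not described here; the sketch below only illustrates the general idea of scoring candidate routes by nearby historic incidents, with invented coordinates, counts and radius.

```python
import math

# Hypothetical historic incidents as (latitude, longitude) points.
incidents = [(37.8712, -122.2601), (37.8698, -122.2590), (37.8755, -122.2650)]

def incidents_near(point, radius_km=0.3):
    """Count historic incidents within roughly `radius_km` of a point."""
    lat, lon = point
    count = 0
    for ilat, ilon in incidents:
        # Rough planar approximation; adequate for a few hundred metres.
        dkm = math.hypot((ilat - lat) * 111.0,
                         (ilon - lon) * 111.0 * math.cos(math.radians(lat)))
        if dkm <= radius_km:
            count += 1
    return count

def safer_route(routes):
    """Pick the route whose waypoints pass the fewest historic incidents."""
    return min(routes, key=lambda r: sum(incidents_near(p) for p in r))

route_a = [(37.8710, -122.2600), (37.8720, -122.2610)]
route_b = [(37.8800, -122.2700), (37.8810, -122.2710)]
print(safer_route([route_a, route_b]))   # route_b (fewer nearby incidents)
```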

The world is rapidly changing with AI and policing is no different.
