Over the past few years, there has been a sustained push by law enforcement agencies to adopt “smart” technologies in policing. This push is often framed as a response to rising crime rates, low conviction rates, and a growing backlog of unresolved cases in India. At the same time, the nature of crime itself has evolved with recent technological advancements, and police forces in India are known to be overworked and short-staffed. In this context, it is not surprising that AI is being positioned as a solution to these constraints, especially to assist with predicting and preventing crime, i.e. predictive policing.
Predictive policing is facilitated by AI technologies that analyse large volumes of policing data, which would otherwise be difficult to analyse manually, to identify crime-related patterns and prevent crime before it occurs. The data used to train a predictive policing tool could include anything from historical crime records, FIRs, arrest data, and emergency call logs to other operational datasets used to identify crime patterns and locations. This allows police to optimize the deployment of personnel, patrol vehicles, and other resources. In parallel, AI-driven video analytics are deployed to automatically scan large volumes of recorded or live CCTV footage for specific visual markers. This can include matching faces against stored images, tracking a person’s movement across multiple cameras for potential criminal activity, or filtering footage based on time, location, clothing, or other identifiable features to trace suspects. There have also been reports of predictive policing tools being used to monitor social media platforms for signs of unrest, potential threats, or criminal activity.
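To make the pattern-analysis step concrete, the following is a deliberately minimal sketch, with entirely hypothetical area names and incident data, of the core logic such hotspot tools rest on: historical records are aggregated and areas are ranked by past incident volume, which is then treated as a proxy for future risk. Real systems use far richer features and models, but the basic loop of ranking by historical counts and deploying accordingly looks like this:

```python
# Hypothetical sketch: rank areas as "hotspots" by historical incident counts.
# Area names and records below are invented for illustration only.
from collections import Counter

# (area, hour-of-day) pairs drawn from historical records
incidents = [
    ("market_rd", 22), ("market_rd", 23), ("market_rd", 21),
    ("station_sq", 9), ("station_sq", 18), ("park_ln", 14),
]

def rank_hotspots(records, top_n=2):
    """Rank areas by historical incident count (a crude 'risk score')."""
    counts = Counter(area for area, _hour in records)
    return counts.most_common(top_n)

hotspots = rank_hotspots(incidents)
# hotspots -> [("market_rd", 3), ("station_sq", 2)]
```

The crucial point for the discussion that follows is that the ranking is driven entirely by past records: whatever shaped those records, including who was policed and what was registered, is carried forward into the “prediction.”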
In essence, predictive policing can be divided into four categories¹:
(i) Predicting where and when crimes may happen
(ii) Predicting identity of suspects
(iii) Predicting potential offenders before crime happens
(iv) Predicting who may be targeted or become victims
It is important not to confuse predictive AI tools with AI-assisted investigative or forensic technologies used in policing. Tools that automate the review of visual data, such as enhancing poor-quality images, reconstructing faces, or generating composite sketches, operate retrospectively and are applied after a crime has occurred to assist investigators in analysing evidence. These systems do not forecast future criminal behaviour. Instead, they support human decision-making by reducing the manual effort and time spent processing large volumes of visual material. Predictive AI tools, by contrast, are forward-looking and probabilistic in nature: they guide pre-emptive policing decisions based on patterns and inferences drawn from historical data.
While all of this may sound unproblematic, the mass deployment of these systems brings with it unresolved questions around surveillance, data protection, algorithmic bias, transparency, and institutional accountability. Moreover, as AI systems become more deeply embedded in everyday policing practices, the legal safeguards for these practices, their implications for civil liberties and due process, and their impact on fundamental rights remain unclear and questionable. Long-standing research and lived experience indicate that FIRs in India are frequently under-reported for certain categories of crime, particularly crimes against women, caste-based violence, and offences involving powerful actors, while at the same time reflecting patterns of over-policing in marginalized and over-surveilled communities. As a result, the data used to “train” predictive systems often mirrors existing social and institutional biases rather than objective crime trends. When such historically skewed data is fed into AI-driven predictive tools, past patterns of policing risk being treated as indicators of future criminality. Areas or communities that have been historically subjected to heavier police presence are more likely to generate higher volumes of police data, which in turn can lead systems to repeatedly flag the same locations or groups as “high risk.” This creates a feedback loop where increased surveillance and policing are justified by algorithmic outputs that are themselves shaped by prior bias, rather than by independent assessments of actual harm or risk².
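The feedback loop can be illustrated with a deliberately simplified, deterministic toy model. All numbers below are hypothetical: two areas have exactly the same underlying crime rate, but one starts with more recorded incidents because it was historically over-policed. If patrols are allocated in proportion to recorded data, and more patrols record proportionally more incidents, the initial skew never self-corrects; the over-policed area is flagged as “high risk” indefinitely even though actual risk is identical:

```python
# Toy model of the data feedback loop: equal true crime rates, unequal
# historical records. Patrols follow past data; detections follow patrols.
TRUE_RATE = 100  # actual incidents per period in EACH area (identical)

def run_feedback_loop(recorded, periods=10):
    """recorded: dict of area -> historical incident count (the 'training data')."""
    recorded = dict(recorded)
    for _ in range(periods):
        total = sum(recorded.values())
        for area, count in list(recorded.items()):
            patrol_share = count / total            # patrols follow past records
            recorded[area] += TRUE_RATE * patrol_share * 2  # detections follow patrols
    return recorded

history = {"A": 60, "B": 40}     # same true rate, but A was over-policed
result = run_feedback_loop(history)
share_A = result["A"] / (result["A"] + result["B"])
# share_A stays at 0.60: the initial bias is frozen in, never corrected
```

The model is crude by design, but it captures the point made above: the algorithm’s “high risk” label for area A is a product of where police looked in the past, not of any difference in underlying harm.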
There is also little to no public information on how datasets are cleaned, whether inaccuracies are corrected, how missing or unreliable data is handled, or whether bias and error rates are assessed at any stage of deployment. This opacity makes independent audits impossible and raises serious questions about the legitimacy of using such data-driven systems to make decisions that can directly affect individuals’ liberty, mobility, and fundamental rights.
Automated Facial Recognition System (AFRS)
In 2015, the Delhi Police and the Indian Space Research Organisation’s Advanced Data Processing Research Institute started developing the Crime Mapping, Analytics and Predictive System (CMAPS), a web-based application for all police stations in Delhi for live spatial hotspot mapping of crime, criminal behaviour patterns and suspect analysis³. The system’s input data comes from Dial 100, the police emergency call centre; the Crime and Criminal Tracking Network & Systems (CCTNS), a nationwide digital platform that connects all police stations in India for real-time crime and criminal data sharing; and other archived crime data.
In 2018, the Delhi High Court in Sadhan Haldar v. State NCT of Delhi⁴ authorised the Delhi Police to use an Automated Facial Recognition System (AFRS) for tracing and reuniting missing children. News reports later confirmed that the Delhi Police used the AFRS software to screen crowds during political rallies, the anti-CAA protests and the farmer protests. These instances show clear function creep by the Delhi Police: technology procured to trace and rescue missing children, as approved by the Delhi High Court, was redirected to entirely different purposes. No legal framework or Standard Operating Procedure for the use of the AFRS has ever been released publicly.
In 2018, the Telangana Police also launched TSCOP, a mobile app which assists investigating officers in detecting the faces of crime suspects and missing persons by cross-referencing them with the CCTNS database, maintained by the Ministry of Home Affairs, as well as other databases maintained by the State of Telangana. The app also allows officers to take photos of suspects during patrols and run them through the databases for identification. The app has been heavily criticized for eroding citizens’ privacy through data-hungry mechanisms and for disregarding less intrusive methods and guidelines for search, and is currently the subject of a public interest litigation in the Telangana High Court.⁵
In 2019, the Ministry of Home Affairs (MHA) floated the first tender to develop the National Automated Facial Recognition System (NAFRS)⁶. The NAFRS will serve as a centralized platform to identify suspects, missing persons, and unidentified bodies using facial recognition technology integrated with existing image databases used by State police forces. The current status of the development or deployment of the NAFRS is unknown.
AI-enabled CCTV
Indian law enforcement agencies have increasingly upgraded existing CCTV to AI-enabled CCTV. Unlike traditional CCTV, which requires human operators to manually review footage, AI-enabled CCTV systems are designed to detect predefined “events” or “anomalies” such as unusual movement, loitering, crowd density changes, traffic violations, or specific behavioural patterns. In law-enforcement contexts, these systems are increasingly framed as tools for public safety, crowd management, and traffic regulation, all of which feed into predictive policing. Over the last few years, AI-enabled CCTV has found its way to railway stations, metros, public spaces, marketplaces, temples, events and towns. AI-enabled CCTVs have also been installed in cities to monitor garbage dumping and to curb illegal parking and traffic violations. Some of these AI-enabled CCTVs are also used by police officers to identify suspects in crowds, to match suspects against existing criminal databases, or to track missing persons. Several states have developed their respective AI-enabled CCTV systems for predictive policing.
Automatic Number Plate Recognition (ANPR)
Automatic Number Plate Recognition (ANPR) is a form of algorithmic surveillance that uses cameras combined with optical character recognition and machine-learning techniques to automatically read vehicle registration numbers from images or video feeds. The Motor Vehicles (Amendment) Act, 2019 enables electronic monitoring for road safety, and the Central Rules notified in August 2021 expressly recognise ANPR as an “electronic enforcement device.”⁷ In policing contexts, ANPR systems are typically connected to large backend databases, such as vehicle registration records, stolen-vehicle lists, or watchlists, and can log the time, location, and movement of vehicles across multiple points. While ANPR is often presented as a neutral traffic-management or enforcement tool, its continuous and automated collection of mobility data enables the profiling of movement patterns, repeat behaviour, and associations over time. Thus, ANPR contributes to predictive policing by allowing law-enforcement agencies to infer risk, flag “suspicious” vehicles, anticipate where certain vehicles or categories of vehicles may appear, and proactively deploy police resources based on past patterns rather than immediate suspicion.
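The matching-and-logging step described above can be sketched in a few lines. This is a hypothetical illustration, not any real system’s code: the plate numbers, camera identifiers, and watchlist are invented, and real deployments involve far more elaborate OCR and database infrastructure. The point it makes concrete is that every read is logged with time and place, which is precisely what enables movement profiling, independently of whether an alert is raised:

```python
# Hypothetical ANPR matching sketch: normalise an OCR'd plate string,
# check it against a watchlist, and log every sighting with time/location.
import re
from datetime import datetime, timezone

WATCHLIST = {"DL8CAF5031"}       # e.g. a stolen-vehicle list (invented entry)
sightings = []                   # (plate, camera_id, timestamp) log

def normalise(raw_ocr: str) -> str:
    """Strip spaces/hyphens and common OCR confusions (O->0, I->1)."""
    plate = re.sub(r"[\s-]", "", raw_ocr.upper())
    return plate.translate(str.maketrans({"O": "0", "I": "1"}))

def process_read(raw_ocr: str, camera_id: str) -> bool:
    """Log the sighting; return True if the plate is on the watchlist."""
    plate = normalise(raw_ocr)
    sightings.append((plate, camera_id, datetime.now(timezone.utc)))
    return plate in WATCHLIST

hit = process_read("DL 8C AF 5O31", "cam-ring-road-04")  # OCR misread 0 as O
trail = [(cam, ts) for plate, cam, ts in sightings if plate == "DL8CAF5031"]
```

Note that `sightings` accumulates the full movement trail of every vehicle that passes any camera, not just watchlisted ones; the profiling capacity lies in the log, not the alert.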
In India, ANPR has increasingly been integrated into broader predictive and data-driven policing infrastructures rather than operating as a standalone traffic tool. For instance, in Delhi, ANPR cameras form part of the city’s wider Safe City and traffic surveillance architecture and are used alongside other analytics to track vehicles, identify repeat offenders, and detect end-of-life or blacklisted vehicles. In Bengaluru, ANPR is embedded within the Intelligent Traffic Management System (ITMS), where historical violation data and movement patterns are used to identify high-risk corridors and optimize enforcement. Telangana has integrated ANPR feeds into its Integrated Command and Control Centre, enabling real-time alerts for stolen or flagged vehicles and retrospective analysis of vehicle movement across the city. Tamil Nadu has deployed ANPR, linking it with e-challan systems and central databases to automatically identify violations and trace vehicles involved in offences, while also proposing wider deployment at toll plazas and border points. Similar deployments also exist in states like Kerala, Haryana, and Punjab.
AI-enabled Drone Surveillance
Lately, there have been reports of police in Rourkela, Odisha using AI-enabled drones for predictive policing activities such as patrolling. These drones are capable of area surveillance, vehicle tracking, and the detection and prevention of crimes through timely interventions. AI-enabled drones were also introduced in Karnataka; they are said to have advanced surveillance capabilities such as day-and-night monitoring, ANPR, and crowd and vehicle tracking, along with rapid field deployment enabling tech-supported policing across districts. There were also reports of the Delhi Police purchasing AI-enabled drones ahead of Independence Day 2025. AI-enabled drone surveillance is also becoming a common tool for monitoring crowds, especially during festivals and public occasions, as recently seen in Kerala, Odisha and Lucknow. However, the Odisha Police’s AI-enabled Integrated Command and Control Centre deployed during the Rath Yatra to assist with crowd management failed to detect a stampede that killed three people and injured several others. Investigations found that only 123 of 275 cameras were functional, feeds were inconsistent, and drones were under-utilised.
The Drone Rules, 2021, which focus on airspace safety, registration, and operational permissions, do not address law-enforcement use of drones for surveillance, data analytics, or predictive decision-making. The expansion of AI-based drone surveillance in predictive policing thus continues without any express legal framework.
Safe City Project and rise of Predictive Policing
In 2018, the Safe City Project was launched in eight metro cities in India under the Nirbhaya Fund with the aim of creating a safe and secure environment for women and children. As part of the project, states were permitted to use technology to identify hotspots for crime against women and to deploy technological measures to enhance safety. What initially began as the installation of CCTV cameras gradually evolved into the deployment of AI-enabled surveillance systems and centralized “smart” control rooms in the eight metro cities (Delhi, Mumbai, Kolkata, Chennai, Bengaluru, Hyderabad, Ahmedabad and Lucknow) as well as other cities. The AI-based video analytics systems automatically scan live footage and raise alerts based on predetermined scenarios, such as particular movements or behaviours considered suspicious, which police officers define and use to train the models.⁸ There have also been reports of AI-enabled CCTVs under the Safe City Project being used for purposes other than women’s public safety: in Delhi, for instance, the cameras were integrated into Independence Day security to monitor VIP movements and detect suspicious activity.
However, research reveals that despite heavy investment in AI-enabled surveillance infrastructure, the project relies excessively on technology while neglecting human oversight, timely police response, and accountability. AI-generated alerts are frequent but largely unreliable, with most flagged incidents turning out to be false positives, overwhelming understaffed control rooms and delaying real intervention. In serious cases, including abduction and sexual violence, the system failed to generate actionable real-time alerts and was used only retrospectively, like an ordinary CCTV camera. What the project does enable, however, is real-time surveillance of persons on bail, protesters, trans persons, sex workers, and political activists.⁹
Future of AI in Predictive Policing Tools
Traditionally, policing was a reactive function: responding to reported offences, investigating specific crimes, and acting on individualized suspicion grounded in law. Predictive policing marks a departure from this model towards a system that predicts future risks. Areas or categories of people are continuously assessed, classified, and monitored based on algorithmic inferences drawn from historical data. Decisions that were once made through evidence, legal standards, judicial oversight, and procedural safeguards are increasingly migrating to dashboards, heat maps, and risk indicators generated by opaque systems. In essence, the pace at which predictive policing is expanding in India has outpaced the development of legal safeguards, institutional accountability, and democratic oversight. To draw a parallel, the EU AI Act places the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces under the “prohibited AI practices” category, and the exceptions are limited to specific use cases such as the targeted search for missing persons, the prevention of a threat to the life or physical safety of persons or of a terrorist attack, and certain specific crimes.¹⁰
Further, requests for information regarding predictive policing systems have frequently been denied under the Right to Information Act on grounds of commercial confidence or blanket exemptions. As a result, basic details regarding system design, training datasets, accuracy benchmarks, vendor contracts, and standard operating procedures remain inaccessible to the public. This secrecy makes it difficult for the public to scrutinise these systems and nearly impossible for individuals wrongfully affected to challenge decisions taken by such predictive policing tools.
There is limited publicly available information on whether these systems comply with the Digital Personal Data Protection Act, 2023 (DPDPA) and its data protection principles such as purpose limitation, data minimisation, accuracy, storage limitation and reasonable safeguards. The situation looks even more grim as Section 17 of the DPDPA allows the government to exempt its own agencies on broad grounds of national security or public order, creating a massive carve-out for law enforcement and investigation agencies. Further, unlike the EU GDPR, the DPDPA does not give data principals a right to object to automated decision-making or to seek human review of it, leaving them vulnerable to false positives and bias within these systems.¹¹ Thus, the very law that is supposed to safeguard the use of personal data risks becoming a tool that expands surveillance through predictive policing systems that remain opaque and unaccountable.
As these technologies become part of routine policing, we worry that the risk lies not only in how they can be misused, but in how easily they are accepted without question. Surveillance quietly becomes routine, finding its way into ordinary police work and public life without public debate or legal scrutiny, and its boundaries continue to be pushed beyond what is already authorised by law. Over time, this normalisation will lower expectations of accountability: intrusive practices will be seen as necessary, and citizens will be expected to accept constant monitoring without questions of legality, necessity or proportionality.
Footnotes:
1. Perry, Walter L., Brian McInnis, Carter C. Price, Susan Smith and John S. Hollywood. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Santa Monica, CA: RAND Corporation, 2013. http://www.rand.org/pubs/research_reports/RR233
2. Marda, V., and Narayan, S. (2020). Data in New Delhi’s predictive policing system. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 317–324. https://doi.org/10.1145/3351095.3372865
3. Digital Initiatives of Delhi Police, 2020, https://cag.gov.in/uploads/icisa_it_reports/f6b3cd4d211336f907d5233be43b9227.pdf
4. W.P.(CRL) 1560/2017
5. S.Q. Masood v. State of Telangana, WP(PIL) 35/2024
6. GoI, ‘Automated Facial Recognition System will facilitate better identification of criminals, unidentified dead bodies and missing children and persons: Shri G. Kishan Reddy’ (MHA, 2020) https://www.mha.gov.in/sites/default/files/PR_RSUSQ1495AFRS_03042020.pdf
7. https://www.pib.gov.in/PressReleasePage.aspx?PRID=2078259
8.“Watched but Unprotected: How Lucknow’s Safe City Project Fails Women” (Pulitzer Center) https://pulitzercenter.org/stories/watched-unprotected-how-lucknows-safe-city-project-fails-women
9. Ibid.
10. Article 5 of EU AI Act
11. Sahil and Jeet DrS, “Predictive Policing, AI Surveillance, and Privacy in India: A Legal Analysis under the DPDP Act 2023” (International Journal of Trends in Emerging Research and Development, November 11, 2025) https://zenodo.org/doi/10.5281/zenodo.18221510
12. Artificial Intelligence and Surveillance in India: 2025 Roundup, https://sflc.in/artificial-intelligence-and-surveillance-in-india-2025-roundup/
