Artificial Intelligence and Surveillance in India: 2025 Roundup

 

If 2025 has made one thing clear, it's that Artificial Intelligence (AI) is no longer a futuristic concept; it's everywhere. Alongside the rise of AI, another trend has quietly emerged: AI-driven surveillance. AI-driven surveillance tools have been introduced into cameras, databases, and control rooms to strengthen governments' ability to watch and monitor people. These tools promise efficiency, speed, and scale, and are often introduced in the name of public safety, smarter cities, or better infrastructure. It's hardly surprising that governments and public authorities have embraced these technologies.

 

While mass surveillance itself isn’t new, AI is transforming its reach and intensity. Analysis and decisions that once required significant human effort can now be automated, expanded, and executed in real time. On paper, AI-driven surveillance tools promise smoother services and smarter governance. In practice, they also expand the State’s ability to observe, track, and analyse people at an unprecedented scale.

 

At the same time, citizens across the country have embraced AI-driven technologies without fully realising the implications for privacy, data protection, and data security. When surveillance technologies are carefully masked as tools of convenience and efficiency, such as the biometric travel system DigiYatra, which is marketed not as a monitoring mechanism but as a seamless, time-saving innovation, it is worth asking whether citizens can really be blamed for a lack of awareness.

 

Through this blog post, we analyse AI-driven surveillance technologies that emerged or became popular in 2025 and examine whether a clear legal basis exists for deploying and expanding such technologies in a manner that is lawful, necessary, and proportionate.

 

AI-surveillance infrastructure  

 

AI surveillance infrastructure relies primarily on Facial Recognition Technology (FRT). FRT is a form of biometric identification: it analyses a person's face to extract distinctive features such as the distance between the eyes, the contours between the lips and nose, and the shape of the face. It can also go beyond mapping data points on a face to mapping body language and emotions, and can supposedly predict future moves. Unlike traditional CCTV systems, where recorded footage is saved and used if required, FRT enables active tracking that allows authorities to identify individuals in real time and follow their movements. This is possible with AI: when a new facial image is introduced, the system compares it against its learned patterns to find a match, or detects an emotion against the facial expression datasets it is trained on.

Another emerging genre of AI surveillance is context-aware AI. This refers to the ability of an AI system to analyse video feeds and other sensor data while considering environmental and situational factors. It can evaluate additional parameters such as time of day, location, crowd density, behavioural patterns, and historical data to make more informed decisions. For instance, with AI-driven video analytics, a simple prompt like "detect women in pink saree" could return every woman wearing a pink saree in the area where the tool is deployed.

However, when FRT systems are deployed without clear legal authorisation, necessity assessments, or proportionality safeguards, they enable mass surveillance. At present, no structure or authority exists to curb the State's power to deploy such invasive technologies pervasively, and no mechanism exists for checks and balances. The unregulated and unbridled use of facial recognition technology by government agencies raises questions about the transparency and accountability of surveillance practices, the storage and handling of collected data, and the potential for misuse or unauthorised access to sensitive personal information.
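
Before turning to specific deployments, it helps to see how small the core matching step is. The sketch below is a minimal, illustrative Python version of the one-to-many "watchlist search" described above; the `embed_face` stub, the names, and the 0.8 threshold are assumptions for illustration, not any deployed system's actual pipeline.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (in deployment, a
    neural network maps a face crop to a fixed-length vector).
    Flatten-and-normalise is enough to make the sketch run."""
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def search_watchlist(probe_img, watchlist, threshold=0.8):
    """One-to-many search: score a probe face against every enrolled
    embedding and return the names that cross the threshold."""
    probe = embed_face(probe_img)
    scores = {name: float(probe @ emb) for name, emb in watchlist.items()}
    return {name: s for name, s in scores.items() if s >= threshold}

# Enrol two (random) "faces", then probe with a noisy copy of one.
rng = np.random.default_rng(0)
faces = {"person_a": rng.standard_normal((64, 64)),
         "person_b": rng.standard_normal((64, 64))}
watchlist = {name: embed_face(img) for name, img in faces.items()}
probe = faces["person_a"] + 0.05 * rng.standard_normal((64, 64))
print(search_watchlist(probe, watchlist))  # only person_a matches
```

Everything contentious sits outside this loop: who builds the watchlist, where the probe images come from, and what happens to a person on a false match.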

 

Crowd Control and Public Safety 

 

At the beginning of 2025, we saw police officers using FRT and 2,700 CCTV cameras enhanced with AI capabilities to count the number of people during the Maha Kumbh in Prayagraj, Uttar Pradesh. The AI behind the cameras could send alerts to authorities when it detected a surge in any one section of the festival city, a fire, or people crossing barricades they were not supposed to. The alerts were relayed to personnel on the ground to take corrective action. As for finding lost visitors, the technology worked only if the missing person had been captured on the crowd-monitoring cameras. Underwater drones operating at depths of up to 100 metres sent real-time alerts in case of accidents during the dip. The government itself released details of the surveillance it undertook, depicting it as a necessary safety measure. Approximately 1 crore people attended on a single day.
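
Reports do not disclose how the Maha Kumbh system actually computed its alerts. Purely as an illustration of what a "surge alert" can reduce to, here is a toy rule over per-sector headcounts; the sector names, capacities, and the 90% ratio are all invented:

```python
def surge_alerts(counts: dict[str, int], capacities: dict[str, int],
                 ratio: float = 0.9) -> list[str]:
    """Flag any sector whose estimated headcount crosses a fraction
    of its safe capacity -- the kind of rule a control room layers
    on top of per-camera people counts."""
    return [f"SURGE: {sector} at {n}/{capacities[sector]}"
            for sector, n in counts.items()
            if n >= ratio * capacities[sector]]

capacities = {"sector_4_ghat": 20_000, "sector_9_bridge": 5_000}
print(surge_alerts({"sector_4_ghat": 19_500, "sector_9_bridge": 2_000},
                   capacities))  # only sector_4_ghat is flagged
```

The substantive questions are about the inputs: how the counts are estimated, and what else the cameras record along the way.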

 

During Ganesh Chaturthi 2025, Mumbai and Pune also saw the use of AI-driven CCTV systems with facial recognition and behaviour analytics to monitor processions. Pune Police confirmed that over 8 lakh alerts were generated and around 250 individuals were flagged for past criminal records and suspicious activities. Mumbai Police also adopted AI tools during Ganpati Visarjan, using an AI-based control room, drones, and about 10,000 networked CCTVs to track large processions, estimate crowd size in real time, and monitor immersion spots.

 

Prior to Independence Day 2025, the Delhi Police used AI surveillance systems including FRT with video analytics integrated with a database of 3 lakh suspects, anti-intrusion cameras, people-count cameras, automatic number plate recognition (ANPR), and abandoned-object detection.

 

While AI-driven crowd surveillance at large public gatherings is routinely justified by authorities on grounds of public safety and order, its legal basis under Indian law remains uncertain. India does not have a dedicated statute that authorises the use of facial recognition technology, AI-driven behavioural analytics, or biometric identification for crowd control. In practice, such deployments rely on broad and general police powers under laws such as the Code of Criminal Procedure (now replaced by the Bharatiya Nagarik Suraksha Sanhita, 2023) and state police acts, none of which contemplate continuous, automated, or biometric surveillance. Following the Supreme Court's decision in Justice K.S. Puttaswamy v. Union of India, any state action infringing the right to privacy must satisfy the tests of legality, necessity, and proportionality. While crowd management and the prevention of accidents at large religious or cultural events may constitute a legitimate state aim, the use of facial recognition and database-linked identification raises serious concerns regarding necessity and proportionality, particularly when less intrusive measures could achieve the same objectives. In the absence of clearly defined statutory limits on data use and retention, data collected through AI-driven surveillance systems remains vulnerable to function creep, as information collected for one purpose could be repurposed without additional legal authorisation or public scrutiny.


Predictive Policing Systems

 

The year 2025 saw an expansion of existing predictive policing systems and AI-enabled tools across multiple Indian states. A press release of the Ministry of Law and Justice dated February 25, 2025 confirmed the integration of AI in crime detection, surveillance, and criminal investigations. Examples of such applications include surveillance and investigation through facial recognition systems linked with national criminal databases, AI-powered forensic analysis for the examination of evidence, and the use of AI models to analyse crime patterns, high-risk areas, and criminal behaviour, enabling law enforcement to take proactive measures. The Indian Police Journal issue on AI and Policing released in 2025 also discusses the effectiveness and pitfalls of predictive policing.

 

A key national programme underpinning many of these deployments is the Safe City Project, a flagship government initiative aimed at enhancing public safety, particularly for women and children, using surveillance infrastructure and data-driven policing tools. The project relies heavily on CCTV cameras, facial recognition systems, and behavioural analytics to monitor public spaces and flag suspicious activity. AI-driven CCTV cameras are becoming increasingly popular across Indian cities; however, publicly available information on the precise scope of data collection, retention periods, accuracy benchmarks, or independent oversight mechanisms remains limited.

 

In recent developments, Project Trinetra (Targeted Risk-based Insights for Next-crime Estimation & Tactical Resource Allocation) was launched by the Akola district police in Maharashtra. Project Trinetra uses data analytics to assign risk scores to repeat offenders based on conviction history and location. As a result, areas where high-risk individuals reside receive focused patrolling. The project has been particularly lauded for narrowing the focus to individuals with a history of crime instead of surveilling the entire population.

 

In Odisha, the Rourkela Police introduced Project SHIELD (Smart Habitual Offender Intelligence and Early Law Enforcement Detection), which uses AI-powered cameras and gang-analysis algorithms; officers can now generate a habitual offender score to assess reoffending risks and identify crime hotspots.
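
Neither Trinetra's nor SHIELD's scoring models are public. Purely as an illustration of what a "risk score" built from conviction history can look like, here is a toy recency-weighted rule; the fields, severity weights, and decay are all invented:

```python
from datetime import date

def risk_score(convictions: list[dict], today: date = date(2025, 12, 31)) -> float:
    """Toy score: recent and repeated convictions raise the score,
    older ones decay. All weights here are invented for illustration."""
    score = 0.0
    for c in convictions:
        years_ago = (today - c["date"]).days / 365.25
        severity = {"petty": 1.0, "property": 2.0, "violent": 4.0}[c["type"]]
        score += severity / (1.0 + years_ago)  # older offences count less
    return score

history = [
    {"date": date(2024, 6, 1), "type": "property"},
    {"date": date(2019, 2, 10), "type": "violent"},
]
print(round(risk_score(history), 2))
```

Even this toy version shows why such scores are contested: every input reflects past policing and prosecution choices, so the score tends to recycle them as "objective" risk.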

 

In October 2025, Telangana, widely reported to be the most surveilled state in India, issued a tender for four cybercrime investigation tools intended to significantly expand the state's capacity to handle cybercrime, including tools that can monitor social media content and extract data from devices. In effect, this raises concerns about mass online surveillance.

 

There have also been reports of Smart Prahari, an AI-based platform developed by an IPS officer and currently deployed in Washim, Maharashtra. The model identifies recurring patterns in the FIR data it is trained on, predicts when and where crime may occur, and suggests patrolling routes for police officers. What is particularly concerning in this instance is the apparent absence of any formal approval process, legal framework, or independent evaluation prior to deployment; it appears that individual officers can now design and deploy predictive policing tools without independent scrutiny.
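
Smart Prahari's internals are likewise not public. The toy aggregation below only illustrates the general "hotspot" idea of ranking place-and-time cells by historical FIR frequency (the field names are invented):

```python
from collections import Counter

def hotspots(firs: list[dict], top_n: int = 3) -> list:
    """Rank (area, hour) cells by how often past FIRs fall in them --
    the core of many 'predictive' patrol-route tools, and also why
    their output simply echoes where policing already happened."""
    counts = Counter((f["area"], f["hour"]) for f in firs)
    return counts.most_common(top_n)

firs = [
    {"area": "market", "hour": 21}, {"area": "market", "hour": 21},
    {"area": "market", "hour": 22}, {"area": "bus_stand", "hour": 18},
]
print(hotspots(firs))  # ('market', 21) ranks first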

 

In December 2025, NATGRID, a state-backed surveillance database that enables law enforcement agencies, including state police, to access government and private databases, got an upgrade with Gandiva, a data analytics tool powered by AI and FRT. Gandiva can match a suspect's image against NATGRID's database and assist law enforcement and intelligence agencies in connecting the dots by analysing massive amounts of data from disparate sources in real time. This can include all kinds of data about a citizen, including details of driving licences, vehicle registrations, bank records, Aadhaar registration, FASTag, hospital data, airline data, tax records, and telecom and internet usage metadata. The Union Ministry of Home Affairs has asked states to use the platform liberally and has expanded the pool of law enforcement agencies that can access it.
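
NATGRID's and Gandiva's schemas are not public. The fragment below is only a toy illustration of what "connecting the dots across disparate databases" means mechanically: once records share a common identifier, merging them into a single profile is trivial (all records and field names here are invented):

```python
# Toy stand-ins for separate databases keyed on one shared identifier.
vehicle_db = {"ID123": {"vehicle": "MH12AB3456"}}
telecom_db = {"ID123": {"last_cell_tower": "PUNE-041"}}
travel_db  = {"ID123": {"last_flight": "DEL-BOM 2025-12-01"}}

def profile(person_id: str) -> dict:
    """Merge whatever each source holds for one identifier -- the
    'single view of a citizen' that worries privacy advocates."""
    merged = {"id": person_id}
    for source in (vehicle_db, telecom_db, travel_db):
        merged.update(source.get(person_id, {}))
    return merged

print(profile("ID123"))
```

This is precisely why identifier linkage across databases, such as Aadhaar seeding, is the load-bearing element of such systems.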

 

The rapid proliferation of predictive policing tools in India is deeply concerning, particularly in the absence of a dedicated legal framework governing law-enforcement use of AI. There is limited publicly available information on whether these systems comply with data protection principles such as purpose limitation, data minimisation, accuracy, storage limitation, or reasonable safeguards. There are also broader concerns about fundamental rights under Articles 14, 19, and 21 being violated by arbitrary state surveillance systems. From a comparative perspective, the European Union's AI Act classifies social scoring systems and AI tools used for individual criminal risk assessment or prediction as posing an "unacceptable risk," effectively prohibiting their development and use. While India has no equivalent prohibition, this contrast underscores the regulatory vacuum within which Indian predictive policing systems are currently being deployed.

 

AI-driven Surveillance in Travel

 

DigiYatra is an AI-driven FRT system that links a user's Aadhaar ID and boarding pass to their facial data and registers them on the DigiYatra app to facilitate seamless airport travel in India. By 2025, DigiYatra had expanded to several airports across the country and continues to be promoted by the Government of India and the Ministry of Civil Aviation (MoCA) as a flagship digital aviation initiative. However, despite being publicly presented as a government-backed programme, DigiYatra is operated through the Digi Yatra Foundation, 74% of whose shares are owned by private parties, and it is also facing an ownership dispute in the Delhi High Court. Though voluntary, DigiYatra's claims regarding decentralised storage, limited retention, and restricted use are difficult for users to independently verify, which raises surveillance concerns. As per a recent direction from MoCA, airports have also been authorised to conduct random Aadhaar-based re-validation, which may even include fingerprint and iris scanning, significantly expanding the scope of biometric processing beyond facial recognition.
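
Technically, a DigiYatra-style gate performs one-to-one verification (does this face match this traveller's enrolled template?) rather than the one-to-many watchlist search sketched earlier. A minimal illustration of that narrower check, with the embedding and threshold assumed as before:

```python
import numpy as np

def verify(probe_emb: np.ndarray, enrolled_emb: np.ndarray,
           threshold: float = 0.8) -> bool:
    """One-to-one check: compare the traveller at the gate against
    only their own enrolled template, not a whole database."""
    return float(probe_emb @ enrolled_emb) >= threshold

rng = np.random.default_rng(1)
enrolled = rng.standard_normal(128)
enrolled /= np.linalg.norm(enrolled)
probe = enrolled + 0.02 * rng.standard_normal(128)  # same face, small variation
probe /= np.linalg.norm(probe)
print(verify(probe, enrolled))  # True
```

The privacy questions in the paragraph above are about everything around this check: where the templates are stored, for how long, and who else can query them.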

 

As a first in India, the Mysuru division of South Western Railway (SWR) deployed an AI-driven CCTV and video analytics system at its stations to provide real-time alerts on the presence of known criminals. The technology is connected to existing digital criminal databases, and a pop-up appears on control-room screens to alert security personnel to their presence. The project envisages the installation of video surveillance systems at 228 stations in SWR, with 2,784 cameras. As per news reports, the Railways have been advised to link existing CCTV footage with the National Intelligence Grid to ease data sharing among different intelligence agencies in the country.

 

In April 2025, the Northeast Frontier Railway installed 135 AI-driven CCTV cameras across Guwahati railway stations. Notably, the system is configured to trigger alerts for actions such as a person remaining in the same area for more than "five seconds", which raises questions about how ordinary behaviour is now being interpreted as a security risk.
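
A dwell-time alert like the one reported is, mechanically, just a threshold over tracker output, which is what makes the choice of "five seconds" so striking. A toy version follows; the tracker feed format and the 300-second limit are assumptions, not the deployed system's values:

```python
def loitering_alerts(detections, limit_s: int = 300):
    """detections: time-ordered (track_id, zone, timestamp_s) tuples
    from an upstream person tracker (assumed, not any vendor's API).
    Alert when a tracked person stays in one zone past the limit."""
    state = {}   # track_id -> (current zone, time they entered it)
    alerts = []
    for track_id, zone, ts in detections:
        prev = state.get(track_id)
        if prev is None or prev[0] != zone:
            state[track_id] = (zone, ts)   # entered a new zone
        elif ts - prev[1] >= limit_s:
            alerts.append((track_id, zone, ts - prev[1]))
    return alerts

# Track 7 stands on platform_1 for ~6.5 minutes; alerts begin at 300 s.
feed = [(7, "platform_1", t) for t in range(0, 400, 10)]
print(loitering_alerts(feed)[:1])
```

Whether the limit is five seconds or five minutes is a single parameter; nothing in the code constrains the operator's choice.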

 

There have also been news reports of seven railway stations, across Mumbai, Delhi, Kolkata, Chennai, Hyderabad, and Bihar, set to deploy AI-powered facial recognition surveillance systems under the Safe City Initiative. Bangalore and Kolkata have also deployed AI-based FRT systems in metro stations. The Delhi metro has been piloting a mobile-based FRT which captures an image on a phone and assists officers in matching the identities of suspects with existing digital databases. The system also allows officers to enter new data. Police officers have been stationed at some stations with mobile devices running the AI-driven FRT system. Bangalore metro stations have also deployed Automatic Number Plate Recognition (ANPR) technology, which allows authorities to monitor surrounding areas and read vehicle number plates near stations, aiding real-time identification of suspicious vehicles or security risks.

 

Exams 

 

Another area where AI-based surveillance has significantly increased is exam halls. CCTV systems with AI capabilities are being used by different departments that conduct exams to surveil students. They detect suspicious gestures and mobile phone use, and track facial expressions such as unusual eye movement. Suspicious actions trigger immediate alerts to invigilators for swift action. In 2025, these technologies were implemented to monitor board exams in schools in Uttar Pradesh, Bihar, and Karnataka, and to monitor entrance exams, UPSC exams, and public service exams such as the Rajasthan Staff Selection Board exam, the Rajasthan police constable exam, and the Review Officer/Assistant Review Officer exam in Uttar Pradesh.

 

The deployment of such technologies raises fundamental questions about legality, proportionality, and authorisation, particularly where children are concerned. There is a lack of publicly available information on the legal basis under which biometric and behavioural data of students are collected. It is also unclear which authority authorises this surveillance, whether informed consent is obtained from students, parents, or guardians, how long the data is stored, who has access to it, and whether it is repurposed beyond exam monitoring. The use of facial analysis and behavioural inference in high-pressure exam environments also risks false positives, discriminatory profiling, and psychological harm, as normal student behaviour may be misclassified as misconduct. In the absence of clear statutory backing, transparent safeguards, independent oversight, and child-specific data protection standards, AI-driven exam surveillance risks normalising invasive monitoring of children and young adults.

 

Elections

 

In a first-of-its-kind move, Hyderabad saw 139 drones deployed across 407 polling booths for surveillance during district elections. Nagpur also saw the deployment of drones prior to elections. The drones could provide real-time visual monitoring of sensitive locations, crowd movements, and any potential violations of the Model Code of Conduct. The step was, ironically, undertaken to ensure free, fair, and transparent elections. The presence of drones and constant monitoring risks undermining voters' rights to privacy and dignity, as meaningful participation in elections also requires freedom from unnecessary and intrusive scrutiny.

 

The road ahead continues to look grim 

 

The spread of AI-driven surveillance technologies in public spaces, coupled with the government's over-reliance on Aadhaar, marks a clear shift in how the state surveils its citizens today. Life in India continues to be complicated by layers of identification, verification, and monitoring systems. While the government presents these systems as tools for safety and efficiency, we see the risks of constant surveillance that weakens personal freedoms and places enormous power over data in the hands of the state. Across India, AI-powered surveillance systems are being rolled out with little clarity on who authorises them, how long data is retained, or how people can challenge wrongful identification or profiling. As surveillance becomes smart infrastructure, consent, which was once at least treated as an afterthought, is now increasingly seen as unnecessary, and risks to individual rights that were previously sidelined are now barely discussed.

 

In November 2025, UIDAI announced its Aadhaar Vision 2032 roadmap, which includes plans to integrate AI, blockchain, quantum computing, and next-generation security technologies into Aadhaar. All these technological expansions continue to unfold without a parallel public discussion on what the changes mean for digital rights, privacy, exclusion, accountability, or redress. The situation looks even more worrying when viewed alongside the Digital Personal Data Protection Act, 2023: Section 17 allows the government to exempt its own agencies on broad grounds of national security or public order. The very law that is supposed to be a safeguard risks becoming a tool that can expand surveillance.

 

As the India AI Impact Summit 2026 approaches, the absence of meaningful discussion on these issues is concerning. The focus of the Summit is primarily on innovation and the growth of AI. If conversations around AI continue to ignore questions of rights and accountability, we risk building systems that are technologically impressive but socially harmful, systems that see citizens as data points first and rights-holders as an afterthought.