How ready is India for ‘Responsible AI’?



The growing excitement around technological processes that can mimic human intelligence is long overdue. Several definitions have been offered to classify such technological processes as ‘Artificial Intelligence’, or ‘AI’. For the purposes of this work in the domain of AI, unless otherwise provided, the words ‘Artificial Intelligence’ shall mean “the ability of machines to perform cognitive tasks like thinking, perceiving, learning, problem-solving, and decision making” [1].


On several occasions, the Government of India has highlighted its agenda of making India the ‘global AI hub’, bringing the country to the forefront of the global landscape as an incubator for AI technologies [2]. Forecasts suggest that AI will add an additional USD 967 billion to the Indian economy by 2035 [3]. In addition, India assumed the Chair of the Global Partnership on AI (GPAI) in November 2022 [4]. These developments are symptomatic of two things: that the policy of the Government of India will be to expand critical infrastructure and resources to increase the deployment of AI in India, and that India will play a significant role in determining the growth and development of AI on the global landscape. 


It becomes essential, therefore, to examine India’s preparedness to meet this objective successfully. This requires an investigation into India’s infrastructural readiness, regulatory plans and capacity, and an observation and cultivation of the conversations that the State, developers, and the citizenry have about the risks and benefits of deploying AI. This is the first in a series of blog posts that will test India’s readiness by highlighting and assessing the developments of ‘AI in India’.  



AI in India

The deployment of AI solutions in the public and private sectors has seen a significant rise. AI-generated revenue in India had already reached an impressive USD 12.3 billion in 2022 [5]. Both sectors have come to rely increasingly on AI-based solutions for improvements in efficiency and scale. The National AI Portal of India (IndiaAI), set up as a joint venture by the Ministry of Electronics and Information Technology (MEITY), the National eGovernance Division (NeGD), and the National Association of Software and Service Companies (NASSCOM), is an active knowledge resource for “aspiring entrepreneurs, students, professionals, academics, and everyone else” [6]. The Union Budget for 2023-24 lays out financial provisions for three centers of excellence in India for interdisciplinary research to develop AI solutions [7]. The Minister of State for Electronics and Information Technology indicated at the G20 (Group of 20) conference that the Government of India will “soon start to assemble large forms of the anonymized dataset collected and harmonized under the Data Governance Framework policy” [8]. This, it was said, will allow the development of “anonymized non-personal data for Indian startups working on AI projects” [9]. MEITY has also constituted four committees to gauge the socio-economic impact of AI, which would assist the development of AI policies in India [10]. In addition, the Center for Artificial Intelligence and Robotics (‘CAIR’) at the Defence Research and Development Organization has developed the Networking Traffic Analysis system (‘NETRA’), which enables the surveillance of internet networks, voice traffic through digital platforms, emails, instant messaging transcripts, and the like. State Governments, too, have deployed AI solutions to amplify efficiency in the healthcare, education, public utilities, rural development, and revenue sectors. 


In the Indian judiciary, the Supreme Court of India has launched the Supreme Court Portal for Assistance in Court’s Efficiency (SUPACE). The tool is designed to enable easier file previews, engagement with a chatbot for a quick overview of cases, generation of case chronologies, and voice dictation technology for comprehensive drafting of case notes [11].


On a global scale, India has been engaging in strong collaborations with other nations for the funding, research, and development of AI technologies. The India-EU Joint Committee on Science and Technology will develop a long-term strategy for collaboration between the two jurisdictions on research and innovation [12]. The Digital Investment Forum, held between them in 2021 by the Joint ICT Working Group, was an investment-focused gathering to explore and promote mutual investments in digital markets. The Memorandum of Understanding between Singapore and India, signed in 2022, also seeks to expand cooperation and collaboration on emerging technologies, with both countries entering into an Implementation Agreement to promote mutual funding of companies [13]. Similar initiatives of international cooperation with the United States of America, Germany, Finland, Japan, the United Kingdom, Australia, and the United Arab Emirates are indicative of increasing international investment and collaborative research into emerging technologies, including AI, in the near future. 


The intensity and speed with which India is proceeding towards its goal of becoming a “global AI hub” warrants an urgent examination of the legal and regulatory infrastructure that has been prepared to mitigate the risks inherent in emerging technologies, particularly AI. 



The Current Regulatory Landscape 

Indian regulation of technologies has frequently suffered the ill fate of redundancy and obsolescence. Presently, the Ministry of Electronics and Information Technology (MEITY) carries the portfolio of regulating technologies in India [14]. Numerous statutes, such as the Information Technology Act, 2000, and the Patents Act, 1970, regulate technological matters and their development. Several statutory bodies, such as the Reserve Bank of India, the Telecom Regulatory Authority of India, and the Competition Commission of India, provide sectoral regulation and enforcement. MEITY itself hosts divisions dedicated to the development of ‘infrastructure and governance’, ‘research and development’, ‘international cooperation’, ‘human-centered computing’, ‘emerging technologies’, and ‘electronic systems design and manufacturing’, among others. In addition, bodies such as the ‘Office of the Principal Scientific Adviser’ and the ‘Empowered Technology Group’ influence the development of policies critical to the ideation, development, and deployment of new technologies. 


Because AI is a data-driven technology with far-reaching consequences, any observations about the health of the present landscape of AI governance in India must be prefaced with a study of existing data protection legislation and of AI-specific regulatory guidelines and protections. Currently, India can boast of neither. 



Existing Data Protection Legislation

The enactment of legislation to protect personal and non-personal data in India saw multiple attempts between 2018 and 2022, ultimately culminating in the Digital Personal Data Protection Act, 2023. This legislation is lacking on several fronts, especially in addressing emerging technologies like AI. The Act is designed in such a manner as to allow the Government and its instrumentalities to exempt themselves completely from its application, and hence from its obligations and responsibilities. Although a comprehensive consent mechanism has been provided, it can easily be overridden by the provision allowing the Government to process personal data without express consent for legitimate, reasonable, and expected purposes, the legitimacy, reasonableness, and expectedness of which are left to the discretion of the Government. Further, the Act does not create scope for the standardization of data storage and processing practices, and entirely omits essential data protection principles such as data minimization, purpose limitation, and storage limitation.  


Further, its scope is limited to personal data alone, leaving non-personal/anonymized data existing in the public domain completely unregulated. To counter this, the Centre released a set of policies under the Draft National Data Governance Framework Policy in 2022 to realize the full potential of the data-driven economy in which India finds itself. The policy proposes to achieve its objectives through the establishment of an India Data Management Office (IDMO), entrusted with broad powers to frame rules, regulations, standards, and uniform policies on data collection, storage, processing, and sharing with other entities. Private entities, start-ups, and Indian researchers are given the ability to collaborate on, contribute to, and utilize the datasets available on the multiple ‘platforms’ maintained by the IDMO. However, these policies are far from enforceable, and far from being implemented in the design and functioning of companies. 


The absence of enforceable legislative safeguards to protect the personal and non-personal data that fuel AI technologies and the deployment of AI solutions poses a significant risk. Non-consensual collection and unsupervised processing of personal data violate the right to privacy, which the Supreme Court of India held to be a fundamental right in Justice K.S. Puttaswamy (Retd.) v Union of India [15]. Therefore, absent adequate protection of data, the development and deployment of AI can cause significant harm, as the next section shows. 



Present Regulatory Guidelines and Protections

Presently, India does not have any enforceable guidelines on the development or deployment of AI. NITI Aayog, the think tank of the Government of India, developed the National Strategy for Artificial Intelligence in 2018, in the context of five public sectors in which AI adoption has been envisioned. Subsequently, it released two discussion papers. The first paper identified “principles for responsible design, development, and deployment of artificial intelligence (AI) in India, and setting out enforcement mechanisms for the operationalization of these principles” (the ‘Responsible AI principles’) [16]. The succeeding paper conducted a use-case study of the application of the Responsible AI principles to the Digi Yatra program. For context, Digi Yatra is a technology developed by the Ministry of Civil Aviation to allow smoother boarding processes for departing passengers at airports using facial recognition and verification technologies. The paper examined whether Digi Yatra’s framework and processes meet the standards of the Responsible AI principles and offered recommendations on the same. 


It bears mentioning at this point that the principles and recommendations laid out by NITI Aayog are merely suggestive and have no binding effect. As such, neither the State nor private bodies are required or obligated to follow the safe and responsible approach that has been laid out. This is despite the rapid deployment of AI solutions by State and private entities across the country. 


In summary, the two essential prongs of protecting human rights in sectors where AI is deployed, data protection and AI-specific regulation, are currently left vulnerable due to the absence of legislative initiatives. 




The immediate question that arises concerns the kinds of vulnerabilities individuals are exposed to when AI solutions are deployed despite the prevailing vacuum. Reflecting on the risks of unsupervised deployment is essential both to understand the urgency of regulatory movement in this regard and to determine the nature and degree of such regulation. 


A recurring concern, for example, has been the mirroring of human biases in AI. The severity of this concern is highlighted when it is juxtaposed against use cases such as a biased AI system deployed in the judiciary to determine factors such as recidivism, or the use of invasive AI in schools to prevent cheating in examinations. For example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) program was deployed in the United States to measure the risk of a convict committing offenses upon release. It was reported to generate biased assessments against black offenders, while white offenders, who committed felonies nearly twice as often after release, were tagged ‘low risk’ [17]. In the education sector, AI solutions are being considered to enable better delivery of teaching services, reduce drop-outs, and remove the assessor’s (teacher’s) bias from the grading process. However, active oversight is required to ensure that the datasets relied on to develop these solutions do not replicate real-world biases. For example, a study has indicated a correlation between grading and the genders of the student and the teacher: when the student and teacher share the same gender, the student is graded better. Without adequate identification, oversight, and removal of these biases from datasets and algorithms, the very objectivity that AI solutions seek to provide is nullified. 


Because AI systems use data as their fuel for developing solutions, their deployment without addressing privacy risks should invite severe legal scrutiny. The use of predictive technologies in the healthcare industry illustrates this point. The industry’s aspirations to introduce AI solutions for the early detection and cure of diseases demand extensive personal datasets drawn from a large pool of individuals. Without legal obligations on service providers to obtain the informed consent of the individuals whose data is collected for processing in different ways, to maintain adequate storage protocols to prevent breaches and leaks, and to engage in healthy, informed, and consensual data-sharing practices, the personal data of patients is exposed to serious risk. The concern is heightened by reports indicating that India ranked second among countries with the highest number of cyber-attacks on the healthcare industry in 2021 [18]. Furthermore, AI systems may even predict medical developments and share that information without the patient consenting to such prediction or sharing. This extended example reiterates the need for statutory intervention, both to ensure better data protection and to establish regulation that secures a rights-centric AI ecosystem. 


The absence of structures that introduce transparency and accountability in AI solutions risks leaving aggrieved individuals without a remedy. The deployment of automated solutions whose outputs cannot be traced to intelligible reasons risks subjecting individuals to opaque processes that may lead to unfair results. The use of AI solutions for recruitment in the private sector has seen a gradual increase, with leading organizations relying on predictive algorithms to decide whether an applicant should be hired. Adequate identification of the considerations weighed by the machine, so as to determine the reliability and fairness of its output, is essential not only to ensure that a beneficial decision has been made for the recruiter but also to ensure that the applicant has not been subjected to unfair scrutiny owing to algorithmic deficiencies. Accountability in this, and in all scenarios in which AI solutions are deployed, requires that the decision-making process be transparent and understandable both for those who employ it and for those who are subjected to it. Statutory guarantees, by way of accessible legislative and institutional recourse, methods of dispute resolution, clarity on the assignment of accountability, and prescriptive standards and mandates for developers, are crucial to mitigating the risks that can arise. 




Presently, there is no centralized mechanism to map, trace, control, fund, and nurture AI initiatives and solutions in India. This fragmentation is problematic for several reasons, chief among them the evasion of standardization. Without control measures to ensure that AI is rolled out in a rights-friendly environment, where primacy is given to protecting and promoting human rights rather than efficiency, the present AI landscape in India is far from ideal. The promise of a Digital India Act in this regard, which would likely influence standardization, best practices, accountability, and transparency for AI solutions deployed by the public and private sectors in India, remains distant, with the consultation period still underway. On examination, AI solutions that have been deployed by the State, including surveillance technologies, are found to operate in opaque environments with virtually no oversight [19].


Even if India were to undertake a sectoral approach, where each industry (such as healthcare, education, agriculture, infrastructure, and information technology) is regulated by the concerned Ministry, as the current pattern seems to indicate, it is essential that the requisite instruments and documents (Standard Operating Procedures, by way of example) are made publicly available. This would, at the very least, provide relief through public scrutiny and oversight. At present, there is no clarity as to whether the processes employed to collect personal and non-personal data, the anonymization standards applied, and the data- and solution-sharing practices followed are compliant with the fundamental rights promised to Indians in the Constitution. Nor is there any indication of efforts to address the ethical concerns around bias and the unexplainability of some AI systems. 


India may consider the approaches taken by its contemporaries, such as the United States of America and the European Union. Both are proceeding towards legislative controls to regulate the growth of AI deployment and mitigate its risks, by preventing harms ex ante and providing relief ex post. The Artificial Intelligence Act proposed in the European Union, for example, seeks to balance risk and innovation by creating compliance bands based on the risks that AI solutions pose to rights, an evident response to concerns that over-regulation could restrict the growth of emerging technologies. 


Non-binding documents, such as the recommendations of NITI Aayog, are insufficient to match the speed and scale of AI deployment, and of the human rights consequently placed at high risk. The development and deployment of AI in India, and its harms and benefits for Indians, will be shaped by the choices made now. For all its aspirations of becoming a leader in technology, India must ensure that it does not repeat the sorry episode that left Indians without a law to protect their data and privacy, despite several attempts since 2018.  




[1] National Strategy for Artificial Intelligence, Discussion Paper, NITI Aayog (June 2018).

[2] Sai Ishwar, India set to become global AI hub through tech-based skilling: PM Modi, Business Standard (Oct. 2020).

[3] After assuming the G20 presidency, the Shri Narendra Modi government assumed the Chair of the Global Partnership on AI (GPAI), Press Information Bureau (Nov. 2022).

[4] The GPAI is “an international initiative to support responsible and human-centric development and use of Artificial Intelligence (AI).”

[5] Abhijeet Adhikari, The state of AI in India 2022, Analytics India Magazine (Dec. 2022).

[6] IndiaAI: About Us, The National AI Portal of India, MEITY.

[7] India to set up three centres of excellence for AI, says FM Sitharaman, Mint (Feb. 2023).

[8] Swati Luthra, India to assemble anonymized datasets, collected under Data Governance Framework Policy, Mint (Dec. 2022).

[9] Ibid.

[10] Finalization of National Artificial Intelligence Mission, Press Release, Press Information Bureau (2018).

[11] Launching of AI Portal SUPACE by Hon’ble Shri Justice Sharad Arvind Bobde (Chief Justice of India and Patron-in-Chief, AI Committee), Government Video Portal, Government of India (Apr. 2021).

[12] India-EU Joint Committee on S&T Cooperation Creates Action-Oriented Agenda Focusing on ICT, Resource Efficiency & Electric Mobility, Department of Science and Technology, Ministry of Science and Technology, Government of India (Aug. 10, 2022).

[13] Singapore and India Enhance Cooperation in Science, Technology, and Innovation, Press Release, Ministry of Trade and Industry, Government of Singapore (Feb. 23, 2022).

[14] MEITY Rules of Business and Procedure, Ministry of Electronics and Information Technology, Government of India.

[15] Justice K.S. Puttaswamy (Retd.) v Union of India, (2017) 10 SCC 1.

[16] Discussion Paper: Responsible AI for All – Adopting the Framework: A Use Case Approach on Facial Recognition Technology, NITI Aayog (Nov. 2022).

[17] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin, How We Analyzed the COMPAS Recidivism Algorithm, ProPublica (May 2016).

[18] Cyber-attacks on Indian Healthcare Industry Second Highest in the World: CloudSEK, The Hindu (Sep. 2022).

[19] Legal Challenge by CPIL and SFLC.IN to Surveillance Projects CMS, NATGRID and NETRA, Software Freedom Law Center, India (Mar. 2022).
