Esya Dispatch | 22-28 March 2026 | NHRC Issues Notices Over Alleged DPDP Act Violations and MeitY Moves to Regulate Online User-Posted News

Welcome to The Esya Dispatch, a weekly snapshot of the policy debates shaping India’s digital economy. Each edition brings together key developments in technology policy, from platform governance and AI regulation to data protection and competition — along with the Esya Centre’s perspective on what they mean for innovation, businesses, and users.

Here’s a quick recap of two key tech policy developments from the past week:

NHRC issues notices over alleged DPDP Act violations by AI, social media, edtech platforms

The National Human Rights Commission has initiated action against companies alleged to have violated provisions of the Digital Personal Data Protection Act, based on a report by the think tank ASIA. The ASIA report claims that services like Meta Platforms, Khan Academy, Gemini and Perplexity AI do not comply with the Act’s requirements on data tracking, server security and grievance redressal mechanisms. The Commission stated that this raises serious concerns for children’s online safety and directed the entities to submit a compliance report within 15 days.

ESYA’S TAKE: The ASIA report, on which the NHRC’s notices are based, does not reflect legal or technical reality. For one, it assesses compliance with the Digital Personal Data Protection Act even though the law is not yet fully in force. Additionally, there appears to be no justification for the choice of platforms evaluated other than reach, a metric that makes little sense for services like ChatGPT or Claude, where users cannot interact with one another. The platforms identified also share no unifying child-safety features, which renders any comparison flawed. Many of the platforms evaluated are educational services, which are exempt from certain requirements under the Digital Personal Data Protection Rules, 2025.

More broadly, the NHRC’s decision to act as the de facto enforcer of a law that is not yet in force is concerning, and it signals to investors that India’s regulatory environment, particularly for tech, is uncertain and unpredictable.


MeitY moves to regulate online user-posted news

The Ministry of Electronics and IT has proposed amendments to the IT Rules, 2021 to give greater legal weight to government-issued advisories and SOPs, and expand government oversight over user-generated online news content. Broadly, the proposed amendments mandate that platforms comply with government advisories as part of their due diligence obligations, and any failure to do so could result in them losing safe harbour under the Information Technology Act. They also enable the inter-departmental committee under the IT Rules, which comprises bodies like the Ministry of Home Affairs and the Ministry of Defence, to block user-generated news and current affairs content, and further empower this committee to examine not just escalated user complaints, but also matters referred directly to it by the government.

ESYA’S TAKE: The proposed amendments raise considerable economic concerns. For one, they envision an “advisory governance” model, where the government regulates digital services by rapidly issuing advisories in response to a specific concern or incident. While this helps with agility in governance, it also introduces significant regulatory uncertainty for businesses. This is because unlike laws and rules, advisories typically lack procedural safeguards such as prior consultation with stakeholders, a defined scope, and predictable timelines. Thus, businesses may find it difficult to anticipate regulatory changes, assess compliance obligations, and plan their investments.

The impact of this regulatory uncertainty will likely extend well beyond India’s digital markets, which are deeply interlinked with its physical economy. For example, traditional businesses that rely on digital platforms for market signaling or customer acquisition may face higher transaction costs due to the proposed amendments, which could lead them to reduce investment or outputs. Smaller firms may be forced to exit the market altogether, as they may not be able to accurately estimate regulatory risk. Thus, the ad-hoc and uncertain framework under the proposed amendments presents a systemic economic risk, which needs to be factored in by policymakers.

 

Esya Dispatch | 15-21 March 2026 | Blocking Orders and Competition in the AI Value Chain


Centre looks to empower more ministries to block social media content

The Centre may expand the scope of Section 69A of the Information Technology Act, 2000 to allow bodies like the Ministry of Home Affairs, the Ministry of External Affairs, and the Ministry of Defence to issue blocking orders to social media platforms. Currently, only the Ministry of Electronics and IT can block online content under this provision. Senior government officials said that the move is necessary to combat AI-generated misinformation online. Notably, the Centre may extend blocking powers to other ministries by issuing a gazette notification, instead of amending the IT Act or its associated rules.

Esya’s take: This move ostensibly aims to speed up the removal of AI-generated misinformation by extending blocking powers to multiple ministries, but may not be constitutional. In the past, a provision in the IT Rules similarly proposed taking down content flagged as misinformation by a government-appointed fact-check unit. However, the Bombay High Court struck down that provision for violating the right to free speech, noting that the government cannot be the arbiter of truth and that the right to free speech does not encompass a right to the truth – if that were the case, all fiction would be illegal.

Recent amendments to the IT Rules have also reduced content takedown timelines to 2-3 hours in some instances. If blocking requests now surge from multiple ministries, platforms may struggle to process them efficiently, creating confusion.

 

CCI Chairperson: Ready to act against anti-competitive conduct in AI value chain

The Chairperson of the Competition Commission of India (CCI), Ravneet Kaur, recently said that the regulator is ready to act against anti-competitive conduct in the AI value chain. She flagged concerns regarding algorithmic collusion, targeted price discrimination, self-preferencing and opacity in AI systems, noting that these could lead to concentration in AI markets. She added that the regulator has issued a guidance note advising businesses on how to conduct a self-audit of AI systems, to prevent anti-competitive outcomes from the development and deployment of AI applications.


Esya’s take: A few months ago, the CCI released its market study on AI and competition, which identified algorithmic collusion, targeted pricing and reduced transparency as key competitive concerns. However, such conduct is not new or unique to AI markets – for example, price discrimination is a common practice in sectors like transportation and hospitality. Additionally, since research on issues like AI-driven algorithmic collusion is still at a nascent stage, it is premature to predict the likelihood of its occurrence. Our survey of 50 Indian companies also found that 54 percent of respondents regularly multi-home across both open-source and proprietary AI models, which shows that the Indian market is not concentrated around a handful of frontier models. Thus, the CCI’s concerns appear overstated and rest on uncertain presumptions.

 

Esya Dispatch | 8-14 March 2026 | Bots, Deepfakes and Social Media Bans


One-fifth of Australian children still use TikTok, Snapchat despite social media ban

A study found that a fifth of Australian teenagers under 16 continue to use social media, despite the country’s ban on social media for this age group. Since the ban took effect, the number of users aged 13-15 on Snapchat fell by only 13.8 percentage points, while usage of TikTok and YouTube fell by just 5.7 and 1 percentage points respectively. Even these meagre dips in usage are now beginning to recover. Australia’s eSafety Commissioner has stated that it is actively engaging with platforms and their age-assurance providers regarding the presence of under-16 users on social media.

The findings from Australia come as states like Karnataka and Andhra Pradesh consider banning social media for younger users, and the central government reportedly explores graded restrictions for children.

ESYA’S TAKE: Banning social media may not be an effective way to keep children safe online. The Esya Centre’s own survey of 1,000 Indian children aged 10–15 shows why a ban is unlikely to work: children will likely find ways around such rules. For one, many children are more tech-savvy than policymakers assume – 69% said they were comfortable changing settings on their social media accounts. Children can also easily bypass age-gating mechanisms, with 71% accessing social media through a family member’s account. At the same time, bans could cut children off from positive online spaces: 55% of respondents said they had had meaningful interactions with strangers online.


MeitY meets industry stakeholders regarding bot amplification, deepfake regulation 

The Secretary of the Ministry of Electronics and IT, S Krishnan, recently met stakeholders to discuss the role of bot accounts in amplifying misleading information online. In the meeting, the government sought details on whether platforms possessed sufficient resources to curb such bot networks, and asked whether a new policy or regulation is needed to address the issue. The government also discussed the regulation of AI-generated deepfakes, asking whether copyright law or personality rights could be used to counter synthetic content based on a person’s likeness.

ESYA’S TAKE: Letting people use copyright law to block the use of their likeness could create tensions between the rights of different stakeholders. Features like a person’s face or voice are normally protected through personality rights, not copyright. Moreover, copyright usually vests in the person who created the work – the copyright over a person’s photograph, for instance, lies with the photographer. If individuals were allowed to claim copyright over their personal characteristics, it could trigger a conflict with creators’ intellectual property rights.