Digital Disinformation and Election Integrity: Benchmarks for Regulation

As critical events in democratic life, elections pose extraordinary challenges to the autonomy of public opinion. This article outlines some of the regulatory challenges that have emerged in the wake of digital media expansion in India, and argues that the self-regulatory mechanism developed during the 2019 national elections is insufficient to address problems of online extreme speech, algorithmic bias, and proxy campaigns. Building on the electoral management model proposed by Netina Tan, it suggests that a critical overview of ongoing efforts can help determine the readiness of Indian regulatory structures to respond to digital disruptions during elections, and emphasises the need for a co-regulatory mechanism.

It is commonplace to acknowledge that political parties and politicians intensify their efforts to influence public mood and voter loyalties during elections. Democracies, then, not only become a theatre for maverick speech and public performances, but also a testing ground for regulatory interventions. The expansion of digital media in India in the last decade has placed new pressures on regulatory efforts to contain malicious rumours and disinformation during elections. These tensions mirror developments around digital social media and electoral processes worldwide. Globally, digital campaigns have raised concerns around data privacy and microtargeting, as well as the use of bots and algorithmic sorting as new ways to sabotage political discourse (Ong and Cabanes 2018; Bradshaw and Howard 2017).

The 2019 general elections in India exposed several limits and loopholes in the existing regulatory structures around media-enabled campaigns. During the elections, digital social media and messaging services emerged as a battleground for political parties to experiment with new tactics of content creation and distribution. Building on years of preparation, the Bharatiya Janata Party (BJP) was at the forefront in organising novel ways of creating and distributing election content. The party continued to rely on its office bearers, proxy workers, and volunteers to navigate different levels of content veracity and creative messaging. Multiple strategies of content creation were at work: from straightforward “party line” slogans to deep message ambiguation where words mutate as they travel and accumulate sinister meanings within specific cultural and political contexts of reception. Innovations were also striking on the distribution side. If office bearers with designated roles as social media coordinators closely monitored the “official channel” of content flow from national to local levels, proxy workers and volunteers assembled vast networks of distribution based on personal connections and snowballing techniques. These networks were further augmented by the potential virality of fear-inducing and humour-laden extreme speech that targeted communities based on religion, caste and gender (Udupa 2019). 

The BJP’s first-mover advantage in social media campaigning was challenged by other political parties during the run-up to the elections. Stepping up its efforts, the Indian National Congress (INC) re-energised several of its party units, including a dedicated “research team” to prepare “counters” to the BJP and other parties. Full-fledged social media teams of the Congress and regional political parties got on to the same game of composing witty, satirical, and retaliatory messages. 

Alongside party-based efforts, individual politicians increasingly recruited social media campaigners for online promotions. It was common to witness social media strategists accompanying politicians during campaign visits for ward-level mobilisation. These strategists ranged from a single individual who would follow the leader with a camera to upload the video the very next minute on Twitter, YouTube, and Facebook, to small- and mid-sized enterprises that had paid teams working on social media promotions. Media reports also exposed clandestine operations of proxy companies that created toxic digital campaign content aimed against religious minorities and opposition party leaders (Poonam and Bansal 2019). Even as Facebook, WhatsApp and Twitter came under scrutiny for election content volatilities, TikTok, ShareChat, Helo and other mid-range platforms started providing new means to share political content and peddle partisan positions.

The vast complexity of content creation and distribution channels, together with the speed of circulation in the digital age, placed enormous demands on regulatory mechanisms during the national elections. How did the regulatory system respond, and what were the limitations?

Voluntary Code of Ethics

The Election Commission of India (ECI) opted for a cautious, if lenient, approach that allowed social media companies to develop a “voluntary code of ethics.” The voluntary code aimed to bring transparency to paid political advertisements and place checks on violative content. With the Internet and Mobile Association of India (IAMAI) as the representative body, social media platforms including Facebook, WhatsApp, Twitter, Google, ShareChat and TikTok agreed to act on violations reported under Section 126 of the Representation of the People Act, 1951, within three hours of receiving complaints from the ECI. The time frame followed the recommendations of the Sinha Committee. During the national elections, social media platforms acted on 909 violative cases reported by the Election Commission (BBC Monitoring South Asia 2019). Social media companies also agreed to “provide a mechanism for political advertisers to submit pre-certified advertisements issued by Media Certification and Monitoring Committee” (ECI 2019a). Alongside these steps, IAMAI members promised to organise voter awareness campaigns.

The ECI–IAMAI agreement was the first formal regulatory step to bring internet-based companies to agree to a voluntary code. The code covered key aspects of internet speech regulation, including expeditious redressal of potentially violative content, transparency in political advertisements, capacity building for nodal officers in reporting harmful content, public awareness, and coordination between social media platforms and the ECI. According to the ECI, Facebook, Twitter, WhatsApp and other social media companies have agreed to adhere to this code in all future elections, including the Maharashtra and Haryana assembly polls (ECI 2019b).

The self-regulatory code is likely to remain a common feature of election-related regulatory processes in the coming years. Without doubt, self-regulatory mechanisms have several merits. They can prevent regulatory overreach and political misuse of existing provisions. For instance, Germany has introduced new regulations that compel social media companies, under threat of penalties, to remove content flagged as hate speech. These drastic measures have invited criticism that penalties are decided without “prior determination of the legality of the content at issue by court” (Article 19 2017: 2). Concerns have been raised that such unilateral actions could set a bad precedent for countries where guarantees of political freedom are not secure.

While the self-regulatory code appears to be a good solution in the context of actual and potential misuse of regulatory power, the question remains whether the voluntary code is sufficient to realise the stated regulatory objectives of containing harmful content and stemming opaque sources of political advertising. A telling detail in the Indian case is that the IAMAI continues to act as a liaison between the ECI and social media companies. Social media companies have secured the buffer of an association to agree to a voluntary code. The looming question is whether such double distancing—first from being direct parties and second from enforceable obligation—can bring about the desired changes. 

The fate of the Codes of Ethics and Broadcasting Standards in commercial news television is a sobering reminder of the limitations of self-regulation (Seshu 2018). Mechanisms of peer surveillance and industry-evolved guidelines, in this case, have failed to ensure uniform compliance. In 2009, the News Broadcasters Association (NBA), a professional association for private news broadcasters, drew up a code of ethics and set up the News Broadcasting Standards Disputes Redressal Authority. The industry-wide response was prompted by governmental attempts to make direct regulatory interventions in content. Since its inception, the NBA has advocated for stronger and more uniform application of the code of ethics across television channels. However, the Hoot’s study in 2012 revealed that the NBA “did not take ‘strong punitive action’ against the channels that violated their guidelines” (Akoijam 2012). A more recent report in the Hoot has confirmed that the trend has not been promising in the following years (Seshu 2018). Global trends have also suggested that the self-regulatory model bears the risk of fragmentation and lack of legitimacy. How, then, would this work for the even more volatile field of digital social media and messaging services?

An effective co-regulatory model is needed to bring regulatory oversight to the in-built incentives evolved by the industry. For instance, in the broadcasting sector, co-regulatory models in many European countries retain the state’s regulatory power through certification of codes, while allowing sufficient institutional space for industry associations to administer and monitor regulation. Such measures should extend to entities like the IAMAI, so that incentives and guidelines developed by the industry are linked to autonomous public statutory bodies with well-defined processes for escalation of complaints and publication of findings.

Institutional Measures

Developing a co-regulatory code is an important first step. As Sharma (2019) has commented, 

a top-down imposition of statutory regulations or a checklist of dos-and-don’ts on political parties is unlikely to be a plausible solution on its own. The costs of regulation are too high, the political will for enforcement is too low, and the possibility of loopholes too many. Instead, regulations placed on how political parties use digital media needs to be seen as part of a wider gamut of other urgent reforms and regulations—such as those pertaining to political financing and creating a legal framework for political consulting firms.

A co-regulatory code set within broad-ranging institutional reforms can facilitate urgent actions required of social media companies. These include transparency in resources allocated for content screening and in content moderation algorithms, and practices such as inviting public feedback on the training material for content moderation and offering data access to researchers (Hickok and Udupa 2019). These measures are important because not only human actors, but also automated lurkers and sorters of artificial intelligence systems have emerged as a new challenge to the legitimacy of political discourse.

In a useful study, Tan (2019) has proposed an “electoral management digital readiness” (EMDR) index to assess the capacity of electoral management bodies to “respond to digital disruptions.” The criteria include “the type of electoral management model; presence of specific or new regulations governing online campaign and disinformation; confidence in the rule of law, and technological readiness of the digital economy.” In recent years, the Indian government has tried several measures, including guidelines on fake news. In 2018, the Ministry of Information and Broadcasting issued what was seen as a sweeping order to take action against journalists accused of spreading fake news (Business Today 2019). Regional governments have not been silent. The West Bengal government, for instance, has strengthened existing laws to act against citizens who spread misinformation and cause fear among the public. The government has also been actively “preparing a database of fake news stories distributed on social media over the past few years and … kept records of past offenders” (Funke and Flamini nd). Alongside the voluntary code of ethics for social media companies, the Indian state has announced the policy of #AIforAll, which addresses, among other issues, the growing concern over algorithmic bias (NITI Aayog 2018: 85). Last year, the Ministry of Electronics and Information Technology released draft changes that would require internet intermediaries and messaging apps to trace originators of messages and provide this information to authorised government agencies (Business Today 2019).

The multifarious efforts now in motion should be examined to assess the EMDR index for India. Close scrutiny is also needed to determine which measures are directed at quashing political dissent. Freedom House has reported that India had the highest number of internet shutdowns globally in 2018 (Shahbaz 2018). The non-profit organisation Access Now (2018) recorded 134 internet shutdowns in India in 2018, the highest in the world (followed by Pakistan at 12 instances, and Yemen and Iraq at seven instances each). In a recent study, Rydzak (2019: 4) has argued that although social media and digital platforms are “not critical to collective action in India,” they “are readily employed as methods of coordination, and removing them can turn a predictable situation into one that is highly volatile, violent and chaotic.” These reports have revealed that internet shutdowns have been disproportionately frequent in Jammu and Kashmir. Such drastic steps indicate that what is enforced in the name of public interest action against digital disinformation might serve the opposite ends of quashing political dissent and delimiting democratic participation.

Globally, the Cambridge Analytica case has been a watershed in exposing how social media and data analytics companies manipulate public life (Howard 2018). However, the trends are growing and gaining new dimensions, including algorithmic manipulation, internet shutdowns and consolidation in content distribution in countries like India. An effective regulatory response would have to go beyond election-time fixes and attend to deeper problems of carriage and content in the era of “functionally unbundled” digital communication, where communication functions are distributed across different platforms through interoperable standards (Goldman and Chen 2010; Udupa 2012). These include enabling multicasting technologies that could benefit all content providers, instead of proprietary forms of content-caching that benefit capital-heavy private players; fair competition among digital content carriers and distributors to avoid vertical and horizontal concentration; and a content policy that embraces the strategy of eight Ds to prevent online harms: deletion, demotion, disclosure, dilution, delay, diversion, deterrence, and digital literacy (Persily 2018). At the same time, public scrutiny of governmental actions, such as sweeping regulations against fake news and internet shutdowns, is a dire necessity. These thoroughgoing efforts need steady cooperation from social media companies as well as civil society pressure to hold corporations and governments to account.
