Cambridge Analytica and the Political Economy of Persuasion

Where do the financial interests of social media platforms and advertising firms align with the interests of political actors? Where might these interests conflict with constitutional values? As we approach the 2019 general elections in India, a framework to regulate political advertising and data privacy has become urgent.

We would know what kind of messaging you are susceptible to—the framing, topic, tone … where you are most likely to consume it … and how many times we would need to touch you with that to change how you think.

The above excerpt is from an interview with Christopher Wylie (Wickendon 2018) of Cambridge Analytica, the political consulting firm that rose from relative obscurity to global notoriety in a matter of weeks. Cambridge Analytica used deceptive means (illegal in several countries) to gain access from Facebook to “granular” information about more than 50 million Americans and deployed it to tailor political messaging for Donald Trump’s (eventually successful) presidential campaign. 

Propaganda is not new and, when done right, it does involve manipulation of public opinion. What, then, was new about this incident? For one, information about people’s preferences and motivations had been obtained under the pretence of a cheerful personality quiz on Facebook (Cadwalladr and Graham-Harrison 2018). Users were outraged that their (and their Facebook friends’) information was used as fodder for a political campaign, without their knowledge or consent.

We do not know the extent to which, or whether at all, Cambridge Analytica’s activities contributed to Trump’s victory; but, for now, that speculation is irrelevant. In the outrage surrounding the exposé, the promise of the internet as a decentralised, open, and democratic space for public dialogue and private communications seems ever more distant. You can still “tell your story” in ways that television or print have never made possible, but what (or who) decides its audience?

It has been an open secret that communications on the internet are concentrated on a handful of social media platforms, and with a smaller subset of companies that own them. The internet might have limitless content, but its consumers have finite attention. Platforms are built around addressing this attention scarcity, with the stated goal of bringing forth content that is relevant and tailored to each user. Social media companies do not generally produce content, and so their very public community guidelines apply to users, rather than constitute editorial guidelines for the platform. Through the operation of algorithms, however, they organise and rank what an individual user sees on the platform and when, as well as decide on the basis for classification of “trending topics.”[1] Overall, these algorithms determine the content that is amplified and that which disappears without a trace. A part of this, of course, is sponsored content, which may be artificially boosted to target audiences in exchange for payment.

These algorithmic choices and business dynamics shape our exposure to opinion and fact, and the range of sources from which we get them. The manipulation of a person’s preferences may not interfere directly with their options, but, as legal philosopher Joseph Raz (1986: 377) explains, it “perverts the way that person reaches decisions, forms preferences, or adopts goals.” This distortion, too, is an “invasion of autonomy.” India’s constitutional jurisprudence affirms that a range of informational choices is intrinsically important for both individual freedom and democracy. Justice Mathew, in his dissenting opinion in Bennett Coleman v Union of India (1973), noted that an informed electorate is not harmed by government censorship alone, but equally by more subtle “restraints on access” that are put in place by private players.

In a different case, in 1995 the Supreme Court made the seminal observation that if “only the affluent few” were to control airwaves for broadcasting, it would put them “in a position to use it to serve their own interest by manipulating news and views.” Such a situation would pose a danger to free speech because it would threaten diversity of opinion and deny the public “truthful information on all sides of an issue” (Ministry of Information and Broadcasting v Cricket Association of Bengal 1995).

Where, then, do the financial interests of platforms and advertising firms align with the interests of political actors, and where might these interests conflict with constitutional values? What technologies aid such interests, and why might these justify being thought of differently from similar conflicts offline? Part of this debate, I would argue, tends to overstate the impact of technological architecture on public opinion. First, we need much more India-specific research. Internet penetration, driven largely by mobile phones, might be nearing 60% in urban areas, but less than 20% of rural India is online.[2] This means that much of electoral propaganda is still spread through offline tools. That said, these numbers are rising, as are data-driven election campaigns. As we approach the 2019 general elections, a robust legal framework for both political advertising and data privacy has become urgent.

Personalisation and Targeting

The organising principle for content on social media platforms is personalisation, or the targeting of content that is specific to the individual’s preferences and networks. These platforms might be “neutral” to the content they host, in that there is no human curation, but they involve a host of design choices. Specifically, they are formulated to optimise “engagement” or interaction on the platform: the more time spent on the site, the more clicks, the more “likes” (Wu 2017). These tools allow political campaigns or their agents to deploy messaging that is tailored to more specific target groups.

Ultimately, the more granular and specific the targeting, the longer users are expected to spend on these sites,[3] and therefore the more attractive the platform is for advertisers. The (economic) value of the platform subsequently increases. But, what impact does this have on users? Given that their attention is limited, some amount of filtering is not only justified, but necessary. However, many are alarmed that personalisation entails exposing individuals only to views they are already sympathetic to, and those of their immediate networks, thereby fostering a more polarised populace (Sunstein 2007; Pariser 2011).

In offering a single persuasive opinion at the right time, targeted communications are uniquely placed to narrow our information choices (Benkler 2001). As news consumption moves away from traditional media to social media channels, studies point to a vanishing “common core” of what is newsworthy in a society, and even a lack of a shared sense of reality (Moeller et al 2006). The internet, once celebrated for creating space for marginalised narratives, is today being accused of promoting only special-interest politics (Sunstein 2007).

At the same time, while these concerned voices often assume technology to be a defining factor in social and political relations, the extent of its impact on, say, electoral outcomes, remains an open question. Many of these accounts risk overstating the links. A study conducted by a team from Harvard University found that over the course of the presidential election in the United States (US), the right-wing media system had developed into an “insulated knowledge community, reinforcing the shared world view of readers and shielding them from journalism that challenged it” (Benkler et al 2017). However, this polarisation was asymmetrical—more acute on the right-wing than on the left—which is indicative of the internal political, non-technological dynamics that might make certain communities less connected to non-partisan mainstream news outlets than others.

Similar research is critical for India, particularly before our general elections in 2019. The immediate concern is that these tools for targeting are on offer to the highest bidder. Those with deeper pockets can access greater scale and more precise targeting than ordinary users.

Political Advertising

Political advertising on social media comes in many forms and remains underexamined in India. Direct forms include political campaigns paying social media companies to promote their content. 
Increasingly, however, advertising is channelled through personal accounts of individuals with large networks and high levels of engagement, labelled “social-media influencers” (Basu 2018). These agents float content that invariably fails to disclose that it is paid for by political campaigns or the social media management firms working for them. In India, reports suggest that WhatsApp (much more than Facebook or Twitter) is the primary tool for the dissemination of political communications (Dias 2017; Daniyal 2018; Calamur 2017).

These forms of political messaging may be distinguished from traditional mass media in at least two ways. First, personalisation allows political actors to tailor their messages right down to the individual level, at scale and in real time. It is possible to roughly identify where particular kinds of audiences gather (say, a group on Facebook or a particular Twitter account) with relative ease. Social media companies also offer this service to advertisers, at scale and with greater precision. Facebook, for example, defines audiences based on “demographics, location, interests, and behaviour”[4] and can charge a fee to disseminate content to this group (“Politically moderate, practices yoga, lives in Uttar Pradesh,” for example). 

Advertisers can also share the email addresses of a selection of individuals, and Facebook’s “Lookalike audience” tool will link them to similar audiences based on gender, region, age, or special interests.[5] Studies in the United Kingdom and the US have, for example, followed campaign firms that run multiple versions of advertisements on social media to tailored groups; in the end, the firms recommended “softer” approaches for some groups, and more brazen ones for others (Bartlett 2018). After political events, campaigns, or the social media management firms that work for them, are able to adjust these messages in real time.
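In simplified terms, a “lookalike” tool of the kind described above can be thought of as nearest-neighbour search over user attribute vectors. The following sketch is purely illustrative: the feature scheme, names, and data are hypothetical, and the actual systems are proprietary and operate at vastly greater scale.

```python
from math import sqrt

# Hypothetical sketch of "lookalike" matching as nearest-neighbour
# search: rank non-seed users by similarity to a seed audience.

def similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def lookalike_audience(seed_users, all_users, k=2):
    """Return the k users most similar, on average, to the seed set."""
    seeds = [all_users[u] for u in seed_users]
    candidates = {u: v for u, v in all_users.items() if u not in seed_users}
    scored = {
        u: sum(similarity(v, s) for s in seeds) / len(seeds)
        for u, v in candidates.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]

# Invented feature vectors: (age bucket, region code, interest score)
users = {
    "a": (1.0, 2.0, 0.9),   # seed user
    "b": (1.0, 2.0, 0.8),   # very similar to "a"
    "c": (3.0, 7.0, 0.1),   # dissimilar
    "d": (1.1, 2.1, 0.85),  # similar to "a"
}
print(lookalike_audience(["a"], users, k=2))  # "b" and "d" rank highest
```

The point of the sketch is only that a seed list of email addresses, once matched to profiles, mechanically expands into a much larger targetable audience.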

Second, while political advertising on traditional mass media could potentially be identified, and consequently audited, personalised advertisements on social media are relatively opaque to external audits (Dias 2017). This problem is heightened with WhatsApp, which functions as a messaging app and has end-to-end encryption, making it technically resilient to interception. The inability to monitor at scale means that electoral spending limits are hard to enforce, and promises made to prospective voters go unaccounted for.

While India-specific research is lacking, news fact-checking organisations such as AltNews and SMHoaxSlayer have demonstrated possible facets of this problem. They have noticed how certain accounts on Twitter routinely post sensational (and, invariably, communal) content to amass a large network of followers. Intermittently, these accounts also post blatantly promotional content, with everything from praise for large corporates (Sinha 2017) to advertising images for shoe companies (SMHoaxSlayer 2018). It is possible (and likely) that political campaigns would do the same. There needs to be a thorough investigation into these practices and the social media management firms that facilitate them.

Transparency, Not Pre-certification

Pure business logic does not require the drawing of a line between political and commercial advertisements. But, in the heat of the current moment, platforms are now having to do just that. Faced with the threat of regulatory backlash, social media companies have announced more disclosure to users about political advertisements, and less anonymity for those who fund them. Chief Executive Officer Mark Zuckerberg of Facebook even committed to transparency on what he termed “issue ads”—paid content about topical political issues—even if it is not explicitly about elections or from candidates.[6]

He announced that, going forward, Facebook might also require accounts with a “large numbers of followers” to verify their identity. Several social media platforms have now come forward to endorse the Honest Ads Act (Wang 2018), which will require anyone who spends upwards of $500 on advertisements to make that disclosure public.[7] In Canada, Facebook has already piloted an archive of “federal election related ads” to document current and historical political advertisements, their sources, and the amounts paid to promote them (Canadian Press 2017). Twitter, on its part, has announced the “Advertising Transparency Centre” in the US, which shows how much money each campaign spends on advertising, the identity of the organisation funding it, and the target demographics they used (Ha 2017).

Social media platforms must move swiftly to implement these transparency features in India. According to one news report, the social media budget for the 2014 elections was pegged at ₹10,000 crore, and yet only five Lok Sabha members of Parliament declared expenses on social media (Gupta 2017). Some provisions in the Representation of the People Act, 1951 and the Model Code of Conduct—which regulate the conduct of elections, including a ceiling on election expenses[8] and a 48-hour moratorium on political campaigning before polling day[9]—are agnostic to the medium. These provisions should apply to social media as well.

Given the potentially large amount of content that is funnelled through influencers, we also need stronger rules to monitor deceptive political advertising. Here, traditional rules on “paid news” or “advertorials,”[10] as they are termed in Europe, could be instructive, although their focus on editorial content is incongruous with the text-and-video combination generated by social media users. In 2012, the Election Commission of India introduced a loose set of guidelines[11] to restrict the phenomenon of “paid news,” defined by the Press Council as news or analysis appearing in any medium for a price in cash or kind as consideration. The guidelines set up a Media Certification and Monitoring Committee (MCMC) in each state that is meant to pre-certify all paid news,[12] but its application to social media is unclear.[13]

The larger issue is that the online medium frustrates these modes of monitoring and enforcement. Personalised, ephemeral communications are much harder to detect at scale, particularly for an external and relatively small-staffed committee such as the MCMC. Moreover, such processes do nothing to flag these as sponsored content.

Demanding transparency in political funding, rather than pre-certification, and making these disclosures unambiguous to users, is the more urgent concern. An auditable database of such expenditures would also allow third parties to monitor this space. Including “issue ads,” which may not come from political campaigns but cover “hot button” political issues, would make such a database an effective tool to gauge the many mouths through which political campaigns speak.

Controlling the Beast

A scandal around the manipulation of an electorate has swiftly transformed into a global debate about privacy and data-protection laws because the lifeblood of these targeted communications is, unsurprisingly, the collection of personal information. Even for those who might be unconvinced of the influence of these campaigns, the unbridled nature of data collection (and its monetisation) is disconcerting. For years now, privacy researchers have been drawing attention to the dizzying array of techniques deployed to collect and combine data. An anxious senator asked Zuckerberg during his testimony before the US Senate Committee, “Does Facebook use audio obtained from mobile devices to enrich personal information about its users?” Zuckerberg was quick to shut this down as a conspiracy theory.

But, as commentators have pointed out, they do not need to hear your conversations to know the books you read, what you buy, your weekend visits, or, indeed, your political leanings (Gebhart and Williams 2018). It might “feel like” platforms know what is on your mind, but that can always be traced to your digital footprint. Tools such as web beacons and cookies can be used to track behaviour across sites.[14] They connect such behaviour to specific people through identifiers such as their IP addresses, emails, or social media accounts.[15] Location-tracking services on GPS-enabled devices provide data about where you live, where you work, and when you have the most time to engage on social media. Many users keep these location services on at all times, but, as researchers have demonstrated, even when GPS is off, a combination of other offline metrics is used to approximate locations (Ghosh and Scott 2018).
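At its core, the cross-site tracking described above amounts to joining event logs from different websites on a shared identifier, such as a cookie ID or hashed email. The sketch below is a deliberately minimal illustration with invented site names and data; real trackers use web beacons, device fingerprints, and far richer event streams.

```python
from collections import defaultdict

# Hypothetical sketch: each site's log records (identifier, event);
# joining the logs on the shared identifier yields one behavioural
# profile per user, spanning otherwise unrelated sites.

def build_profiles(*site_logs):
    """Merge per-site event logs into one profile per identifier."""
    profiles = defaultdict(list)
    for site, log in site_logs:
        for identifier, event in log:
            profiles[identifier].append((site, event))
    return dict(profiles)

news_log = [("cookie_42", "read: election analysis"),
            ("cookie_99", "read: cricket scores")]
shop_log = [("cookie_42", "bought: yoga mat")]

profiles = build_profiles(("news.example", news_log),
                          ("shop.example", shop_log))
print(profiles["cookie_42"])
# A single cookie ID now links reading habits on one site
# to purchases on another.
```

Nothing in this join requires listening to anyone: the profile emerges purely from linking routine browsing records through a common identifier.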

These data-collection practices are common across companies; some use the data exclusively for marketing their own products and services, while others sell it to data brokers who, in turn, combine it with other data (including offline records) and sell it to marketers.[16] This integration of large social media platforms with a larger network of intermediaries (advertisers, data brokers, social media management systems) is only the tip of the iceberg. Advertisers are just as likely to be political campaigners as they are businesses. While commercial advertisers would measure success by whether people end up making purchases, these political actors are seeking, at the least, their attention, and at best, their allegiance.

Privacy and the Question of Choice

Recent jurisprudence is a reminder that information privacy, or the right to exercise control over personal data, is a facet of individual autonomy (Justice K S Puttaswamy v Union of India 2017), and integral to a vibrant democracy. In what ways can this control be exercised? Julie Cohen (1996) posits that the “right to read anonymously” is integral to individual privacy. This claim restricts monitoring of what individuals choose to read or watch. This, at an extreme, means that individuals should be able to entirely opt out of personalisation on a platform, along with the data-collection activities that enable it.

One proposal might be to give users visibility into how and why they are profiled a certain way, and the ability to change it. Another well-recognised privacy right, the “right to rectify” your personal data,[17] does provide users the right to correct inaccurate data about themselves. But, this raises a philosophical question: if data “about you” simply reflects the choices that you yourself have made, then does “rectification” (which requires proof that the information is inaccurate) go far enough? Can I demand to be categorised in the group of young right-leaning professionals if my behavioural profile indicates otherwise? Should individuals have control over data profiles, regardless of what the data might suggest?

These proposals could empower users, but they also depend on users to manage these choices. The proposals assume that users find profiling troubling, and, so far, there is little evidence to suggest that they do. We need to protect the integrity of choices, but, regardless of what users choose, there should be standards of proportionality and necessity that apply to all data-processing activities. For example, if cross-site tracking by a particular application is disproportionate to its purpose, it should not be permitted, irrespective of what users choose.

These proposals also reflect a growing consensus in data-protection laws globally.[18] India is in the process of formulating its own law, and a committee chaired by Justice B N Shrikrishna has announced that a draft law is imminent. A legal framework, technical features (built-in “privacy by design”), and a vigorous programme of public education could introduce limits to data-driven propaganda, especially where they have not developed organically.


Implementing transparency in political advertising and data-protection norms is urgent, particularly as India approaches its 2019 elections. On-the-ground research from India will illuminate and contextualise solutions being deployed elsewhere in the world. But, to begin with, there is a wider debate to be had on the power of these platforms. Public dialogue and private communications being concentrated among a handful of private players is perhaps at the root of the “subtle restraints on access” about which the Supreme Court’s jurisprudence had warned us.

The author wishes to thank Maya Palit for reading a draft of the article.
