The age of (trying to regulate) surveillance capitalism

For a while now the most interesting discussions about tech and privacy have been happening in the United States. In politics, the US Congress in 2018-2020 was awash with proposals for a federal privacy law. Privacy academia is also in full spate, and there are few more incisive and erudite contributors than Julie Cohen, who in her latest blockbuster suggests what such a federal privacy law should and should not do if it is to address what she calls the ‘dysfunctions of the networked information economy’. 

Privacy has long been viewed through the prism of individual rights, even if it is generally accepted that individual rights and freedoms are social goods. Cohen focuses on the limitations of this perspective. ‘Atomistic, post hoc assertions of individual control rights … cannot meaningfully discipline networked processes that operate at scale. Nor can they reshape earlier decisions about the design of algorithms and user interfaces.’ She takes particular aim at ‘[t]he continuing optimism about consent-based approaches to privacy governance’, which she finds ‘mystifying, because the deficiencies of such approaches are well known and relatively intractable.’ The landscape is too complex and manipulation too easy for consent to be the answer. Tech firms can game user control rights with ‘synthetic data’ that tends to fall outside the framework. Like Paul Schwartz well before her, she sees the consent requirement as akin to a property right, a device that has failed to safeguard the interests of the marginalized and poor. ‘It makes no sense whatsoever where networked, large-scale processes are involved.’

Novel and creative apparatuses that aim to compensate for the limitations of individual control, like ‘user governed data cooperatives’ or ‘automated consent management panels’, might work for ‘smaller, more homogeneous communities’ where it is obvious which resources are to be protected. But their effect, Cohen argues, is to ‘turn consent into a fig leaf’, useful only ‘to achieve compliance with a regime that requires symbols of atomistic accountability.’

Moreover, the hoped-for ‘trust’ and ‘duty of care’ often touted – fetishised, even – as an all-purpose salve, a sort of soft-law social contract between commercial digital players and society, would never be considered sufficient for banks, pharma or insurance companies. We are too aware of the risks in those fields to allow such latitude. In the digital economy, ‘dominant actors’ could only be expected to change their surveillance-based business models if their ‘corporate ownership and control structures’ as well as ‘licensed flows of data’ were disrupted. They set the terms and protocols for data use that developers and ‘marginal actors in networked information ecosystems’ – from media companies to white supremacists – have every incentive to follow.

Until recently, there were vast swathes of the planet without any general privacy and data protection laws. Those regulatory deserts are now dwindling, with Brazil and soon India adopting rules that will extend legal protection to billions of people. (See especially the work of Gabriela Zanfir-Fortuna at the FPF in tracking these developments.) Authoritarian China is also aping GDPR-style rules, reflecting popular exasperation with its own surveillance capitalists, though the rules are clearly subservient to the far greater priority of state coercion and control. Ironically, the United States could soon become the global outlier.

Cohen, however, critiques data protection on the grounds that it ‘relies on prudential obligations’ and ‘invites death by a thousand cuts’, unable to scale to today’s tech leviathans and systemic privacy depredations, reducing well-meaning laws to ‘an exercise in managerial box-checking’. She takes the GDPR to task, for example, for requiring data protection-by-design without specifying what the design ought to be: ‘There is a hole at the center where substantive standards ought to be—and precisely for that reason, data protection regulators often rely on alleged disclosure violations as vehicles for their enforcement actions, reflexively reaching back for atomistic governance via user control rights as the path of least resistance.’ She notes that the proposed privacy bills all shy away from regulating government use of data, so they would not remedy the shortcomings identified in the two CJEU Schrems judgments. Nor, for that matter, do they seem interested in the principle of purpose limitation in the use of personal data. It is unclear, however, what Cohen would prefer to see. The GDPR and its fellow travellers are criticised for their tendency towards over-prescriptiveness, but at their core is an attempt not to micromanage business practices, still less to outlaw business models, but to make firms accountable for whatever they choose to do with the data they control. Consent is not, in fact, at the heart of the GDPR.

Cohen is at her most convincing when highlighting the gap between legislative promise and delivered outcomes. ‘If one wants to appear reformist while moving the needle only very slightly—one confers authority to make rules and bring enforcement actions, and then one waits to reap the predicted beneficial results.’ This model has broken down, she says. It is like a car manufacturer or (at least before the 2007-8 crisis) mortgage lender that fawns over prospective customers, only to neglect them once the purchase has been made and whenever problems arise. The ruthless injunction to ‘Always Be Closing’ in David Mamet’s Glengarry Glen Ross becomes Always Be Legislating, and let the enforcement look after itself. ‘Enforcement practice,’ however, ‘has largely devolved into standard-form consent decree practice, creating processes of legal endogeneity that simultaneously internalize and dilute substantive mandates.’ The dirty secret of these enforcement strategies is that Big Tech’s excesses remain and, if anything, have worsened during their implementation.

These ‘litigation-centered enforcement mechanisms’ can only ever be as effective as access to justice, which, at least in the United States, has diminished over the years as standing and class actions have been curtailed. Public enforcers have been underfunded and become increasingly risk-averse. Because infringements of privacy are systemic, Cohen thinks it is insufficient to single out individual actions of bad actors pour encourager les autres, as the effect is to ‘validate the mainstream of current conduct rather than meaningfully shifting its center of gravity’. The behaviour regulators seek to penalise and reform is so profitable that occasional enforcement can be planned for, and absorbed, by the bigger companies as an acceptable business overhead – Cohen cites the FTC’s $5bn settlement with Facebook in 2019, which broke records but changed nothing.

What is the alternative? Privacy regulation should move out of its silo and borrow from post-crisis financial regulation’s ‘operating requirements for auditing, benchmarking, and stress testing,’ and for ‘public-facing transparency about information-handling practices’. ‘Honoring the public’s right to know requires a less deferential approach to the secrecy claims that have become endemic in the networked information era.’ Instead of drop-in-the-ocean civil fines, public enforcers should be able to reverse the incentive bias by requiring disgorgement of profits accruing from unlawful activity, returning those ill-gotten gains not only to the individuals affected but also ploughing them back into the public enforcement budget. Where violations with knowledge and intent have been proven, it should be possible to target the personal wealth of accountable board members, especially where they happen also to be majority shareholders. These are means of internalizing the externalities of immensely profitable enterprises.

Julie Cohen’s remedies echo and coincide with those proposed by the apparently chastened former CFO of Goldman Sachs. Writing in The Economist, Martin Chavez said, ‘Just as CCAR places limits on the leverage in derivatives portfolios by imposing capital requirements, digital regulators would insist that platforms constrain the algorithmic amplification of outrage and emotional resonance by limiting the propagation of viral content. Regulators would base their rules on a deep and shared understanding between the regulator and the regulated of how digital companies make money, just as banks simulate their complex chain of interlinked businesses into the future in a stress test.’ Chavez concedes that the Dodd-Frank Act and its ilk have hardly ushered in a golden era of financial morality, but he is not alone in wondering why what’s good for the banking goose cannot equally be served up to the digital gander.

These are worthy and thoughtful additions to the debate on how to forestall the failure of tech companies that have become ‘too big to fail’, by means of a concoction of intricate technocratic rules that only the biggest can be expected to comply with. For engagement with the underlying questions of growing inequality, digital or otherwise, and of when a company may simply be ‘too big’ for a purported democracy built on human rights and equality before the law, we must look elsewhere.