Reining in Big Tech

Highlights/Executive summary

This policy brief offers recommendations for policymakers and advocacy groups that want to curb the power of Big Tech and data brokers. The following recommendations are designed to protect individuals’ human rights from violation and erosion by private enterprises seeking to maximise profits:

  • Understanding the importance of keeping personal data private.
  • Opting out of personal data collection as the new default.
  • Imposing fiduciary obligations on personal data collectors and traders.
  • Splitting up Big Tech platforms using competition law.

Who is this for?

This policy brief is aimed at policymakers as well as at advocacy groups looking to protect online privacy and to protect human rights and democracy from Big Tech’s overreach.

Introduction

The Internet is dominated by Big Tech companies whose services are difficult to avoid. These platforms are among the highest-valued companies worldwide, and most of their value lies in harvesting and trading their users’ personal information. This ‘surveillance capitalism’ (Zuboff 2019) rests on a problematic knowledge asymmetry: Big Tech knows more intimate information about its users than any entity previously could, and users have no real choice but to accept platforms’ terms of service and privacy policies. What Big Tech knows about its users makes them susceptible to harm. Governments can acquire this personal information from data brokers, and Big Tech can exploit the weaknesses of at-risk groups, e.g. children. ‘Surveillance capitalism’ is unacceptable because it is incompatible with people’s human right to privacy. Loss of control over personal information opens people up to profit extraction and abuses of political power. Values like democracy and human rights can only survive if this power asymmetry is rectified.

Recommendations

  1. Understanding the importance of keeping personal data private:
  • The status quo makes it impossible for Internet users to protect their personal information from misuse and monetisation.
  • Personal information harvested allows companies to make more useful individualised shopping recommendations. However, it also enables them to exploit people’s natural psychological weaknesses (choosing convenience over value) and prey on difficulties they experience (e.g. negative life events) to entice them to spend money.
  • Once personal information has been collected, it becomes a commodity that is difficult to protect from abuse and theft. Governments can acquire and use it to target those opposing them. Criminals can steal it to commit identity fraud. Companies can acquire it to sell products to people unable to make rational financial decisions (Reglitz 2024).
  • The amount of data collected, the inferences it permits about people, and the ways in which websites are designed to influence people’s shopping decisions are opaque to most people. Consequently, so is the value their privacy holds for them (Zuboff 2019).
  2. Opting out of personal data collection as the new default:
  • Currently, platform users are frequently not given the option to effectively opt out of the collection of their personal data. Yet most personal data collection is unnecessary for the functioning of the platform services used.
  • Mandating an opt-out option (as done e.g. by Article 25 of the European Union’s General Data Protection Regulation) is insufficient because it places too great a burden on platform users, who would have to select the privacy-protecting setting up to dozens of times a day. Opt-out mechanisms can also be deliberately designed to be complicated for users.

How? The only effective protection of user privacy is to make the opting out of the collection of personal data the default when using online services. Platforms could still generate profits selling space for non-targeted adverts (Reglitz 2024).

  3. Imposing fiduciary obligations on personal data collectors and traders:
  • Changing the default to opting out of personal data collection is not enough since people can still consciously choose to have their data collected. Currently, this information can be sold to others who can use it to their advantage.
  • Vulnerabilities arising from power differentials based on knowledge asymmetries (e.g. between physicians and patients or lawyers and clients) are standardly addressed by imposing fiduciary obligations on the more powerful party to protect the more vulnerable one. Big Tech companies know more about their users than anyone previously could.

How? Public authorities should impose fiduciary obligations on those collecting personal information (Balkin 2020):

  • a duty of confidentiality that includes a ban on passing on personal data to others not bound by the same duty,
  • a duty of care to keep that data safe e.g. from theft and abuse,
  • a duty of loyalty toward their clients to not use the data contrary to the interests of the clients.
  4. Splitting up Big Tech platforms using competition law:
  • Currently, a few giant Big Tech platforms dominate the market for harvesting and selling personal data. They combine different functions of the personal information marketing business under one roof, e.g. data collection, advertisement brokering, and advertisement publishing. This vertical integration of services emulates past monopolies (e.g. Standard Oil).

How? Just as historic examples of vertically integrated corporations were broken up, public institutions should use existing competition laws to split up Big Tech monopolies so that different services are controlled by different parties (Reglitz 2024, Balkin 2021). Combined with the other recommendations above, this would not only ensure more effective competition. It would also better protect personal data by enabling a market of personal data brokers, each of which holds less information about fewer clients.

Key Insights/Takeaways

Personal information and privacy are essential values in digitalised societies.

  • Under current rules, people cannot effectively protect their personal information and privacy. This makes them vulnerable to abuse by governments and exploitation by private companies.
  • Once personal information has been collected, it becomes a commodity that is difficult to protect.
  • Current practices are too opaque for most people to be able to know what information about them is collected, what those in possession of their data can infer about them, and how this information is used to influence their decisions and exploit their vulnerabilities.

The current model of surveillance capitalism is unnecessary and incompatible with values like democracy and human rights. But effective protection is possible.

  • Opt-out by default settings can effectively protect Internet users from unwanted harvesting of their personal information.
  • Imposing fiduciary obligations on personal data collectors mitigates existing power imbalances and vulnerabilities.
  • Breaking up the existing oligopoly of Big Tech data harvesters ensures greater market competition and more effective personal data protection.

Further readings and references

  • Reglitz. 2024. Free Internet Access as a Human Right (Cambridge University Press).
  • Balkin. 2020. “The Fiduciary Model of Privacy,” Harvard Law Review Forum 134(1): 11-33.
  • Balkin. 2021. “How to Regulate (And Not to Regulate) Social Media,” Journal of Free Speech Law 1(1): 71-96.
  • Zuboff. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (London: Profile Books).

Contact:

Dr Merten Reglitz, Associate Professor, Department of Philosophy, University of Birmingham

m.reglitz@bham.ac.uk