“Hey, Alexa!” “Let Google do it!”
The next generation of technology devices, the so-called internet of things (IoT), is upon us. The value proposition is enticing. Soon, every appliance in your home will respond to your voice. Your car will do the same. They will extend the reach of internet services and deliver forms of convenience that were unimaginable only a few years ago.
But take a moment to consider the less obvious dimensions of the IoT. The operating system is almost always made by Google. The voice control almost always comes from Amazon or Google. In order to work, IoT products have to listen to everything around them. If we buy enough of these devices, the IoT will be listening to every aspect of our lives, from the kitchen to the car to the bedroom. The devices will offer some convenience, but the data they collect will be used for things other than delivering the services we paid for.
To understand how the IoT is likely to develop, we need only look at the evolution of the current generation of data surveillance products. I spent nearly 34 years as a professional tech investor and tech optimist before observing, in 2016, bad actors exploiting Facebook’s architecture and business model to harm innocent people. First, I saw misogynistic memes about Hillary Clinton being distributed by ostensibly pro-Bernie Sanders Facebook groups that appeared to be inauthentic. Then I read about a company that used Facebook’s advertising tools to gather data on people who expressed an interest in Black Lives Matter and sold the data to police departments. Next, I saw the results of the Brexit referendum. For the first time, I realised that Facebook’s algorithms might favour incendiary messages over neutral ones.
In October 2016, I contacted my friends Mark Zuckerberg and Sheryl Sandberg, two people I had advised early in Facebook’s life, to warn them — but they politely informed me that what I had seen were isolated events that the company had addressed. After the 2016 US presidential election, I spent three months begging Facebook to recognise the threat to its brand if the problems I observed proved to be the result of flaws in the architecture or business model. I argued that failing to take responsibility might jeopardise the trust on which the business depended. When Facebook refused to take responsibility, I worked with a small group to look into the issues and raise awareness of them.
Thanks to a series of reports over the past year about failures to protect personal data, an increasing number of the humans-formerly-known-as-users are now aware of the risks. Policymakers have responded to public concern with initiatives such as the European Union’s General Data Protection Regulation (GDPR), the passage of a GDPR-like law in the state of California, and a proposed internet bill of rights in the US House of Representatives.
This is progress and should be applauded. Government intervention of this kind is a first step on the path to resolving the privacy issues that result from the architecture, business models and culture of internet platforms. But privacy is not the only problem we must confront. Internet platforms are transforming our economy and culture in unprecedented ways. We do not even have a vocabulary to describe this transformation, which complicates the challenge facing policymakers.
Where marketers in the past gathered data to match products to customers, Google, Facebook and other internet platforms use data to influence or manipulate users in ways that create economic value for the platform, but not necessarily for the users themselves. In the context of these platforms, users are not the customer. They are not even the product. They are more like fuel.
As the Harvard professor Shoshana Zuboff notes in her new book The Age of Surveillance Capitalism, humans have in the past experienced industrial innovations so profound that they changed everything, creating a “before” and an “after”. The almost simultaneous commercialisation of electricity and of automobiles at the start of the 20th century is an example. The leaders of those industries were wise to ensure that the largest number of people would benefit from their innovations. Henry Ford understood that enabling his factory workers to become customers, to enjoy the benefits of what they produced, was essential. Google and Facebook have shown no such wisdom. They view the people who use their platforms as nothing more than a metric.
Google, Facebook and the rest now have economic power on the scale of early 20th-century monopolists such as Standard Oil. What is unprecedented is the political power that internet platforms have amassed — power that they exercise with no accountability or oversight, and seemingly without being aware of their responsibility to society.
Not taking responsibility has been central to the culture of Silicon Valley since just after the turn of the millennium, when thought leadership passed from traditional venture capitalists to the “PayPal Mafia”, visionary alumni of the payments start-up led by Peter Thiel, Elon Musk and Reid Hoffman. The PayPal Mafia were among the first to recognise the shift from a web of pages to a web of people, and their investments and insights propelled the social network generation to success. They paired a strategy of “hyperscaling” with the libertarian notion that start-ups were not responsible for the consequences of any disruption they caused.
When capitalism functions properly, government sets and enforces the rules under which businesses and citizens must operate. Today, however, corporations have usurped this role. Code and algorithms have replaced the legal system as the limiter on behaviour. Corporations such as Google and Facebook behave as if they are not accountable to anyone. Google’s seeming disdain for regulation by the EU and Facebook’s violations of the spirit of its agreement with the US Federal Trade Commission over user consent are cases in point.
People often tell me they don’t worry about privacy because their information is already “out there” and, besides, they have “done nothing wrong”. Those statements are demonstrably true for most of us but irrelevant to the discussion at hand. Google and Facebook hoover up mountains of data in the service of business models that produce unacceptable costs to society. They undermine public health, democracy, innovation and the economy. If you are a member of the Rohingya minority in Myanmar, the misuse of internet platforms for hate speech has dramatically altered your life — or, in the case of thousands, ended it. Internet platforms did not set out to harm the Rohingya or to enable interference in the politics of the EU or US. Those outcomes were unintended consequences.
Two new platforms threaten to make these problems much worse: the IoT and artificial intelligence. For the consumer, the former offers convenience in the form of voice control and access to online services in new settings. For the vendor, it vastly increases the range and depth of surveillance. Amazon’s Alexa voice-control interface has taken an early lead, but Google Home and other technologies are likely to play a role. Google’s Android operating system is the foundation of nearly all IoT systems.
We should view all IoT products with extreme scepticism. Just as with Google and Facebook on the internet, IoT vendors characterise their surveillance as essential to the consumer value proposition. But vendors gather data less to improve the customer experience on that device than to create new economic opportunities for themselves in other places. Meanwhile, consumers suffer not only the potential harm of manipulation, but also side-effects such as the recent hack of a Nest home security system that resulted in customers believing they were under nuclear attack.
AI promises to be revolutionary. That said, it will not necessarily be a force for good. The problem is the people who create AI. They are human. They are under pressure to bring their products to market quickly. They have no incentive to invest time and resources to protect the people affected by their inventions. They train their systems with data from the real world and, to date, have not taken steps to eliminate the implicit biases that dominate the real world.
For example, AI-based mortgage origination systems have exhibited racial bias, reproducing the “red lining” that has historically prevented people of colour from buying property in some neighbourhoods. AI-based employment applications have retained gender and racial biases common to their human counterparts. We assume that technology is value-neutral. We do not realise that the biases of its creators will infect any tech product that has not been designed to eliminate them.
Early implementations of AI have issues that go far beyond bias. Among its most profitable applications today are those that eliminate white-collar jobs, use filter bubbles to tell people what to think and use recommendation engines to tell us what to buy or enjoy. Our jobs, what we think, and what we buy or enjoy are among the characteristics that most define us as individuals. Turning those things over to AI makes us less individual, less human.
We can do better. I recommend two areas of emphasis: regulation and innovation. As for the former, the most important requirement is to create and enforce standards that require new technology to serve the needs of those who use it and society as a whole. The US faced this challenge with past generations of technology that were considered to be strategic, yet carried high risk. In 1938, the Food and Drug Administration began requiring new medicines to demonstrate safety before being launched, with proof of efficacy required later in the century. The country also set standards for the development, handling, use and disposal of chemicals.
In the case of AI and other major technologies, regulations are required to enable the good, while limiting the bad. Fortunately, the process of demonstrating safety and efficacy for new digital technologies is easier and far less costly than for medicine. We should prioritise the creation of standard validation and audit programmes that can be embedded in new products to ensure they do what they are supposed to do without doing harm.
I strongly favour regulatory limits on the gathering and use of consumer data. Current practices are unacceptable, not only among internet platforms, but all companies built around Big Data. An example would be microtargeting in political campaigns, which invites abuses such as voter suppression and the spread of disinformation.
Today’s model of consent is not enough. Uses of data that do not deliver immediate value to the person from whom it is harvested should not be permitted. Society has determined that it is not appropriate to clone humans or to commercially produce anthrax or Ebola. It should do the same with many applications of Big Data.
Internet platforms are candidates for antitrust intervention. Google, Facebook and Amazon all enjoy monopoly power and have successfully prevented competitors from gaining traction — their advantages in data, capital and scale have scared off investors and entrepreneurs. This has almost certainly retarded the rate of innovation in their core markets.
In addition, internet platforms have exercised monopoly power against suppliers, advertisers and users, leveraging their market dominance to extract maximum value from every business relationship. Antitrust cases can be used to create space for competitors. They can also be used to limit anti-competitive business practices such as data sharing between divisions of a monopoly, an issue that the EU has raised in its latest inquiry into Facebook. To be clear, antitrust will be most effective in combination with restraints on extraction business models that exploit data.
The EU has pursued antitrust action with increasing aggressiveness, particularly against Google. The cases have focused on Google’s use of monopoly power to eliminate competition, an obvious and important place to start. In the US, the model of antitrust regulation limits the definition of consumer harm to price increases, which are not easy to prove in barter transactions of services for data. Fortunately, there is growing recognition in both political parties that the internet platforms have abused their economic power and a willingness to rein it in with antitrust action, which is the most pro-growth form of intervention available.
If policymakers can create room for competition, I hope investors and entrepreneurs will abandon extraction business models in favour of Silicon Valley’s traditional focus on products that empower customers, what Steve Jobs famously called “bicycles for the mind”. Just as wind power and solar are profit-making opportunities to address aspects of climate change, “bicycles for the mind” will offer profits to those who help us move past the extreme flaws of today’s internet platforms and the IoT.
The threats posed by internet platforms present individuals and policymakers with a challenge. Fortunately, we have more power than we realise, particularly if we act in concert. Google and Facebook depend on our attention. Let’s give them less of it. The IoT requires our approval. Do not give it until vendors behave responsibly. Demand that policymakers take action to protect public health, democracy, privacy, innovation and the economy. We can do this. Let’s get started.
Roger McNamee is a technology investor and author of ‘Zucked: Waking up to the Facebook Catastrophe’