WEEKEND ESSAY | REID HOFFMAN

Don’t fear AI: used well, it can empower us all

New technologies will bring change as immense as the Industrial Revolution, and it’s natural for us to be wary — but engaging now will make our lives better, writes Reid Hoffman, Friday December 27 2024, 5.00pm, The Times

A society where CCTV with full facial recognition technology is omnipresent would bring crime almost to an end, but it would also minimise privacy and liberty. This is presumably what the philosopher Sir Isaiah Berlin would have described as a collision between two good things.

In Two Concepts of Liberty, his inaugural lecture as Oxford’s Chichele Professor of Social and Political Theory in 1958, Sir Isaiah Berlin showed us that the richness of human experience lies in the tensions and contradictions arising out of human values and their irreducible plurality.

In that lecture he explored two concepts: negative liberty, the freedom from external constraints and interference; and positive liberty, the freedom to take actions that help one realise one’s full potential. I believe we should keep these concepts front and centre as we try to understand the opportunities, risks, obligations and impacts of AI.

Because ultimately I believe we’re in the midst of a new steam power moment. And by that I mean a revolutionary technological breakthrough that expands what it means to be human. A breakthrough that will change how we think about freedom, autonomy, agency and other key aspects of the human condition.

In Two Concepts of Liberty, Berlin emphasised how “value pluralism” — or the idea that values are relational and often in competition with each other — is a phenomenon we must account for when theorising about the nature and limits of political freedom. Prioritising safety, for example, can diminish liberty. Prioritising innovation can diminish stability.

And yet, even though values like these are clearly relational, in constant flux, we still often think of them as timeless, natural phenomena. Consider the symbols we use to invoke their essence: the Union Jack, Old Glory, Magna Carta, the Liberty Bell. So unchanged and enduring, as if liberty and freedom themselves are fixed in a vacuum of absolute truth. But they’re not. They’re fundamentally dynamic constructs, shaped by politics and historical contexts but most of all, I believe, by technology.

If we acknowledge the extent to which new technologies expand and redefine how we experience essential human values like freedom and agency, we put ourselves in a much better position to design these technologies in ways that maximise human flourishing. But while today’s breakthrough technologies create tomorrow’s freedoms, we often greet these breakthroughs as threats to individual liberty and autonomy — because our current idea of freedom is mostly constructed by what previous technologies have enabled. This is very much the case for AI.

To help us understand these ideas, and how we can design AI tools in a way that synthesises negative liberty and positive liberty, I offer a new word: superagency.

Superagency, as I define it, is what happens when large numbers of people get access to a transformative, general-purpose technology that they’re free to use as they wish. When that happens, individuals get new superpowers to apply to their lives in unrestricted, inventive and personally relevant ways. And because so many other people have new superpowers too, new capabilities and adaptations cascade through society, endowing every individual with a multitude of second-order benefits.

In the early days of the automobile, for example, doctors could suddenly make more house calls per day. They could expand the territory they covered. This made doctors more personally productive and also helped everyone they served. In the early days of the world wide web, it suddenly became possible to make your essays or your shareware programs accessible to a global audience — a huge boost in your individual agency and powers of personal expression. Just as consequentially, though, millions of others were doing the same, so knowledge of all kinds became far more accessible.

The distillation control room at the Esso refinery in Hampshire in 1955. It converted five and a half million gallons of crude oil each day into petrol. The processing was entirely controlled by robots (FREDDIE COOPER/MIRRORPIX/GETTY IMAGES)

With AI, I believe we’re heading for our biggest superagency moment since the advent of steam power. To get there, though, we, as a society, have to make big and complex choices and most of them involve competing visions of freedom. For example, the freedom to express whatever you want online versus the freedom to engage in democratic or other civil dialogue without having to pay the price of constant harassment. The freedom to advocate for the deregulation of everything, right after we re-regulate borders, free trade and reproductive freedom.

These aren’t just partisan clashes; they reflect something essential about freedom. As Berlin put it in his lecture: “Freedom for the pike is death for the minnows… Freedom for an Oxford don is a very different thing from freedom for an Egyptian peasant.”

The pike wants absolute freedom to hunt whenever it pleases. The minnow makes a principled case for the right to swim without constant fear of hungry pikes. These freedoms cannot fully coexist. Nor is there a perfectly rational way to balance or optimise this clash of values. So the best we can do is seek imperfect and dynamic compromises: some risk for the minnows, some constraints on the pike.

Every society where competing visions of the good life coexist requires both negative liberty and positive liberty to thrive — the former to create zones of autonomy, free will and experimentation, free from external interference; the latter to provide frameworks and resources, like public utilities, or law enforcement agencies, or education systems, that can help enable people to pursue and fulfil their potential. Which ultimately means that the work of civilisation involves trade-offs.

When truly novel and powerful technologies like AI appear, defensive reflexes tend to kick in. Consider fitting an imaginary city with even more CCTV cameras than Britain already has. Cameras everywhere, and they’re all augmented with facial recognition software, microphones, licence plate readers and behavioural monitoring algorithms. Buildings augmented with sensors tracking entry, exit, occupancy and movement. Smart poles collecting data on everything from air quality to pedestrian foot traffic.

The most optimistic version of this vision is a network of devices that make intelligent and liberating use of all the data they collect. A traffic camera linked to AI doesn’t just record violations — it can adapt traffic light patterns in real time to optimise flow so you spend less time idling at intersections. A smart building’s environmental sensors don’t just measure temperature — they learn occupants’ preferences and anticipate their needs.

The dystopian version, meanwhile, is straight out of George Orwell’s 1984. The goal is coercion, compliance, control.

But what about versions that attempt to manage the probable trade-offs of pluralism with a more even hand? Such a city, for example, might be said to represent a very clear triumph of positive liberty. Even a blind senior citizen, walking home alone at night and wearing an expensive diamond necklace or watch, would be able to access that city’s resources and amenities with a strong sense of security. At the same time, this imaginary city would also necessarily minimise privacy and liberty — or, at the very least, freedom from many different kinds of constraints.

This is what I presume Berlin would describe as a collision between genuinely good things. Because who doesn’t want “virtually zero crime”? But also, who doesn’t want “sufficient privacy in public”?

As our technologies grow more powerful and capable of acting in autonomous ways themselves, the scenarios where we might deploy them are going to multiply, creating more and more instances where values will clash.

Take driver alcohol detection systems. These use touch- and breath-based sensors embedded in various parts of a car’s interior to passively measure a driver’s blood-alcohol level. These sensors don’t incorporate AI themselves. But your car could also be equipped with cameras and additional sensors that do use AI to analyse things like posture, grip patterns and airflow directions. In this way, they can help confirm that it’s the driver, rather than any passengers who might also be in the car, whose blood-alcohol content is being measured. If your car says you’re over the legal limit, it refuses to start. So you can either call an Uber or stay put until it determines you’re good to go.

Legislation has already been passed in the US that could make systems like this mandatory on all new cars as early as 2026. For various reasons, it may not happen that quickly. Or ever. We might just make the leap to fully autonomous vehicles before we see cars with this feature become the federally mandated rule of the road.

On the other hand, it’s also easy to envisage this functionality eventually showing up in more limited ways: in delivery trucks, say, or rental cars, or as an option you choose with your insurer to get a discount. In fact, I think mechanisms like these are most likely to appear first as terms of service or contractual agreements, not public laws.

Another example involves a company called MSG Entertainment, which operates Madison Square Garden, the large indoor arena in New York City. For several years now, MSG Entertainment has enforced a controversial policy that uses facial recognition technology to identify and deny entry to attorneys who work at law firms in litigation against it. As patrons enter Madison Square Garden or another iconic venue MSG Entertainment owns, Radio City Music Hall, their faces are scanned and compared against a database. If a face matches an attorney on the ban list, that person is refused entry, even with a valid ticket.

The Madison Square Garden policy has already prompted numerous lawsuits and continuing debate about the appropriate balance between property rights and public accommodation. In the coming years, instances like these will become more common. And clearly they represent a significant paradigm shift. One where what can be described as “perfect control” becomes increasingly possible.

In this scenario, whatever the policy is, that policy is enforced, every time. And suddenly, you, an individual human, have neither negative liberty nor positive liberty. AI is making all the choices.

It’s not just that CCTV networks equipped with facial recognition might function as a powerful way to reduce or even entirely eliminate muggings, assaults and other crimes, outcomes most people would surely welcome. At some point, as the technologies evolve, any instance of jaywalking at rush hour could result in an automatic fine, just as running a red light past an enforcement camera does now. Noise violations, off-leash pets, public intoxication: all of these things could effectively become zero-tolerance offences.

If you’re like me, a society actually operating in this fashion might strike you as absurd, intolerable, even inhumane. And yet the laws that prohibit these various actions already exist. Presumably we’re supposed to obey them. So I imagine some people would favour this more exacting form of enforcement. All of which emphasises the point that as AI evolves, it will continue to compel us to consider and even redefine how we think about essential human values such as freedom, autonomy, privacy and agency.

Which trade-offs will we opt for? And how do we move forward productively in this environment — to create the best possible world for us as individuals, yes, and also as members of a community? This is where we come to the importance of designing and deploying AI tools that prioritise individual agency.

Questions about individual agency lie at the heart of most of the major concerns about AI. For example, questions about job displacement are questions about individual human agency: will I have the economic means to support myself, and opportunities to engage in pursuits I find meaningful? As are questions about disinformation and misinformation: how do I know whom and what to trust as I make decisions that impact my life?

For the first time ever, synthetic intelligence, not just knowledge, is becoming as flexibly deployable as synthetic energy has been since the rise of steam power. Intelligence is now a tool — a scalable, highly configurable, self-compounding engine for progress. But who gets to use that tool and in what contexts?

When OpenAI, the company that developed ChatGPT, launched in 2015, two of its co-founders posted an essay introducing their mission. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible,” they wrote. In essence, they were making a case for both negative liberty and positive liberty. AI tools, they suggested, should be extensions of individual human wills, not extensions of states or corporations. Individuals should be able to access and use these tools themselves, without external actors prohibiting or mandating their use.

Why is this specific framing so important? On this, we can look once again to Berlin. While he made the case that both negative and positive liberty are necessary for a balanced and flourishing society, he also expressed some key reservations about positive liberty. Specifically, he worried that positive liberty — and its emphasis on fulfilling one’s potential — could be co-opted in authoritarian ways.

After all, if there’s a better version of yourself that might be realised under certain optimal conditions, well, then, why should society leave it to you to implement the policies and take the actions that facilitate those conditions? Why should it trust you to make the right choices? We know you don’t always stop at stop signs or obey speed limits.

This, in other words, is the road Berlin warned against, because of how it might lead to tyranny. It’s also the road ChatGPT is designed to avoid.

In broad ways, I think of tools like ChatGPT as a new form of informational GPS. A navigation app might be telling you to go left, but you can still choose to go straight if, for whatever reason, you think that’s the best choice. When that happens, the app adjusts to your decision and recalibrates itself. It still tries to take you to the destination you’ve chosen, but you can always keep making choices of your own too. And that’s how ChatGPT works as well. It provides informational guidance, but you can always re-orient or redirect it simply by giving it new instructions.

In addition to preserving autonomy, hands-on AI tools activate superagency. Which, to revisit the definition I gave earlier, is what happens when millions of people start using powerful new tools in self-directed ways — and new competencies, innovation and generativity start cascading throughout society. In this way, a doctor using AI to make better diagnoses isn’t just fulfilling their own potential — they’re creating better health outcomes for their entire patient population. A teacher leveraging AI to personalise learning isn’t just becoming a better educator — they’re elevating the educational experience for every student they teach.

But crucial as it is to preserve space for individual agency in how an AI works, it’s equally crucial that we apply AI’s power in more centralised ways. We can all benefit, individually and collectively, from AI’s pattern recognition capabilities to track emerging infectious diseases and co-ordinate rapid public health responses across continents; and from AI’s analytical capabilities to help manage increasingly scarce resources like water and arable land in ways that serve entire populations equitably.

But we also know this is an era of increasingly polarised publics, where value pluralism reigns supreme. So how can nation states hope to achieve the social cohesion that’s needed to undertake big, ambitious and potentially divisive projects, using complex new technologies that many people have major concerns about?

I believe the best way to do that is to continue what we’ve been doing since ChatGPT’s release two years ago: give people opportunities to use AI directly, in ways they find meaningful. Because in the end, what are you likely to trust more? Some abstract new technology government experts decide to unilaterally introduce without much — or even any — input from you? Or a technology you have a growing personal connection to, because you regularly use a form of it?

When people use ChatGPT to automatically personalise their CV to dozens of different job ads, or teach their child fractions, or understand their ageing parent’s medical diagnosis better, they develop both practical fluency and earned trust. And through hands-on engagement, we become more likely to appreciate the potential upsides of broader applications too. We see it enhancing our own lives, so we can imagine it working in institutional or public-sector scenarios too. “What’s in it for me?” leads naturally to “What’s in it for us?”

Reid Hoffman is an internet entrepreneur, venture capitalist, the co-founder of LinkedIn and author of the forthcoming book, Superagency: What Could Possibly Go Right with Our AI Future.

This is an edited version of the 2024 Isaiah Berlin lecture, delivered in London this month.