What if Artificial Intelligence saves the planet?

People instinctively assume that AI will lead to catastrophe. But what will the world look like if we get it right?

We live in an era of intellectual pessimism. While the left laments collapsing biodiversity, climate catastrophe, soaring inequality, and the exploitative logic of capitalism, the right has become increasingly fixated on rapid cultural and demographic change, immigration and the decline of traditional family structures. In the face of such gloom, the optimist risks seeming naive, or even callously indifferent towards the many problems we face. And where once technology was seen as a potential solution to social and economic challenges, now it is more likely to be seen as their source, in the form of divisive social media, distracting smartphones, or the harm done by extreme or misleading online content.

While governments are torn between these competing negative narratives – particularly when it comes to Artificial Intelligence – there is a curious dearth of voices offering a more positive vision. Tech companies may well be tripping over themselves with breathless press releases, and social media has quickly filled up with hype-merchants promoting AI as if it were a get-rich-quick scheme. But serious, constructive and positive commentary is thin on the ground.

Even so, there are some academics, intellectuals and policymakers engaged with the question of what a positive outcome in our development and use of AI might look like; Max Tegmark’s Future of Life Institute, for example, ran a competition earlier this year dedicated to mapping out positive futures. In the current intellectual climate, however, optimism is being drowned out by shrill negative judgements. While there is a place for voices of caution – Jonathan Freedland certainly provided one in a recent Guardian piece headlined “The future of AI is chilling” – it would be a mistake to regard AI solely as a source of harm and threat, a new techno-pathogen against which we must inoculate ourselves.

With an eye on history, we should bear in mind that techno-panics can exact a heavy toll, albeit one that manifests more as missed opportunities than as concrete catastrophes. Perhaps the most famous such missed opportunity in the west is nuclear power. Haunted by the ghosts of Chernobyl, Three Mile Island and Fukushima, we have largely given up on building new nuclear plants, in many cases opting instead for the less visible but more pernicious effects of fossil fuels. More consequential for many in the developing world, perhaps, is the case of Golden Rice, a genetically modified crop capable of saving millions of Vitamin A-deficient children from blindness and death. Its widespread adoption remains stalled, all progress entangled in a byzantine web of regulatory compliance and uncertainty.

The lesson is not that we should adopt a laissez-faire attitude to new technologies; in the case of both nuclear power and genetically modified crops there are serious risks to be managed. But caution is not cost-free. Even as we congratulate ourselves for harms averted, we should not ignore the invisible graveyards filled with those who could have benefited from opportunities we declined to take.


The new wave of powerful AI systems represents the first truly radical advance of the 21st century. The first glimpse many people had of this new technological wave came in the form of OpenAI’s ChatGPT, released in November last year. This is one of a new flurry of systems known as Large Language Models, which are statistical distillations of billions of pages of text and conversation.

Capable of answering scientific or technical questions, summarising articles, providing advice, writing code, or even sustaining a casual conversation, they have caught the popular imagination. According to one industry survey, almost a third of workers in white-collar professions are already making use of ChatGPT at work. The majority have not yet informed their employers.

But this new wave of AI systems did not spring from a technological vacuum, even if it may have appeared that way. It is the first widely available commercial manifestation of developments that have been under way for more than a decade. As a research field, AI has had its “summers” and “winters”: periods of intense hype and investment, followed by disillusionment and a drop in funding. The current AI summer dates back around 15 years and has been driven by increasingly affordable processing power. There were signs that a technological revolution was coming – advances in image categorisation and machine translation among them – but until quite recently even a reasonably well-informed observer of technological trends could have missed their significance.

Though ChatGPT may have been more evolutionary than revolutionary, its astonishing popularity – reaching a million users in less than a month – marked a shift in public awareness of what companies like OpenAI and DeepMind were actually doing. And while businesses, policymakers and the public were struggling to come to terms with what these tools could achieve, the response from much of the scientific and academic commentariat was decidedly more negative.

One trend in the negative reaction to AI is represented by AI sceptics such as Timnit Gebru and Margaret Mitchell, computer scientists and AI ethicists formerly employed by Google. Both have been trenchant critics of Large Language Models, famously describing them as “Stochastic Parrots” that merely feed back to us a probabilistic cocktail of our own utterances. Far from being revolutionary, these systems, they suggest, have a conservative or even reactionary character, embodying and concentrating existing systematic patterns of discrimination, exploitation and oppression.

In contrast to these socio-political concerns, a quite different and far more existential source of unease about AI came to greater public attention in March this year, when more than a thousand prominent figures – including Elon Musk and scientists such as Yoshua Bengio, Max Tegmark and Gary Marcus – signed a petition demanding a slowdown in AI research. Their worry was motivated less by the political and societal implications of large models than by disquiet about our ability to control AI and to ensure it aligns with our goals. Indeed, some people in AI see ChatGPT’s capabilities as a nascent threat to human existence itself.

Gebru, Mitchell and others say the signatories of this petition are succumbing to “fearmongering and AI hype,” and accuse them of steering public debate towards imaginary risks and ignoring “ongoing harms”. These harms include worries about surveillance, the erosion of privacy, the use and abuse of large datasets, and injustices such as algorithmic bias.

These ideas are a long way from the thought that we might one day lose control of AI or that it might threaten our very existence. Terrifying though that idea may sound, it also seems rather fanciful – as if a word processor could suddenly decide to topple the British government. After all, aren’t AIs just another kind of device? Couldn’t we simply program them to be benign? And if all else fails, couldn’t we just pull the plug? Matters, alas, are not quite so simple.

To start with, modern AI systems are not hand-coded in the manner of computer programs of old. Instead, the role of human programmers is largely to develop learning algorithms and curate training data, rather as a microbiologist seeds petri dishes with bacterial cultures; the AI system itself is an emergent product of that process. And while these systems are subsequently fine-tuned, they often display unexpected or undesired attributes, as demonstrated earlier this year when Microsoft’s GPT-powered Bing search engine advised a journalist to divorce his wife, while expressing a desire to steal nuclear codes and create a deadly virus.

Looking back over the last 60 years, we see that AI has gone from a novelty to an arguably transformative technology, with one human ability after another being equalled or surpassed – whether in games like chess and “Go”, or in verbal benchmarks such as the Graduate Record Examination used in the US and Canada, on which GPT-4 ranks in the top 10% of human test-takers.

And while AI systems were once highly specialised, the current wave of large AI models is far more general in its capabilities, equally at home conversing, coding and creating images. More worryingly still, these models display a growing talent for manipulation and persuasion; OpenAI’s GPT-4, for example, can pass a wide range of social cognition tasks that involve inferring the subtle motives of agents, while Meta’s CICERO system plays the famously Machiavellian game of “Diplomacy” at a competitive human level.

Given that these systems’ cognitive abilities are continuing to improve rapidly, we cannot dismiss the idea that one day we will be outfoxed by a “misaligned” AI system. To borrow an analogy from the AI safety literature, consider a child who has inherited a large fortune and must rely on a trustee to manage their affairs. Without appropriate legal safeguards, the child is vulnerable to manipulation and deception by unscrupulous aides. Many young medieval monarchs fell foul of their advisers precisely because of such an asymmetry in knowledge, guile and ruthlessness.

As soon as we build AIs that are more cunning and sophisticated than ourselves, we may find ourselves in a similar position. The simple answer as to why we cannot turn off a malevolent AI system, then, is that any superintelligence worthy of the name will not tip its hand so easily; by the time we realise anything is amiss, humanity’s fate will have been sealed, whether subtly through subversion of our political systems, or more bluntly through traditional science-fiction scenarios like seizing control of our nuclear weapons or unleashing lethal genetic diseases under the guise of a cure for cancer.

Whether we fear AI for its near-term political and social harms or for the longer-term risk of existential catastrophe, we have good reason to take it seriously. It would therefore be reasonable to expect the emergence of shared, concrete goals – greater regulation, transparency and accountability for big tech, for example. But the gulf between the different tribes alarmed by the rise of AI – Silicon Valley libertarians on the one hand, progressives and campaigners for social justice on the other – makes even temporary truces challenging.


It is increasingly clear to me as a researcher in this field that the current wave of generative AI is not just the latest iteration in a cycle of hype and disappointment, but a significant moment in the history of our species. In the last five years alone, generative AI has moved faster than even the most fevered optimists expected, as can readily be seen in the improvements in image models.

It is not hard to see areas in which AI can have a positive impact even in the near term. Healthcare, for example, is an industry facing ever-rising costs and staff shortages, pressures that will only grow as many developed economies contend with rapidly ageing populations and shrinking workforces. By drawing on biometric data gathered from smartphones and wearable devices, AI could in principle deliver profound benefits: earlier detection of cancers, better monitoring of infectious disease, faster drug discovery. Researchers have recently used AI to identify a new antibiotic, abaucin, which could prove effective at killing bacteria resistant to most current drugs. And AI-powered language models could make basic diagnostic medicine more accessible, especially in parts of the world where healthcare is currently unaffordable for many people.

Careful society-wide discussions will be needed about how to protect core values such as patient confidentiality, and how to mitigate the risk of AIs making diagnostic errors. But we must remember that excessive caution, or slowness to adopt these new tools, carries a cost of its own.

AI also has potentially huge applications in education, not just in making it more affordable but in making it more accessible and more equitable. In 1984, the educational psychologist Benjamin Bloom found that students given one-to-one tuition vastly outperformed those taught in large group settings. This widely replicated finding has become known as Bloom’s 2-sigma problem – a problem only insofar as it was assumed that one-to-one tuition could not be provided to every student, merely to those whose parents could afford it. That constraint may cease to apply in future. Enterprising teachers are already finding ways to use generative AI in the classroom, with services such as Math-GPT beginning to approximate the kind of guidance provided by a personal tutor.

There are again costs and harms to be carefully weighed in deploying AI in education, from short-term worries about the misuse of ChatGPT for cheating on coursework to potential longer-term risks, such as the decline of writing skills. But a careful reckoning of these costs and benefits requires us to balance our culturally dominant techno-pessimism with open-minded imagination about the better future that might be open to us.

These are just two examples of fields where even the near-term benefits of artificial intelligence are clear. There are doubtless many further applications, as yet undreamed of. Tools like ChatGPT may seem like relatively mundane clerical assistants for now, but generative AI goes beyond gimmicks: it is a general-purpose technology, and in the coming decade we will find any number of powerful ways to apply it. Whether the technological revolution going on around us will rival the agricultural and industrial revolutions of previous eras in scale and impact remains to be seen – but there can be little doubt that the world a decade from now will be a very different place.

But that is no reason to sink into fatalistic pessimism or techno-determinism – the idea that technology will inevitably assert its influence over human affairs. Collectively, we have the power to decide how we choose to deploy AI and what safeguards to put in place.

Here, it seems to me, there is a vital role to be played by our artists, writers, and academics in energising the collective public imagination not merely with visions of dystopias to be avoided, but with positive visions towards which we can strive. There is no shortage of narratives informing us of the potential dire consequences of AI, from the dystopian near-future visions of Black Mirror to the AI apocalypses of The Terminator and The Matrix. But what might the world look like if we get AI right? That, it seems to me, is the far more important question.

Dr Henry Shevlin is Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also serves as Director of the Kinds of Intelligence Programme.
