It’s been a while since I said “Wow!” at a technology product launch. As a jaded veteran of the dotcom boom, I have learned to be sceptical about the idea that a particular technology “changes everything”. But last week, for the first time amid the current hype about artificial intelligence, I had one of those moments.
The product in question is, at first sight, a miniature submarine – about two metres long – produced by a German company called Helsing and set to be made in Britain. The SG-1 Fathom glides unpowered, with no moving parts, and searches for enemy ships and submarines using the same technology as a warship does: “passive sonar”, which listens without emitting signals of its own.
But the system’s real payload is the AI it carries, which Helsing calls Lura. Its makers claim it can detect the tell-tale sound of a Russian sub at volumes 10 times quieter than a human sonar operator can hear, and classify the target 40 times faster. Just as the makers of ChatGPT have “trained” their product on a vast library of existing data, Helsing have trained theirs on a library of acoustic recordings.
Like ChatGPT, once in use, the AI system will train itself on the new inputs it receives. When it recognises something that sounds like an enemy submarine, it surfaces and pings location data to its controllers via satellite.
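For the technically minded, here is a deliberately tiny sketch, in Python, of the general idea: build a library of labelled acoustic “fingerprints”, then match new contacts against it. Helsing has published no details of Lura’s design, so the spectrogram features, the nearest-match rule and every number below are illustrative assumptions, not a description of the real system.

```python
# Toy passive-acoustic classifier. Hypothetical throughout: this is not
# Lura, just the generic "train on a sound library, match new contacts" idea.
import numpy as np
from scipy import signal

def fingerprint(audio: np.ndarray, rate: int = 4000) -> np.ndarray:
    """Average the spectrogram over time, then normalise, so that the
    spectral shape of a contact matters rather than how loud it is."""
    _, _, spec = signal.spectrogram(audio, fs=rate, nperseg=256)
    f = spec.mean(axis=1)
    return f / np.linalg.norm(f)

rng = np.random.default_rng(0)
t = np.arange(40_000)

# The "library": labelled recordings, faked here with synthetic audio.
library = {
    "merchant ship": fingerprint(2.0 * rng.normal(size=t.size)),  # broadband noise
    "submarine": fingerprint(np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)),  # a faint tonal
}

def classify(audio: np.ndarray) -> str:
    """Return the library label whose fingerprint best matches the contact."""
    f = fingerprint(audio)
    return min(library, key=lambda name: np.linalg.norm(library[name] - f))

# A contact with the same tonal signature at a tenth of the volume is still
# matched, because the normalised spectral shape drives the comparison.
quiet_contact = 0.1 * np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)
print(classify(quiet_contact))  # -> "submarine"
```

Normalising the fingerprints makes the shape of the sound, rather than its volume, drive the match: a crude stand-in for the claim of hearing a much quieter version of a known signature.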
Today this job is done by Britain’s fleet of six Type 23 frigates, each with a helicopter that can drop perhaps 30 sonar buoys into the sea to search for enemy vessels.
The price of the SG-1 unit is not public, but it’s designed to be cheap enough to buy (or quite possibly lease) in much larger numbers. Deploy a thousand of these into the waters off Russia’s Arctic naval bases and you could neutralise the military selling point of nuclear-powered submarines: their stealth.
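To put those numbers side by side – they are only the rough figures quoted above, and the sum ignores buoy battery life and drone endurance alike – the back-of-envelope arithmetic runs:

```python
# Back-of-envelope comparison, using only the rough figures in this piece.
frigates = 6
buoys_per_helicopter = 30
drones = 1_000

legacy_sensors = frigates * buoys_per_helicopter  # 180 buoys in the water at once
print(drones / legacy_sensors)                    # ~5.6 times as many ears in the sea
```

Even on these crude numbers, a drone fleet multiplies the number of listening posts several times over.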
But while you’re celebrating this triumph of Anglo-German ingenuity, consider this. If Russia or China were to invent something similar, they could place in jeopardy our own nuclear deterrent – whose submarines regularly leave their base at Faslane to enter a game of cat and mouse with Russian subs trying to detect and follow them.
In short, AI – which has already changed the game of land warfare in Ukraine – could be about to change the much bigger game of nuclear deterrence between major states.
The Royal Navy has not yet decided whether to buy the SG-1: other solutions are on offer. In any event, we are in a maritime AI arms race.
For the Russians, the obvious countermeasure against a barrier of underwater drones would be to blow up everything in the water with a nuclear depth charge. Failing that, they might design their own fleet of mini-subs to kill ours, spoof the sonar, flood the sea with noise, take down our satellites or produce an AI model that can out-think us.
The point is: the earlier we get into the game of defence AI, the more likely it is that we can stay ahead.
That, in turn, demands a mindset change from the UK defence industry and government. In future, what matters most may not be the things we are used to weighing, such as ship design or human skill, but whose sound library is the most detailed, how quickly the AI learns, and how much computing power can be pushed out to the drone rather than kept on a central server.
But the biggest change of all may lie in how such innovations shape geopolitics. Sea power has been key to the success of most great empires, ours included. But though you can assert “sea control” – intercepting maritime cargoes at specific choke points and sinking the enemy’s ships – you can never fully “control” the sea as a domain of warfare in the way an army can control the land.
Nuclear-powered subs became the capital ships of the 21st century because, despite their colossal size, they are as hard to find as a needle in a haystack once they get out into the open ocean.
If you lay a barrier of intelligent, silent sensors – for example in the sea between the Shetland and Faroe Islands – you can make it unacceptably risky for Russia to sail a submarine through it. You can, in short, make naval warfare much less fluid and more like trench warfare: sticky, predictable, observable and costly.
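Why would a barrier make the transit “unacceptably risky”? A toy model shows the shape of the argument. Suppose the gap is 300km wide, each drone can hear a submarine passing within 2km, and each individual drone misses 90 per cent of what passes within range; all three numbers are my illustrative guesses, not anything claimed by Helsing or the Royal Navy.

```python
# Crude one-dimensional barrier model. Every figure is an illustrative
# assumption, not a real number for the SG-1 or the Shetland-Faroe gap.
W = 300.0     # assumed width of the gap, km
N = 1_000     # drones laid across it
r = 2.0       # assumed detection radius per drone, km
p_miss = 0.9  # assume each drone still misses 90% of what passes within range

layers = N * 2 * r / W           # drones, on average, athwart any crossing track
p_undetected = p_miss ** layers  # chance of slipping past every one of them
print(round(layers, 1), round(p_undetected, 2))  # 13.3 layers; ~0.25
```

Even with that deliberately pessimistic per-drone performance, roughly three crossings in four are detected – and no navy plans a deterrent patrol around one-in-four odds of staying hidden.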
There are, of course, ethical challenges with AI. If you were to stick explosives on to the pointy end of Helsing’s mini-sub, and give it permission to target the enemy autonomously, you had better hope its AI does not “hallucinate” in the way that ChatGPT is prone to.
But the bigger problem may be strategic. If, within five years, these things are everywhere, and every nuclear-armed power on earth knows its subs are detectable, how does it react? Put more missiles into land silos? Put tactical nukes on bombers? Or do all the great powers simply accept that their submarines are observable, and make the best of the predictability that might bring to international relations?
I don’t know the answers – but I do know the generic problem we now face. AI is altering the dynamics of human endeavour in every sphere it’s being applied to: from student essay writing to medical diagnosis to anti-submarine warfare.
To manage the AI revolution, the political class has to become far more literate about its potential. There are thousands of civil servants and business leaders who know how to play the current game of defence procurement: slowly, with great risk aversion and with a tendency to buy upgrades of stuff they’ve bought before.
Now they have to get their heads around managing the innovation process for technologies whose future development they cannot know: making faster decisions, accepting greater risks and being prepared to leap into the unknown – none of them qualities valued in Whitehall.
Get it right, and we can stay permanently ahead of states that want to harm us. Get it wrong, and any technological advantage we enjoy today evaporates.