I know, the titles get worse each time.
Today I want to look at drones, on account of the recent PR offensive surrounding Taranis, which was probably best covered over at Think Defence (p.s. many thanks to TD for the 700 additional pageviews his recent promo generated!). This won't be specific to Taranis; it's more a collection of general thoughts.
Again, not so much about Taranis which is clearly a very shiny, very advanced piece of kit. But drones in general seem to have come to the fore over the last ten years or so, with all kinds of ideas being passed around about how they will revolutionise warfare, but with little end product so far.
In "eye in the sky" type roles, Unmanned Aerial Vehicles (UAVs) have certainly found a niche which manned platforms will find difficult to match. The characteristics of UAVs make them ideally suited to long endurance missions, and giving them a bit of extra firepower while they're at it makes a lot of sense.
I struggle however to see how they're going to replace manned aircraft in the more fast and furious roles. A big part of my reasoning is that UAVs derive a lot of their advantage right now from being cheap and cheerful. But the second you start building Taranis-like drones the costs quickly rise, as you shift from basic, easy to manufacture structures to more complicated shapes and materials. The engines go from sedate turboprops, happily spinning along at moderate speeds, to the high end turbines required to achieve the performance needed for modern combat aircraft.
Once you've slapped a radar, comms and computers on it, you've basically reached the cost level of a current manned aircraft minus the cost of the ejection seat. But then as we've seen with the F-35 program, the cost of the aircraft is just one part of it. The testing and development is a whole other ball game.
And whereas in the manned ball game a fella jumps in the plane and takes control, with instantaneous feedback as to how the aircraft is flying and when something might be about to go wrong, the pilot of an experimental drone is always slightly behind the curve due to transmission delays, and can't really get a feel for his new charge while sitting in a trailer back at the air base.
When taken along with the challenges of programming any automated systems, it means the development and testing cycle of future high end unmanned drones is likely to be even more expensive than it is now for manned aircraft. All that can be forgiven though if the aircraft offers something special on the business end.
Trouble is they don't. At least not right now. UAVs naturally gravitate towards one of two control methods: remote human piloting or autonomous operation (with some blurred ground between the two).
Remotely Piloted Aircraft (RPA) suffer from issues related to the time lag of control signals, interference with the control signals and a lack of situational awareness. It's not too hard to imagine a world where enemy manned fighters could end up carrying pods for jamming control signals, such that when they merged with an RPA they could jam the signal during the crucial early stages of what would otherwise be a turning fight, leaving the RPA cruising along while the enemy pulls around hard to get in behind it.
Assuming the jamming didn't work, you're still left with a situation whereby the RPA pilot back at his base has to wait for the feedback from devices such as cameras to reach him, before he then sends out control instructions which also incur a delay in reaching the RPA. In a high stakes dogfight, where time is critical and split second reactions can be the deciding factor, the lag from this back and forth process could prove fatal for the RPA. That's of course if we assume that the pilot of the RPA can even see the danger coming in the first place.
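To put a rough number on that lag, here is a back-of-the-envelope sketch for the common case of an RPA controlled through a geostationary satellite relay. The GEO altitude is a real figure; the assumption that the whole loop is video down, then a command back up, with no processing overhead, is a deliberate simplification (real links add encoding and routing delays on top):

```python
# Illustrative estimate of the control loop lag for a remotely piloted
# aircraft relayed through a geostationary satellite. Processing and
# encoding overheads are ignored, so this is a best case.

C = 299_792.458          # speed of light, km/s
GEO_ALTITUDE = 35_786    # km, geostationary orbit above the equator

def one_way_delay_s(hops: int = 2) -> float:
    """Ground -> satellite -> ground is two hops of ~GEO altitude each."""
    return hops * GEO_ALTITUDE / C

# Full control loop: sensor feed down to the pilot, then a command back
# up to the aircraft -- four satellite hops in total.
loop_delay = 2 * one_way_delay_s()
print(f"Signal transit alone: ~{loop_delay:.2f} s")  # ~0.48 s
```

Half a second of pure transit time, before the pilot has even reacted, is a long time in a turning fight.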
For the autonomous route, you have a dilemma. Artificial Intelligence as it exists now is notoriously unreliable. Indeed, "Intelligence" is perhaps not the best word to describe the current state of the art. Blindly following a series of pre-programmed instructions and responses to external stimuli is a more accurate description, which is where the dilemma comes in.
The more sophisticated the programming, the more it costs to design, develop, write and then test. The less sophisticated the programming, the less capable it is and the more vulnerable it is to manipulation by humans who can figure out the parameters within which it operates.
Even at the more sophisticated levels though there are problems. Chess programs for example have a relatively limited set of calculations to make. The board remains the same size with the same layout for every game. The number of pieces is always the same and the movement options for those pieces never change. The rules of the game always remain the same. And yet even with a constant environment to work with, and despite their opening libraries (called a "book") being written with input from experienced chess Grandmasters, programs struggled for many years to overcome top human players in fair competition.
One of the advantages that human opponents have exploited is the ability to manipulate the computer's programming, sometimes using what the computer perceives to be "stupid" moves, or simply unconventional moves that the programmers might not have anticipated, in order to force the computer into the less populated parts of its database of prepared responses.
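The mechanism behind this is worth spelling out. Classical engines bolt a pre-written opening book onto a brute-force search; an unconventional move simply falls outside the book and drops the engine into raw search, where its play is only as good as its evaluation function. The toy below is a generic minimax search over a hand-built game tree, purely to illustrate the shape of that search, not any real engine's code:

```python
# Toy illustration of the search underpinning classical game engines:
# plain minimax over a small hand-built tree. Leaves hold scores from
# an (assumed) evaluation function; inner lists are choice points.

def minimax(node, maximising):
    if isinstance(node, (int, float)):
        return node  # leaf: score as judged by the evaluation function
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A tiny two-ply tree: our three candidate moves, each met by two replies.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, maximising=True))  # best guaranteed score: 3
```

Everything this search "knows" about the game lives in those leaf scores, which is exactly why a position the programmers never anticipated can leave it playing badly.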
One man - American computer engineer Omar Syed - even invented a completely new game based around chess that was deliberately designed to be difficult for computers to master. The game, Arimaa, has been dominated by human players since its inception. This is largely on account of the huge number of potential opening layouts (there are no fixed starting positions), which makes it difficult for programmers to populate their programs' databases with responses to all possible scenarios.
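The scale of that opening problem is easy to compute. Each Arimaa player arranges 16 pieces (8 rabbits, 2 cats, 2 dogs, 2 horses, 1 camel, 1 elephant) freely on their two home rows, so counting the distinct arrangements is a standard multinomial calculation:

```python
from math import factorial

# Distinct Arimaa setups for one player: 16 pieces on 16 home squares,
# divided by the permutations of identical pieces.
piece_counts = [8, 2, 2, 2, 1, 1]  # rabbits, cats, dogs, horses, camel, elephant

setups = factorial(16)
for n in piece_counts:
    setups //= factorial(n)

print(setups)  # 64,864,800 distinct setups per side
```

Nearly 65 million openings per side, before a single move is played, is a very different proposition from chess's one fixed starting position.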
The parallel with something like air to air combat is the ability of humans to select from a significant variety of opening moves and other tricks, all of which have to be accounted for if an AI is to successfully recognise them and take appropriate action. Any move by a human pilot that the computer does not recognise from its database is liable to cause it serious problems.
This is all before we talk about the medium of feedback. In the case of the chess computers, someone at some point has to tell the computer what move the human opponent has made in order for it to respond accurately. Suppose for a second that a human chess master played a computer chess program, but the program was fed false information about the master's moves. How could it compete when it wasn't aware of the real circumstances of the game?
In the case of an autonomous UAV, it would be utterly reliant on external sensors to engage in combat with a human opponent. Some combination of electro-optical devices, infra-red seekers and/or radar would be needed for the UAV to keep track of the enemy aircraft. That means that a variety of measures, from jamming to chaff and flares, can be employed to confuse the UAV's sensors, and thus its programming.
Even if we take the UAV out of the air to air arena and place it in the ground attack arena, perhaps against a fixed target, we can still see problems. At the minute we have a capability of a very similar nature in cruise missiles like Tomahawk and Storm Shadow. An autonomous UAV would only differ in the sense that once it arrived at the target location it would release its bombs, either leaving them to self-guide with GPS or using a laser to mark the target for them.
So far weapons like Tomahawk have had a good success rate, but even they have their weaknesses. Such automated weapons require some kind of navigational system to guide them, which can either be GPS or some form of terrain following radar. GPS is dicey on account of the recent proliferation of GPS jamming equipment and technical expertise. That leaves terrain following radars as the only viable option.
And in fairness it's not a bad option. Modern AESA radars and optical imagers should be able to work to a resolution that allows guidance by contour matching and/or terrain feature matching, although it does open up a number of potential problems in the form of detection of the radar and the limitation on the number of potential routes.
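To make the contour matching idea concrete, here is a minimal sketch of the TERCOM-style principle: slide a short measured altitude profile along a stored reference strip and pick the offset with the smallest squared error. The terrain data is invented for illustration; real systems work in two dimensions with far larger maps:

```python
# Sketch of terrain contour matching (TERCOM-style navigation): find
# where a short radar-altimeter profile best fits a stored map strip.
# All altitude figures below are made up for the example.

def best_match(reference, measured):
    """Return the offset into `reference` minimising squared error."""
    best_offset, best_err = None, float("inf")
    for off in range(len(reference) - len(measured) + 1):
        err = sum((reference[off + i] - m) ** 2
                  for i, m in enumerate(measured))
        if err < best_err:
            best_offset, best_err = off, err
    return best_offset

reference = [100, 120, 150, 90, 60, 80, 130, 140, 110]  # stored map strip (m)
measured = [92, 61, 79]                                 # altimeter samples (m)
print(best_match(reference, measured))  # offset 3
```

It also shows why route choice is constrained: the terrain under the planned track has to be distinctive enough that one offset clearly beats the others, which is exactly the limitation mentioned above.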
Now assuming that our UAV can find its way to the target alone, do we trust an autonomous system in this day and age to identify the right target and engage it? Essentially that's what we're doing with cruise missiles, but it does pose a question about the ethics of letting the computer do the final targeting, especially if the aircraft is set to launch multiple munitions.
With these problems taken into account, I'm just not convinced that UAVs offer the revolutionary step that some seem to think. I can see something like Taranis being used as one system among many, perhaps limited to the early days of an operation where the risk of a mishap is weighed against the need to penetrate a well organised and potent air defence network. For that purpose the number of UAVs would not need to be huge.
In a sense I'm torn, because we have clearly done some excellent work in the UK in developing UAVs, and there is obvious potential to market these systems abroad to partners and allies. But at the same time I'm very cautious about the long term viability of pouring money into UAVs. I completely accept that if we don't invest and UAVs turn out to be the hot ticket of the future then we'll have missed out on a prime opportunity, but this has to be weighed against the risk of pouring lots of money down what could become a very big drain.