by DAVID AXE
The future of aerial warfare was on dramatic display on Feb. 4 at Edwards Air Force Base in California. At around 2:00 PM local time, a 38-foot-long, bat-shaped, jet-powered robotic aircraft lifted off from the runway and climbed to 5,000 feet. The unmanned aerial vehicle (UAV) orbited the airfield for 30 minutes before descending to a flawless, autonomous landing.
It was the first flight for the first X-47B prototype designed and built by Northrop Grumman, and a preview of coming decades during which highly autonomous robotic warplanes will increasingly replace remotely piloted flying robots and traditional, manned planes. The X-47, more than a decade in development, represents the vehicle portion of the Navy’s $1-billion Unmanned Combat Air System Demonstration (UCAS-D) program — essentially, an experiment in flying robots from a carrier deck. An X-47B prototype is slated to go to sea sometime in 2013.
The first program to field an operational, autonomous, pilotless combat aircraft should be the Navy’s Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) program. UCLASS is still just a concept, but the Navy is working toward a 2018 fielding date. Boeing and General Atomics have received UCLASS study contracts, but Northrop is the clear frontrunner thanks to the X-47. That means the X-47 is likely to form the basis of the world’s first true robotic warplane.
Offiziere.ch spoke to Carl Johnson, Northrop Grumman’s vice president of program management, about the X-47 and the implications of warplane autonomy. What follows are excerpts from that revealing conversation.
I’d start by saying that the idea of a [Remotely Piloted Vehicle] is 1990s technology, as opposed to an [Unmanned Aerial System]. An autonomous vehicle, you can load it with a mission plan, but it has the ability to think on-board and dynamically adjust that mission plan for a variety of reasons, whether it be … for a perceived threat or, if it’s a vehicle doing an autonomous landing, should there be an obstruction. It quickly decides and generally the criterion is: how quickly does the vehicle need to react? If it can wait for a human in the loop to make a decision, then it will be designed that way. If it has to react and can’t stand the latency, it’s going to be autonomous and adjust its flight plan.
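The latency criterion Johnson describes — route a decision to an operator only if the vehicle can afford to wait — could be sketched roughly as follows. The function name, parameters, and timing values are illustrative assumptions, not anything from Northrop’s actual design:

```python
# Hypothetical sketch of the latency criterion: if an event's required
# reaction time exceeds the round-trip delay of a human operator, the
# decision can go to the human; otherwise the vehicle reacts on its own.
# All names and numbers here are illustrative.

def decide_handler(required_reaction_s: float, operator_latency_s: float) -> str:
    """Return who should handle an event, given its timing constraints."""
    if required_reaction_s >= operator_latency_s:
        return "human-in-the-loop"  # the vehicle can wait for a decision
    return "autonomous"             # latency too high; vehicle adjusts its own plan

# A sudden obstruction on final approach leaves no time for an operator:
print(decide_handler(required_reaction_s=0.5, operator_latency_s=2.0))
# → autonomous
```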
We are currently under contract for the UCAS-D program. UCAS-D is a demonstration program. We have developed two carrier-relevant unmanned platforms that are autonomous and have … a machine-to-machine interface. They communicate with a carrier over an air-ship interface that is direct communication. There is a man in the loop. He can monitor and override the autonomous systems, but the vehicle comes in and lands on the ship on its own. The ship finds it, [the vehicle] follows directions for getting in on the pattern and lands on the moving deck.
The other element of this demonstration is air-to-air refueling, which requires … the vehicle to assume a position behind the tanker, respond to commands, move in, hook up and disconnect and go about its way. But in order to have a system that will do that, there need to be decision-making tools available to the vehicle. These algorithms help the pilot know when it’s time to go tank. The demonstration itself will demonstrate capabilities for autonomous tanking and landing, but in parallel we must be able to have a man in the loop. Those decision-making tools are available to that human so he knows what the [autonomous] processes are and when he can interject.
Every UAV we build is autonomous, in that they have a “six degrees of freedom” model that knows the flight-control laws for that vehicle and, given two points in space, will find the best route to get to those points in space. … That’s one level of autonomy. The difference between what autonomy means today and in the future is today we load the mission plan … to the vehicle and the vehicle then follows its instructions. What it doesn’t have are decision-making algorithms to allow it to vary from that path.
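The baseline autonomy Johnson describes — given two points in space, find a route between them — could be reduced to a toy sketch. A real six-degrees-of-freedom model would obey the vehicle’s flight-control laws; this illustrative stand-in simply interpolates straight-line waypoints between two 3-D points:

```python
# Illustrative only: a toy stand-in for point-to-point routing. A real
# 6-DOF model constrains the path to what the airframe can fly; here we
# just generate evenly spaced waypoints along the straight line.

def plan_route(start, end, legs=4):
    """Return legs+1 evenly spaced waypoints from start to end, inclusive."""
    return [
        tuple(s + (e - s) * i / legs for s, e in zip(start, end))
        for i in range(legs + 1)
    ]

# From the runway threshold up to a 5,000-foot orbit point:
route = plan_route((0.0, 0.0, 1500.0), (40.0, 20.0, 5000.0))
```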
The way to think of it is that all of it builds on each other as we develop the technology. We’ve matured a lot of flight-control laws. We’ve mated six degrees of freedom models. We’ve matured the architecture to increase reliability. Those are evolutionary kinds of things. The maturing of decision aids so you can move them from the ground to the air vehicle — this [X-47B] will be the first UAS that does that. Essentially, if you look at “thinking” RPVs with computers on the ground shipping instructions to the vehicle … [by contrast] on a fully autonomous UAS, the computer is in the air and your computer is accessing that.
A Global Hawk, for example, if it loses its communications link, will follow its instructions. It will turn around and come home or it will proceed with the original mission plan and come home when it’s done. The idea that if you lose the link, you lose airplane — that was the original reason for developing the autonomy we have today.
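The lost-link behavior Johnson describes — follow pre-loaded instructions, either turning for home immediately or finishing the mission first — could be sketched as below. The policy names and function are illustrative assumptions, not Global Hawk’s actual contingency logic:

```python
# Hedged sketch of lost-link contingency behavior: on losing its comms
# link, the vehicle follows whichever pre-loaded policy it was given.
# Enum values and names are illustrative.

from enum import Enum

class LostLinkPolicy(Enum):
    RETURN_HOME = "return_home"          # turn around and come home now
    COMPLETE_MISSION = "complete_mission"  # fly the plan, then come home

def on_link_loss(policy: LostLinkPolicy, waypoints_remaining: list) -> list:
    """Return the route the vehicle flies after the link drops."""
    if policy is LostLinkPolicy.RETURN_HOME:
        return ["home"]
    return waypoints_remaining + ["home"]

print(on_link_loss(LostLinkPolicy.COMPLETE_MISSION, ["wp3", "wp4"]))
# → ['wp3', 'wp4', 'home']
```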
Regarding the armed part of a mission: those decision aids would today require a human in the loop. On UAVs today, it is a joint decision that’s made when a weapon is going to be released. It’s not up to one individual to make that decision, generally speaking. It won’t be that single UAS that will make that decision. There will be a need for confirmation that the target you’re looking at is the target you want and not something that will create a … bigger problem than you sought to eliminate. Even though it’s possible for a UAS to find a target and identify it and give those coordinates electronically to a weapon, it won’t do that unless it’s told to. The technology is there, but there is still a need for a human in the loop. UAS aren’t going to replace the need for a thinking human being to make decisions that are influenced by experience and a wide range of situational considerations that you just can’t program into a machine.