Are The Days Of Manned Fighters REALLY Numbered?


Sep 25, 2020 3 min read
COMMENTARY BY
John Venable

Senior Research Fellow, Defense Policy

John “JV” Venable, a 25-year veteran of the U.S. Air Force, is a Senior Research Fellow for Defense Policy at The Heritage Foundation.
An F-16 Fighting Falcon assigned to the 114th Fighter Wing, South Dakota Air National Guard, receives fuel from a KC-10 Extender on July 3, 2020. Photo by: Staff Sgt. Stephanie Serrano, af.mil

Key Takeaways

DARPA just pitted “Banger,” a real F-16 fighter pilot, against an Artificial Intelligence (AI) program which proceeded to beat its human rival.

The exact range, altitude, airspeed, and nose position of the manned fighter are calculated and immediately fed into the AI simulation.

There is no system in the world that can touch a human’s ability to capture and process those tasks.

DARPA just pitted “Banger,” a real F-16 fighter pilot, against an Artificial Intelligence (AI) program which proceeded to beat its human rival in five back-to-back simulated dogfights. This impressive showing may lead many to assume that manned fighter aircraft will soon be a thing of the past. Before jumping on the AI bandwagon, ponder a few fundamental truths.

Several decades ago, an Air Force fighter pilot named John Boyd distilled the art of dogfighting into four steps: Observe your opponent for both obvious and subtle cues, Orient the bandit’s maneuvering in relation to your jet, Decide what to do, and then Act to defeat him. He called the rapidly changing dynamics of fighting a thinking enemy, particularly in a highly maneuverable fighter, the OODA Loop: Observe, Orient, Decide, Act.

The software loaded into computers we now refer to as AI can readily handle the last two steps, but what feeds that “intelligent” system the ever-changing cues for the “observe” and “orient” steps? The only operational fighter with built-in technology that can “see” other aircraft is the F-35, and while its sensors can optically capture an opposing platform in any quadrant, it sees with an acuity of only 20/30. Even then, it cannot discern the type of bandit it is fighting—its configuration, aspect, heading-crossing angle, or the “rate” at which its nose is tracking—unless the bandit is in front of the F-35, where its radar can track the target. And even there, it gets only some of what it needs.

Lacking the ability to independently observe and orient means AI cannot feed itself the inputs required for the programmed elements to kick in and defeat an adversary in the decide and act steps. So, how did the AI simulation gain those details in this fight with Banger?

That information came not from some visual interpretation of the other aircraft on the simulator’s screen, but from “perfect information” supplied by the simulator. The exact range, altitude, airspeed, and nose position of the manned fighter are calculated and immediately fed into the AI simulation. That level of clarity can never be gained in a dynamic, neutral fight—but with it, even humans are hard to beat.

In the early 1990s, I was booked as an adversary for two other F-16 pilots in a two versus one (2-V-1) scenario. Unless you have lobster eyeballs, it’s really hard to keep track of two aircraft that are maneuvering to kill you, so if everything else is equal, the “1” always gets killed. On that particular day, we had problems with one of the radios we would normally use, which forced my adversaries to talk “amongst themselves” on the same radio I was using. To prosecute a 2-V-1 attack successfully, coordination and communication are critical. Every time one fighter would take his nose off of me to get more airspeed and maneuvering space, he would tell the other pilot so he could pitch back in to attack. That day, in the process of telling each other what they were doing, they told me.

Using their verbal cues, I was able to keep track of their maneuvering, anticipate their next move, and sustain an artificially high level of situational awareness. I killed both of them in all five of our engagements that afternoon and didn’t die once. They were really embarrassed after we landed but, as I told them in the debrief, it was their exceptional comm that allowed me to “cheat” and win.

The cues I received during that sortie over Korea pale in comparison to the perfect information the DARPA simulation fed to Banger’s AI opponent in their virtual fights. There was no need for the machine to “look outside” and find Banger, then try to assess how much airspeed he had, when his afterburner was cooking, when he went to idle with his speed brakes deployed, or how much “G” he was pulling. The simulation fed all that info to the AI fighter in real time.

While those might seem like petty details, the observe and orient steps are the heart and soul of dogfighting, the two most critical elements in the OODA sequence. And there is no system in the world that can touch a human’s ability to capture and process those tasks.

Advances in sensors and sensor packages will continue, of course. At some point, they will allow an AI-flown platform to excel at air combat. When that day arrives, the cues those sensors provide will still pale in comparison to the direct feeds DARPA provided Banger’s AI opponent in this simulation.

I’ll bet Banger didn’t hear that in the debrief.

This piece originally appeared in Breaking Defense.