It seems the right stuff may no longer be the sole purview of human pilots.
A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene “Geno” Lee, and shot him down every time.
And “Geno” is no slouch. He’s a former Air Force Battle Manager and adversary tactics instructor. He’s controlled or flown in thousands of air-to-air intercepts as mission commander or pilot. In short, the guy knows what he’s doing. Plus he’s been fighting A.I. opponents in flight simulators for decades.
But he says this one is different. “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”
The A.I., dubbed ALPHA, was developed by Psibernetix, a company founded by University of Cincinnati doctoral graduate Nick Ernest, in collaboration with the Air Force Research Laboratory. According to the developers, ALPHA was specifically designed for research purposes in simulated air-combat missions.
The secret to ALPHA’s superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables for each subtask, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment over 250 times faster than its human opponent can blink.
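To make the idea concrete, here is a minimal sketch of the fuzzy-tree pattern: small fuzzy inference units, each looking at only a couple of variables, chained so that a leaf unit's output feeds a higher-level unit. Everything here is an illustrative assumption on my part — the variable names, membership functions, and rules are invented for the example, and in the real system the parameters would be tuned by a genetic algorithm rather than set by hand. This is not ALPHA's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def threat_level(distance_km, closing_speed):
    """Leaf fuzzy unit: fuse two inputs into a threat score in [0, 1].

    Only the variables relevant to *this* subtask are considered,
    which is what keeps each unit in the tree small and fast.
    """
    near = tri(distance_km, -5.0, 0.0, 10.0)      # "bandit is near"
    fast = tri(closing_speed, 0.0, 2.0, 4.0)      # "closing quickly"
    # Rule: threat is high when the bandit is near AND closing fast.
    return min(near, fast)

def aggression(threat, own_energy):
    """Root fuzzy unit: pick a stance from threat and own energy state.

    Returns a value in [0, 1]: 1.0 = fully offensive, 0.0 = fully defensive.
    """
    low_threat = 1.0 - threat
    high_energy = tri(own_energy, 0.0, 1.0, 2.0)  # normalized energy state
    offensive = min(low_threat, high_energy)
    defensive = threat
    # Defuzzify as a weighted average of stance prototypes
    # (offensive -> 1.0, defensive -> 0.0).
    total = offensive + defensive
    return offensive / total if total > 0 else 0.5

# Chaining the units forms the "tree": leaf output feeds the root.
cornered = aggression(threat_level(1.0, 2.0), 0.2)   # near, fast, low energy
dominant = aggression(threat_level(9.0, 0.2), 1.0)   # far, slow, high energy
```

A `cornered` aircraft comes out stance-defensive and a `dominant` one stance-offensive, each decided by evaluating a handful of cheap membership functions rather than searching the full state space — which is where the speed advantage in the passage above comes from.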
The results of the dogfight simulations are published in the Journal of
Regular readers know that I’ve seldom met a link I wouldn’t follow, and the report in the Journal is an eye-opener. The Introduction alone is worth reading; from there you can decide whether to continue.
…given an average human visual reaction time of 0.15 to 0.30 seconds, and an even longer time to think of optimal plans and coordinate them with friendly forces, there is a huge window of improvement that an Artificial Intelligence (AI) can capitalize upon. While many proponents for an increase in autonomous capabilities herald the ability to design aircraft that can perform extremely high-g maneuvers as well as the benefit of reducing risk to our pilots, this white paper will primarily focus on the increase in capabilities of real-time decision making.
The ability to have extreme performance and computational efficiency as well as to be robust to uncertainties and randomness, adaptable to changing scenarios, verified and validated to follow safety specifications and operating doctrines via formal methods, and easily designed and implemented are just some of the strengths that this type of control brings.
Seems to me that if this can handle hostile environments and multiple threats, civilian sense-and-avoid scenarios should be a piece of cake.
UPDATE: Here is an excellent follow-on story from a defense perspective.