DARPA hosted the AlphaDogfight event from August 18th-20th, pitting AI agents trained through experience-based learning against anonymous instructor combat fighter pilots to assess the feasibility of using machine learning to iterate a “zero experience” pilot into a seasoned combat veteran. The AI algorithms were given no prior experience or foundational knowledge with which to “fly” fighter aircraft in a simulation, and over millions of iterated scenarios were eventually able to learn how to control combat aircraft. While the successful implementation of machine learning algorithms to pilot an aircraft has already been proven, the unique consideration in this scenario was assigning weight to the errors the algorithms would inevitably make, such that future iterations would become more averse to risky maneuvers and favor more docile flight paths.
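The weighting idea above can be sketched as simple reward shaping: penalties are scaled by the severity of each error, so over many training iterations the policy drifts away from high-risk behavior. All names, categories, and weight values below are illustrative assumptions, not Heron Systems’ actual method.

```python
# Hypothetical reward shaping for a dogfight-style learning agent.
# Riskier errors carry heavier penalties, so repeated training
# iterations steer the policy toward safer maneuvers.
# Categories and weights are illustrative assumptions only.
ERROR_WEIGHTS = {
    "missed_shot": 1.0,         # tactical error, low physical risk
    "overspeed": 5.0,           # structural-limit violation
    "stall": 8.0,               # loss of control
    "terrain_proximity": 20.0,  # near-ground excursion
}

def shaped_reward(base_reward, errors):
    """Subtract weighted penalties for each error committed this step."""
    penalty = sum(ERROR_WEIGHTS.get(e, 1.0) for e in errors)
    return base_reward - penalty

# A step involving dangerous flight is penalized far more heavily
# than a purely tactical mistake:
print(shaped_reward(10.0, ["missed_shot"]))                 # 9.0
print(shaped_reward(10.0, ["stall", "terrain_proximity"]))  # -18.0
```

Because the penalty structure, not hand-coded rules, encodes the risk preference, the same framework generalizes to other vehicles by swapping the weight table.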
The estimated experience of the winning algorithm, produced by Heron Systems, equated to ~12 years of human fighter pilot experience, amassed over a much shorter period thanks to the use of multiple GPUs. The company stated that its algorithm had been through over 4 billion simulations to acquire that experience. In the final dogfight, the algorithm won five times to the human pilot’s zero in a nose-to-nose “guns only” simulation.
While aerial mobility aircraft will never perform evasive maneuvers in dogfights or aggressive aerobatics to gain the upper hand on a foe, the same decision-making processes and guidance that allowed Heron to win the DARPA AlphaDogfight competition would also apply to the algorithms of future aerial mobility flight path control services. For instance, assigning more weight to errors that would jeopardize the safety of more persons, not just those flying within an eVTOL, would allow algorithms to iterate toward flight path and decision-making processes that minimize risk and maximize safety. While the dogfight algorithm produced by Heron isn’t necessarily plug and play for aerial mobility, the IP lies within the methods used to establish the framework and ground rules from which the algorithm learns.
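The population-weighted error idea described above can be illustrated as a cost function in which a flight path error is penalized in proportion to everyone exposed to it, both on board and on the ground. This is a minimal sketch under assumed names and values; it is not drawn from Heron’s or any eVTOL operator’s actual algorithm.

```python
# Hypothetical risk-weighted cost for eVTOL flight path selection.
# An error's cost scales with both its severity and the number of
# people exposed to it (passengers plus bystanders below the route).
# All function names and values are illustrative assumptions.

def risk_cost(error_severity, persons_on_board, persons_on_ground):
    """Cost grows with severity and with the total population at risk."""
    exposed = persons_on_board + persons_on_ground
    return error_severity * exposed

def path_cost(segments):
    """Total risk cost of a candidate flight path (list of segment dicts)."""
    return sum(
        risk_cost(s["severity"], s["on_board"], s["on_ground"])
        for s in segments
    )

# With identical on-board risk, a path over a crowded area scores
# worse than one over open terrain:
over_city  = [{"severity": 0.3, "on_board": 4, "on_ground": 200}]
over_field = [{"severity": 0.3, "on_board": 4, "on_ground": 2}]
print(path_cost(over_city) > path_cost(over_field))  # True
```

A learning algorithm minimizing this kind of cost would iterate toward routes and contingency behaviors that protect bystanders as well as passengers.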
Why it’s important: Defense contractors have proven AI’s ability to gain levels of experience comparable to those of lifetime pilots, while allowing enough decision-making by machine learning algorithms along the way to enable more intelligent solutions than merely codifying if-then statements for every possible scenario. This more organic approach toward establishing reasonable frameworks for fully autonomous flight path control will be a key enabling technology for the future success of the aerial mobility industry.