Pentagon Considers Ending JEDI, Enabling Bigger Role for AI

The Pentagon may end the JEDI (Joint Enterprise Defense Infrastructure) cloud-computing project, awarded to Microsoft in 2019. Since then, the department has been in litigation with Amazon, which was passed over for the $10 billion contract intended to consolidate the Pentagon’s array of data systems and provide access to real-time information. The Defense Advanced Research Projects Agency (DARPA) is also exploring the use of artificial intelligence in automating military systems, including weapons.

The Wall Street Journal reports that Deputy Defense Secretary Kathleen Hicks said the Pentagon will “have to assess where we are with regard to the ongoing litigation around JEDI and determine what the best path forward is for the department.” On January 28, the Pentagon also reported to Congress that “the prospect of such a lengthy litigation process might bring the future of the JEDI Cloud procurement into question.”

Some legislators, such as Steve Womack (R-Arkansas), prefer a multi-vendor approach that “reduces the risk of legal challenges from excluded companies.” IT Acquisition Advisory Council executive director John Weiler, a longtime JEDI critic, pointed out that “the government could seek to patch together a new cloud program by expanding several existing Defense Department information-technology contracts.”

Amazon’s suit contends that “then-President Donald Trump exerted improper pressure on the Pentagon to keep the contract from going to Amazon because it is led by Jeff Bezos,” whom Trump blamed for “unfavorable coverage of his administration in The Washington Post, which Bezos bought in 2013.”

The Pentagon denied that this played a role. Oracle had earlier sued, claiming that Amazon was being favored by the Pentagon, and last week Senator Mike Lee (R-Utah) and Congressman Ken Buck (R-Colorado) requested a Justice Department investigation into those allegations.

Wired reports that “General John Murray of the U.S. Army Futures Command … last month [said] that swarms of robots will force military planners, policymakers, and society to think about whether a person should make every decision about using lethal force in new autonomous systems.”

“Is it even necessary to have a human in the loop?” he asked.

At MIT, Michael Kanaan, executive director of the Air Force Artificial Intelligence Accelerator, said that “AI should perform more identifying and distinguishing potential targets while humans make high-level decisions.” A report from the National Security Commission on Artificial Intelligence (NSCAI), created by Congress, recommended “that the U.S. resist calls for an international ban on the development of autonomous weapons.”

DARPA program manager Timothy Chung noted that “when faced with attacks on several fronts, human control can sometimes get in the way of a mission, because people are unable to react quickly enough.”

One concern if AI plays a larger role is that the technology “can harbor biases or behave unpredictably.” MIT professor Max Tegmark suggested that “lethal autonomous weapons cheap enough that every terrorist can afford them are not in America’s national security interest.” Rather, he said, AI-based weapons should be “stigmatized and banned like biological weapons.”