After the initial worldwide delight over the advent of ChatGPT and AI image generators, government officials have begun worrying about the darker ways they could be used. On Tuesday, the Pentagon began meetings with tech industry leaders to accelerate the discovery and adoption of the most useful military applications.

The consensus: Emerging artificial intelligence technology could be a game changer for the military, but it needs intensive testing to ensure it works reliably and that there aren't vulnerabilities that could be exploited by adversaries.

Craig Martell, head of the Pentagon's Chief Digital and Artificial Intelligence Office, or CDAO, told a packed ballroom at the Washington Hilton that his team was trying to balance speed with caution in implementing cutting-edge AI technologies, as he opened a four-day symposium on the topic.

"Everybody wants to be data-driven," Martell said. "Everybody wants it so badly that they are willing to believe in magic."

The ability of large language models, or LLMs, such as ChatGPT to review gargantuan troves of information within seconds and crystallize it into a few key points suggests alluring possibilities for militaries and intelligence agencies, which have been grappling with how to sift through the ever-growing oceans of raw intelligence available in the digital age.

"The flow of information into an individual, especially in high-activity environments, is huge," U.S. Navy Capt. M. Xavier Lugo, mission commander of the CDAO's recently formed generative AI task force, said at the symposium. "Having reliable summarization techniques that can help us manage that information is crucial."