Australia and the United States, as members of the Five Eyes intelligence-sharing alliance, face an urgent challenge: harnessing the benefits of artificial intelligence (AI) to enhance intelligence gathering and analysis, capabilities crucial both for maintaining peace and for waging war.

A recent report by the U.S.-based Special Competitive Studies Project (SCSP) and the Australian Strategic Policy Institute (ASPI) emphasizes that human-machine teaming (HMT) could transform the efficiency, scale, depth, and speed of analytic insights generated by intelligence operations. “Time is of the essence. If the U.S. Intelligence Community and its partners do not begin integrating generative AI tools into their workflow, we will always be vulnerable to our adversaries,” said SCSP President Ylli Bajraktari.

Founded by Eric Schmidt, former CEO of Google, SCSP benefits from the insights of leading thinkers in American defense, including former House Armed Services Committee chair Mac Thornberry and former Deputy Defense Secretary Robert Work. ASPI is a national security think tank in Canberra, largely funded by the Australian and U.S. governments along with other partners.

The report’s authors note that current Large Language Models (LLMs), such as ChatGPT, are attracting significant investment. Some experts predict that artificial general intelligence (AGI), an AI that achieves or surpasses human-level learning, perception, and cognitive flexibility, may emerge by the end of this decade. The public commonly regards AGI as the pinnacle of artificial intelligence.

The authors argue that even today’s narrower AIs will likely exceed existing systems’ capabilities, enabling them to solve complex problems, autonomously collect and sort data, and deliver comprehensive assessments quickly. Nevertheless, the report underscores the necessity of human involvement in AI operations.

“AI human-machine teaming will enable intelligence analysts to concentrate on applying their expertise where it matters most in an increasingly competitive strategic environment,” stated ASPI Executive Director Justin Bassi.

However, the challenges are complex and raise fundamental concerns, including how to verify information and sources and how to assess their reliability, both critical functions within the intelligence community.

“AI’s ability to identify patterns that human analysts cannot manually verify creates a dilemma: whether to deploy AI and risk making poor decisions based on unverifiable analysis or to risk losing critical intelligence opportunities by not using AI,” the report states. Experts have long cautioned that many AI systems function as “black boxes,” where inputs produce outputs without clear explanations of the reasoning behind those outputs.

“This dilemma raises an essential question about how much transparency should be sacrificed for decision advantage. In such contexts, AI may need to be treated as a source of intelligence akin to human informants, with its reliability assessed based on past performance and contextual understanding,” the report outlines. It suggests that humans spot-check randomly selected inputs and use alternate sources to confirm AI-generated insights. This approach would require new frameworks and methodologies to evaluate AI systems as intelligence resources, considering factors such as their track record, the quality of their outputs, and their potential biases or limitations.
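The report stays at the level of principles, but the spot-check-and-track-record idea is concrete enough to sketch. The Python below is a minimal illustration, not anything drawn from the report itself; all names (AISourceRecord, the sampling rate, the simulated corroboration step) are hypothetical. It treats an AI system the way an agency might grade a human source: randomly sample its outputs for independent verification and maintain a running reliability score.

```python
import random


class AISourceRecord:
    """Grades an AI system like a human source: reliability is the
    share of spot-checked outputs that alternate sources corroborate."""

    def __init__(self, name: str, spot_check_rate: float = 0.1):
        self.name = name
        self.spot_check_rate = spot_check_rate  # fraction of outputs routed to a human analyst
        self.confirmed = 0
        self.contradicted = 0

    def needs_spot_check(self) -> bool:
        """Randomly flag an output for independent human verification."""
        return random.random() < self.spot_check_rate

    def record_verification(self, corroborated: bool) -> None:
        """Update the track record once an analyst has checked the
        output against alternate sources."""
        if corroborated:
            self.confirmed += 1
        else:
            self.contradicted += 1

    def reliability(self) -> float | None:
        """Corroboration rate so far, or None before any checks."""
        total = self.confirmed + self.contradicted
        return self.confirmed / total if total else None


if __name__ == "__main__":
    # Hypothetical demo: 200 AI-generated insights, with a stand-in
    # "alternate source" that corroborates 90% of spot-checked items.
    source = AISourceRecord("llm-analyst-v1", spot_check_rate=0.2)
    for _ in range(200):
        if source.needs_spot_check():
            source.record_verification(random.random() < 0.9)
    checks = source.confirmed + source.contradicted
    if source.reliability() is not None:
        print(f"{source.name}: {source.reliability():.0%} corroborated over {checks} spot checks")
```

Even a toy like this makes the report’s point tangible: the score says nothing about how the model reasons, only how often its reporting holds up, which is precisely how the reliability of human informants has long been judged.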

As AI technology continues to evolve, planning for the future capabilities of generative AI (GenAI) is crucial. The U.S. and Australian intelligence communities must develop strategies to adapt to and effectively leverage today’s Large Language Models.

“To stay ahead of the rapid pace of AI advancement, analytic managers should concentrate on what GenAI could deliver in the next three to five years rather than merely focusing on what it can achieve today,” the report’s authors conclude.
