Artificial Intelligence (AI) Uses in the US Defense Department
January 3, 2022


AI has become a catch-all term for various computer science disciplines, applications, and use cases. It is widely understood as a technology that enables machines to execute tasks intelligently, and it frequently describes both a future state and a current reality. Concrete advances in artificial intelligence have resulted from the increased availability of data and computing power, as well as advances in machine learning (ML) and electronics miniaturization. AI is becoming increasingly profitable in commercial sectors such as banking and retail, and it will most likely be used in the command and control of military operations, maintenance and logistics, the development of new weaponry, and force training and sustainment.

AI and machine learning also enable other emerging security technologies such as hypersonic missile defense, network and communication management and resiliency, the Internet of Things, and fifth-generation wireless.


Inadequacies in technology, workforce, computing infrastructure, data, and policy impede the US Department of Defense's (DOD) ability to develop, acquire, and deploy AI capabilities critical to national security. To address these issues, the department should modernize software procurement procedures, reform hiring authorities, reduce security clearance processing time, actively invest in areas that the private sector overlooks, such as machine learning system testing and evaluation, and be ready to show internal and external audiences a return on investment in AI infrastructure development.

Background

The US DOD has a long history of working with AI, having invested in AI research and development (R&D) and used rule-based and expert systems, once thought to be cutting-edge AI, for decades. Project Maven, a program that combines machine learning and computer vision to enhance video analysis in intelligence, surveillance, and reconnaissance missions, is an example of AI adoption in data-rich areas of the DOD with strict analytic needs or repeated tasking. Unmanned systems and robotics, electronic warfare and electromagnetic spectrum management, logistics and maintenance, command and control, humanitarian assistance and disaster relief, and cyber operations are all areas where AI is being developed. The Department of Defense concentrates on putting AI capabilities into action by combining machine learning with computer vision, robotics, natural language processing, and optimization techniques.

Difficulties Throughout the AI Ecosystem

The Department of Defense's artificial intelligence ecosystem, a complex network of technology, people, computing infrastructure, data, and policy, is still in its early stages. Significant challenges to the department's efforts include a reduction in overall federal R&D funding, the changing nature of work in an increasingly digital economy, a skills shortage in computer science and other STEM fields, and rapid innovation in the commercial sector that outpaces the DOD. Despite these shortcomings, the DOD can address several issues to operationalize AI for national security.


1. Technology

While AI continues to produce impressive results in both public and private applications, it is still a developing technology that is generally only useful in the situations for which it has been designed. Datasets, training methods, and algorithms developed for one purpose are rarely transferable to another, and a misunderstanding of these technical limitations may lead to an overreliance on AI, exacerbating the risk and consequences of misuse.
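
As a small illustration of that non-transferability, the sketch below compares a classifier's accuracy on data like its training set against data from a different setting. It assumes a scikit-learn-style model; the dataset names and the comparison itself are illustrative assumptions, not values from any DOD evaluation.

```python
# Minimal domain-shift check, assuming a scikit-learn-style classifier and
# two labeled test sets; all names here are hypothetical placeholders.
from sklearn.metrics import accuracy_score

def transfer_gap(model, X_source_test, y_source_test, X_target, y_target):
    """Compare accuracy on data resembling the training set against accuracy
    on data from a setting the model was never designed for."""
    in_domain = accuracy_score(y_source_test, model.predict(X_source_test))
    out_of_domain = accuracy_score(y_target, model.predict(X_target))
    return in_domain, out_of_domain

# A model that scores well in its original setting may drop sharply on the
# shifted data, signaling it should not be reused as-is for a new mission.
```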


Because the DOD lacks a system for testing and evaluating AI and ML security, products are more easily exploitable. For example, researchers discovered that placing a sticker over a stop sign can easily fool computer vision systems for autonomous vehicles, causing the car to mistake stop signs for speed limit signs. Without its own testing system for these issues, the DOD exposes itself to potentially catastrophic accidents. At the very least, it will be impossible to accurately evaluate the quality and safety of AI products obtained from vendors.
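
To make the testing-and-evaluation gap concrete, here is a minimal sketch of one common robustness check, a fast gradient sign method (FGSM) perturbation against an image classifier. It assumes a PyTorch model and data loader with inputs scaled to [0, 1]; the epsilon value and function names are illustrative, not part of any DOD test procedure.

```python
# Minimal FGSM robustness check, assuming a PyTorch image classifier whose
# inputs are float tensors in [0, 1]; epsilon is an illustrative value.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Nudge each pixel in the direction that increases the loss,
    producing adversarially perturbed copies of the input images."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

def robustness_rate(model, loader, epsilon=0.03):
    """Fraction of correctly classified inputs whose predictions survive FGSM."""
    model.eval()
    survived, correct_total = 0, 0
    for images, labels in loader:
        clean_preds = model(images).argmax(dim=1)
        correct = clean_preds == labels
        if correct.sum().item() == 0:
            continue
        adv_preds = model(fgsm_perturb(model, images, labels, epsilon)).argmax(dim=1)
        survived += (correct & (adv_preds == labels)).sum().item()
        correct_total += correct.sum().item()
    return survived / max(correct_total, 1)
```

A low survival rate on checks like this is exactly the kind of finding a dedicated test-and-evaluation system would surface before a product is fielded.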

Furthermore, the DOD acquisitions process is not designed to accommodate AI's high level of experimentation and modification. Instead, it assumes that once purchased, products are ready to use. As the stop sign example above shows, this is not true for AI, which must be continually improved as new data becomes available and new vulnerabilities are discovered.
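
As a sketch of what that continuous improvement looks like in practice, the loop below periodically folds in new data, retrains, and promotes the update only if it still clears an evaluation bar. The fetch_new_samples, retrain, and evaluate callables, the accuracy floor, and the daily interval are all hypothetical placeholders, not features of any DOD program.

```python
# Minimal sketch of a continuous-improvement loop for a fielded ML model.
# fetch_new_samples, retrain, and evaluate stand in for whatever data
# pipeline, training routine, and test harness a program actually uses.
import time

ACCURACY_FLOOR = 0.90               # assumed minimum acceptable accuracy
RETRAIN_INTERVAL_SECONDS = 86_400   # re-check once a day (illustrative)

def sustainment_loop(model, fetch_new_samples, retrain, evaluate):
    """Periodically fold in new data, retrain, and only deploy the update
    if it still clears the evaluation bar."""
    while True:
        new_data = fetch_new_samples()        # freshly collected, labeled data
        candidate = retrain(model, new_data)  # produce an updated model
        score = evaluate(candidate)           # score on a held-out test set
        if score >= ACCURACY_FLOOR:
            model = candidate                 # promote the improved model
        # otherwise keep the currently fielded model and investigate
        time.sleep(RETRAIN_INTERVAL_SECONDS)
```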


2. People

The Department of Defense lacks a workforce with foundational AI literacy, limiting its ability to acquire and deploy AI successfully. Although the department acknowledges this shortcoming, its efforts to recruit AI-savvy talent face obstacles. First, it is unclear where these employees would work within the components (the agencies, organizations, and military services that fall under the DOD umbrella). Second, the managers responsible for finding competent technical talent usually lack the fundamental knowledge needed to assess candidates' capabilities. Third, some members of the defense community oppose the cultural changes that may be necessary to build a technical workforce, and their arguments against bringing new skill sets into the workforce are often overblown and based on misconceptions about STEM experts.


3. Computing Infrastructure

Because US DOD processes do not incentivize leaders to update and modernize IT equipment, operating systems, computing power, and software packages at the pace that current technological evolution demands, the DOD struggles to field up-to-date software. DOD employees do not have easy access to system permissions. The Department of Defense also struggles to integrate AI with legacy systems that are incompatible with modern computing capabilities. Legacy systems include obsolete hardware, such as floppy disks, and outdated software that has not been updated in decades. The legacy systems the department has decided to keep will not be modernized overnight and will not be phased out anytime soon.


4. Data

Another barrier to successful AI adoption is the DOD's data-scarce environment. Often, the data required for AI systems is simply not collected. The data that does exist is frequently "dirty," meaning it is siloed, flawed, and unstructured, which renders it largely unusable for machine-learning applications.
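
As a small illustration of what "dirty" means in practice, the sketch below applies a few basic hygiene steps to a tabular dataset before it could feed a training pipeline. It assumes pandas, and the column names (label, timestamp, platform) are hypothetical examples, not a real DOD schema.

```python
# Illustrative data-hygiene pass over a pandas DataFrame; column names are
# hypothetical stand-ins for the fields a training pipeline might require.
import pandas as pd

def clean_for_training(df: pd.DataFrame) -> pd.DataFrame:
    """Apply minimal cleaning so records can feed a machine-learning pipeline."""
    df = df.drop_duplicates()
    # Rows missing the label or timestamp cannot be used for supervised training.
    df = df.dropna(subset=["label", "timestamp"])
    # Normalize inconsistent categorical spellings, a common source of "dirt".
    df["platform"] = df["platform"].str.strip().str.lower()
    # Coerce timestamps; unparseable values become NaT and are then dropped.
    df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
    return df.dropna(subset=["timestamp"])
```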


5. Policies

Under current policy, the Department of Defense has fallen behind in communicating with the public about its use of AI. This is partly due to the general public's limited understanding of AI and to unclear technical terminology. The terms artificial intelligence, autonomy, autonomous, and automation are frequently used interchangeably in discussions about AI. These concepts are distinct but overlapping, which blurs the distinctions between systems. Is a suicide drone, for example, meaningfully different from a loitering cruise missile? Do rules-based systems from decades ago qualify as AI when compared with machine learning-enabled systems? Furthermore, because of the variety of terms included under the AI umbrella, the public is unsure how many AI projects exist, let alone the dollar value of those projects.