The Dangerous Rise of AI Technology in the Military

By Devin Dubon

With the rise of technology and artificial intelligence (AI) in the public and private sectors, concerns have been raised over its potential military uses.

AI has become an accepted part of civilian life, with much of the stock market's trading now handled by automated systems. Military applications were inevitable; in fact, the United States military already uses AI in mainly non-combat roles such as training simulators, intelligence gathering, and planning tools such as DART.

DART (Dynamic Analysis and Replanning Tool) is an artificial intelligence program used to increase the efficiency of military logistics and transportation. It was highly effective, reportedly saving more money than had been spent on all AI research in the previous 30 years.

Research is currently underway to improve the decision-making abilities of AI. Given its established effectiveness, this could lead to AI being implemented in other programs, where mistakes could have deadly consequences.

If AI were put in charge of weapons programs, as many have predicted, then any corruption or glitch could lead to accidental deaths around the world.

AI programs already have a precedent for far-reaching consequences when mistakes are made. In 2010, a glitch in automated stock-trading systems triggered a "flash crash" that briefly erased roughly a trillion dollars in market value. If a similar failure were to affect an AI in charge of weapons, the damage could be catastrophic.

This sentiment is also expressed in a letter sent to the United Nations by the leaders of many technology companies, including Elon Musk. "Lethal autonomous weapons threaten to become the third revolution in warfare," the letter says. "Once this Pandora's box is opened, it will be hard to close. Therefore we implore the High Contracting Parties to find a way to protect us all from these dangers."

Similar letters have been sent before, including one in 2015 with over 3,000 signatories, among them big names such as Stephen Hawking, Steve Wozniak, and Elon Musk. That letter warns of the dangers AI weapons could pose. These weapons, it states, could be used for "assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group."

Not only this, but as the technology advances, attempts could be made to create non-human combatants, or robot warriors. The Defense Advanced Research Projects Agency, or DARPA, is already making strides in this direction.

DARPA is tasked with exploring ambitious, even extreme, ideas to create new technologies for warfare. It has experimented with many different drones and robots intended for combat situations.

One such robot, BigDog, has already been tested with the U.S. Marines. BigDog is equipped with a robotic arm capable of hurling a 50-pound cinder block across a room. Envisioned uses include clearing obstacles in search-and-rescue missions and moving debris off a fallen soldier. Many speculate that robots like these could eventually replace human soldiers in an effort to save lives.

One of the major concerns, however, is a robot's ability to tell allies apart from enemies, a problem that even human soldiers struggle with. Another concern is how a robot would decide how much force is reasonable in a given scenario.

Given these unresolved problems, the use of AI in the military is dangerous. Autonomous control of military programs may be effective for benign support roles, but it is the start of a slippery slope. If AI is eventually put in charge of weapons systems, accidental casualties and destruction could follow.
