The U.S. military may still be working out rules and concepts for using armed unmanned vehicles, but a variety of them were already on sale at the Association of the United States Army exhibit this week.
For example, the German company Rheinmetall debuted the Mission Master CXT, a medium-sized unmanned wheeled vehicle that can be outfitted with remote-controlled guns or launchers — or set to autonomously find and destroy targets.
“The limitations of the weapon system are a customer decision, not what we deliver,” said Alan Tremblay, the company’s vice president of business development and innovation. If the customer says, “‘I’m comfortable with the rules of engagement, the law of armed conflict, and the Geneva Convention’ to make this thing find a target and engage a target — it can do it.”
More conventionally, Tremblay said, “You can bring your network on a tablet and do remote interaction, but you’re not limited by the level of autonomy.”
The company has sold about thirty unmanned trucks over the past four years, but expects the CXT, whose hybrid diesel-electric powertrain can haul a metric ton, to become the flagship.
“There are no limits,” he said. “It all depends on the tactical requirements of the customer, in order to do the job.”
So far, Tremblay said, Western countries in particular have been somewhat reluctant to buy and deploy fully autonomous mobile weapons.
“Everyone talks about keeping a man in the loop. That’s what they want today. What do they want in two or three years?” he said.
That’s why Rheinmetall builds its unmanned ground vehicles so they can be modified as customers’ wishes change, Tremblay said.
The US Army’s Robotic Combat Vehicle program could become one of the biggest customers for armed ground vehicles, but it still has a lot of work to do before it starts placing big orders. In theory, Rheinmetall’s approach should align well with the military’s: the Army’s unmanned-vehicle program managers stress that they need vehicles with an open architecture that can accept upgrades as technology improves — particularly fast-moving AI tools.
The service wants to “separate hardware from software development,” which will let the Army build the brains of future robotic combat vehicles separately from the platform, said Col. Jeffrey Gorand, the Army’s program manager for Maneuver Combat Systems.
The Army is best positioned to do that software development because of the central role soldiers will play in shaping future ground combat robots.
Brig. Gen. Jeffrey Norman, director of the Army’s Next-Generation Combat Vehicle Cross-Functional Team, described recent trials at Fort Hood, Texas.
“We’ve learned a lot from soldiers about the capabilities they would like to have in robotic combat vehicles, the things they want them to do with autonomy, and then other things they’d rather do on their own that might not need AI,” Norman said. “Soldiers are very excited about the ability of [robotic combat vehicles] to advance ahead to help discover enemy vehicles… to discover opponents and discover anomalies in the environment. Soldiers still want to be in the loop to decide and assess engaging those targets for those anomalies.”
That follows other experiments in which soldiers tested how well robots could fire with human operators in the background, signing off on decisions.
In June 2021, the Army began a series of live-fire trials with an experimental ground robot named Project Origin. In a video produced that month, Project Origin director Todd Willert described two versions of the experiments. In the first, the robot fired but the human operator did the targeting, using inputs from the robot and from a drone.
The second version of the experiment introduced AI targeting software via the drone platform. Willert said the software “automatically determines where that target is. Now we’re just going to make a scenario to basically say ‘it’s too far away,’ but it was amazing. I mean, we were able to get rounds straight on the target within eight rounds, and then provide instant suppression on that enemy. They wouldn’t even know where the rounds came from.”
The US military has a longstanding doctrine that humans should be “in the loop” on targeting decisions. But this policy allows exceptions, making it more of a preference than a fixed rule. The Pentagon has also published a long list of ethical principles to guide not only decisions about what robots can shoot, but also how to design and test them to ensure they behave as expected. Other types of autonomy, such as self-driving mobility, are certainly permissible.
In July 2021, General John Murray, then head of Army Futures Command, which leads the development of future robotic combat vehicles, recalled his tank-training days. He told NPR’s On Point that what counts as stellar human performance wouldn’t pass muster for a machine.
“They’d say you had to have a 90% success rate on the flashcards, which is pretty good, to qualify to sit in that gunner’s seat. But with the right training data and the right training, I mean, I can’t imagine not being able to have a system, an algorithm, able to do a better than 90% job of saying ‘that’s the kind of vehicle you’re looking at,’ and then letting that human make the decision about whether the machine is right or not, and pull the trigger.”
That says a lot about the future role of human operators on the robotic battlefield — and how human constraints may one day become a bigger factor than AI constraints.
Elizabeth Howe contributed to this post.