As autonomous drones become increasingly vital on the battlefield, the Pentagon faces the challenge of balancing ethical constraints with combat capabilities.
Commentary
How soon will the United States deploy lethal autonomous robots alongside human soldiers? While the exact timeline is uncertain, this reality might be closer than we think.
Technological advances have made this possibility increasingly feasible, but they also raise significant ethical challenges, particularly over the gradual removal of humans from lethal decision-making.
With rapid developments in artificial intelligence (AI), the latest attack and reconnaissance drones can carry out increasingly complex missions with minimal—or even no—human intervention.
Currently, operators in the Ukrainian military and the U.S. Army need specialized training before flying first-person view (FPV) drones and face operational constraints, such as having to wear a virtual reality (VR) headset or specialized immersive goggles. Anduril's AI-driven Bolt-M eliminates the need for complex training, meeting combat requirements while providing more information and functionality than existing drones.
The Bolt-M is built for rapid deployment, emphasizing ease of operation and portability. It offers options like autonomous waypoint navigation, tracking, and engagement. With over 40 minutes of flight time and a control range of about 12 miles, it effectively supports ground combat. It can carry up to a 3-pound payload of munitions, delivering powerful attacks on static or moving ground targets, including light vehicles, infantry, and trenches.
Once the AI identifies a target, the operator can assign a target area to the Bolt-M. The system then tracks and homes in on the target, even if it moves or passes out of the operator's sight. Built-in vision and guidance algorithms allow the attack to be carried out even if the drone loses its connection with the operator.
The Bolt-M also helps operators maintain battlefield awareness, tracking, monitoring, and attacking targets as instructed. For example, a tank fitted with extra camouflage might not be recognized by the onboard computer; in that case, the system relays the imagery back to the operator to make the call. Importantly, these lethal drones can keep a lock on their targets and autonomously complete previously issued orders even if the link to the operator is severed.
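To make that control handoff concrete, the sketch below models, in simplified Python, the kind of decision loop this description implies: uncertain identifications are referred back to the human operator, and a previously authorized engagement continues if the command link drops. This is a minimal illustrative sketch of a generic human-in-the-loop scheme, not Anduril's actual software; every name and threshold (`TrackState`, `link_alive`, `confidence_threshold`, and so on) is an assumption introduced here for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    HOLD = auto()          # keep tracking, take no action
    ASK_OPERATOR = auto()  # identification uncertain; refer back to the human
    ENGAGE = auto()        # engagement already authorized; proceed


@dataclass
class TrackState:
    target_confirmed: bool        # operator has positively identified the target
    engagement_authorized: bool   # operator issued an attack order earlier
    classifier_confidence: float  # onboard vision confidence, 0.0 to 1.0


def next_action(track: TrackState, link_alive: bool,
                confidence_threshold: float = 0.9) -> Decision:
    """Hypothetical human-in-the-loop engagement logic with a link-loss fallback."""
    if not link_alive:
        # Link severed: only carry out an order the operator already issued.
        return Decision.ENGAGE if track.engagement_authorized else Decision.HOLD

    if track.classifier_confidence < confidence_threshold or not track.target_confirmed:
        # Ambiguous case (e.g., a camouflaged vehicle): push the call back to the operator.
        return Decision.ASK_OPERATOR

    return Decision.ENGAGE if track.engagement_authorized else Decision.HOLD


# Camouflaged target, live link -> the drone defers to the operator.
print(next_action(TrackState(False, False, 0.4), link_alive=True))   # Decision.ASK_OPERATOR
# Link lost after authorization -> the drone completes the prior order.
print(next_action(TrackState(True, True, 0.95), link_alive=False))   # Decision.ENGAGE
```

The ethical debate discussed below turns on exactly this branch: whether the link-loss fallback is allowed to end in an engagement without a fresh human decision.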
As drones become more effective on the battlefield, demand for autonomous attack drones is rapidly increasing. For companies like Anduril, achieving autonomous attack capability is no longer a technical issue; the real challenge is balancing ethical constraints with lethal autonomous operations. Industry players aim to make their systems as powerful as possible within the framework of government policies, rules of engagement, regulations, and user requirements.
One key lesson from the Ukrainian battlefield is that conditions change quickly. Different countries, whether allies or adversaries, may hold different ethical standards for developing and using lethal autonomous weapons, and where each draws the line will largely depend on what happens on the battlefield.
The lack of consensus is serious because, while the Pentagon emphasizes AI ethics and the need to ensure a human is “in the loop” for lethal force, there is no guarantee that adversaries will accept similar constraints. This situation brings unprecedented risks for the Pentagon, and it explains why the U.S. military, government, and industry are putting so much effort into optimizing the use of AI, autonomy, and machine learning in operations and weapons development.
Technology alone, stripped of human judgment, carries ethical risks and may prove insufficient for handling the complexities of the battlefield.