"가공할 드론, 군사로봇은 해커들의 표적이다" Armed drones and military robots...Experts fear we are being lulled into a false sense...(VIDEO)
Armed drones and military robots have 'limitless potential for disaster': Experts fear we are being lulled into a false sense of security by autonomous machines
- Autonomous weapons are increasingly prevalent in modern warfare
- But security experts warn of a two-pronged threat of serious disaster
- Limited subjective decision-making means humans must be kept in the frame
- And the threat of cyber attacks from hackers will continue to loom large
Ki Chul Hwang, Conpaper Editor
Handing over more of the decision-making to machines may ease the human burden in warfare, but those in charge should be wary of being lulled into a false sense of security, warn experts.
They caution that autonomous machines - such as drones or advanced guided missile systems - might not only make the wrong decisions, they could also be used against us by hackers.
Failing to address either of these aspects, they said, could generate an 'almost limitless' potential for disaster.
The stark warnings come in the wake of a recent report from the Center for a New American Security (CNAS) that raised concerns about the decision-making ability of autonomous weapons systems, including drones.
The report, titled Autonomous Weapons and Operational Risk and published earlier this week, was produced by former Pentagon official Paul Scharre.
Now leading the CNAS's programme on future warfare, Scharre questioned whether the automation of weapons will lead to 'robutopia or robocalypse.'
The report highlights that while no state has officially confirmed plans to build fully autonomous weapons, 'few countries have renounced them either.'

Autonomous weapons are in danger of going rogue, warn experts. Hollywood has long warned of the threat of AI in weapons, such as in the movie Robocop (still pictured), where a droid fails to distinguish civilians from targets
Hollywood has long laid out the rocky road to autonomous weapons, with malfunctioning droids and robots a sci-fi staple.
A classic example comes from the 1987 movie Robocop, in which a security droid fails to distinguish targets from civilians, with fatal - albeit fictional - results.
Dr Sarah Kreps, an Associate Professor in Cornell University's Department of Government and an expert in drone warfare, cautions that following the road to autonomous weapons will lead to two main problems: a lack of subjective decision-making - which humans use to tell friend from foe - and hacking.
Explaining the limitations of machine intelligence in recognising targets, the security expert highlights the need to keep humans in the frame.
The inherent confusion of a war zone can make it difficult to pick out those intent on doing harm from those caught in the crossfire.
'You can't put subjective decisions about who's a combatant or civilian into an algorithm. This has implications for targeting decisions,' said Dr Kreps.
'A human, or rather many humans, should be in the loop to analyse individuals' behaviours to see whether they are directly and actively involved in combat.
'Enemy status is often a subjective judgment and something that cannot easily be programmed into an autonomous weapon. We should not be lulled into thinking that technology can make these decisions easier.'

Experts warn machines lack the subjective decision-making which humans use to tell friend from foe. If machines pick the targets for missile strikes (stock image), can we be sure they are making the right choice?
But beyond the lack of human judgement, autonomy brings the added threat of cyber-attacks. If the security systems safeguarding the autonomous technology can be overridden by hackers, it could cause havoc on the battlefield.
'There are benign cases of interruptions, like a computer bug, but also less benign cases, like hacking,' she explained.
'If groups can hack into the Pentagon's system of security clearances, they can almost certainly hack into the system that controls autonomous weapons, in which case the potential for disaster is almost limitless.'
The CNAS report states that, while difficult for security reasons, there is a need for greater transparency from countries on how they intend to approach autonomous weapons.
Scharre wrote: 'Few states have issued clear national policies on the use of autonomy in weapons. Given the potential for dangerous interactions between autonomous systems, a common set of international expectations is critical.'