Wednesday, August 29, 2018

Faculty of International Liberal Arts AO Entrance Exam supplement, autonomous weapons



Read the following transcript of a CNN report on autonomous weapons and answer the questions.

Should We Fear Killer Robots? November 14, 2017
https://edition.cnn.com/2017/11/14/opinions/ai-killer-robots-opinion-scharre/index.html

(CNN)Physicist Stephen Hawking recently warned of the dangers of artificial intelligence and "powerful autonomous weapons." Autonomous technology is racing forward, but international discussions on managing the potential risks are already underway.
This week, nations enter the fourth year of international discussions at the United Nations on lethal autonomous weapons, or what some have called "killer robots." The UN talks are focused on future weapons, but simple automated weapons that shoot down incoming missiles have been widely used for decades.

The same computer technology that powers self-driving cars could be used to power intelligent, autonomous weapons.

Recent advances in machine intelligence are enabling more advanced weapons that could hunt for targets on their own. Earlier this year, Russian arms manufacturer Kalashnikov announced it was developing a "fully automated combat module" based on neural networks that could allow a weapon to "identify targets and make decisions."

Whether or not Kalashnikov's claims are true, the underlying technology that will enable self-targeting machines is coming.

For the past several years, a consortium of nongovernmental organizations has called for a ban on lethal autonomous weapons before they can be built. One of their concerns has been that robotic weapons could result in greater civilian casualties. Opponents of a ban have countered that autonomous weapons might be able to target the enemy more precisely and avoid civilians better than humans can, just as self-driving cars might someday make roads safer.

Machine image classifiers, using neural networks, have been able to beat humans at some benchmark image recognition tests. Machines also excel at situations requiring speed and precision.

These advantages suggest that machines might be able to outperform humans in some situations in war, such as quickly determining whether a person is holding a weapon. Machines can also track human body movements and may even be able to catch potentially suspicious activity, such as a person reaching for what could be a concealed weapon, faster and more reliably than a human.

Machine intelligence currently has many weaknesses, however. Neural networks are vulnerable to a form of spoofing attack (sending false data) that can fool the network. Fake "fooling images" can be used to manipulate image-classifying systems into believing that one image is another, and with very high confidence.

Moreover, these fooling images can be secretly embedded inside regular images in a way that is undetectable to humans. Adversaries don't need to know the source code or training data a neural network uses in order to trick the network, making this a troubling vulnerability for real-world applications of these systems.
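
These "fooling images" are what researchers call adversarial examples. As a minimal sketch of the idea, assuming Python with PyTorch/torchvision and a pretrained ResNet-18 classifier (illustrative choices, not details from the report), the well-known Fast Gradient Sign Method nudges each pixel slightly in the direction that most increases the classifier's error. Note that this simple variant assumes access to the network's gradients, whereas the report stresses that real attackers do not even need that much.

import torch
import torchvision.models as models

# Illustrative pretrained classifier; any differentiable image model would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Return `image` perturbed so the classifier is pushed toward error.

    image:   tensor of shape (1, 3, H, W) with values in [0, 1]
    label:   tensor of shape (1,) holding the class the model currently sees
    epsilon: perturbation size; small enough to be nearly invisible
    """
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# A random tensor stands in for a real photo in this sketch; a real attack
# would also apply the model's input normalization.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)   # take the model's current answer as "truth"
x_adv = fgsm_attack(x, y)
print("label before:", y.item(),
      "label after:", model(x_adv).argmax(dim=1).item())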

More generally, machine intelligence today is brittle and lacks the robustness and flexibility of human intelligence. Even some of the most impressive machine learning systems, such as DeepMind's AlphaGo, are only narrowly intelligent. While AlphaGo is far superior to humans at playing the ancient Chinese game Go, its performance reportedly drops off significantly when it plays on a board of a different size from the standard 19x19 board it learned on.

The brittleness of machine intelligence is a problem in war, where "the enemy gets a vote" and can deliberately try to push machines beyond the bounds of their programming. Humans are able to flexibly adapt to novel situations, an important advantage on the battlefield.

Humans are also able to understand the moral consequences of war, which machines cannot even remotely approximate today. Many decisions in war do not have easy answers and require weighing competing values.

As an Army Ranger who fought in Iraq and Afghanistan, I faced these situations myself. Machines cannot weigh the value of a human life. The vice chairman of the US Joint Chiefs of Staff, Gen. Paul Selva, has repeatedly highlighted the importance of maintaining human responsibility over the use of force. In July of this year, he told the Senate Armed Services Committee, "I don't think it's reasonable for us to put robots in charge of whether or not we take a human life."



Questions
1.    What are autonomous weapons?
                                                                                               
                                                                                               

2.    What concerns are raised about them?
                                                                                               
                                                                                                
                                                                                               
                                                                                                
                                                                                               
                                                                                               

3.    Suppose you were an infantry soldier from a developed country. Do you think you would welcome the use of autonomous weapons in operations? Why or why not?
                                                                                               
                                                                                                
                                                                                               
                                                                                               
                                                                                               
                                                                                               

Answer Key
1.    Autonomous weapons are highly developed, AI-equipped robots used to kill enemies on the battlefield. They choose targets automatically and decide on their own how to deal with them.

2.    The main concern is that introducing autonomous weapons will increase the death toll among ordinary citizens. Although it is argued that they would make fewer mistakes and work more efficiently than humans, they have vulnerabilities inherent in machines. They can easily be deceived and disrupted by undetectable "fooling images" planted among regular images to confuse the system. Machines also lack flexibility: a slight change in specifications or environment will lower their performance. Last but not least, they cannot make moral decisions; the preciousness of life is not their concern.

3.    I would have mixed feelings about the introduction of autonomous weapons onto battlefields if I were an infantry soldier from a developed country. If killer robots were deployed on the front lines instead of humans, injuries and deaths among my country's troops would go down, and misidentification and friendly fire might become rarer. Yet replacing human soldiers with machines in the work of shooting down humans in ground warfare would change the nature of battle: it would begin the automation of sweep operations. Such machines would usually be used not against developed countries but against so-called terrorist groups in poor countries, so their use would effectively be a manhunt by machines, and I wonder whether that is morally permissible. Also, there are no established standards for actual operations. If a slight anomaly can make a machine not only useless but also dangerous, the rate of collateral damage will be high.

