Terminator-style AI weapon ‘hunted down humans without being told’ warns UN report
I have written about AI risk in a previous note. Today's AI performs only specific tasks and its generality is narrow, but when General AI eventually emerges — AI that can substitute for, or exceed, human perception, intelligence, thinking, and judgment — whether that AI will always make decisions that are right for humans remains unknown territory. As AI evolves, however, its risk potential toward humans seems likely to grow considerably. The article below reports that an AI-controlled drone attacked an enemy target without awaiting a human order. I have discussed the risk of General AI, but this is an important issue that needs to be tackled first, as an immediate concern.
The main mission of my note is to cite English articles on the latest scientific, economic, and social issues, edit the English so that it is easier to read, and offer it as a tool for studying English and for gathering up-to-date information. Of course, I also add my own views from time to time.
By MANON DARK / 02:32, Mon, May 31, 2021
An AI drone is thought to have "hunted down" a human target last year without being ordered to by a human controller, according to a report prepared for the United Nations.
The revelation (a surprising disclosure /rèvəléiʃən/) raises concerns over Terminator-style AI weapons, which could kill people in conflict without any human control. The drone was deployed in March last year during the conflict between the Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army.
The report on the incident from the UN Security Council’s Panel of Experts on Libya was obtained by the New Scientist magazine.
・The drone was a Kargu-2 quadcopter created by Turkish military tech company STM.
・The weapon carries an explosive charge, can be aimed at a target, and detonates on impact.
The report, published earlier this year, described how Haftar’s forces were “hunted down and remotely engaged” by the drones, which were operating in a “highly effective” autonomous mode that required no human controller.
The report added: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition (military supplies, ammunition /mjuːníʃən/): in effect, a true ‘fire, forget (i.e., needing no further input from the operator) and find’ capability.”
The report appears to suggest that the drones were targeting humans using their own initiative.
No deaths were confirmed in the report; however, similar weapons have caused “significant casualties” in other situations.
Homeland security specialist Zachary Kallenborn raised concerns over the accuracy of the weapons technology.
Writing in The Bulletin of the Atomic Scientists, he said: “Current machine learning-based systems cannot effectively distinguish a farmer from a soldier.
“Farmers might hold a rifle to defend their land, while soldiers might use a rake (a garden tool) to knock over a gun turret (an armored tower housing machine guns or cannon /tə́ːrət/). … Even adequate classification of a vehicle is difficult.”
Mr Kallenborn explained that, without a human to make a judgement call, the risks are too high.
He added: “Any given autonomous weapon has some chance of messing up, but those mistakes could have a wide range of consequences.
“The highest risk autonomous weapons are those that have a high probability of error and kill a lot of people when they do.”
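To make Kallenborn’s point concrete, here is a minimal, purely illustrative Python sketch of a “human in the loop” safeguard: a classifier’s label leads to escalation only above a confidence threshold, and even then it only escalates to a human operator rather than triggering an attack. The classifier, labels, and threshold are all hypothetical assumptions for illustration and have no connection to the Kargu-2’s actual software.

```python
# Purely illustrative sketch: why an ambiguous classification should
# fall back to a human decision rather than autonomous action.
# The labels, confidence values, and threshold are all hypothetical.

from dataclasses import dataclass

@dataclass
class Classification:
    label: str         # e.g. "combatant" or "civilian" (hypothetical labels)
    confidence: float  # model's probability for that label, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.95  # hypothetical; a real system would need far stronger safeguards

def decide(result: Classification) -> str:
    """Return an action, deferring to a human whenever the model is unsure.

    As Kallenborn notes, a farmer holding a rifle and a soldier holding a
    rake can look alike to a vision model, so an ambiguous score must
    never trigger an engagement on its own.
    """
    if result.label == "combatant" and result.confidence >= CONFIDENCE_THRESHOLD:
        return "flag for human review"  # even high confidence only escalates to a person
    return "stand down"

# Example: the ambiguous farmer-with-a-rifle case from the article.
print(decide(Classification(label="combatant", confidence=0.62)))  # -> stand down
```

The design choice the sketch illustrates is exactly the one the UN report says was absent: every path through the decision function ends at a human judgement call, never at an autonomous attack.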