* This is the summary of the Open Discussion. The detailed report is available here.
* The Japanese version is here.

Date: May 24th, 2017 (Wed), 17:50–19:30
Venue: WINC AICHI
Panelists: Yutaka Matsuo (University of Tokyo), Toyoaki Nishida (Kyoto University), Koichi Hori (University of Tokyo), Hideaki Takeda (NII), Takashi Hase (SF Writer), Makoto Shiono (IGPI), Hiromitsu Hattori (Ritsumeikan University), Hiroshi Yamakawa (dwango), Satoshi Kurihara (The University of Electro-Communications), Danit Gal (IEEE, Peking University, Tsinghua University, Tencent)
Moderators: Arisa Ema (The University of Tokyo), Katsue Nagakura (Science Writer)


The Japanese Society for Artificial Intelligence (JSAI) released its “Ethical Guidelines” in February 2017. Many other documents and resources on artificial intelligence and ethics/society have also been published abroad. Hence, the open discussion first reviewed existing discussions on artificial intelligence and ethics/society in Japan and abroad. For the panel session, we invited Danit Gal, the chair of the outreach committee at the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, to discuss and confirm future cooperation with the JSAI.
First, committee member Arisa Ema gave a talk on “AI and ethics” and introduced three categories of “ethics”: “research ethics,” “AI ethics,” and “ethical AI.” Using this distinction, she characterized the documents published by the Japanese Ministry of Internal Affairs and Communications; the Cabinet Office, Government of Japan; the JSAI; the FLI; and the IEEE Global Initiative. Then, the chair of the ethics committee, Yutaka Matsuo, talked about the creation of the guidelines. This was followed by video messages from John C. Havens, the executive director of the IEEE Global Initiative, introducing “Ethically Aligned Design version 1,” and Richard Mallah, director of AI Projects at the Future of Life Institute, introducing the “Asilomar AI Principles.” (Video messages will be linked.)
At the panel, Danit Gal commented that the JSAI “Ethical Guidelines” are unique because they guide researchers not only to communicate with society but also to learn from society. The document also emphasizes ethical behavioral norms for researchers, an essential element for the future development of ethical AI. On the other hand, she questioned the meaning of article nine, “abidance of the ethics guidelines by AI.” While we still do not fully understand human intelligence, we are building AI in our image and then expecting it to behave, and be as accountable, as humans. She also asked: if this guideline recognizes AI as a “member or quasi-member of society,” and the Japanese people see the technology as a future partner, should it consider assigning rights and obligations to AI? The guidelines do not appear to address such questions.
Matsuo replied to this comment by emphasizing that Japanese culture tends to treat AI as a partner because it has long-standing images of humans coexisting with AI and robots, such as Astro Boy and yaoyorozu no kami (myriads of gods and deities). In addition, he explained that article nine aims to prompt various discussions, such as “What does it mean to be a member of society?” Another panelist added that he thinks AI needs to be recognized as a legal person.
In addition, the idea of “restrictions on research based on the Ethical Guidelines” was raised by the public via an online comment form. Panelists replied that the Ethical Guidelines will not restrict research; rather, they want the guidelines to serve as an opportunity for researchers to reflect on their own work.
Next, Makoto Shiono questioned whether the IEEE’s “Ethically Aligned Design” has binding force on researchers. Gal’s answer was “in short, no, but in a longer answer, yes.” The document helps draw attention to key issues and recommendations; however, the IEEE also creates standards to help codify desired technical conduct.
In addition, Shiono asked, as a matter of urgency, how Autonomous Weapon Systems (AWS) and Lethal Autonomous Weapon Systems (LAWS) are discussed abroad. Moreover, he asked how far she thinks engineers should be involved in such discussions. Gal commented that AWS have already been developed and can be used for both defensive and offensive purposes, like every dual-use technology. Therefore, engineers should pursue their research while keeping in mind the possibility that their technologies could be misused to cause harm.
The JSAI “Ethical Guidelines” include an article to prevent misuse, so Gal posed a question in return: “Even if you could develop this kind of technology, should you?” Some panelists commented that having generality means having autonomy, so how to respond technically to the dangerous aspects of autonomy will become a future research issue. In addition, panelists said that, as engineers, they should consider the good uses of the technology. On the other hand, other panelists commented that one cannot always tell what is good; therefore, researchers should imagine a technology’s social impact as far as they can, and systems that could assess possible misuses would be required. Another panelist said that autonomy itself is not a bad thing, and he expects that AI as a partner could guide human beings in a more ethical direction. In response, Gal questioned whether AI would be given basic rights, such as freedom of expression and unsupervised autonomy, if it became our partner. The panelists responded that if AI became a partner of human beings, it would deserve freedom of expression, and that AI would need to be able to communicate with people to fulfill its responsibilities as a member of society.
Lastly, Matsuo concluded that the “Ethical Guidelines” were formulated to consider what technologies can do and what engineers should take into account. He was glad to learn that the significance of the guidelines has been recognized internationally, and the ethics committee would like to move one step further in discussing these issues. Moreover, future collaboration with the IEEE and the FLI was confirmed.