{"id":615,"date":"2017-06-15T23:57:14","date_gmt":"2017-06-15T14:57:14","guid":{"rendered":"http:\/\/ai-elsi.org\/?p=615"},"modified":"2017-06-15T23:57:14","modified_gmt":"2017-06-15T14:57:14","slug":"%e3%80%90summary-report%e3%80%91open-discussion-the-japanese-society-for-artificial-intelligence-2017524","status":"publish","type":"post","link":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/archives\/615","title":{"rendered":"\u3010Summary Report\u3011Open Discussion: The Japanese Society for Artificial Intelligence (2017\/5\/24)"},"content":{"rendered":"<p>* This is the summary of the Open Discussion. The detailed report is available <a href=\"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/archives\/628\">here<\/a>.<br \/>\n* The Japanese version is <a href=\"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/archives\/581\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/p>\n<table style=\"width: 98%\">\n<tbody>\n<tr>\n<td style=\"width: 146.4px\">Date<\/td>\n<td style=\"width: 660.8px\">May 24th, 2017 (Wed) 17:50\uff5e19:30<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 146.4px\">Venue<\/td>\n<td style=\"width: 660.8px\">WINC AICHI<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 146.4px\">Panelists<\/td>\n<td style=\"width: 660.8px\">Yutaka Matsuo (University of Tokyo), Toyoaki Nishida (Kyoto University), Koichi Hori (University of Tokyo), Hideaki Takeda (NII), Takashi Hase (SF Writer), Makoto Shiono (IGPI), Hiromitsu Hattori (Ritsumeikan University), Hiroshi Yamakawa (dwango), Satoshi Kurihara (The University of Electro-Communications), Danit Gal (IEEE, Peking University, Tsinghua University, Tencent)<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 146.4px\">Moderators<\/td>\n<td style=\"width: 660.8px\">Arisa Ema (The University of Tokyo), Katsue Nagakura (Science Writer)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-589\" 
src=\"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-content\/uploads\/sites\/19\/2017\/06\/IMG_4100.jpg\" alt=\"\" width=\"4032\" height=\"3024\" \/><br \/>\nThe Japanese Society for Artificial Intelligence (JSAI) released its \u201cEthical Guidelines\u201d in February 2017. Many other documents and resources on artificial intelligence and ethics\/society have also been published abroad. Hence, the open discussion began by surveying existing discussions on artificial intelligence and ethics\/society in Japan and abroad. For the panel session, we invited Danit Gal, the chair of the outreach committee at the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, to discuss and confirm future cooperation with the JSAI.<br \/>\nFirst, committee member Arisa Ema gave a talk on \u201cAI and ethics\u201d and introduced three categories of \u201cethics\u201d: \u201cresearch ethics,\u201d \u201cAI ethics\u201d and \u201cethical AI.\u201d Using this distinction, she organized the characteristics of documents published by the Japanese Ministry of Internal Affairs and Communications; the Cabinet Office, Government of Japan; the JSAI; the FLI; and the IEEE Global Initiative. Then, the chair of the ethics committee, Yutaka Matsuo, talked about the creation of the guidelines. This was followed by video messages from John C. Havens, the executive director of the IEEE Global Initiative, introducing \u201cEthically Aligned Design version 1\u201d, and Richard Mallah, director of AI Projects at the Future of Life Institute, introducing the \u201cAsilomar AI Principles\u201d. (The video messages will be linked.)<br \/>\nAt the panel, Danit Gal commented that the JSAI \u201cEthical Guidelines\u201d are unique because they guide researchers not only to communicate with society but also to learn from society. The document also emphasizes ethical behavioral norms for researchers, which is an essential element for the future development of ethical AI. 
On the other hand, she questioned the meaning of article nine, \u201cabidance of the ethics guidelines by AI.\u201d While we still do not fully understand human intelligence, we are building AI in our image and then expecting it to behave and be as accountable as humans. She also asked whether this guideline recognizes AI as a \u201cmember or quasi-member of society\u201d: if the Japanese people see the technology as a future partner, should the guideline consider assigning rights and obligations to AI? It seems that the guideline has not yet addressed such questions.<br \/>\nMatsuo replied by emphasizing that Japanese culture tends to treat AI as a partner because it has long held images of AI and robots coexisting with humans, as in Astro Boy and Yaoyorozu no kami (myriads of gods and deities). In addition, he explained that article nine aims to prompt various discussions, such as \u201cWhat is the meaning of being a member of society?\u201d Another panelist added that he thinks AI should be recognized as a legal persona.<br \/>\nIn addition, the idea of \u201crestrictions on research based on the Ethical Guidelines\u201d was questioned by the public via an online comment form. The panelists replied that the Ethical Guidelines will not pose restrictions on research; rather, they want the guidelines to be used as an opportunity for researchers to reflect on their own work.<br \/>\nNext, Makoto Shiono asked whether the IEEE\u2019s \u201cEthically Aligned Design\u201d has binding force on researchers. Gal\u2019s answer was \u201cin short no, but in a longer answer, yes.\u201d The document helps point attention towards key issues and recommendations; however, the IEEE also creates standards to help codify desired technical conduct.<br \/>\nIn addition, Shiono asked, as a matter of urgency, how Autonomous Weapon Systems (AWS) and Lethal Autonomous Weapon Systems (LAWS) are discussed abroad. Moreover, he asked how far she thinks engineers should be involved in such discussions. 
Gal commented that AWS are already developed and can be used for both defensive and offensive purposes, like every dual-use technology. Therefore, engineers should develop their research while keeping in mind the possibility that their technologies will be misused to cause harm.<br \/>\nSince the JSAI \u201cEthical Guidelines\u201d contain an article on preventing misuse, she posed a question in return: \u201ceven if you could develop this kind of technology, should you?\u201d The panelists commented that having generality means having autonomy, so how to respond technically to the dangerous aspects of autonomy will become a future research issue. In addition, some panelists said that, as engineers, they should consider the good uses of the technology. On the other hand, other panelists commented that one cannot always tell what is good; therefore, researchers should imagine the technology\u2019s social impact as much as they can, and some systems that could assess possible misuses would be required. Another panelist said that autonomy itself is not a bad thing, and he expects that AI as a partner could drive human beings in a more ethical direction. In response, Gal questioned whether AI would be given basic rights, such as freedom of expression and unsupervised autonomy, if it became our partner. The panelists responded that if AI became a partner of human beings, it would deserve freedom of expression, and that AI would need to be able to communicate with people in order to fulfill its responsibilities as a member of society.<br \/>\nLastly, Matsuo reviewed that the \u201cEthical Guidelines\u201d were formulated to consider what technologies can do and what engineers should take into account. He was glad that the significance of the guidelines was recognized internationally, and the ethics committee would like to move one step further in discussing these issues. 
Moreover, future collaboration with the IEEE and FLI was confirmed.<\/p>\n","protected":false},"excerpt":{"rendered":"* This is the summary of the Open Discussion. The detailed report is available here. * The Japanese version is &#8230;","protected":false},"author":17,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[2],"tags":[],"_links":{"self":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/posts\/615"}],"collection":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/users\/17"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/comments?post=615"}],"version-history":[{"count":0,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/posts\/615\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/media?parent=615"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/categories?post=615"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/tags?post=615"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}