{"id":1008963,"date":"2020-01-06T10:55:31","date_gmt":"2020-01-06T01:55:31","guid":{"rendered":"http:\/\/www.ai-gakkai.or.jp\/rebuild2021en\/?p=1008963"},"modified":"2021-05-18T16:39:45","modified_gmt":"2021-05-18T07:39:45","slug":"vol35_no1","status":"publish","type":"page","link":"https:\/\/www.ai-gakkai.or.jp\/en\/published_books\/journals_of_jsai\/past_journals\/in2020\/vol35_no1\/","title":{"rendered":"[Journal]Artificial Intelligence Vol. 35 No.1 (Jan. 2020)"},"content":{"rendered":"<p><strong>Commentary<\/strong><br \/>\nAI for Everyone \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Ken Ueno 1<\/p>\n<p><strong>Special Issue:\u201c The Next-Generation AI That Enables Mutual Understanding with Humans, Part 2: Robotics\u201d<\/strong><br \/>\nEditor\u2019s Introduction to\u201c The Next-Generation AI That Enables Mutual Understanding with Humans, Part 2: Robotics\u201d<br \/>\n\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Tetsuya Ogata 2<br \/>\nThe Project for the Next Generation Artificial Intelligence Technologies and the AI Research Center\uff08AIRC\uff09<br \/>\n\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Junichi Tsujii 4<br \/>\nThe Concepts of the Motion Learning of Robots with Deep Predictive Learning \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Tetsuya Ogata 12<br 
\/>\n3D Object Recognition Technologies for Robot Manipulation<br \/>\n\u3000\u2500 For Full-automatic Tea-serving Robot \u2500 \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Manabu Hashimoto 18<br \/>\nParts Picking by Robot Learning \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Yukiyasu Domae and Kensuke Harada 25<br \/>\nThe AI in Teachingless Robotic Assembly \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Weiwei Wan and Kensuke Harada 30<br \/>\nMotion Planning of Robot Based on Learning Human Motion \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Tokuo Tsuji 34<br \/>\nIntelligent Systems and Action Learning for Deformable Object Manipulation \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Kimitoshi Yamazaki 40<br \/>\nDeep Reinforcement Learning with Smooth Policy Update and Its Application to Robotic Cloth Manipulation<br \/>\n\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Takamitsu Matsubara and Yoshihisa Tsurumine 47<br \/>\nAutonomous Mobile Robot That Understands Human and Its Environment \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026Yoko 
Sasaki and Shun Niijima 54<br \/>\nVariantome Driven Antibody\/Peptide Design via Artificial Intelligence and<br \/>\n\u3000Distributed Cooperative Automatic Experiment Devices<br \/>\n\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Yutaro Kyono, Shohei Suzuki, Masako Yamazaki, Satoshi Tamaki, Shoji Ihara, Hiroki Akiba and Ryu Ogawa 61<br \/>\nAutomation of Cell Culturing and Cell Differentiation Estimation using Bright Field Images<br \/>\n\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Toutai Mitsuyama, Shungo Adachi, Kaoru Katoh, Kazunobu Aoyama, Masayuki Ii and Toru Natsume 64<br \/>\nA Cloud-based VR Platform Towards Efficient Learning for Interactive Robots \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026Tetsunari Inamura and Yoshiaki Mizuchi 72<\/p>\n<p><strong>Special Issue:\u201cNew Trends of Researches for Doctoral Theses\u201d<\/strong><br \/>\nEditors\u2019 Introduction to \u201cNew Trends of Researches for Doctoral Theses\u201d \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026Yoshitaka Yamamoto and Yasuaki Kobayashi 79<br \/>\n\u3000Fundamental of AI\u300080 \uff0f Machine Learning and Data Mining\u300080 \uff0f Knowledge Use and Sharing\u300082 \uff0f Web Intelligence\u300083 \uff0f<br \/>\n\u3000Agent\u300083 \uff0f Soft Computing\u300084 \uff0f Natural Language Processing\u300085 \uff0f\u3000Image and Speech Processing\u300085 \uff0f<br \/>\n\u3000Human Interface and Computer-supported System\u300086 \uff0f AI Application\u300086<\/p>\n<p><strong>Special Issue:\u201cIntelligent Dialogue Systems\u201d<\/strong><br \/>\nEditors\u2019 Introduction to \u201cIntelligent Dialogue Systems\u201d 
\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Shogo Okada and Shinya Fujie 88<\/p>\n<p><strong>Lecture Series:\u201cThe Current State of Artificial Intelligence\u201d\uff086\uff09<\/strong><br \/>\nNatural Language Processing: Language Resources and Semantic Processing \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Yuichiroh Matsubayashi and Masayuki Asahara 89<\/p>\n<p><strong>Series:\u201cAI for the Liberal Arts\u201d\uff084\uff09<\/strong><br \/>\nAccelerating Research and Development by AI: Materials Informatics \u2026\u2026\u2026\u2026\u2026\u2026 Sho Sakurai, Dohjin Miyamoto, Koji Morikawa and Mikiya Fujii 106<\/p>\n<p><strong>Global Eye\uff0849\uff09<\/strong><br \/>\nResearch in the University of Southern California Institute for Creative Technologies and Life in the USA \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Kazunori Terada 117<\/p>\n<p><strong>Conference Reports<\/strong><br \/>\nAI ELSI Award Ceremony and Invited Talk \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 The Ethics Committee, Japanese Society of Artificial Intelligence 120<br \/>\nThe 28th International Joint Conference on Artificial Intelligence\uff08IJCAI 2019\uff09 \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Takayuki Ito 122<br \/>\nThe 13th ACM Conference on Recommender Systems\uff08RecSys 2019\uff09 \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Toshihiro Kamishima 124<\/p>\n<p><strong>Book Review<\/strong><br \/>\nMedical AI and Deep Learning Series<br \/>\n\u3000Hiroshi Fujita Supervision &#038; Edit: Introduction to Deep Learning for Medical Imaging, pp. 
224, Ohmsha Ltd.\uff082019\uff09;<br \/>\n\u3000Hiroshi Fujita Supervision, Daisuke Fukuoka Edit: Standard Deep Learning for Medical Imaging \u201cIntroduction\u201d, pp. 176, Ohmsha Ltd.\uff082019\uff09;<br \/>\n\u3000Hiroshi Fujita Supervision, Takeshi Hara Edit: Standard Deep Learning for Medical Imaging \u201cPractice\u201d, pp. 220, Ohmsha Ltd.\uff082019\uff09<br \/>\n\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Tadanobu Furukawa 127<\/p>\n<p><strong>Article<\/strong><br \/>\nCover Comment: Natural User Interface by Artificial Intelligence \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026Shinya Kitaoka 129<\/p>\n<p><strong>Erratum<\/strong> \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 131<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Commentary AI for Everyone \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Ken Ueno 1 Special Issue:\u201cThe Next-Generation AI That Enables Mutual Understanding with Humans, Part 2: Robotics\u201d Editor\u2019s Introduction to \u201cThe 
Next-Generation AI That Enables Mutual Understanding with Humans, Part 2: Robotics\u201d \u3000\u3000\u3000\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Tetsuya Ogata 2 The Project for the Next Generation Artificial Intelligence Technologies and the AI Research Center\uff08AIRC\uff09 \u3000\u3000\u3000\u3000\u3000\u3000\u3000\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Junichi Tsujii 4 The Concepts of the Motion Learning of Robots with Deep Predictive Learning \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Tetsuya Ogata 12 3D Object Recognition Technologies for Robot Manipulation \u3000\u2500 For Full-automatic Tea-serving Robot \u2500 \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Manabu Hashimoto 18 Parts Picking by Robot Learning \u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026 Yukiyasu Domae and Kensuke Harada 25 The AI in Teachingless Robotic Assembly 
[&hellip;]<\/p>\n","protected":false},"author":17,"featured_media":0,"parent":1009045,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"categories":[16,11],"_links":{"self":[{"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/pages\/1008963"}],"collection":[{"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/users\/17"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/comments?post=1008963"}],"version-history":[{"count":1,"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/pages\/1008963\/revisions"}],"predecessor-version":[{"id":1008964,"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/pages\/1008963\/revisions\/1008964"}],"up":[{"embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/pages\/1009045"}],"wp:attachment":[{"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/media?parent=1008963"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/en\/wp-json\/wp\/v2\/categories?post=1008963"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}