{"id":948,"date":"2019-12-14T22:44:25","date_gmt":"2019-12-14T13:44:25","guid":{"rendered":"http:\/\/ai-elsi.org\/?p=948"},"modified":"2019-12-14T22:44:25","modified_gmt":"2019-12-14T13:44:25","slug":"statement-on-machine-learning-and-fairness","status":"publish","type":"post","link":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/archives\/948","title":{"rendered":"Statement on Machine Learning* and Fairness"},"content":{"rendered":"<h1 style=\"text-align: center\"><span style=\"font-weight: 400\">Statement on\u00a0<\/span><span style=\"font-weight: 400\">Machine Learning* and Fairness<\/span><\/h1>\n<p style=\"text-align: right\"><span style=\"font-weight: 400\">December 10, 2019<br \/>\n<\/span><span style=\"font-weight: 400\">Japan Society for Artificial Intelligence, Ethics Committee<br \/>\n<\/span><span style=\"font-weight: 400\">Japan Society for Software Science and Technology, Machine Learning Systems Engineering Group<br \/>\n<\/span><span style=\"font-weight: 400\">IEICE, Information-Based Induction Sciences and Machine Learning Group<\/span><\/p>\n<p><span style=\"font-weight: 400\">We, a group of researchers studying Machine Learning technologies and their applications (Japan Society for Artificial Intelligence, Ethics Committee; Japan Society for Software Science and Technology, Machine Learning Systems Engineering Group; and IEICE, Information-Based Induction Sciences and Machine Learning Group, hereinafter referred to as \u201cwe\u201d) acknowledge that Machine Learning may interact with concepts of fairness in a way that is problematic. 
We would like to share our thoughts on how we believe this issue should be addressed, and make the following two important points:<br \/>\n<\/span><span style=\"font-weight: 400\">\u00a0(1) Machine Learning is nothing more than a tool to assist human decision making, and<br \/>\n<\/span><span style=\"font-weight: 400\">\u00a0(2) We are committed to improving fairness in society by studying possible uses of Machine Learning.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Background<\/span><\/h3>\n<p><span style=\"font-weight: 400\">We understand that there is growing concern that the improper use of Machine Learning may have a negative impact on the fairness of outcomes. For example, in October 2018, Reuters reported that Amazon had noticed that the Machine Learning system used in its hiring process was producing decisions biased against women, and that Amazon had stopped using the system [1]. More generally, we recognize that improper use of Machine Learning may, intentionally or unintentionally, affect the fairness of outcomes in various contexts (see [2]).<\/span><\/p>\n<h4><span style=\"font-weight: 400\">1. Machine Learning is nothing more than a tool<\/span><\/h4>\n<p><span style=\"font-weight: 400\">Machine Learning is a tool, and human beings decide whether and how to use it. Machine Learning has the potential to contribute significantly to the prosperity of society but, if used inappropriately, may also cause harm. Because Machine Learning predicts the future from past examples, a future predicted from a biased past may carry that bias forward. 
If we want a better future than the biased past, <\/span><span style=\"font-weight: 400\">humans may need to intervene carefully in the Machine Learning process to ensure that outcomes are fair.<\/span><br \/>\n<span style=\"font-weight: 400\">At the same time, the contours of \u201cwhat is fair\u201d are determined by society, and advances in science and technology, and their deployment, need to be consonant with society\u2019s values. To make proper use of Machine Learning as a tool, we must understand exactly how it interacts with our society\u2019s values of \u201cfairness\u201d, evaluate its risks, and agree on how to implement countermeasures against the identified and realized risks. <\/span><span style=\"font-weight: 400\">This needs to be understood and dealt with not only by us Machine Learning researchers but also by engineers, end users, managers, organizations, and society as a whole.<\/span><\/p>\n<h4><span style=\"font-weight: 400\">2. We contribute to fairness by aligning Machine Learning<\/span><\/h4>\n<p><span style=\"font-weight: 400\">We are committed to avoiding the risks of improper use of Machine Learning and to solving these problems, from the points of view of both codes of conduct and technology development. Recently, the <\/span><span style=\"font-weight: 400\">IEEE Global Initiative published <\/span><i><span style=\"font-weight: 400\">Ethically Aligned Design, First Edition<\/span><\/i><span style=\"font-weight: 400\"> [3], in which the misuse of Machine Learning is prohibited and specific countermeasures are shown. 
In Japan, the Japanese Society for Artificial Intelligence defined <\/span><span style=\"font-weight: 400\">its <\/span><i><span style=\"font-weight: 400\">Ethical Guidelines<\/span><\/i><span style=\"font-weight: 400\"> in 2017 to serve as a moral foundation for its members, to increase their awareness of their social responsibilities, and to encourage effective communication with society <\/span><span style=\"font-weight: 400\">[4]. Together with various stakeholders in Japan, we discussed how advanced information technology should be used in society, and the results of these discussions were published in March 2019 as <\/span><i><span style=\"font-weight: 400\">Social Principles of Human-Centric AI <\/span><\/i><span style=\"font-weight: 400\">[5]. One guiding principle of this work is \u2018diversity and inclusion\u2019. It also clearly states that stakeholders should be responsible for fair decision making and accountable for outcomes when advanced information technology is used.<\/span><br \/>\n<span style=\"font-weight: 400\">In light of this, we are undertaking research on how to quantitatively evaluate and realize various aspects of fairness. Fairness in Machine Learning has become a prominent topic at recent symposia, and the number of research papers on fairness is increasing worldwide. When the various concepts of fairness are analyzed mathematically in Machine Learning terms, it becomes clear that fairness has many distinct formalizations; for example, requiring that favorable predictions be equally likely across groups and requiring that error rates be equal across groups are different criteria that cannot, in general, be satisfied simultaneously. As such, the concept of \u201cfairness\u201d can be made clearer by re-expressing its various criteria in Machine Learning terms. Through this approach, we hope not only to prevent undesired outcomes when using Machine Learning, but also to promote discussion of the various definitions of fairness.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Looking ahead<\/span><\/h3>\n<p><span style=\"font-weight: 400\">These two points led us to consider what to do next. 
The issue of fairness needs to be discussed on an ongoing basis from the perspectives of both what technology can do and what society wants. <\/span><span style=\"font-weight: 400\">As society&#8217;s interest in fairness in Machine Learning increases, we should be more sensitive to <\/span><span style=\"font-weight: 400\">our social responsibilities and<\/span><span style=\"font-weight: 400\"> promote open dialogue with everyone in society.<\/span><br \/>\n<span style=\"font-weight: 400\">* Systems using Machine Learning technology are sometimes referred to as &#8220;artificial intelligence&#8221;. However, the expression &#8220;artificial intelligence&#8221; can also refer to prospective technologies or systems that may or may not emerge in the future as outcomes of artificial intelligence research. This statement is specifically concerned with existing \u201cMachine Learning\u201d technology, not speculative technology.<\/span><br \/>\n<strong>References<\/strong><br \/>\n<span style=\"font-weight: 400\">[1] Reuters, <a href=\"https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight\/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon scraps secret AI recruiting tool that showed bias against women<\/a>, 2018.<br \/>\n<\/span><span style=\"font-weight: 400\">[2] O&#8217;Neil, Cathy. <i>Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy<\/i>, 2016. (Japanese translation by Naoko Kubo, 2018.)<br \/>\n<\/span><span style=\"font-weight: 400\">[3] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (The IEEE Global Initiative), <a href=\"https:\/\/ethicsinaction.ieee.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition<\/a><\/span><span style=\"font-weight: 400\">, 2019.<br \/>\n<\/span><span style=\"font-weight: 400\">[4] Japanese Society for Artificial Intelligence, <\/span><a style=\"font-weight: 400\" href=\"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/archives\/514\" target=\"_blank\" rel=\"noopener noreferrer\">Ethical Guidelines<\/a><span style=\"font-weight: 400\">, 2017.<br \/>\n<\/span><span style=\"font-weight: 400\">[5] Cabinet Office, <a href=\"https:\/\/www.cas.go.jp\/jp\/seisaku\/jinkouchinou\/pdf\/humancentricai.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Social Principles of Human-Centric AI<\/a><\/span><span style=\"font-weight: 400\">, 2019.<\/span><\/p>\n<p style=\"text-align: right\">(We thank Eric Fandrich for translation support)<\/p>\n","protected":false},"excerpt":{"rendered":"PDF Statement on\u00a0Machine Learning* and Fairness December 10, 2019 Japan Society for Artificial Intelligence, E 
&#8230;","protected":false},"author":17,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/posts\/948"}],"collection":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/users\/17"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/comments?post=948"}],"version-history":[{"count":0,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/posts\/948\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/media?parent=948"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/categories?post=948"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ai-gakkai.or.jp\/ai-elsi\/wp-json\/wp\/v2\/tags?post=948"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}