{"id":1204,"date":"2018-03-23T19:52:00","date_gmt":"2018-03-23T23:52:00","guid":{"rendered":"http:\/\/www.jsylvest.com\/blog\/?p=1204"},"modified":"2019-01-31T16:56:49","modified_gmt":"2019-01-31T21:56:49","slug":"why-we-worry-about-the-ethics-of-machine-intelligence","status":"publish","type":"post","link":"https:\/\/www.jsylvest.com\/blog\/2018\/03\/why-we-worry-about-the-ethics-of-machine-intelligence\/","title":{"rendered":"Why we worry about the Ethics of Machine Intelligence"},"content":{"rendered":"<p class=\"graf graf--p\"><em class=\"markup--em markup--p-em\">This essay was co-authored by myself and\u00a0<\/em><em class=\"markup--em markup--p-em\"><a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/twitter.com\/stevndmills\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/twitter.com\/stevndmills\">Steve Mills<\/a>.<\/em><\/p>\n<p>We worry about the ethics of Machine Intelligence (MI) and we fear our community is completely unprepared for the power we now wield. Let us tell you why.<\/p>\n<p>To be clear, we\u2019re big believers in the far-reaching good MI can do. Every week there are new advances that will dramatically improve the world. In the past month we have seen research that could <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pubmed\/29068076\">improve the way we control prosthetic devices<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/1711.05225.pdf\">detect pneumonia<\/a>, <a href=\"http:\/\/www.worldscientific.com\/doi\/abs\/10.1142\/9789813235533_0012\">understand long-term patient trajectories<\/a>, and <a href=\"http:\/\/researchrepository.murdoch.edu.au\/id\/eprint\/39097\/\">monitor ocean health<\/a>. That\u2019s in the <em>last 30 days<\/em>. By the time you read this, there will be even more examples. We really do believe MI will transform the world around us for the better, which is why we are actively involved in researching and deploying new MI capabilities and products.<\/p>\n<p>There is, however, a darker side. 
MI also has the potential to be used for evil. One illustrative example is a recent <a href=\"https:\/\/osf.io\/zn79k\/\">study<\/a> by Stanford University researchers who developed an algorithm to predict sexual orientation from facial images. When you consider <a href=\"https:\/\/www.npr.org\/2017\/04\/18\/524473878\/russian-republic-of-chechnya-accused-of-targeting-gay-people\">recent news<\/a> of the detainment and torture of more than 100 male homosexuals in the Russian republic of Chechnya, you quickly see the cause for concern. This software and a few cameras positioned on busy street corners will allow the targeting of homosexuals at industrial scale \u2013 hundreds quickly become thousands. The potential for this isn\u2019t so far-fetched. <a href=\"https:\/\/www.wsj.com\/articles\/the-all-seeing-surveillance-state-feared-in-the-west-is-a-reality-in-china-1498493020\">China is already using CCTV and facial recognition software to catch jaywalkers<\/a>. The researchers pointed out that their findings \u201cexpose[d] a threat to the privacy and safety of gay men and women.\u201d That disavowal does little to prevent outside groups from implementing the technology for mass targeting and persecution.<\/p>\n<p>Many technologies have the potential to be applied for nefarious purposes. This is not new. What is new about MI is the scale and magnitude of impact it can achieve. This scope is what will allow it to do so much good, but also so much bad. It is like no other technology that has come before, with the notable exception of atomic weapons, a comparison <a href=\"http:\/\/mashable.com\/2014\/08\/03\/elon-musk-artificial-intelligence\/#w9TQi97GKgqq\">others have already drawn<\/a>. We hesitate to draw such a comparison for fear of perpetuating a sensationalistic narrative that distracts from this conversation about ethics. 
That said, it\u2019s the closest parallel we can think of in terms of the scale (potential to impact <em>tens of millions<\/em> of people) and magnitude (potential to do <em>physical harm<\/em>).<\/p>\n<p>None of this is why we worry so much about the ethics of MI. We worry because MI is unique in so many ways that we are left completely unprepared to have this discussion.<\/p>\n<p><strong>Ethics is not [<em>yet<\/em>] a core commitment in the MI field<\/strong>. Compare this with medicine, where a commitment to ethics has existed for centuries in the form of the <a href=\"https:\/\/www.nlm.nih.gov\/hmd\/greek\/greek_oath.html\">Hippocratic Oath<\/a>. Members of the physics community now <a href=\"http:\/\/www.lasg.org\/what\/pledge.htm\">pledge<\/a> their intent to do no harm with their science. In other fields, ethics is part of the very ethos. Not so with MI. Compared to other disciplines, the field is so young that we haven\u2019t had time to mature and learn lessons from the past. <em>We must look to these other fields and their hard-earned lessons to guide our own behavior. <\/em><\/p>\n<p><strong>Computer scientists and mathematicians have never before wielded this kind of power<\/strong>. The atomic bomb is one exception; cyber weapons may be another. Both of these, however, represent <em>intentional<\/em> applications of technology. While the public was unaware of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Manhattan_Project\">Manhattan Project<\/a>, the scientists involved knew the goal and made an informed decision to take part. The Stanford study described earlier has clear nefarious applications; many other research efforts in MI may not. Researchers run the risk of unwittingly conducting studies that have applications they never envisioned and do not condone. Furthermore, research into atomic weapons could only be implemented by a small number of nation-states with access to proper materials and expertise. 
Contrast that with MI, where a reasonably talented coder who has taken some open source machine learning classes can easily implement and effectively \u2018weaponize\u2019 published techniques. Within our field, we have never had to worry about this degree of power to do harm. <em>We must reset our thinking and approach our work with a new degree of rigor, humility, and caution.<\/em><\/p>\n<p><strong>Ethical oversight bodies from other scientific fields seem ill-prepared for MI<\/strong>. Looking to existing ethical oversight bodies is a logical approach. Even <a href=\"https:\/\/www.boozallen.com\/s\/insight\/thought-leadership\/our-response-to-artificial-intelligence.html\">we suggested<\/a> that MI is a \u201cgrand experiment on all of humanity\u201d and should follow <a href=\"https:\/\/www.hhs.gov\/ohrp\/regulations-and-policy\/belmont-report\/index.html\">principles borrowed from human subject research<\/a>. The fact that Stanford\u2019s Institutional Review Board (IRB), a respected body within the research community, reviewed and approved research with questionable applications should give us all pause. Researchers have long <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC1473133\/\">raised questions about the broken IRB system<\/a>. An IRB system designed to protect the interests of study participants may be unsuited for situations in which potential harm accrues not to the subjects but to society at large. It\u2019s clear that the standards that have served other scientific fields for decades or even centuries <a href=\"http:\/\/delivery.acm.org\/10.1145\/2940000\/2935882\/p31-metcalf.pdf?ip=128.229.4.2&amp;id=2935882&amp;acc=CHORUS&amp;key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&amp;CFID=849536715&amp;CFTOKEN=53665798&amp;__acm__=1515522175_a8242c8312717d32c\">may not be prepared for MI\u2019s unique data and technology issues<\/a>. 
These challenges are compounded even further by the general lack of MI expertise, or sometimes even technology expertise, among the members of these boards. <em>We should continue to work with existing oversight bodies, but we must also take an active role in educating them and evolving their thinking towards MI. <\/em><\/p>\n<p><strong>MI ethical concerns are often not obvious<\/strong>. This differs dramatically from other scientific fields where ethical dilemmas are self-evident. That\u2019s not to say they are easy to navigate. A <a href=\"https:\/\/www.washingtonpost.com\/news\/to-your-health\/wp\/2017\/12\/01\/a-man-collapsed-with-do-not-resuscitate-tattooed-on-his-chest-doctors-didnt-know-what-to-do\/?utm_term=.60e6e8ae1ad0\">recent story<\/a> about an unconscious emergency room patient with a \u201cDo Not Resuscitate\u201d tattoo is a perfect example. Medical staff had to decide whether they should administer life-saving treatment despite the presence of the tattoo. They were faced with a very complex, but very obvious, ethical dilemma. The same is rarely true in MI, where unintended consequences may not be immediately apparent and issues like bias can be hidden in complex algorithms. <em>We have a responsibility to ourselves and our peers to be on the lookout for ethical issues and raise concerns as soon as they emerge. <\/em><\/p>\n<p><strong>MI technology is moving faster than our approach to ethics<\/strong>. Other scientific fields have had hundreds of years for their approach to ethics to evolve alongside the science. MI is still nascent, yet we are already moving technology from the \u2018lab\u2019 to full deployment. 
The speed at which that transition is happening has led to notable ethical issues including potential <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">racism in criminal sentencing<\/a> and <a href=\"https:\/\/www.theguardian.com\/science\/2016\/sep\/01\/how-algorithms-rule-our-working-lives\">discrimination in job hiring<\/a>. The ethics of MI needs to be studied as much as the core technology if we ever hope to catch up and avoid these issues in the future. <em>We need to catalyze an ongoing conversation around ethics much as we see in other fields like medicine, where there is <\/em><a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC1084045\/\"><em>active research<\/em><\/a><em> and <\/em><a href=\"http:\/\/www.nejm.org\/doi\/full\/10.1056\/NEJMc1713344#article\"><em>discussion within the community<\/em><\/a><em>.\u00a0 <\/em><\/p>\n<p>The issue that looms behind all of this, however, is the fact that we can\u2019t \u2018put the genie back in the bottle\u2019 once it has been released. We can\u2019t undo the Stanford research now that it\u2019s been published. As a community, we will forever be accountable for the technology that we create.<\/p>\n<p>In the age of MI, corporate and personal values take on entirely new importance. We have to decide what we stand for and use that as a measure to evaluate our decisions. We can\u2019t wait for issues to present themselves. We must be proactive and think in hypotheticals to anticipate the situations we will inevitably face.<\/p>\n<p>Be assured that every organization will be faced with hard choices related to MI. Choices that could hurt the bottom line or, worse, harm the well-being of people now or in the future. 
We will need to decide, for example, if and how we want to be involved in <a href=\"https:\/\/qz.com\/1131472\/more-than-50-experts-just-told-dhs-that-using-ai-for-extreme-vetting-is-dangerously-misguided\/\">Government efforts to vet immigrants<\/a> or create <a href=\"https:\/\/gizmodo.com\/hackers-have-already-started-to-weaponize-artificial-in-1797688425\">technology that could ultimately help hackers<\/a>. If we fail to accept that these choices inevitably exist, we run the risk of compromising our values. We need to stand strong in our beliefs and live the values we espouse for ourselves, our organizations, and our field of study. Ethics, like many things, is a slippery slope. Compromising once almost always leads to compromising again.<\/p>\n<p>We must also recognize that the values of others may not mirror our own. We should approach those situations without prejudice. Instead of anger or defensiveness, we should use them as an opportunity to have a meaningful dialog around ethics and values. When others raise concerns about our own actions, we must approach those conversations with humility and civility. Only then can we move forward as a community.<\/p>\n<p>Machines are neither moral nor immoral. We must work together to ensure they behave in a way that benefits, not harms, humanity. We don\u2019t purport to have the answers to these complex issues. We simply request that you keep asking questions and take part in the discussion.<\/p>\n<hr \/>\n<p>This has been <a href=\"https:\/\/medium.com\/@jsylvest\/why-we-worry-about-the-ethics-of-machine-intelligence-2321f7c807b3\">crossposted to Medium<\/a> and to <a href=\"https:\/\/www.boozallen.com\/s\/insight\/blog\/why-we-worry-about-the-ethics-of-artificial-intelligence.html\">the Booz Allen website<\/a> as well.<\/p>\n<p class=\"graf graf--p\"><em class=\"markup--em markup--p-em\">We\u2019re not the only ones discussing these issues. 
Check out <\/em><a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/medium.com\/@pervade_team\/the-study-has-been-approved-by-the-irb-gayface-ai-research-hype-and-the-pervasive-data-ethics-3b36c5a53eec\" target=\"_blank\" data-href=\"https:\/\/medium.com\/@pervade_team\/the-study-has-been-approved-by-the-irb-gayface-ai-research-hype-and-the-pervasive-data-ethics-3b36c5a53eec\"><em class=\"markup--em markup--p-em\">this Medium post by the NSF-Funded group Pervasive Data Ethics for Computational Research<\/em><\/a><em class=\"markup--em markup--p-em\">, <\/em><a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.youtube.com\/watch?v=fMym_BKWQzk\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.youtube.com\/watch?v=fMym_BKWQzk\"><em class=\"markup--em markup--p-em\">Kate Crawford\u2019s amazing NIPS keynote<\/em><\/a><em class=\"markup--em markup--p-em\">, <\/em><a class=\"markup--anchor markup--p-anchor\" href=\"http:\/\/www.wired.co.uk\/article\/mustafa-suleyman-deepmind-ai-morals-ethics\" target=\"_blank\" rel=\"noopener\" data-href=\"http:\/\/www.wired.co.uk\/article\/mustafa-suleyman-deepmind-ai-morals-ethics\"><em class=\"markup--em markup--p-em\">Mustafa Suleyman\u2019s recent essay in Wired UK<\/em><\/a><em class=\"markup--em markup--p-em\">, and <\/em><a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.buzzfeed.com\/bryorsnefjella\/the-a-bomb-moment-for-computer-science\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.buzzfeed.com\/bryorsnefjella\/the-a-bomb-moment-for-computer-science\"><em class=\"markup--em markup--p-em\">Bryor Snefjella\u2019s recent piece in BuzzFeed<\/em><\/a><em class=\"markup--em markup--p-em\">.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This essay was co-authored by myself and\u00a0Steve Mills. We worry about the ethics of Machine Intelligence (MI) and we fear our community is completely unprepared for the power we now wield. Let us tell you why. 
To be clear, we\u2019re &hellip; <a href=\"https:\/\/www.jsylvest.com\/blog\/2018\/03\/why-we-worry-about-the-ethics-of-machine-intelligence\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[10],"tags":[24,3,28,41,34,40,15],"class_list":["post-1204","post","type-post","status-publish","format-standard","hentry","category-cs","tag-ai","tag-computer-science","tag-consulting","tag-ethics","tag-machine-learning","tag-ml","tag-technology","wpautop"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p3sddF-jq","jetpack-related-posts":[{"id":1188,"url":"https:\/\/www.jsylvest.com\/blog\/2018\/02\/aies-2018\/","url_meta":{"origin":1204,"position":0},"title":"AIES 2018","author":"jsylvest","date":"9 February 2018","format":false,"excerpt":"Last week I attended the first annual conference on AI, Ethics & Society where I presented some work on a Decision Tree\/Random Forest algorithm that makes decisions that are less biased or discriminatory. 
((In the colloquial rather than technical sense)) You can read all the juicy details in our paper.\u2026","rel":"","context":"In &quot;CS \/ Science \/ Tech \/ Coding&quot;","block_context":{"text":"CS \/ Science \/ Tech \/ Coding","link":"https:\/\/www.jsylvest.com\/blog\/category\/cs\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1098,"url":"https:\/\/www.jsylvest.com\/blog\/2018\/07\/hume-on-knowledge\/","url_meta":{"origin":1204,"position":1},"title":"Hume on Knowledge","author":"jsylvest","date":"1 July 2018","format":"aside","excerpt":"All knowledge degenerates into probability. \u2014 David Hume, \"A Treatise on Human Nature,\" \u00a7IV.1","rel":"","context":"In &quot;Quotes&quot;","block_context":{"text":"Quotes","link":"https:\/\/www.jsylvest.com\/blog\/category\/quotes\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1126,"url":"https:\/\/www.jsylvest.com\/blog\/2017\/11\/ais-one-trick-pony-has-a-hell-of-a-trick\/","url_meta":{"origin":1204,"position":2},"title":"AI's \"one trick pony\" has a hell of a trick","author":"jsylvest","date":"10 November 2017","format":false,"excerpt":"The MIT Technology Review has a recent article by James Somers about error backpropagation, \"Is AI Riding a One-Trick Pony?\" Overall, I agree with the message in the article. 
We need to keep thinking of new paradigms because the SotA right now is very useful, but not correct in any\u2026","rel":"","context":"In &quot;CS \/ Science \/ Tech \/ Coding&quot;","block_context":{"text":"CS \/ Science \/ Tech \/ Coding","link":"https:\/\/www.jsylvest.com\/blog\/category\/cs\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1124,"url":"https:\/\/www.jsylvest.com\/blog\/2017\/10\/national-ai-strategy\/","url_meta":{"origin":1204,"position":3},"title":"National AI Strategy","author":"jsylvest","date":"9 October 2017","format":false,"excerpt":"Some of my co-workers published a sponsored piece in the Atlantic calling for a national AI strategy,\u00a0which was tied in to\u00a0some discussions at the\u00a0Washington Ideas event. I'm 100% on board with the US having a strategy, but I want to offer one caveat: \"comprehensive national strategies\" are susceptible to becoming\u2026","rel":"","context":"In &quot;Business \/ Economics&quot;","block_context":{"text":"Business \/ Economics","link":"https:\/\/www.jsylvest.com\/blog\/category\/business-2\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1144,"url":"https:\/\/www.jsylvest.com\/blog\/2017\/12\/malconv\/","url_meta":{"origin":1204,"position":4},"title":"MalConv: Lessons learned from Deep Learning on executables","author":"jsylvest","date":"8 December 2017","format":false,"excerpt":"I don't usually write up my technical work here, mostly because I spend enough hours as is doing technical writing. 
But a co-author, Jon Barker, recently wrote\u00a0a post on the NVIDIA Parallel For All blog about one of our papers on neural networks for detecting malware, so I thought I'd\u2026","rel":"","context":"In &quot;CS \/ Science \/ Tech \/ Coding&quot;","block_context":{"text":"CS \/ Science \/ Tech \/ Coding","link":"https:\/\/www.jsylvest.com\/blog\/category\/cs\/"},"img":{"alt_text":"The MalConv architecture","src":"https:\/\/i0.wp.com\/www.jsylvest.com\/blog\/wp-content\/uploads\/2017\/12\/malconv.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/www.jsylvest.com\/blog\/wp-content\/uploads\/2017\/12\/malconv.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/www.jsylvest.com\/blog\/wp-content\/uploads\/2017\/12\/malconv.png?resize=525%2C300 1.5x"},"classes":[]},{"id":406,"url":"https:\/\/www.jsylvest.com\/blog\/2013\/05\/reading-list-for-28-may-2013\/","url_meta":{"origin":1204,"position":5},"title":"Reading List for 28 May 2013","author":"jsylvest","date":"28 May 2013","format":false,"excerpt":"For Science! Patrick Morrison & Emerson Murphy-Hill :: Is Programming Knowledge Related To Age? 
An Exploration of Stack Overflow [pdf] As a CS guy who's tip-toed into psychology here and there I would offer Morrison & Murphy-Hill this advice: tread very, very lightly when making claims regarding the words \"knowledge\"\u2026","rel":"","context":"In &quot;Reading Lists&quot;","block_context":{"text":"Reading Lists","link":"https:\/\/www.jsylvest.com\/blog\/category\/reading-lists\/"},"img":{"alt_text":"busy_sciencing","src":"https:\/\/i0.wp.com\/www.jsylvest.com\/blog\/wp-content\/uploads\/2013\/05\/busy_sciencing.jpeg?resize=350%2C200","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/posts\/1204","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/comments?post=1204"}],"version-history":[{"count":12,"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/posts\/1204\/revisions"}],"predecessor-version":[{"id":1334,"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/posts\/1204\/revisions\/1334"}],"wp:attachment":[{"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/media?parent=1204"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/categories?post=1204"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.jsylvest.com\/blog\/wp-json\/wp\/v2\/tags?post=1204"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}