
AIES 2018

Last week I attended the first annual conference on AI, Ethics & Society, where I presented some work on a Decision Tree/Random Forest algorithm that makes decisions that are less biased or discriminatory. ((In the colloquial rather than technical sense)) You can read all the juicy details in our paper. This isn't a summary of our paper, although that blog post is coming soon. Instead I want to use this space to post some reactions to the conference itself. I was going to put this in a Twitter thread, but it quickly grew out of control. So, in no particular order, here goes nothing:

Many of the talks people gave apply to GOFAI (Good Old-Fashioned AI) but don't fit with contemporary approaches. Approaches to improving/limiting/regulating/policing rule-based or expert systems won't work well (if at all) with emergent systems.

Many, many people are making the mistake of thinking that all machine learning is black box. Decision trees are ML, but they're also some of the most transparent models possible. Everyone involved in this AI ethics discussion should learn a rudimentary taxonomy of AI systems. It would avoid mistakes and conflations like this, and it would take maybe an hour.
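For anyone who doubts that, here's a minimal sketch of what that transparency looks like in practice. It uses scikit-learn with its bundled iris data purely as a stand-in; the point is that the entire fitted model can be dumped as human-readable if/then rules.

```python
# A minimal sketch of why decision trees are not black boxes: the fitted
# model can be printed as a set of human-readable if/then rules.
# (Illustrative only; scikit-learn's bundled iris data is a stand-in.)
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

# export_text renders the complete decision logic of the fitted model.
print(export_text(clf, feature_names=list(iris.feature_names)))
```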

Now that I think of it, it would be great if next year's program included some tutorials. A crash course in AI taxonomy would be useful, as would a walk-through of what an AI programmer does day-to-day. (I think it would help people to understand what kinds of control we can have over AI behavior if they knew a little more about what goes into getting any sort of behavior at all.) I'd be interested in some lessons on liability law and engineering, or on how standards organizations operate.

Lots of people are letting the perfect be the enemy of the good. I heard plenty of complaints about solutions that alleviate problems but don't eliminate them completely, or work in a majority of situations but don't cover every possible sub-case.

Some of that was the standard posturing that happens at academic conferences ("well, sure, but have you ever thought of this??!") but that's a poor excuse for this kind of gotcha-ism.

Any academic conference has people who ask questions to show off how intelligent they are. This one had the added scourge of people asking questions to show off how intelligent and righteous they are. If ever there was a time to enforce concise Q&A rules, this is it.

We’re starting from near scratch here and working on a big problem. Adding any new tool to the toolbox should be welcome. Taking any small step towards the goal should be welcome.

People were in that room because they care about these problems. I heard too much grumbly backbiting about presenters that care about ethics, but don't care about it exactly the right way.

We can solve problems, or we can enforce orthodoxy, but I doubt we can do both.

It didn't occur to me at the time, but in retrospect I'm surprised how circumscribed the ethical scenarios being discussed were. There was very little talk of privacy, for instance, and not much about social networks/filter bubbles/"fake news"/etc. that has been such a part of the zeitgeist.

Speaking of zeitgeist, I didn't have to hear the word "blockchain" even one single time, for which I am thankful.

If I had to give a rough breakdown of topics, it would be 30% autonomous vehicles/trolley problems, 20% discrimination, 45% meta-discussion, and 5% everything else.

One questioner brought up Jonathan Haidt's Moral Foundations Theory at the very end of the last day. I think he slightly misinterpreted Haidt (though I'm not sure, since the questioner was laudably concise), but I had been waiting all weekend for someone to bring him up at all.

If any audience would recognize the difference between “bias” in the colloquial sense and “bias” in the technical, ML/stats sense, I would have hoped it was here. No such luck. This wasn't a huge problem in practice, but it’s still annoying.
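For the record, the distinction in a nutshell: in the technical sense, the bias of an estimator is the gap between its expected value and the true value it estimates; it has nothing to do with discrimination. A toy simulation (numbers made up purely for illustration) makes the point:

```python
# "Bias" in the statistical sense: E[estimator] - true value.
# Classic example: estimating variance by dividing by n is biased;
# dividing by n-1 is not. (Toy simulation, purely for illustration.)
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0
n = 10

biased, unbiased = [], []
for _ in range(100_000):
    x = rng.normal(0.0, np.sqrt(true_var), size=n)
    biased.append(np.var(x))            # divides by n
    unbiased.append(np.var(x, ddof=1))  # divides by n - 1

print("biased estimator averages:  ", np.mean(biased))    # ~ true_var * (n-1)/n = 3.6
print("unbiased estimator averages:", np.mean(unbiased))  # ~ 4.0
```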

There’s a ton of hand-waving about how many of the policies being proposed for ethical AI will actually work at the implementation level. “Hand-waving” is even too generous a term. It’s one thing to propose rules, but how do you make them work when fingers are hitting keyboards?

I’ll give people some slack here because most talks were very short, but “we’ll figure out what we want, and then tell the engineers to go make it happen somehow” is not really a plan. The plan needs to be grounded in what's possible starting at its conception, not left as an implementation detail for the technicians to figure out later.

"We'll figure out what to do, and then tell the geeks to do it" is not an effective plan. One of the ways it can fail is because it is tinged with elitism. (I don't think participants intended to be elitist, but that's how some of these talks could be read.) I fully endorse working with experts in ethics, sociology, law, psychology, etc. But if the technicians involved interpret what those experts say — accurately or not — as "we, the appointed high priesthood of ethics, will tell you, the dirty code morlocks, what right and wrong is, and you will make our vision reality" then the technicians will not be well inclined to listen to those experts.

Everyone wants to 'Do The Right Thing'. Let's work together to help each other do that and refrain as much as possible from pointing fingers at people who are 'Doing It Wrong.' Berating people who have fallen short of your ethical standards — even those who have fallen way, way short — feels immensely satisfying and is a reliable way to solidify your in-group, but it's not productive in the long run. That doesn't mean we need to equivocate or let people off the hook for substandard behavior, but it does mean that the response should be to lead people away from their errors as much as possible rather than punishing for the sake of punishing.

I wish the policy & philosophy people here knew more about how AI is actually created.

(I’m sure the non-tech people wish I knew more about how moral philosophy, law, etc. works.)

Nonetheless, engineers are going to keep building AI systems whether or not philosophers etc. get on board. If the latter want to help drive development, there is some onus on them to better learn the lay of the land. That’s not just, but they have the weaker bargaining position, so I think it's how things will have to be.

Of course I'm an engineer, so this is admittedly a self-serving opinion. I still think it's accurate though.

Even if every corporation, university, and government lab stopped working on AI because of ethical concerns, the research would slow but not stop. I cannot emphasize enough how low the barriers to entry in this space are. Anyone with access to arXiv, GitHub, and a $2000 gaming computer or some AWS credits can get in the game.

I was always happy to hear participants recognize that while AI decision making can be unethical/amoral, human decision making is also often terrible. It’s not enough to say the machine is bad if you don’t ask “bad compared to what alternative?”. Analyze on the right margin! Okay, the AI recidivism model has non-zero bias. How biased is the parole board? Don't compare real machines to ideal humans.
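To make the "compared to what?" point concrete, here's a rough sketch of the comparison I have in mind: compute the same disparity metric for the model and for the human baseline it would replace, then look at which is worse. The data, column names, and groups below are entirely hypothetical.

```python
# A sketch of "bad compared to what?": measure the same disparity metric
# for the model and for the human baseline it would replace, then compare.
# Data, column names, and groups here are hypothetical.
import pandas as pd

def fpr_gap(df, decision_col, group_col="group", label_col="reoffended"):
    """Gap in false positive rates between groups A and B for one decision column."""
    negatives = df[df[label_col] == 0]                       # people who did not reoffend
    fpr = negatives.groupby(group_col)[decision_col].mean()  # fraction wrongly flagged, per group
    return abs(fpr["A"] - fpr["B"])

decisions = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "A", "B"],
    "reoffended": [0,   0,   1,   0,   0,   1,   0,   0],
    "model_flag": [0,   1,   1,   0,   0,   1,   0,   0],  # hypothetical model decisions
    "board_flag": [1,   1,   1,   0,   0,   1,   0,   0],  # hypothetical parole board decisions
})

print("model FPR gap:       ", fpr_gap(decisions, "model_flag"))
print("parole board FPR gap:", fpr_gap(decisions, "board_flag"))
```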

Similarly, don't compare real-world AI systems with ideal regulations or standards. Consider how regulations will end up in the real world. Say what you will about the Public Choice folks, but their central axiom is hard to dispute: actors in the public sector aren't angels either.

One poster explicitly mentioned Hume and the Induction Problem, which I would love to see taught in all Data Science classes.

Several commenters brought up the very important point that datasets are not reality. This map-is-not-the-territory point also deserves to be repeated in every Data Science classroom far more often.

That said, I still put more trust in quantitative analysis than in qualitative. But let's be humble. A data set is not the world; it is a lens with which we view the world, and with it we see but through a glass darkly.

I'm afraid that overall this post makes me seem much more negative on AIES than I really am. Complaining is easier than complimenting. Sorry. I think this was a good conference full of good people trying to do a good job. It was also a very friendly crowd, so as someone with a not insignificant amount of social anxiety, thank you to all the attendees.


National AI Strategy

Some of my co-workers published a sponsored piece in The Atlantic calling for a national AI strategy, which was tied in to some discussions at the Washington Ideas event.

I'm 100% on board with the US having a strategy, but I want to offer one caveat: "comprehensive national strategies" are susceptible to becoming top-down, centralized plans, which I think is dangerous.

I'm generally disinclined toward centralized planning, for both efficiency and philosophical reasons. I'm not going to take the time now to explain why; I doubt anything I could scratch out here would shift people very much along any kind of Keynes-Hayek spectrum.

So why am I bothering to bring this up? Mostly because I think it would be especially ill-conceived to adopt central planning when it comes to AI. The recent progress in AI has largely been a result of abandoning top-down techniques in favor of bottom-up ones. We've abandoned hand-coded visual feature detectors for convolutional neural networks. We've abandoned human-engineered grammar models for statistical machine translation. In one discipline after another, emergent behavior has outpaced decades' worth of expert-designed techniques. To layer top-down policy-making on a field built of bottom-up science would be a waste, and an ironic one at that.
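To illustrate that shift, here is a toy contrast (sketched in PyTorch purely for illustration) between a hand-coded feature detector and a learned one. The top-down version is a fixed kernel a human designed; the bottom-up version has the same shape, but its weights are whatever gradient descent finds in the data.

```python
# Toy contrast between top-down and bottom-up feature detection:
# a hand-coded Sobel edge filter vs. a convolution whose weights are
# learned from data. (Illustrative sketch, not a working vision system.)
import torch
import torch.nn as nn
import torch.nn.functional as F

# Top-down: a human-designed kernel, fixed forever.
sobel_x = torch.tensor([[[[-1., 0., 1.],
                          [-2., 0., 2.],
                          [-1., 0., 1.]]]])

# Bottom-up: a kernel of the same shape whose values are parameters
# that gradient descent adjusts to whatever the data rewards.
learned = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)

image = torch.randn(1, 1, 28, 28)             # stand-in for a real image
hand_coded_edges = F.conv2d(image, sobel_x)   # always detects vertical edges
learned_features = learned(image)             # useless until trained, then often better
```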


PS Having spoken to two of the three authors of this piece, I don't mean to imply that they support centralized planning of the AI industry. This is just something I would be on guard against.


Marketing to Algorithms?

Toby Gunton :: Computer says no – why brands might end up marketing to algorithms

I know plenty about algorithms, and enough about marketing. ((Enough to draw a paycheck from a department of marketing for a few years, at least.)) And despite that, I'm not sure what this headline actually means. It's eye-catching, to be sure, but what would marketing to an algorithm look like?

When you get down to it, marketing is applied psychology. Algorithms don't have psyches. Whatever "marketing to algorithms" means, I don't think it's going to be recognizable as marketing.

Would you call what spammers do to slip past your filters "marketing"? (That's not a rhetorical question.) Because that's pretty much what Gunton seems to be describing.

Setting aside the intriguing possibility of falling in love with an artificial intelligence, the film [Spike Jonze's Her] raises a potentially terrifying possibility for the marketing industry.

It suggests a world where an automated guardian manages our lives, taking away the awkward detail; the boring tasks of daily existence, leaving us with the bits we enjoy, or where we make a contribution. In this world our virtual assistants would quite naturally act as barriers between us and some brands and services.

Great swathes of brand relationships could become automated. Your energy bills and contracts, water, gas, car insurance, home insurance, bank, pension, life assurance, supermarket, home maintenance, transport solutions, IT and entertainment packages; all of these relationships could be managed by your beautiful personal OS.

If you're an electric company whose customers all interact with you via software daemons, do you even have a brand identity anymore? Aren't we discussing a world in which more things will be commoditized? And isn't that a good thing for most of the categories listed?

What do we really care about: getting goods and services, or expressing ourselves through the brands we identify with? Both, to an extent. But if we can no longer do that through our supermarkets or banking, won't we simply shift that focus to other sectors: clothes, music, etc.?


Arnold Kling :: Another Proto-Libertarian

2. Consider that legislation may be an inferior form of law not just recently, or occasionally, but usually. Instead, consider the ideas of Bruno Leoni, which suggest that common law that emerges from individual cases represents a spontaneous order, while legislation represents an attempt at top-down control that works less well.

I'd draw a parallel to Paul Graham's writing on dealing with spam. Bayesian filtering is the bottom-up solution; blacklists and rule sets are the top-down.
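To make the parallel concrete, here is a toy sketch of the two approaches to spam, in the spirit of Paul Graham's "A Plan for Spam." The blacklist, training messages, and word statistics are all made up for illustration.

```python
# Toy contrast: a top-down blacklist rule vs. a bottom-up Naive Bayes filter
# learned from examples (in the spirit of Paul Graham's "A Plan for Spam").
# Training data and word lists are made up for illustration.
from collections import Counter
import math

BLACKLIST = {"viagra", "lottery"}   # top-down: someone has to maintain this list

def blacklist_is_spam(message):
    return any(word in BLACKLIST for word in message.lower().split())

# Bottom-up: word statistics learned from labeled examples.
spam_msgs = ["win the lottery now", "cheap viagra offer", "claim your prize now"]
ham_msgs  = ["lunch at noon", "draft of the paper attached", "meeting moved to friday"]

spam_counts = Counter(w for m in spam_msgs for w in m.lower().split())
ham_counts  = Counter(w for m in ham_msgs for w in m.lower().split())

def bayes_is_spam(message, prior_spam=0.5):
    log_odds = math.log(prior_spam / (1 - prior_spam))
    for w in message.lower().split():
        p_w_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + 2)  # Laplace smoothing
        p_w_ham  = (ham_counts[w] + 1) / (sum(ham_counts.values()) + 2)
        log_odds += math.log(p_w_spam / p_w_ham)
    return log_odds > 0

print(blacklist_is_spam("claim your prize now"))  # False: nothing on the blacklist
print(bayes_is_spam("claim your prize now"))      # True: learned from the examples
```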


Both of these stories remind me of a couple of scenes in Greg Egan's excellent Permutation City. Egan describes a situation where people have daemons answering their video phones that have learned (bottom-up) how to mimic their owners' reactions well enough to screen personal calls from automated messages. In turn, marketers have software that learns how to recognize whether it's talking to a real person or one of these filtering systems. The two have entered an evolutionary arms race, to the point that people's filters are almost full-scale neurocognitive models of their personalities.
