AIES 2018

Last week I attended the first annual conference on AI, Ethics & Society, where I presented some work on a Decision Tree/Random Forest algorithm that makes decisions that are less biased or discriminatory.1 You can read all the juicy details in our paper. This isn't a summary of our paper, although that blog post is coming soon. Instead I want to use this space to post some reactions to the conference itself. I was going to put this in a Twitter thread, but it quickly grew out of control. So, in no particular order, here goes nothing:

Many of the talks people gave were applicable to GOFAI (Good Old-Fashioned AI) but don't fit with contemporary approaches. Approaches to improving/limiting/regulating/policing rule-based or expert systems won't work well (if at all) with emergent systems.

Many, many people are making the mistake of thinking that all machine learning is a black box. Decision trees are ML, but they're also some of the most transparent models possible. Everyone involved in this AI ethics discussion should learn a rudimentary taxonomy of AI systems. It would avoid mistakes and conflations like this, and it would take maybe an hour of time.
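To make that concrete, here's a toy sketch (invented data, nothing to do with our paper) of why tree models are transparent: the entire trained model is a single printable rule.

```python
# Toy illustration (invented data): learn a depth-1 decision tree
# ("stump") from scratch. The trained model is one human-readable
# rule -- you can print exactly why it decides what it decides.
data = [  # (income in $k, did they repay the loan?)
    (20, 0), (35, 0), (40, 1), (55, 1), (70, 1),
]

def fit_stump(rows):
    """Pick the income threshold that misclassifies the fewest rows."""
    best_t, best_err = None, len(rows) + 1
    for t in sorted({x for x, _ in rows}):
        err = sum((x >= t) != bool(y) for x, y in rows)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

threshold, errors = fit_stump(data)
print(f"rule: predict 'repays' iff income >= {threshold}k ({errors} training errors)")
```

A real decision tree is just a stack of rules like this one, and standard libraries will happily print the whole learned tree for inspection.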

Now that I think of it, it would be great if next year's program included some tutorials. A crash course in AI taxonomy would be useful, as would a walk-through of what an AI programmer does day-to-day. (I think it would help people to understand what kinds of control we can have over AI behavior if they knew a little more about what went into getting any sort of behavior at all.) I'd be interested in some lessons on liability law and engineering, or how standards organizations operate.

Lots of people are letting the perfect be the enemy of the good. I heard plenty of complaints about solutions that alleviate problems but don't eliminate them completely, or work in a majority of situations but don't cover every possible sub-case.

Some of that was the standard posturing that happens at academic conferences ("well, sure, but have you ever thought of this??!") but that's a poor excuse for this kind of gotcha-ism.

Any academic conference has people who ask questions to show off how intelligent they are. This one had the added scourge of people asking questions to show off how intelligent and righteous they are. If ever there was a time to enforce concise Q&A rules, this is it.

We’re starting from near scratch here and working on a big problem. Adding any new tool to the toolbox should be welcome. Taking any small step towards the goal should be welcome.

People were in that room because they care about these problems. I heard too much grumbly backbiting about presenters who care about ethics, but don't care about it in exactly the right way.

We can solve problems, or we can enforce orthodoxy, but I doubt we can do both.

It didn't occur to me at the time, but in retrospect I'm surprised how circumscribed the ethical scenarios being discussed were. There was very little talk of privacy, for instance, and not much about the social networks/filter bubbles/"fake news" complex that has been such a part of the zeitgeist.

Speaking of zeitgeist, I didn't have to hear the word "blockchain" even one single time, for which I am thankful.

If I had to give a rough breakdown of topics, it would be 30% AV/trolley problems, 20% discrimination, 45% meta-discussion, and 5% everything else.

One questioner brought up Jonathan Haidt's Moral Foundations Theory at the very end of the last day. I think he slightly misinterpreted Haidt (but I'm not sure since the questioner was laudably concise), but I was waiting all weekend for someone to bring him up at all.

If any audience would recognize the difference between “bias” in the colloquial sense and “bias” in the technical, ML/stats sense, I would have hoped it was here. No such luck. This wasn't a huge problem in practice, but it’s still annoying.
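For anyone who hasn't seen the technical sense: in stats/ML, "bias" just means an estimator's expected value misses the true parameter, with no prejudice implied. A quick simulated sketch (pure Python, illustrative numbers) using the classic example, the 1/n sample variance:

```python
import random
random.seed(0)

# "Bias" in the statistical sense: the plain 1/n sample variance
# systematically underestimates the true variance.
n, trials = 5, 20_000   # small samples of 5 draws from N(0, 1)
naive_sum, corrected_sum = 0.0, 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    naive_sum += ss / n            # biased: E[.] = (n-1)/n * sigma^2 = 0.8
    corrected_sum += ss / (n - 1)  # Bessel's correction makes it unbiased

naive_avg = naive_sum / trials
corrected_avg = corrected_sum / trials
print(f"naive: {naive_avg:.2f}  corrected: {corrected_avg:.2f}")
```

Nothing about that estimator is unfair or discriminatory; it's just systematically off, which is the sense of the word that keeps colliding with the colloquial one.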

There's a ton of hand-waving about whether the policies being proposed for ethical AI will actually work at the implementation level. "Hand-waving" is even too generous a term. It's one thing to propose rules, but how do you make them work when fingers are hitting keyboards?

I’ll give people some slack here because most talks were very short, but “we’ll figure out what we want, and then tell the engineers to go make it happen somehow” is not really a plan. The plan needs to be grounded in what's possible starting at its conception, not left as an implementation detail for the technicians to figure out later.

"We'll figure out what to do, and then tell the geeks to do it" is not an effective plan. One of the ways it can fail is because it is tinged with elitism. (I don't think participants intended to be elitist, but that's how some of these talks could be read.) I fully endorse working with experts in ethics, sociology, law, psychology, etc. But if the technicians involved interpret what those experts say — accurately or not — as "we, the appointed high priesthood of ethics, will tell you, the dirty code morlocks, what right and wrong is, and you will make our vision reality" then the technicians will not be well inclined to listen to those experts.

Everyone wants to 'Do The Right Thing'. Let's work together to help each other do that and refrain as much as possible from finger pointing at people who are 'Doing It Wrong.' Berating people who have fallen short of your ethical standards — even those who have fallen way, way short — feels immensely satisfying and is a solid way to solidify your in-group, but it's not productive in the long run. That doesn't mean we need to equivocate or let people off the hook for substandard behavior, but it does mean that the response should be to lead people away from their errors as much as possible rather than punishing for the sake of punishing.

I wish the policy & philosophy people here knew more about how AI is actually created.

(I’m sure the non-tech people wish I knew more about how moral philosophy, law, etc. works.)

Nonetheless, engineers are going to keep building AI systems whether or not philosophers etc. get on board. If the latter want to help drive development there is some onus on them to better learn the lay of the land. That's not just, but they have the weaker bargaining position, so I think it's how things will have to be.

Of course I'm an engineer, so this is admittedly a self-serving opinion. I still think it's accurate though.

Even if every corporation, university, and government lab stopped working on AI because of ethical concerns, the research would slow but not stop. I cannot emphasize enough how low the barriers to entry in this space are. Anyone with access to arXiv, GitHub, and a $2000 gaming computer or some AWS credits can get in the game.

I was always happy to hear participants recognize that while AI decision making can be unethical/amoral, human decision making is also often terrible. It’s not enough to say the machine is bad if you don’t ask “bad compared to what alternative?”. Analyze on the right margin! Okay, the AI recidivism model has non-zero bias. How biased is the parole board? Don't compare real machines to ideal humans.

Similarly, don't compare real-world AI systems with ideal regulations or standards. Consider how regulations will end up in the real world. Say what you will about the Public Choice folks, but their central axiom is hard to dispute: actors in the public sector aren't angels either.

One poster explicitly mentioned Hume and the Induction Problem, which I would love to see taught in all Data Science classes.

Several commenters brought up the very important point that datasets are not reality. This map-is-not-the-territory point also deserves to be repeated in every Data Science classroom far more often.

That said, I still put more trust in quantitative analysis over qualitative. But let's be humble. A data set is not the world, it is a lens with which we view the world, and with it we see but through a glass darkly.

I'm afraid that overall this post makes me seem much more negative on AIES than I really am. Complaining is easier than complimenting. Sorry. I think this was a good conference full of good people trying to do a good job. It was also a very friendly crowd, so as someone with a not insignificant amount of social anxiety: thank you to all the attendees.


  1. In the colloquial rather than technical sense.

Some brief book reviews to close 2017

A Wild Swan, Michael Cunningham

I would have thought we'd saturated the "modern re-tellings of fairytales, but for adults" genre, but this was supremely good. The stories reminded me of Garrison Keillor in the way that some sadness or loss was mixed into them without their being outright tragic.

(I've had this post sitting in my drafts for a very long time. How long? Since well before we all found out Keillor was a creep. So... I guess I'll amend the above to "they remind me of pre-2017 Garrison Keillor"? It's been about 15 years since I read any of his stories, so maybe I should just scrap this reference altogether? Screw it.)


The View from the Cheap Seats, Neil Gaiman

A collection of non-fiction pieces: essays, transcripts of award speeches, introductions, forewords, etc. Some felt dated, but most I can safely call "timeless." Many of them did make me want to go read the various books or authors that he was commenting on (e.g. Jeff Smith, Samuel R. Delany, Fritz Leiber, Dunsany), which seems like as good a thing as can be said about an introduction to a book. The final piece is a memorial to his friend and collaborator, Terry Pratchett, titled "A Slip of the Keyboard." It is definitely worth reading, especially for Pratchett fans.


The Liberation, Ian Tregillis

This is the conclusion to Tregillis' "Mechanicals" trilogy. I found the whole series good, but not nearly as good as his "Bitter Seeds" series. "Bitter Seeds" had intricately woven plot lines, subtly planted foreshadowing, and epic emotional highs and lows. "Mechanicals" was good, but had little of that finesse.

"Mechanicals" is focused on free will and robots. It's an interesting concept, and a good way of using sci-fi to explore ideas. (Which, I suppose, is why it's been done plenty of times.) If I were a writer, I would like to do a similar story about robots, but instead of free will it would be about depression. Inside Out had one of the better depictions of depression I've seen on screen. Depression — in my experience — isn't just regular sadness turned up to eleven. It's feeling nothing at all. Mechanical androids seem like a perfect vehicle to explore that. Instead of robots fighting to be able to act on their own preferences or desires or motivations, they would be fighting to be able to have preferences or desires or motivations in the first place.



The Gene: An Intimate History, Siddhartha Mukherjee

Also not as good as his previous work, The Emperor of All Maladies: A Biography of Cancer, but still very, very good. As in Emperor of All Maladies, Mukherjee does a great job of blending history, science, and his own personal experiences.

I did not appreciate before reading this exactly how quickly the concept of genetics has grown. The hundred years following Darwin's work in the 1850s and Mendel's in the 1850s and 1860s was head-spinningly prolific. I had also not considered that eugenics was at the very forefront of applied genetics. I had thought of eugenics as a weird sideline (indeed, I wish it had been) but according to Mukherjee's telling it was at the very center of genetics from its infancy.1

Mukherjee's discussion of penetrance (the way specific genes only affect people in probabilistic ways) was very good. I wish this concept was more widely appreciated, as compared to the binary "you have a mutation or you don't" level of understanding that is common.

Mukherjee also hammers home the idea that a mutation cannot be judged to be good or bad by itself, but must be evaluated in the context of a given organism in a given environment. This is important for genetics, but important much more broadly. In my own work I've had to explain many times that certain behaviors of a neural network cannot be judged in isolation. They can only be evaluated in the context of the data sets they're operating on and the tasks they're being asked to do.

I found Mukherjee to be on weakest footing when discussing the ethical implications. He seems to be engaging in too much mood affiliation.


Medieval Europe, Chris Wickham

I was looking for a good overview of medieval history. I've learned isolated pieces here and there, but my secondary education covered exactly zero European history, so I'm lacking a broad outline. This wasn't really that book. It did a good job of describing major political themes — mainly the state capacity of the different regions — but didn't dwell on specific events. That's actually a very valuable approach, just not the one I was looking for.

One take-away: France is very fortunate to have inherited Roman roads. That gave them a big leg-up in state capacity compared to their central and eastern rivals.


The Aeronaut's Windlass, Jim Butcher

This is the first in a new series in a Victorian, pseudo-steampunk setting. Butcher is generally a fun read, and this is no exception. It's nice to see some fantasy novels that aren't in either a modern time period or a Tolkienesque medieval era.

I don't have a ton to say except that there were Aeronauts but there was no windlass. Is the title a metaphor that is going over my head, or is it just a catchy phrase without relation to the story?

Oh, also one thing in the world building got under my skin. Everyone in the story lives in these towers constructed by "the ancients" or some such, because the surface of the planet is poisonous and/or infested with ravenous hellbeasts. Each tower is a city-state, and people fly between them on airships. As a result, Butcher mentions over and over how much of a luxury resource wood is, because it's risky to go to the surface for timber. But what about all the other raw materials? Where are they getting metal? Cotton? Wool? A huge library plays a role in the story; what are they making paper out of? Ships are described with complicated rigging; what is rope made from? He mentions that meat is vat-grown and therefore rare, but what about all the other food? Why is wood singled out as the one luxury?



Waking Gods, Sylvain Neuvel

This is the sequel to Neuvel's Sleeping Giants. Very good. Told in the same style, i.e. each chapter is a diary entry, interview transcript, communication intercept, news report, etc. which reveals the story to you little by little. Points for a good story, and double points for non-standard narrative form.


The Rise and Fall of D.O.D.O., Neal Stephenson & Nicole Galland

This had much of Stephenson's cleverness without his extremely lengthy didactic digressions. I'm not sure how much of the book was Stephenson and how much was Galland, but the combination worked very well. Recommended. I'm very much hoping there will be a sequel, but it's not clear. Parts of it relating to academia and the defense/IC sectors did not quite square with what I've observed, but it's a novel about magic and supercomputers and time travel and parallel universes, so I think I can let that slide.


The Princess Bride: S. Morgenstern's Classic Tale of True Love and High Adventure, William Goldman

I love the movie, and I'm glad I finally got around to reading the book. As everyone knows, the book is almost always better than the movie. This may be an exception. Either way, they are very close in quality, perhaps because Goldman also wrote the screenplay. (He also wrote Butch Cassidy and the Sundance Kid, and I never would have guessed that both of those were written by the same person.) The only obvious parts left out of the movie were some longer character back stories, which were helpful but not necessary.

The conceit of the book is that Goldman is merely the translator/editor of a story written by the fictitious S. Morgenstern. Goldman never lets this illusion slip. The foreword, introduction, introduction to the anniversary edition, epilogue, footnotes and asides: the whole time he sticks to the notion that he's merely editing an existing book. He even weaves in true stories from his life as a screenwriter to further blur the lines. I love unreliable narrators, but this is my first experience with an unreliable author.


The Blade Itself,
Before They Are Hanged, and
Last Argument of Kings, Joe Abercrombie

I plowed through all of the "First Law" trilogy almost back-to-back-to-back. Definitely recommended.

Usually when an author has multiple point-of-view characters and rotates chapters between them, there are some story lines that are exciting and I want to get back to, and some I have to wade through to get back to the good bits. Not so here, especially in Before They Are Hanged. I also appreciated that there was not an obvious quest or goal that everyone was seeking. It was somewhat difficult to tell what the challenge for the various characters actually was. It all comes together in the end in a very satisfying way, and it was nice not having the constant score-keeping in the back of my head about "are we closer to or farther from the Ultimate Goal of destroying the MacGuffin/overthrowing the tyrant/winning the throne/whatever?"


Palimpsest: A History of the Written Word, Matthew Battles

Low on factual density. Highly stylized writing. I do give it points because the final and longest chapter, titled "Logos ex Machina," considers computer programs as a type of writing. Anything that is willing to give 10 Print a place in the history of writing is okay with me. Overall, though, there are better books on the history of books and language.


Crucial Conversations, Kerry Patterson, et al.

I read this as part of a quasi-book club at work. Some of the people at dinner said that it was difficult to get practice having these crucial conversations (i.e. high-stakes, emotionally laden ones). I suggested that there is one easy way to get lots of experience with these conversations under your belt: get married.

I'd put this into the better class of management book, in that it's worth reading but still spins twenty pages of valuable advice up to several hundred pages of content. The world would be a more efficient place if business people were willing to spend money at Hudson Books on management pamphlets instead of books.


Ilium and Olympos, Dan Simmons

Just as grand in scope and ambition as Simmons' Hyperion series, but ultimately not as good. It took well into the second book for the pieces to start to fit together, and as a result of remaining in the dark I had a hard time caring about what was going to happen next.



Seven Days in the Art World, Sarah Thornton

This was written in 2007, and by necessity it revolves a lot around the intersection of art and money. I would love to see what would have changed in a post-crash follow-up from 2009.

One chapter was a studio visit to Takashi Murakami's studio. This was an odd choice, since as the book makes clear he's a singularly weird artist, spending as he does so much of his time running a sort of branding agency. That made for interesting but unrepresentative material. I'd read a whole book composed of Thornton visiting different studios.


The Sea Peoples, S. M. Stirling

This was a letdown compared to the dozen or so volumes in the series prior. The series started out with a classic speculative fiction approach: change one thing about the world and see what happens. (Modern technology stops working; neo-feudalism rises from the ashes.) Then in later volumes more mysticism was introduced to explain why the change happened, and to give some narrative structure and reason why the Baddies were so Bad. (Chaotic gods are using them as puppets to take over the world in a proxy fight against their Good God rivals.) But this latest installment is four-fifths weird mystical fever dreams (literally) mixed up with homages to the King in Yellow (again, literally). It's off the rails. I'll still read the next volume, because I like my junk-food books and I enthusiastically commit the sunk-cost fallacy when it comes to finishing book series. But still: off the rails.


To Rule the Waves: How the British Navy Shaped the Modern World, Arthur Herman

This was a very fun history. There's plenty of fact, but Herman does a good job of writing the "action scenes" of various engagements, for lack of a better word. His style is a little too Great Man-ish for me, but nonetheless this was a good read. There's also a non-zero chance he's overselling how important his subject matter is, but I could say that about 90% of non-fiction writers, and 99% of non-fiction writers who write about rather more obscure topics.

I would read an entire book about common English idioms with nautical origins. For example, lowering the sails on a ship is "striking sail." Sailors, who were paid chronically late by the Royal Navy, would refuse to let their ships leave harbor until they were paid back wages. To disable the ships, they would strike sail. Now a mass refusal to work is a strike.

(A morale poster reading "The British Navy: Guard the Freedom of us All" used to hang on my bedroom wall when I was a kid. That is a fact I bet you are happy that you now know.)

It's a credit to Herman that I was a little emotional by the time I got to the end of the book. The Royal Navy keeps winning and winning, often against the odds, survives WWII and comes out victorious, and then is just... dismantled. It's probably the correct strategic/economic move, but that sort of unforced abdication is somewhat sad.

Of course I did grow up with a reproduction WWII-era Royal Navy morale poster on my bedroom wall, because my friend Eli brought it back from London for me, so I might be subconsciously nostalgic for the Royal Navy in a way most Americans are not.


Artemis, Andy Weir

Good, but not as good as The Martian.2 I give Weir a huge amount of credit for writing a book that grapples with why people would want to live in space in the first place. A space colony is not an economically reasonable thing to do, and I don't like it when people hand-wave that problem away.


From here down, I'm just going to list some of the books I read in the last quarter or so of 2017 that I thought were vaguely interesting. They aren't any worse than the ones above, I just don't have time to write them up and I'm sick of this post sitting in my drafts folder.

Battling the Gods: Atheism in the Ancient World, Tim Whitmarsh

Afterlife, Marcus Sakey

How to be a Stoic, Massimo Pigliucci


Potato: A History of the Propitious Esculent, John Reader

Golden Age and Other Stories, Naomi Novik

Within the Sanctuary of Wings, Marie Brennan

Alphabetical: How Every Letter Tells a Story, Michael Rosen

Besieged, Kevin Hearne

Assassin's Apprentice, Robin Hobb

Stoicism Today (Volume One), Patrick Ussher et al.

Paradox Bound, Peter Clines

Dead Men Can't Complain, Peter Clines


  1. Mukherjee also does good work in not letting us get away with thinking eugenics was something unique to the Nazis; Brits and Americans were leading members of the eugenics travesty. We should confront the ugly parts of our history, where "we" is both national groups as well as ideological ones like, in this case, progressives and High Rationalists.
  2. I feel like a lot of my reviews are "good, but not as good as their last book" (e.g. my reviews of Tregillis & Mukherjee, supra). This is probably not a terribly fair way to assess authors, but... eh. That's one way I judge books, and I think I'm not alone.

MalConv: Lessons learned from Deep Learning on executables

I don't usually write up my technical work here, mostly because I spend enough hours as is doing technical writing. But a co-author, Jon Barker, recently wrote a post on the NVIDIA Parallel For All blog about one of our papers on neural networks for detecting malware, so I thought I'd link to it here. (You can read the paper itself, "Malware Detection by Eating a Whole EXE" here.) Plus it was on the front page of Hacker News earlier this week, which is not something I thought would ever happen to my work.

Rather than rehashing everything in Jon's Parallel for All post about our work, I want to highlight some of the lessons we learned from doing this about ML/neural nets/deep learning.

By way of background, I'll lift a few paragraphs from Jon's introduction:

The paper introduces an artificial neural network trained to differentiate between benign and malicious Windows executable files with only the raw byte sequence of the executable as input. This approach has several practical advantages:

  • No hand-crafted features or knowledge of the compiler used are required. This means the trained model is generalizable and robust to natural variations in malware.
  • The computational complexity is linearly dependent on the sequence length (binary size), which means inference is fast and scalable to very large files.
  • Important sub-regions of the binary can be identified for forensic analysis.
  • This approach is also adaptable to new file formats, compilers and instruction set architectures—all we need is training data.

We also hope this paper demonstrates that malware detection from raw byte sequences has unique and challenging properties that make it a fruitful research area for the larger machine learning community.

One of the big issues we were confronting with our approach, MalConv, is that executables are often millions of bytes in length. That's orders of magnitude more time steps than most sequence processing networks deal with. Big data usually refers to lots and lots of small data points, but for us each individual sample was big. Saying this was a non-trivial problem is a serious understatement.

The MalConv architecture
Architecture of the malware detection network. (Image copyright NVIDIA.)

Here are three lessons we learned, not about malware or cybersecurity, but about the process of building neural networks on such unusual data.

1. Deep learning != image processing

The large majority of the work in deep learning has been done in the image domain. Of the remainder, the large majority has been in either text or speech. Many of the lessons, best practices, rules of thumb, etc., that we think apply to deep learning may actually be specific to these domains.

For instance, the community has settled on narrow convolutional filters stacked with a lot of depth as generally the best way to go. And for images, narrow-and-deep absolutely seems to be the correct choice. But in order to get a network that processes two million time steps to fit in memory at all (on beefy 16GB cards, no less) we were forced to go wide-and-shallow.
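Some back-of-the-envelope arithmetic shows why (the numbers below are illustrative, not our exact hyperparameters): activation memory for a narrow-and-deep stack scales with sequence length times depth, while a single wide, strided layer immediately divides the sequence length down.

```python
# Rough activation-memory arithmetic for a 2-million-byte input.
# Hypothetical layer sizes chosen for illustration only.
seq_len = 2_000_000
bytes_per_float = 4

# Narrow-and-deep: kernel 3, stride 1, 128 channels, 10 stacked layers.
# Every layer keeps a full-length activation map for backprop.
narrow_deep = seq_len * 128 * 10 * bytes_per_float

# Wide-and-shallow: one layer with kernel 500 and stride 500,
# 128 channels. The stride shrinks the sequence 500x immediately.
wide_shallow = (seq_len // 500) * 128 * bytes_per_float

print(f"narrow-and-deep : {narrow_deep / 1e9:.1f} GB of activations")
print(f"wide-and-shallow: {wide_shallow / 1e6:.1f} MB of activations")
```

Gigabytes of activations per sample, before you even get to a batch, is how the narrow-and-deep recipe dies on million-time-step inputs.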

With images, a pixel value is always a pixel value. 0x20 in a grayscale image is always darkish gray, no matter what. In an executable, byte values are ridiculously polysemous: 0x20 may be part of an instruction, a string, a bit array, a compressed or encrypted value, an address, etc. You can't interpolate between values at all, so you can't resize or crop the way you would with images to make your data set smaller or introduce data augmentation. Binaries also play havoc with locality, since you can re-arrange functions in any order, among other things. You can't rely on any Tobler's Law1 relationship the way you can in images, text, or speech.

2. BatchNorm isn't pixie dust

Batch Normalization has this bippity-boppity-boo magic quality. Just sprinkle it on top of your network architecture, and things that didn't converge before now do, and things that did converge now converge faster. It's worked like that every time I've tried it — on images. When we tried it on binaries it actually had the opposite effect: networks that converged slowly now didn't at all, no matter what variety of architecture we tried. It's also had no effect at all on some other esoteric data sets that I've worked on.

We discuss this at more length in the paper (§5.3), but here's the relevant figure:

BatchNorm activations
KDE plots of the convolution response (pre-ReLU) for multiple architectures. Red and orange: two layers of ResNet; green: Inception-v4; blue: our network; black dashed: a true Gaussian distribution for reference.

This is showing the pre-BN activations from MalConv (blue) and from ResNet (red and orange) and Inception-v4 (green). The purpose of BatchNorm is to output values with a standard normal distribution, and it implicitly expects inputs that are relatively close to that. What we suspect is happening is that the input values from other networks aren't Gaussian, but they're close-ish.2 The input values for MalConv display huge asperity, and aren't even unimodal. If BatchNorm is being wonky for you, I'd suggest plotting the pre-BN activations and checking that they're relatively smooth and unimodal.
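Here's a minimal numpy sketch of that diagnostic, with synthetic data standing in for real activations: batch normalization can standardize the mean and variance, but it can't make a bimodal input unimodal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pre-BN activations: a roughly Gaussian batch (the
# image-network case) and a strongly bimodal batch (the MalConv case).
gaussian = rng.normal(0.0, 3.0, size=10_000)
bimodal = np.concatenate([rng.normal(-8.0, 0.5, 5_000),
                          rng.normal(+8.0, 0.5, 5_000)])

def batch_norm(x, eps=1e-5):
    """What a BN layer does at train time: standardize over the batch."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

# After BN both batches have mean ~0 and variance ~1, but the bimodal
# one is still bimodal: almost no mass lands near zero, it just sits
# in two clumps around +/-1.
frac_gauss = np.mean(np.abs(batch_norm(gaussian)) < 0.5)
frac_bimodal = np.mean(np.abs(batch_norm(bimodal)) < 0.5)
print(f"mass near 0 after BN: gaussian {frac_gauss:.2f}, bimodal {frac_bimodal:.3f}")
```

If your own pre-BN histogram looks like the bimodal case, BN's standardization is fighting the data rather than helping it.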

3. The Lump of Regularization Fallacy

If you're overfitting, you probably need more regularization. Simple advice, and easily executed. Every time I see this brought up, though, people treat regularization as if it's a monolithic thing. Implicitly, people are talking as if you have some pile of regularization, and if you need to fight overfitting then you just shovel more regularization on top. It doesn't matter what kind, just add more.

We ran into overfitting problems and tried every method we could think of: weight decay, dropout, regional dropout, gradient noise, activation noise, and on and on. The only one that had any impact was DeCov, which penalizes activations in the penultimate layer that are highly correlated with each other. I have no idea what will work on your data — especially if it's not images/speech/text — so try different types. Don't just treat regularization as a single knob that you crank up or down.
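For the curious, the DeCov idea is easy to sketch: compute the batch covariance of a layer's activations and penalize everything off the diagonal. A rough numpy version (the original paper normalizes the covariance slightly differently, so treat this as a sketch):

```python
import numpy as np

def decov_penalty(h):
    """DeCov-style penalty on a (batch, features) activation matrix:
    0.5 * (||C||_F^2 - ||diag(C)||^2), where C is the covariance of
    features over the batch. Note np.cov uses the 1/(N-1) convention,
    which is close enough for illustration."""
    c = np.cov(h, rowvar=False)
    off_diag = c - np.diag(np.diag(c))
    return 0.5 * np.sum(off_diag ** 2)

rng = np.random.default_rng(0)
independent = rng.normal(size=(256, 8))
# Mix every unit from the same shared signal -> highly correlated units.
correlated = independent @ np.ones((8, 8))

print(decov_penalty(independent) < decov_penalty(correlated))  # True
```

Added to the task loss with a small weight, this pushes the hidden units toward carrying non-redundant information, which is the decorrelation effect we found useful.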

I hope some of these lessons are helpful to you if you're into cybersecurity, or pushing machine learning into new domains in general. We'll be presenting the paper this is all based on at the Artificial Intelligence for Cyber Security (AICS) workshop at AAAI in February, so if you're at AAAI then stop by and talk.


  1. Everything is related, but near things are more related than far things.
  2. I'd love to be able to quantify that closeness, but no test for normality I'm aware of applies when you have this many samples. If anyone knows of a more robust test, please let me know.

AI's "one trick pony" has a hell of a trick

The MIT Technology Review has a recent article by James Somers about error backpropagation, "Is AI Riding a One-Trick Pony?" Overall, I agree with the message in the article. We need to keep thinking of new paradigms because the SotA right now is very useful, but not correct in any rigorous way. However, as much as I agree with the thesis, I think Somers oversells it, especially in the beginning of the piece. For instance, the introductory segment concludes:

When you boil it down, AI today is deep learning, and deep learning is backprop — which is amazing, considering that backprop is more than 30 years old. It’s worth understanding how that happened—how a technique could lie in wait for so long and then cause such an explosion — because once you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one.

That's a bit like saying "When you boil it down, flight is airfoils, and airfoils are Bernoulli's principle — which is amazing, considering that Bernoulli's principle is almost 300 years old." I totally endorse the idea that we ought to understand backprop; I've spent a lot of effort in the last couple of months organizing training for some of my firm's senior leadership on neural networks, and EBP/gradient descent is the heart of my presentation. But I would be very, very careful about concluding that backprop is the entire show.
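For readers who want the one-line version of what that heart actually is: backprop is just an efficient way of computing the gradient of the loss, and gradient descent then follows that gradient downhill. A toy, hand-rolled example (one weight, no layers, purely illustrative):

```python
# Minimal gradient descent: fit w in y = w*x on toy points from y = 2x
# by repeatedly stepping against the loss gradient. In a deep network,
# backprop is the machinery that computes this same gradient for every
# weight, layer by layer.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w, lr = 0.0, 0.05
for _ in range(200):
    # dL/dw for L = mean((w*x - y)^2)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 4))  # converges to 2.0
```

Everything else in training a modern network — the architectures, the optimizers, the regularizers — is elaboration on this loop, which is exactly why it's worth understanding and exactly why it isn't the whole show.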

Backprop was also not "lying in wait." People had been working on it ever since it was introduced in 1986. The problem was that '86 was the height of the second AI winter, which lasted another decade. Just as people should understand backprop to understand contemporary AI, they should also understand AI's history. Just because no one outside of CS (and precious few people in CS, for that matter) paid any attention to neural networks before 2015 doesn't mean they were completely dormant, only to spring up fully formed in some sort of intellectual Athenian birth.

I really don't want to be in the position of defending backprop. I took the trouble to write a dissertation about non-backprop neural nets for a reason, after all.1 But I also don't want to be in the position of letting sloppy arguments against neural nets go unremarked. That road leads to people mischaracterizing Minsky and Papert, abandoning neural nets for generations, and putting us epochs behind where we might have been.2


PS This is also worth a rejoinder:

Big patterns of neural activity, if you’re a mathematician, can be captured in a vector space, with each neuron’s activity corresponding to a number, and each number to a coordinate of a really big vector. In Hinton’s view, that’s what thought is: a dance of vectors.

That's not what thought is, that's how thought can be represented. Planets are not vectors, but their orbits can be profitably described that way, because "it behooves us to place the foundations of knowledge in mathematics." I'm sorry if that seems pedantic, but the distinction between a thing and its representation—besides giving semioticians something to talk about—underpins much of our interpretation of AI systems and cognitive science as well. Indeed, a huge chunk of data science work is figuring out the right representations. If you can get that, your problem is often largely solved.3

PPS This, on the other hand, I agree with entirely:

Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way. … What we know about intelligence is nothing against the vastness of what we still don’t know.

What I fear is that people read that and conclude that artificial neural networks are built on a shallow foundation, so we should give up on them as being unreliable. A much better conclusion would be that we need to keep working and build better, deeper foundations.


  1. That reason being, roughly put, that we're pretty sure the brain is not using backprop, and it seems ill-advised to ignore the mechanisms employed by the most intelligent thing we are aware of.
  2. Plus, sloppy arguments should be eschewed on the basis of the sloppiness alone, irrespective of their consequences.
  3. IIRC both Knuth and Torvalds have aphorisms to the effect that once you have chosen the correct data structures, the correct algorithms will naturally follow. I think AI and neuroscience are dealing with a lot of friction because we haven't been able to figure out the right representations/data structures. When we do, the right learning algorithms will follow much more easily.
Posted in CS / Science / Tech / Coding

National AI Strategy

Some of my co-workers published a sponsored piece in the Atlantic calling for a national AI strategy, which was tied in to some discussions at the Washington Ideas event.

I'm 100% on board with the US having a strategy, but I want to offer one caveat: "comprehensive national strategies" are susceptible to becoming top-down, centralized plans, which I think is dangerous.

I'm generally disinclined toward centralized planning, for both efficiency and philosophical reasons. I'm not going to take the time now to explain why; I doubt anything I could scratch out here would shift people very much along any kind of Keynes-Hayek spectrum.

So why am I bothering to bring this up? Mostly because I think it would be especially ill-conceived to adopt central planning when it comes to AI. The recent progress in AI has been largely a result of abandoning top-down techniques in favor of bottom-up ones. We've abandoned hand-coded visual feature detectors for convolutional neural networks. We've abandoned human-engineered grammar models for statistical machine translation. In one discipline after another, emergent behavior has outpaced decades' worth of expert-designed techniques. To layer top-down policy-making on a field built of bottom-up science would be a waste, and an ironic one at that.


PS Having spoken to two of the three authors of this piece, I don't mean to imply that they support centralized planning of the AI industry. This is just something I would be on guard against.

Posted in Business / Economics, CS / Science / Tech / Coding

What I've Been Reading

Banana: The Fate of the Fruit That Changed the World, Dan Koeppel

Not bad. I'm a sucker for this type of history of a single commodity or common household object. It did make me want to try to get my hands on one of the few non-Cavendish cultivars of bananas that ever make their way to America.

Very short summary/background info: all of the bananas at American grocery stores are genetic clones. This leaves them very susceptible to disease. This is not a theoretical concern: the variety which was previously ubiquitous was wiped out in the middle of the 20th century, and the current variety is being decimated in many growing regions. The fact that they're sterile fruits propagated clonally also makes it extremely difficult to breed other, more resistant varieties the way we do with the majority of our produce. Also important to the story is that banana production for the US and European markets has historically been very oligopolistic, leading to some … unsavory … business practices.

(Digression: The artificial banana flavor used in candies tastes nothing like our modern Cavendish bananas, but I have heard that it is a very good match for the flavor of the Gros Michel that the Cavendish replaced. I've never been able to find definitive confirmation of that, though, and I wish the book had addressed it. This was a minor disappointment. On the other hand, I was unreasonably amused by Koeppel's tongue-in-cheek translation of "Gros Michel" as "The Big Mike".)

There is a temptation, when people write books about overlooked subjects, to swing the pendulum too hard the other way to make the audience realize how important the subject is. Koeppel mostly avoids that trap, but falls into it somewhat when discussing more recent history. There's a lot more to Latin American politics and the rise of Bolivarian Socialism than the treatment of workers on banana plantations, which Koeppel asserts as a primary cause. Similarly, he draws an analogy between the Clinton administration filing a complaint with the WTO about EU banana import duties and the active role that United Fruit played in shaping US foreign policy in Latin America between the beginning of the century and the middle of the Cold War. Both involve fruit companies in international relations, but that's where the similarities end. One is egregious cronyism; the other is a rules-based order in action.

Koeppel was on the shakiest ground towards the end of the book, when he was discussing the future of the banana market. His discussion of Fair Trade could benefit from reading Amrita Narlikar and similar critiques. I do give Koeppel much credit for his recognition of Consumer Sovereignty. If the conditions of banana growers are going to improve it won't be because Chiquita/Dole/etc. become kinder and gentler, it must be because consumers decide to spend more money buying bananas. Our stated preferences for better conditions for poor agricultural workers do not match our revealed preferences as consumers.

I also commend Koeppel for admitting that researching this book caused him to change his mind about transgenic food. He had previously been anti-GMO but became convinced genetic manipulation is the only way to save the banana crop. I do wish he had extended more of the same enthusiastic acceptance of transgenics to other crops, which he only went halfway to doing. Yes, bananas' sterility makes them somewhat of a special case, but only somewhat.

A couple of months back I read John Reader's Potato: A History of the Propitious Esculent. If you're going to pick up one book about a non-cereal staple crop (and why wouldn't you?), I liked Potato much better.


Pilot X, Tom Merritt

This is a time-travel adventure story. It seemed like it could have been a Doctor Who episode. Merritt handles the oddities that result from time travel with deftness and wit. (A couple of examples that make you stop and think. 1: "This will be the last time I meet you, but not the last time you meet me." 2: The main character spends twelve years in training for a job, but it takes only four years, because when he gets to the end of the four year period he goes back in time and starts again twice, so that there are three of him all operating in parallel.) Amusing, but not great.


The Grace of Kings, Ken Liu

I had previously read Liu's short story collection The Paper Menagerie and loved it. The Grace of Kings didn't disappoint. Highly recommended. One of the blurbs on the back cover described it as "the Wuxia version of Game of Thrones," and that pretty much covers it.

One downside to the book is that characters experience rapid changes in fortune within the span of several pages. It's a nice, fast pace — most contemporary fantasy authors would lumberingly stretch out plot points like this for scores (hundreds?) of pages — but it does rob the story of some of the potential dramatic tension. One minute I've never even considered the possibility of a character rebelling against their overlord, and then within ten minutes they've plotted their rebellion, rebelled, been suppressed, and then punished. That doesn't give me much chance to savor the possibilities of what might happen. All in all though, I prefer this pace to the prolix plodding common to so many contemporary fantasy authors. I appreciate GRRM-style world building as much as the next reader, but not every fantasy novel needs every minor character to have their entire dynastic history spelled out, complete with descriptions of their heraldry, the architecture of their family seat, their favorite meals, and their sexual peccadilloes.

I'm not actually sure 'fantasy' is the correct term for Grace of Kings, come to think of it. There's some minor divine intervention and a couple of fantastic beasts, but no outright magic. I suppose it's fantasy in a sort of Homeric way, rather than a 20th century way.

Anyway, I've got my hands on the sequel, The Wall of Storms, and will be starting it as soon as possible. Hopefully we don't have to wait too long before the third and final volume is published.


The Art of War, Sun Tzu, translated by the Denma Translation Group

I listened to this audio edition from Shambhala Press. I don't pay much attention to which publishers produce which books, but I've been quite happy with several volumes of theirs that I've bought.

I hadn't read The Art of War in probably 20 years, so this was a good refresher. The way they structured it was to first present a recording of the text being read, and then start over from the beginning with another reading, this time with interspersed commentary. That was a very good way to do it.

The text itself is short, and the commentary in this edition is good, so I'd recommend this even if you have read Art of War before.


The Map Thief, Michael Blanding

This is the story of an antiquities dealer specializing in rare maps, E. Forbes Smiley III, who turned to theft to build his inventory. I don't usually go for true crime stories, and indeed that was the least interesting aspect of this book. However, it was an interesting look at a little corner of the art/antiques market that I did not know about. There is also good information about the history of cartography, exploration, and printing. (Everyone loves to hate on the Mercator projection, but this book does a good job of explaining how revolutionarily useful Mercator's cartography was in the 16th century. His projection is a tool that is remarkably good for its intended purpose, i.e. helping navigate over long sea voyages. It shouldn't be used the way it has been (hung on every classroom wall, making choropleth infographics, etc.), but that doesn't make it a bad tool per se, just one that is misused. The fitness of a technology, just like that of a biological organism, can only be usefully evaluated in the environment it is adapted for.)
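Since the aside above praises Mercator's fitness for navigation, here's a quick sketch (my own illustration, spherical model with unit radius) of the forward Mercator projection. Its defining virtue is that a rhumb line, a course of constant compass bearing, plots as a straight line on the chart.

```python
import math

def mercator(lat_deg, lon_deg):
    """Spherical Mercator forward projection, unit radius."""
    x = math.radians(lon_deg)
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# A due-north course (constant bearing) keeps the same x coordinate,
# so it plots as a vertical straight line on the chart.
p1 = mercator(10, -30)
p2 = mercator(40, -30)
print(p1, p2)
```

The stretching of y toward the poles is exactly what preserves compass angles, and also exactly what makes the projection misleading for the area comparisons it gets misused for.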

Perhaps the most interesting part of the case for me came after Smiley confessed and the various libraries he stole from had to figure out what was missing and from whom. For normal thefts, or even art thefts, this is pretty straightforward, but the nature of the material — rare, poorly or partially catalogued, incompletely and idiosyncratically described, existing in various editions with only marginal differences, etc. — makes it quite a puzzle. Coming up with a good cataloging system for oddities like antique maps would make a good exercise for a library science/information systems/database project. (Indeed, it was only thanks to the work of a former Army intelligence analyst that things got sorted out as well as they did.) Even something superficially simple, like figuring out which copy of a printed map is which, makes for a good computer vision challenge.

There are also Game Theoretic concerns at work: libraries would benefit if they all operated together to publicize thefts in order to track down stolen materials, but it is in every individual library's interest to cover up thefts so as not to besmirch their reputation and alienate donors, who expect that materials they contribute will be kept safe. The equilibrium is not straightforward, nor is it likely to be optimal.
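That incentive structure can be sketched as a prisoner's dilemma. The payoff numbers below are my own, invented purely for illustration; only their ordering matters.

```python
# Row player's payoffs: each library chooses to Disclose ("D") a theft
# or Cover it up ("C"). Mutual disclosure aids recovery of materials;
# covering up protects one's own reputation with donors.
payoff = {("D", "D"): 3, ("D", "C"): 0, ("C", "D"): 4, ("C", "C"): 1}

def best_response(opponent_move):
    """The move that maximizes the row player's payoff."""
    return max(["D", "C"], key=lambda m: payoff[(m, opponent_move)])

# Covering up is dominant: it is the best response to either move...
assert best_response("D") == "C" and best_response("C") == "C"
# ...so (C, C) is the equilibrium, even though both libraries would
# prefer the mutual-disclosure outcome (D, D).
assert payoff[("D", "D")] > payoff[("C", "C")]
```

Whether the real game has exactly this structure depends on the actual reputational costs, which is part of why the equilibrium is not straightforward.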

Posted in Reading Lists, Reviews

Friston

Two of my favorite blogs — Slate Star Codex (topics: psychiatry, social commentary) and Marginal Revolution (topics: economics, everything else) — have both linked to Karl Friston papers in the last 24 hours. Since one of my bosses is a Friston enthusiast, and he's the only Friston devotee I've ever met, and neither of these blogs has anything to do with what I work on, this gave me a Worlds-Are-Colliding feeling.

A George divided against itself can not stand.

I haven't read either paper yet ("An aberrant precision account of autism" and "Predicting green: really radical (plant) predictive processing") but I do want to respond to SSC's commentary. Here's what he had to say:

A while ago I quoted a paper by Lawson, Rees & Friston about predictive-processing-based hypotheses of autism. They said:

This provides a simple explanation for the pronounced social-communication difficulties in autism; given that other agents are arguably the most difficult things to predict. In the complex world of social interactions, the many-to-one mappings between causes and sensory input are dramatically increased and difficult to learn; especially if one cannot contextualize the prediction errors that drive that learning.

And I was really struck by the phrase “arguably the most difficult thing to predict”. Really? People are harder to predict than, I don’t know, the weather? Weird little flying bugs? Political trends? M. Night Shyamalan movies? And of all the things about people that should be hard to predict, ordinary conversations?

I totally endorse the rest of his post, but here I need to disagree. Other people being the hardest thing to predict seems perfectly reasonable to me. The weather isn't that hard to predict decently well: just guess that the weather tomorrow will be like it is today and you'll be pretty damn accurate. Add in some basic seasonal trends — it's early summer, so tomorrow will be like today but a little warmer — and you'll get closer yet. This is obviously not perfect, but it's also not that much worse than what you can do with sophisticated meteorological modeling. Importantly, the space between the naive approach and the sophisticated approach doesn't leave a lot of room to evolve or learn better predictive ability.
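The naive baselines described above fit in a few lines (my own toy sketch, with made-up temperatures):

```python
# Hypothetical daily high temperatures.
temps = [20, 22, 21, 23, 25, 24, 26, 27, 25, 28]

def persistence(history):
    """Tomorrow will be like today."""
    return history[-1]

def persistence_with_trend(history, n_diffs=2):
    """Tomorrow = today, nudged by the average recent day-to-day change."""
    diffs = [history[i] - history[i - 1]
             for i in range(len(history) - n_diffs, len(history))]
    return history[-1] + sum(diffs) / len(diffs)

# Mean absolute error of the "same as today" forecast on this series.
errors = [abs(persistence(temps[:i]) - temps[i]) for i in range(1, len(temps))]
print("persistence MAE:", sum(errors) / len(errors))
```

Sophisticated numerical weather models do beat these baselines, but not by the enormous margins one might expect, which is the point: there was little selection pressure to evolve much beyond them.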

Weird flying bugs aren't that hard to predict either; even dumb frogs manage to catch them enough to stay alive. I'm not trying to be mean to amphibians here, but on any scale of inter-species intelligence they're pretty stupid. The space between how well a frog can predict the flight of a mosquito and how well some advanced avionics system could do so is potentially large, but there's very little to be gained by closing that predictive gap.

Political trends are hard to predict, but only because you're predicting other human agents aggregated on a much larger scale. A scale that was completely unnecessary for us to predict, I might add, until the evolutionary eye-blink of ten thousand years or so ago.

Predicting movies is easier than predicting other human agents, because dramatic entertainments — produced by humans, depicting humans — are just a subset of interacting with other human agents. If you have a good model of how other people will behave, then you also have a good model of how other people will behave when they are acting as story tellers, or when they are characters. (If characters don't conform to the audience's model of human agents at least roughly, they aren't good characters.)

Maybe a better restatement of Friston et al. would be "people are arguably the most difficult things to predict, from the domain of things we have needed to predict precisely and have any hope of predicting precisely."

Posted in Uncategorized

Will AI steal our jobs?

As an AI researcher, I think I am required to have an opinion about this. Here's what I have to say to the various tribes.

AI-pessimists: please remember that the Luddites have been wrong about technology causing economic cataclysm every time so far. We're talking about several consecutive centuries of wrongness.1 Please revise your confidence estimates downwards.

AI-optimists: please remember that just because the pessimists have always been wrong in the past does not mean that they must always be wrong in the future. It is not a natural law that the optimists must be right. That labor markets have adapted in the long term does not mean that they must adapt, to say nothing of short-term dislocations. Please revise your confidence estimates downwards.

Everyone: many forms of technology are substitutes for labor. Many forms of technology are complements to labor. Often a single form of technology is both simultaneously. It is impossible to determine a priori which effect will dominate.2 This is true of everything from the mouldboard plough to a convolutional neural network. Don't casually assert AI/ML/robots are qualitatively different. (For example, why does Bill Gates think we need a special tax on robots that is distinct from a tax on any other capital equipment?)

As always, please exercise cognitive and epistemic humility.


  1. I am aware of the work of Gregory Clark and others on Industrial Revolution-era wage and consumption stagnation. If a disaster requires complicated statistical models to provide evidence that it existed, I say it cannot have been that disastrous.
  2. Who correctly predicted that the introduction of ATMs would coincide with an increase in employment of bank tellers? Anyone? Anyone? Bueller?
Posted in Business / Economics, CS / Science / Tech / Coding