
Book List: 2018Q1

Yes, I realize it is now most of the way through the 2nd quarter of the year. Whatever. Here are the books I read in the first three months.

Sourdough, Robin Sloan

I love technology, and I love baking bread. I'm pretty much dead center in the target audience for this one, and I loved it. The protagonist is a programmer who takes up baking. It's so refreshing to read an author who has actual experience with technology. My only complaint is that the protagonist takes to baking bread so easily and flawlessly (even building her own backyard oven overnight) that it made me feel inadequate. Then I remembered that this is fiction, so I stopped moping and decided it was time to get back on the sourdough train and build up a starter again.

Actually, I do have one other complaint: how can the author of Mr. Penumbra's 24-Hour Bookstore publish a book without a colophon in it? The Walbaum face it's set in is a great choice, and it deserves to be credited as such.


Persepolis Rising, James S. A. Corey

This is another solid entry in the Expanse series. I actually felt bad when bad things happened to the antagonist, so well done to the author/s for making a sympathetic villain. (Although even calling him "villain" kind of misses the point.)


The Storm Before the Storm: The Beginning of the End of the Roman Republic, Mike Duncan

This is Duncan's first book. He's the creator of the podcast series "The History of Rome" and "Revolutions." The former was instrumental in getting me interested in podcasts ((along with EconTalk and the old archives of Car Talk)), and the latter is one of the best podcasts going.

He did a great job with this book, and it covers a severely under-reported period of history. Roman history is always Caesar, Caesar, Caesar, and I understand why the final transition from Republic to Empire gets all the top billing. But that's just the final stage. This rewinds the clock about a century to talk about what set all of that in motion. It's a valuable story by itself, but it's also a great thing to read in 2018: every time I read the news I feel a little bit more mos maiorum getting chipped away.


Vacationland: True Stories from Painful Beaches, John Hodgman

Nostalgic essays from someone who thinks nostalgia is a very toxic impulse. Good balance of humor and pathos. If you're a Judge John Hodgman listener you probably know what to expect here, because he's mentioned some of these stories before, and the book takes on a tone similar to the more heartfelt portions of his judgments. I think he overdid the self-reproach about being financially successful a little, but that wasn't a major problem.


Lesser Beasts: A Snout-to-Tail History of the Humble Pig, Mark Essig

Fascinating. I would have liked a little more on the role of pigs outside of Europe, the Middle East, and North America, but Essig says right up front that he's limited the geographic scope, so I can't fault him too much. (There is a small amount of material about East Asia.)

He presents some interesting historical background on religious taboos against pork. (It was trivial for Egyptian religious authorities to proscribe pork since it was strictly a food of the underclass anyway. He presents a thesis that it was similarly easy for Jews to outlaw pork because it was a very small part of the Levantine diet to begin with. I'm not nearly knowledgeable enough to know how this compares to the story I had heard before, whereby the prohibition was a roundabout way of avoiding trichinosis, etc., but it seems like an overlooked factor. Essig also considers the interaction between the pork-avoiding Jews and pork-loving Romans as an in-group/out-group marker. If the Romans had been ambivalent about pork, would it loom as large in the modern understanding of kosher food? I don't know; I'm way out of my element here.)

I would love to know more about why pork can be preserved so much more effectively than beef, etc. What is it about pork that makes it so amenable to drying, salting, etc.?

There was a good discussion toward the end about conditions in contemporary hog farms that was... uncomfortable. As in my review of Banana, I give Essig credit for recognizing that changing conditions will only come about if consumers choose to accept higher prices, rather than via political action.


The End of All Things, John Scalzi

I usually have a hard time giving up on series after I've started them, but I've had enough of the "Old Man's War" books at this point. This volume was heavy-handed with its theme. In addition, the structure was weak: rather than a complete novel, it's one novella and a handful of connected stories. There's nothing wrong with that per se, but combined with how thickly the moral of the story was slathered on, it made the entire book feel sloppy and lazy.

This, along with some other series I've read recently ((along with the way Disney has expanded the Star Wars series)), has led me to conclude that a high-ROI strategy for a mercenary writer is to put together four solid, popular novels and then just keep cranking out short stories set in the same universe ad infinitum. I suspect there are enough completists like me who will keep gobbling up even sub-mediocre output if it has characters and settings we know. (But I've never written any fiction, so what do I know?)


Heart of Europe: A History of the Holy Roman Empire, Peter Wilson

This is a beast of a book: long, comprehensive, and detailed. Its main theme is "everything you know is wrong, or at least much more subtle and complicated," but since I knew very little about the HRE to begin with, it took me quite a while to get a grip on this book.

The entire book is thematically rather than chronologically oriented. As much as I liked this, I think I would have liked it much more if I had a better mental model of the basic events.

This is a particularly good book to be reading in 2018. Many of our biggest political questions around the world seem to be rooted in wrestling with issues of sovereignty, so there's a lot to be learned from the mixed-sovereignty thing the HRE had going on.


Norse Mythology, Neil Gaiman

This is pretty self-recommending, and it lives up to that.

I held off reading this until I could get a physical rather than a digital copy because it is such a handsome book. The one flaw was some incorrectly set apostrophes in the otherwise beautiful Cochin typeface. (I posted some examples to Twitter, and Gaiman brought it to WW Norton's attention; they say they'll be fixing it in the future.)

I liked Gaiman's description of the magical chains used to bind Fenris:

Odin brooded and he pondered and he thought. All the wisdom of Mimir's well was his, and the wisdom he had gained from hanging from the world-tree, a sacrifice to himself. At last he called the light elf Skirnir, Frey's messenger, to his side, and he described the chain called Gleipnir. Skirnir rode his horse across the rainbow bridge to Svartalfheim, with instructions to the dwarfs for how to create a chain unlike anything ever made before.

The dwarfs listened to Skirnir describe the commission, and they shivered, and they named their price. Skirnir agreed, as he had been instructed to do by Odin, although the dwarfs' price was high. The dwarfs gathered the ingredients they would need to make Gleipnir.

These were the six things the dwarfs gathered:

For firstly, the footsteps of a cat.
For secondly, the beard of a woman.
For thirdly, the roots of a mountain.
For fourthly, the sinews of a bear.
For fifthly, the breath of a fish.
For sixth and lastly, the spittle of a bird.

Each of these things was used to make Gleipnir. (You say you have not seen these things? Of course you have not. The dwarfs used them in their crafting.)

I don't know if that's a standard explanation for how non-existent things were used, but I think it's a charming touch.


A Plague of Giants, Kevin Hearne

This is the first in Hearne's new series, "The Seven Kennings." It was good, but Hearne sort of throws the reader into the deep end at the beginning, and it was tough figuring out what this world was and how it worked. Luckily it was a long book, so there was plenty of runway to get things sorted out. Unluckily, it was a loooong book. And the whole thing was just the first act of a much longer story, with no resolution of its own. Why do fantasy authors do this? Why do we readers put up with it? Look, I enjoyed this, and will happily read the next volume, but come on. Hearne's prior "Iron Druid" series was composed of modest-length books, each of which contained a three-act adventure, and all of them fit into a larger sequence. He can clearly deliver a good story without needing thousands of pages.

(Yes, I am being grumpy about this. No, that is not fair to Hearne; he can write whatever books he damn well pleases. I would just like to be able to get some fantasy books that don't take two dozen hours to listen to even when I'm cruising through them at 2x speed.)


The Buried Book: The Loss and Rediscovery of the Great Epic of Gilgamesh, David Damrosch

I liked the way this was structured: working backwards from the translation of the Epic, to its discovery, through the development of Assyriology more generally, back to the time it was originally written, and then further back to the period when the story itself takes place.

Some of the people involved in this chain were fascinating. Besides Gilgamesh itself, it was an interesting look into the operation of the Victorian academy. I can imagine a lot of contemporary "Blue Tribe" folks being really interested in the descriptions regarding privilege (or the lack thereof, especially w.r.t. the ethnicity of Hormuzd Rassam, but also the working-class background of George Smith). I can see a lot of Red Tribe folks focusing on the up-by-their-bootstraps self-improvement that those guys pulled off via non-state-sponsored education. Both of them would be right. The interplay between those two themes is a whole discussion I don't want to get into now. What I'm going to do instead is copy out the first few lines from a poem called "Gilgamesh, Enkidu and the Nether World" that appears as an epilogue on the tablets that contain the Epic of Gilgamesh itself. It begins:

In those days, in those distant days
In those nights, in those remote nights
In those years, in those far away years

I find these lines viscerally appealing in a way I can't explain, and I think I might make them my next calligraphy project. What I'd really like to do is find out what the original cuneiform looks like and superimpose the translation on that, but I haven't been able to track it down with confidence. I've done one linocut with cuneiform before, and it would be fun to combine printing and calligraphy somehow.


The Clockwork Dynasty, Daniel Wilson

A semi-steampunk sort of mystery/adventure. Not terrible, but I wouldn't recommend it.


Dark State, Charles Stross

I'm tempted to copy-and-paste what I said about the Scalzi book above. This "Merchant Princes" series has run its course. It was a fascinating premise to begin with, but it's degenerated into a venue for the author to complain about contemporary politics with a thin veneer of action. (And I actually agree with many of the complaints Stross is making, but... it's boring.)


Why we worry about the Ethics of Machine Intelligence

This essay was co-authored by myself and Steve Mills.

We worry about the ethics of Machine Intelligence (MI) and we fear our community is completely unprepared for the power we now wield. Let us tell you why.

To be clear, we’re big believers in the far-reaching good MI can do. Every week there are new advances that will dramatically improve the world. In the past month we have seen research that could improve the way we control prosthetic devices, detect pneumonia, understand long-term patient trajectories, and monitor ocean health. That’s in the last 30 days. By the time you read this, there will be even more examples. We really do believe MI will transform the world around us for the better, which is why we are actively involved in researching and deploying new MI capabilities and products.

There is, however, a darker side. MI also has the potential to be used for evil. One illustrative example is a recent study by Stanford University researchers who developed an algorithm to predict sexual orientation from facial images. When you consider recent news of the detainment and torturing of more than 100 male homosexuals in the Russian republic of Chechnya, you quickly see the cause for concern. This software and a few cameras positioned on busy street corners will allow the targeting of homosexuals at industrial-scale – hundreds quickly become thousands. The potential for this isn’t so far-fetched. China is already using CCTV and facial recognition software to catch jaywalkers. The researchers pointed out that their findings “expose[d] a threat to the privacy and safety of gay men and women.” That disavowal does little to prevent outside groups from implementing the technology for mass targeting and persecution.

Many technologies have the potential to be applied for nefarious purposes. This is not new. What is new about MI is the scale and magnitude of impact it can achieve. This scope is what will allow it to do so much good, but also so much bad. It is like no other technology that has come before, with the notable exception of atomic weapons, a comparison others have already drawn. We hesitate to draw such a comparison for fear of perpetuating a sensationalistic narrative that distracts from this conversation about ethics. That said, it's the closest parallel we can think of in terms of the scale (potential to impact tens of millions of people) and magnitude (potential to do physical harm).

None of this is why we worry so much about the ethics of MI. We worry because MI is unique in so many ways that we are left completely unprepared to have this discussion.

Ethics is not [yet] a core commitment in the MI field. Compare this with medicine where a commitment to ethics has existed for centuries in the form of the Hippocratic Oath. Members of the physics community now pledge their intent to do no harm with their science. In other fields ethics is part of the very ethos. Not so with MI. Compared to other disciplines the field is so young we haven’t had time to mature and learn lessons from the past. We must look to these other fields and their hard-earned lessons to guide our own behavior.

Computer scientists and mathematicians have never before wielded this kind of power. The atomic bomb is one exception; cyber weapons may be another. Both of these, however, represent intentional applications of technology.  While the public was unaware of the Manhattan Project, the scientists involved knew the goal and made an informed decision to take part. The Stanford study described earlier has clear nefarious applications; many other research efforts in MI may not. Researchers run the risk of unwittingly conducting studies that have applications they never envisioned and do not condone. Furthermore, research into atomic weapons could only be implemented by a small number of nation-states with access to proper materials and expertise. Contrast that with MI, where a reasonably talented coder who has taken some open source machine learning classes can easily implement and effectively ‘weaponize’ published techniques. Within our field, we have never had to worry about this degree of power to do harm. We must reset our thinking and approach our work with a new degree of rigor, humility, and caution.

Ethical oversight bodies from other scientific fields seem ill-prepared for MI. Looking to existing ethical oversight bodies is a logical approach. Even we suggested that MI is a “grand experiment on all of humanity” and should follow principles borrowed from human subject research. The fact that Stanford’s Institutional Review Board (IRB), a respected body within the research community, reviewed and approved research with questionable applications should give us all pause. Researchers have long raised questions about the broken IRB system. An IRB system designed to protect the interests of study participants may be unsuited for situations in which potential harm accrues not to the subjects but to society at large. It’s clear that the standards that have served other scientific fields for decades or even centuries may not be prepared for MI’s unique data and technology issues. These challenges are compounded even further by the general lack of MI expertise, or sometimes even technology expertise, among the members of these boards. We should continue to work with existing oversight bodies, but we must also take an active role in educating them and evolving their thinking towards MI.

MI ethical concerns are often not obvious. This differs dramatically from other scientific fields where ethical dilemmas are self-evident. That’s not to say they are easy to navigate. A recent story about an unconscious emergency room patient with a “Do Not Resuscitate” tattoo is a perfect example. Medical staff had to decide whether they should administer life-saving treatment despite the presence of the tattoo. They were faced with a very complex, but very obvious, ethical dilemma. The same is rarely true in MI where unintended consequences may not be immediately apparent and issues like bias can be hidden in complex algorithms. We have a responsibility to ourselves and our peers to be on the lookout for ethical issues and raise concerns as soon as they emerge.  

MI technology is moving faster than our approach to ethics. Other scientific fields have had hundreds of years for their approach to ethics to evolve alongside the science. MI is still nascent, yet we are already moving technology from the ‘lab’ to full deployment. The speed at which that transition is happening has led to notable ethical issues, including potential racism in criminal sentencing and discrimination in job hiring. The ethics of MI needs to be studied as much as the core technology if we ever hope to catch up and avoid these issues in the future. We need to catalyze an ongoing conversation around ethics, much as we see in other fields like medicine, where there is active research and discussion within the community.

The issue that looms behind all of this, however, is the fact that we can’t ‘put the genie back in the bottle’ once it has been released. We can’t undo the Stanford research now that it’s been published. As a community, we will forever be accountable for the technology that we create.

In the age of MI, corporate and personal values take on entirely new importance. We have to decide what we stand for and use that as a measure to evaluate our decisions. We can’t wait for issues to present themselves. We must be proactive and think in hypotheticals to anticipate the situations we will inevitably face.

Be assured that every organization will be faced with hard choices related to MI. Choices that could hurt the bottom line or, worse, harm the well-being of people now or in the future. We will need to decide, for example, if and how we want to be involved in Government efforts to vet immigrants or create technology that could ultimately help hackers. If we fail to accept that these choices inevitably exist, we run the risk of compromising our values. We need to stand strong in our beliefs and live the values we espouse for ourselves, our organizations, and our field of study. Ethics, like many things, is a slippery slope. Compromising once almost always leads to compromising again.

We must also recognize that the values of others may not mirror our own. We should approach those situations without prejudice. Instead of anger or defensiveness we should use them as an opportunity to have a meaningful dialog around ethics and values. When others raise concerns about our own actions, we must approach those conversations with humility and civility. Only then can we move forward as a community.

Machines are neither moral nor immoral. We must work together to ensure they behave in a way that benefits, not harms, humanity. We don’t purport to have the answers to these complex issues. We simply request that you keep asking questions and take part in the discussion.


This has been crossposted to Medium and to the Booz Allen website as well.

We’re not the only one discussing these issues. Check out this Medium post by the NSF-Funded group Pervasive Data Ethics for Computational Research, Kate Crawford’s amazing NIPS keynote, Mustafa Suleyman’s recent essay in Wired UK, and Bryor Snefjella’s recent piece in BuzzFeed.


AIES 2018

Last week I attended the first annual conference on AI, Ethics & Society where I presented some work on a Decision Tree/Random Forest algorithm that makes decisions that are less biased or discriminatory. ((In the colloquial rather than technical sense)) You can read all the juicy details in our paper. This isn't a summary of our paper, although that blog post is coming soon. Instead I want to use this space to post some reaction to the conference itself. I was going to put this on a twitter thread, but it quickly grew out of control. So, in no particular order, here goes nothing:

Many of the talks people gave were applicable to GOFAI but don't fit with contemporary approaches. Approaches to improving/limiting/regulating/policing rule-based or expert systems won't work well (if at all) with emergent systems.

Many, many people are making the mistake of thinking that all machine learning is black box. Decision trees are ML but also some of the most transparent models possible. Everyone involved in this AI ethics discussion should learn a rudimentary taxonomy of AI systems. It would avoid mistakes and conflations like this, and it would take maybe an hour of time.
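
As a trivial illustration of that transparency, here's a scikit-learn snippet (using the toy iris data, nothing to do with our AIES paper) that trains a small decision tree and prints it as plain if/then rules you can read directly:

# Minimal illustration that a decision tree is inspectable, not a black box.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# Prints the learned splits as nested "feature <= threshold" rules.
print(export_text(tree, feature_names=load_iris().feature_names))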

Now that I think of it, it would be great if next year's program included some tutorials. A crash course in AI taxonomy would be useful, as would a walk-through of what an AI programmer does day-to-day. (I think it would help people to understand what kinds of control we can have over AI behavior if they knew a little more about what goes into getting any sort of behavior at all.) I'd be interested in some lessons on liability law and engineering, or how standards organizations operate.

Lots of people are letting the perfect be the enemy of the good. I heard plenty of complaints about solutions that alleviate problems but don't eliminate them completely, or work in a majority of situations but don't cover every possible sub-case.

Some of that was the standard posturing that happens at academic conferences ("well, sure, but have you ever thought of this??!") but that's a poor excuse for this kind of gotcha-ism.

Any academic conference has people who ask questions to show off how intelligent they are. This one had the added scourge of people asking questions to show off how intelligent and righteous they are. If ever there was a time to enforce concise Q&A rules, this is it.

We’re starting from near scratch here and working on a big problem. Adding any new tool to the toolbox should be welcome. Taking any small step towards the goal should be welcome.

People were in that room because they care about these problems. I heard too much grumbly backbiting about presenters that care about ethics, but don't care about it exactly the right way.

We can solve problems, or we can enforce orthodoxy, but I doubt we can do both.

It didn't occur to me at the time, but in retrospect I'm surprised how circumscribed the ethical scenarios being discussed were. There was very little talk of privacy, for instance, and not much about social networks/filter bubbles/"fake news"/etc. that has been such a part of the zeitgeist.

Speaking of zeitgeist, I didn't have to hear the word "blockchain" even one single time, for which I am thankful.

If I had to give a rough breakdown of topics, it would be 30% AV/trolley problems, 20% discrimination, 45% meta-discussion, and 5% everything else.

One questioner brought up Jonathan Haidt's Moral Foundations Theory at the very end of the last day. I think he slightly misinterpreted Haidt (but I'm not sure since the questioner was laudably concise), but I was waiting all weekend for someone to bring him up at all.

If any audience would recognize the difference between “bias” in the colloquial sense and “bias” in the technical, ML/stats sense, I would have hoped it was here. No such luck. This wasn't a huge problem in practice, but it’s still annoying.

There’s a ton of hand-waving about how many of the policies being proposed for ethical AI will actually work at the implementation level. “Hand-waving” is even too generous of a term. It’s one thing to propose rules, but how do you make that work when fingers are hitting keyboards?

I’ll give people some slack here because most talks were very short, but “we’ll figure out what we want, and then tell the engineers to go make it happen somehow” is not really a plan. The plan needs to be grounded in what's possible starting at its conception, not left as an implementation detail for the technicians to figure out later.

"We'll figure out what to do, and then tell the geeks to do it" is not an effective plan. One of the ways it can fail is because it is tinged with elitism. (I don't think participants intended to be elitist, but that's how some of these talks could be read.) I fully endorse working with experts in ethics, sociology, law, psychology, etc. But if the technicians involved interpret what those experts say — accurately or not — as "we, the appointed high priesthood of ethics, will tell you, the dirty code morlocks, what right and wrong is, and you will make our vision reality" then the technicians will not be well inclined to listen to those experts.

Everyone wants to 'Do The Right Thing'. Let's work together to help each other do that and refrain as much as possible from pointing fingers at people who are 'Doing It Wrong.' Berating people who have fallen short of your ethical standards — even those who have fallen way, way short — feels immensely satisfying and is a reliable way to solidify your in-group, but it's not productive in the long run. That doesn't mean we need to equivocate or let people off the hook for substandard behavior, but it does mean that the response should be to lead people away from their errors as much as possible rather than punishing for the sake of punishing.

I wish the policy & philosophy people here knew more about how AI is actually created.

(I’m sure the non-tech people wish I knew more about how moral philosophy, law, etc. works.)

Nonetheless, engineers are going to keep building AI systems whether or not philosophers etc. get on board. If the latter want to help drive development, there is some onus on them to better learn the lay of the land. That’s not just, but they have the weaker bargaining position, so I think that's how things will have to be.

Of course I'm an engineer, so this is admittedly a self-serving opinion. I still think it's accurate though.

Even if every corporation, university, and government lab stopped working on AI because of ethical concerns, the research would slow but not stop. I can not emphasize enough how low the barriers to entry in this space are. Anyone with access to arXiv, github, and a $2000 gaming computer or some AWS credits can get in the game.

I was always happy to hear participants recognize that while AI decision making can be unethical/amoral, human decision making is also often terrible. It’s not enough to say the machine is bad if you don’t ask “bad compared to what alternative?”. Analyze on the right margin! Okay, the AI recidivism model has non-zero bias. How biased is the parole board? Don't compare real machines to ideal humans.

Similarly, don't compare real-world AI systems with ideal regulations or standards. Consider how regulations will end up in the real world. Say what you will about the Public Choice folks, but their central axiom is hard to dispute: actors in the public sector aren't angels either.

One poster explicitly mentioned Hume and the Induction Problem, which I would love to see taught in all Data Science classes.

Several commenters brought up the very important point that datasets are not reality. This map-is-not-the-territory point also deserves to be repeated in every Data Science classroom far more often.

That said, I still put more trust in quantitative analysis over qualitative. But let's be humble. A data set is not the world, it is a lens with which we view the world, and with it we see but through a glass darkly.

I'm afraid that overall this post makes me seem much more negative on AIES than I really am. Complaining is easier than complimenting. Sorry. I think this was a good conference full of good people trying to do a good job. It was also a very friendly crowd, so as someone with a not insignificant amount of social anxiety, thank you to all the attendees.


Some brief book reviews to close 2017

A Wild Swan, Michael Cunningham

I would have thought we'd saturated the "modern re-tellings of fairy tales, but for adults" genre, but this was supremely good. The stories reminded me of Garrison Keillor in the way that some sadness or loss was mixed in without them being outright tragic.

(I've had this post sitting in my drafts for a very long time. How long? Since well before we all found out Keillor was a creep. So... I guess I'll amend the above to "it reminds me of pre-2017 Garrison Keillor"? It's been about 15 years since I read any of his stories, so maybe I should just scrap this reference altogether? Screw it.)


The View from the Cheap Seats, Neil Gaiman

A collection of non-fiction pieces: essays, transcripts of award speeches, introductions, forewords, etc. Some felt dated, but most I can safely call "timeless." Many of them did make me want to go read the various books or authors he was commenting on (e.g. Jeff Smith, Samuel R. Delany, Fritz Leiber, Dunsany), which seems like as good a thing as can be said about an introduction to a book. The final piece is a memorial to his friend and collaborator, Terry Pratchett, titled "A Slip of the Keyboard." It is definitely worth reading, especially for Pratchett fans.


The Liberation, Ian Tregillis

This is the conclusion to Tregillis' "Mechanicals" trilogy. I found the whole series good, but not nearly as good as his "Bitter Seeds" series. "Bitter Seeds" had intricately woven plot lines, subtle foreshadowing, and epic emotional highs and lows. "Mechanicals" was good, but had little of that finesse.

"Mechanicals" is focused on free will and robots. It's an interesting concept, and a good way of using sci-fi to explore ideas. (Which, I suppose, is why it's been done plenty of times.) If I was a writer, I would like to do a similar story about robots, but instead of free will it would be about depression. Inside Out had one of the better depictions of depressions I've seen on screen. Depression — in my experience — isn't just regular sadness turned up to eleven. It's feeling nothing at all. Mechanical androids seem like a perfect vehicle to explore that. Instead of robots fighting to be able to act on their own preferences or desires or motivation, they would be fighting to be able to have preferences or desires or motivations in the first place.


The Gene: An Intimate History, Siddhartha Mukherjee

Also not as good as his previous work, The Emperor of All Maladies: A Biography of Cancer, but still very, very good. As in Emperor of All Maladies, Mukherjee does a great job of blending history, science, and his own personal experiences.

I did not appreciate before reading this exactly how quickly the concept of genetics has grown. The hundred years following Darwin's work in the 1850s and Mendel's in the 1850s and 1860s were head-spinningly prolific. I had also not considered that eugenics was at the very forefront of applied genetics. I had thought of eugenics as a weird sideline (indeed, I wish it had been), but according to Mukherjee's telling it was at the very center of genetics from its infancy. ((Mukherjee also does good work in not letting us get away with thinking eugenics was something unique to the Nazis; Brits and Americans were leading members of the eugenics travesty. We should confront the ugly parts of our history, where "we" is both national groups as well as ideological ones like, in this case, progressives and High Rationalists.))

Mukherjee's discussion of penetrance (the way specific genes only affect people in probabilistic ways) was very good. I wish this concept was more widely appreciated, as compared to the binary "you have a mutation or you don't" level of understanding that is common.

Mukherjee also hammers home the idea that a mutation can not be judged to be good or bad by itself, but must be evaluated in the context of a given organism in a given environment. This is important for genetics, but important much more broadly. In my own work I've had to explain many times that certain behaviors of a neural network can not be judged in isolation. They can only be evaluated in the context of the data sets they're operating on and the tasks they're being asked to do.

I found Mukherjee to be on weakest footing when discussing the ethical implications. He seems to be engaging in too much mood affiliation.


Medieval Europe, Chris Wickham

I was looking for a good overview of medieval history. I've learned isolated pieces here and there, but my secondary education covered exactly zero European history, so I'm lacking a broad outline. This wasn't really that book. It did a good job of describing major political themes but didn't mention any specific events. The focus was mainly on the state capacity of the different regions, which I actually think is a very valuable approach, just not the one I expected.

One take-away: France is very fortunate to have inherited Roman roads. That gave them a big leg-up in state capacity compared to their central and eastern rivals.


The Aeronaut's Windlass, Jim Butcher

This is the first in a new series in a Victorian, pseudo-steampunk setting. Butcher is generally a fun read, and this is no exception. It's nice to see some fantasy novels that aren't set in either a modern time period or a Tolkienesque medieval era.

I don't have a ton to say except that there were Aeronauts but there was no windlass. Is the title a metaphor that is going over my head, or is it just a catchy phrase without relation to the story?

Oh, also one thing in the world-building got under my skin. Everyone in the story lives in these towers constructed by "the ancients" or some such, because the surface of the planet is poisonous and/or infested with ravenous hellbeasts. Each tower is a city-state, and people fly between them on airships. As a result, Butcher mentions over and over how much of a luxury resource wood is, because it's risky to go to the surface for timber. But what about all the other raw materials? Where are they getting metal? Cotton? Wool? A huge library plays a role in the story; what are they making paper out of? Ships are described with complicated rigging; what is the rope made from? He mentions that meat is vat-grown and therefore rare, but what about all the other food? Why is wood singled out as the one luxury?


Waking Gods, Sylvain Neuvel

This is the sequel to Neuvel's Sleeping Giants. Very good. Told in the same style, i.e. each chapter is a diary entry, interview transcript, communication intercept, news report, etc. which reveals the story to you little by little. Points for a good story, and double points for non-standard narrative form.


The Rise and Fall of D.O.D.O., Neal Stephenson & Nicole Galland

This had much of Stephenson's cleverness without his extremely lengthy didactic digressions. I'm not sure how much of the book was Stephenson and how much was Galland, but the combination worked very well. Recommended. I'm very much hoping there will be a sequel, but it's not clear. Parts of it relating to academia and the defense/IC sectors did not quite square with what I've observed, but it's a novel about magic and supercomputers and time travel and parallel universes, so I think I can let that slide.


The Princess Bride: S. Morgenstern's Classic Tale of True Love and High Adventure, William Goldman

I love the movie, and I'm glad I finally got around to reading the book. As everyone knows, the book is almost always better than the movie. This may be an exception. Either way, they are very close in quality, perhaps because Goldman also wrote the screenplay. (He also wrote Butch Cassidy and the Sundance Kid, and I never would have guessed that both of those were written by the same person.) The only obvious parts left out of the movie were some longer character backstories, which were helpful but not necessary.

The conceit of the book is that Goldman is merely the translator/editor of a story written by the fictitious S. Morgenstern. Goldman never lets this illusion slip. The foreword, introduction, introduction to the anniversary edition, epilogue, footnotes, and asides: the whole time he sticks to the notion that he's merely editing an existing book. He even weaves in true stories from his life as a screenwriter to further blur the lines. I love unreliable narrators, but this is my first experience with an unreliable author.


The Blade Itself,
Before They Are Hanged, and
Last Argument of Kings, Joe Abercrombie

I plowed through all of the "First Law" trilogy almost back-to-back-to-back. Definitely recommended.

Usually when an author has multiple point-of-view characters and rotates chapters between them there are some story lines that are exciting and I want to get back to, and some I have to wade through to get back to the good bits. Not so here, especially in Before They Are Hanged. I also appreciated that there was not an obvious quest or goal that everyone was seeking. It was somewhat difficult to tell what the challenge for the various characters actually was. It all comes together in the end in a very satisfying way, but it was nice not having the constant score-keeping in the back of my head about "are we closer or farther from the Ultimate Goal of destroying the mcguffin/overthrowing the tyrant/winning the throne/whatever?"


Palimpsest: A History of the Written Word, Matthew Battles

Low on factual density. Highly stylized writing. I do give it points because the final and longest chapter, titled "Logos ex Machina," considers computer programs as a type of writing. Anything that is willing to give 10 Print a place in the history of writing is okay with me. Overall, there are better books on the history of books and language.


Crucial Conversations, Kerry Patterson, et al.

I read this as part of a quasi-book club at work. Some of the people at dinner said that it was difficult to get practice having these crucial conversations (i.e., high-stakes, emotionally laden ones). I suggested that there is one easy way to get lots of experience with these conversations under your belt: get married.

I'd put this into the better class of management book, in that it's worth reading but still spins twenty pages of valuable advice into several hundred pages of content. The world would be a more efficient place if business people were willing to spend money at Hudson Books on management pamphlets instead of books.


Olympos and Ilium, Dan Simmons

Just as grand in scope and ambition as Simmons' Hyperion series, but ultimately not as good. It took well into the second book for the pieces to start to fit together, and as a result of remaining in the dark I had a hard time caring about what was going to happen next.


Seven Days in the Art World, Sarah Thornton

This was written in 2007, and revolves a lot, by necessity, around the intersection of art and money. I would love to see what would have changed if there had been a post-crash follow-up from 2009.

One chapter was a visit to Takashi Murakami's studio. This was an odd choice since, as the book makes clear, he's a singularly weird artist: he spends so much of his time running a sort of branding agency. That made for interesting but unrepresentative material. I'd read a whole book composed of Thornton visiting different studios.


The Sea Peoples, S. M. Stirling

This was a let down compared to the dozen or so volumes in the series prior. The series started out with a classic speculative fiction approach: change one thing about the world and see what happens. (Modern technology stops working; neo-feudalism rises from the ashes.) Then in later volumes more mysticism was introduced to explain why the change happened, and to give some narrative structure and reason why the Baddies were so Bad. (Chaotic gods are using them as puppets to take over the world in a proxy fight against their Good God rivals.) But this latest installment is four fifths weird mystical fever dreams (literally) mixed up with homages to the King in Yellow (again, literally). It's off the rails. I'll still read the next volume, because I like my junkfood books and I enthusiastically commit the sunk costs fallacy when it comes to finishing book series. But still: off the rails.


To Rule the Waves: How the British Navy Shaped the Modern World, Arthur Herman

This was a very fun history. There's plenty of fact, but Herman does a good job of writing the "action scenes" of various engagements, for lack of a better word. His style is a little too Great Man-ish for me, but nonetheless this was a good read. There's also a non-zero chance he's overselling how important his subject matter is, but I could say that about 90% of non-fiction writers, and 99% of non-fiction writers who write about rather more obscure topics.

I would read an entire book about common English idioms with nautical origins. For example, lowering the sails on a ship is "striking sail." Sailors, who were paid chronically late by the Royal Navy, would refuse to let their ships leave harbor until they were paid back wages. To disable the ships, they would strike sail. Now a mass refusal to work is a strike.

The British Navy: Guard the Freedom of us All
I used to have this on my bedroom wall when I was a kid. That is a fact I bet you are happy that you now know.

It's a credit to Herman that I was a little emotional by the time I got to the end of the book. The Royal Navy keeps winning and winning, often against the odds, survives WWII and comes out victorious, and then is just... dismantled. It's probably the correct strategic/economic move, but that sort of unforced abdication is somewhat sad.

Of course I did grow up with a reproduction WWII-era Royal Navy morale poster on my bedroom wall, because my friend Eli brought it back from London for me, so I might be subconsciously nostalgic for the Royal Navy in a way most Americans are not.


Artemis, Andy Weir

Good, but not as good as The Martian. ((I feel like a lot of my reviews are "good, but not as good as their last book" (e.g. my reviews of Tregillis & Mukherjee, supra). This is probably not a terribly fair way to assess authors, but... eh. That's one way I judge books, and I think I'm not alone.)) I give Weir a huge amount of credit for writing a book that grapples with why people would want to live in space in the first place. A space colony is not an economically reasonable thing to do, and I don't like it when people hand-wave that problem away.


From here down, I'm just going to list some of the books I read in the last quarter or so of 2017 that I thought were vaguely interesting. They aren't any worse than the ones above, I just don't have time to write them up and I'm sick of this post sitting in my drafts folder.

Battling the Gods: Atheism in the Ancient World, Tim Whitmarsh

Afterlife, Marcus Sakey

How to be a Stoic, Massimo Pigliucci

Potato: A History of the Propitious Esculent, John Reader

Golden Age and Other Stories, Naomi Novik

Within the Sanctuary of Wings, Marie Brennan

Alphabetical: How Every Letter Tells a Story, Michael Rosen

Besieged, Kevin Hearne

Assassin's Apprentice, Robin Hobb

Stoicism Today (Volume One), Patrick Ussher et al.

Paradox Bound, Peter Clines

Dead Men Can't Complain, Peter Clines


MalConv: Lessons learned from Deep Learning on executables

I don't usually write up my technical work here, mostly because I spend enough hours as is doing technical writing. But a co-author, Jon Barker, recently wrote a post on the NVIDIA Parallel For All blog about one of our papers on neural networks for detecting malware, so I thought I'd link to it here. (You can read the paper itself, "Malware Detection by Eating a Whole EXE" here.) Plus it was on the front page of Hacker News earlier this week, which is not something I thought would ever happen to my work.

Rather than rehashing everything in Jon's Parallel for All post about our work, I want to highlight some of the lessons we learned from doing this about ML/neural nets/deep learning.

By way of background, I'll lift a few paragraphs from Jon's introduction:

The paper introduces an artificial neural network trained to differentiate between benign and malicious Windows executable files with only the raw byte sequence of the executable as input. This approach has several practical advantages:

  • No hand-crafted features or knowledge of the compiler used are required. This means the trained model is generalizable and robust to natural variations in malware.
  • The computational complexity is linearly dependent on the sequence length (binary size), which means inference is fast and scalable to very large files.
  • Important sub-regions of the binary can be identified for forensic analysis.
  • This approach is also adaptable to new file formats, compilers and instruction set architectures—all we need is training data.

We also hope this paper demonstrates that malware detection from raw byte sequences has unique and challenging properties that make it a fruitful research area for the larger machine learning community.

One of the big issues we were confronting with our approach, MalConv, is that executables are often millions of bytes in length. That's orders of magnitude more time steps than most sequence processing networks deal with. Big data usually refers to lots and lots of small data points, but for us each individual sample was big. Saying this was a non-trivial problem is a serious understatement.

Architecture of the MalConv malware detection network. (Image copyright NVIDIA.)

Here are three lessons we learned, not about malware or cybersecurity, but about the process of building neural networks on such unusual data.

1. Deep learning != image processing

The large majority of the work in deep learning has been done in the image domain. Of the remainder, the large majority has been in either text or speech. Many of the lessons, best practices, rules of thumb, etc., that we think apply to deep learning may actually be specific to these domains.

For instance, the community has settled on narrow convolutional filters, stacked with a lot of depth, as generally the best way to go. And for images, narrow-and-deep absolutely seems to be the correct choice. But in order to get a network that processes two million time steps to fit in memory at all (on beefy 16GB cards, no less) we were forced to go wide-and-shallow.
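
To make "wide-and-shallow" concrete, here's a minimal PyTorch sketch of the general shape: embed raw bytes, run a single gated convolution with a very large kernel and stride, and global-max-pool over the whole sequence. The layer sizes and kernel width below are illustrative placeholders, not the exact hyperparameters from our paper.

import torch
import torch.nn as nn

class WideShallowByteConv(nn.Module):
    """Toy wide-and-shallow 1D conv over raw bytes (illustrative only)."""
    def __init__(self, embed_dim=8, channels=128, kernel=512, stride=512):
        super().__init__()
        self.embed = nn.Embedding(257, embed_dim)        # 256 byte values + a padding index
        self.conv = nn.Conv1d(embed_dim, channels, kernel, stride=stride)
        self.gate = nn.Conv1d(embed_dim, channels, kernel, stride=stride)
        self.fc = nn.Linear(channels, 1)                 # benign vs. malicious logit

    def forward(self, x):                                # x: (batch, seq_len) byte ids
        e = self.embed(x).transpose(1, 2)                # (batch, embed_dim, seq_len)
        h = torch.sigmoid(self.gate(e)) * self.conv(e)   # gated convolution
        h = torch.max(h, dim=2).values                   # global max pool over time
        return self.fc(h)

model = WideShallowByteConv()
fake_bytes = torch.randint(0, 257, (1, 2_000_000))       # one ~2MB stand-in "file"
print(model(fake_bytes).shape)                           # torch.Size([1, 1])

Note how shallow that is: one convolutional layer covers the entire two-million-step sequence, which is what keeps the activations small enough to fit on a single card.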

With images, a pixel value is always a pixel value. 0x20 in a grayscale image is always darkish gray, no matter what. In an executable, byte values are ridiculously polysemous: 0x20 may be part of an instruction, a string, a bit array, a compressed or encrypted value, an address, etc. You can't interpolate between values at all, so you can't resize or crop the way you would with images to make your data set smaller or introduce data augmentation. Binaries also play havoc with locality, since you can re-arrange functions in any order, among other things. You can't rely on any Tobler's Law ((Everything is related, but near things are more related than far things.)) relationship the way you can in images, text, or speech.

2. BatchNorm isn't pixie dust

Batch Normalization has this bippity-boppity-boo magic quality. Just sprinkle it on top of your network architecture, and things that didn't converge before now do, and things that did converge now converge faster. It's worked like that every time I've tried it — on images. When we tried it on binaries it actually had the opposite effect: networks that converged slowly now didn't at all, no matter what variety of architecture we tried. It's also had no effect at all on some other esoteric data sets that I've worked on.

We discuss this at more length in the paper (§5.3), but here's the relevant figure:

KDE plots of the convolution response (pre-ReLU) for multiple architectures. Red and orange: two layers of ResNet; green: Inception-v4; blue: our network; black dashed: a true Gaussian distribution for reference.

This is showing the pre-BN activations from MalConv (blue) and from ResNet (red & orange) and Inception-v4 (green). The purpose of BatchNorm is to output values in a standard normal, and it implicitly expects inputs that are relatively close to that. What we suspect is happening is that the input values from other networks aren't gaussian, but they're close-ish. ((I'd love to be able to quantify that closeness, but every test for normality I'm aware of doesn't apply when you have this many samples. If anyone knows of a more robust test please let me know.)) The input values for MalConv display huge asperity, and aren't even unimodal. If BatchNorm is being wonky for you, I'd suggest plotting the pre-BN activations and checking to see that they're relatively smooth and unimodal.
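
If you want to run that check yourself, one way (a quick sketch with a stand-in model and random data, not anything from our experiments) is to grab the inputs to the BatchNorm layer with a forward pre-hook and plot a KDE:

import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

captured = []

def grab_inputs(module, inputs):
    # Forward pre-hook: fires with the layer's inputs before BatchNorm runs.
    captured.append(inputs[0].detach().flatten().cpu().numpy())

# Stand-in model: swap in your own network and hook its BatchNorm layer.
model = nn.Sequential(nn.Conv1d(8, 128, 512, stride=512), nn.BatchNorm1d(128))
handle = model[1].register_forward_pre_hook(grab_inputs)

batch = torch.randn(4, 8, 4096)        # stand-in data
model(batch)
handle.remove()

acts = np.concatenate(captured)
xs = np.linspace(acts.min(), acts.max(), 500)
plt.plot(xs, gaussian_kde(acts)(xs))   # smooth, unimodal, roughly Gaussian is what you hope to see
plt.title("Pre-BatchNorm activation density")
plt.show()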

3. The Lump of Regularization Fallacy

If you're overfitting, you probably need more regularization. Simple advice, and easily executed. Every time I see this brought up, though, people treat regularization as if it's a monolithic thing. Implicitly, people talk as if you have some pile of regularization, and if you need to fight overfitting then you just shovel more regularization on top. It doesn't matter what kind, just add more.

We ran into overfitting problems and tried every method we could think of: weight decay, dropout, regional dropout, gradient noise, activation noise, and on and on. The only one that had any impact was DeCov, which penalizes activations in the penultimate layer that are highly correlated with each other. I have no idea what will work on your data — especially if it's not images/speech/text — so try different types. Don't just treat regularization as a single knob that you crank up or down.
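
For reference, here's roughly what a DeCov-style penalty looks like (a sketch following Cogswell et al.; the 0.1 weight below is an arbitrary placeholder, not a recommended value):

import torch

def decov_penalty(h):
    """DeCov-style penalty: punish covariance between different hidden units,
    computed over the batch. h: (batch, features) penultimate-layer activations."""
    centered = h - h.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / h.size(0)            # (features, features) covariance
    off_diag = cov - torch.diag(torch.diagonal(cov))     # zero out the variances
    return 0.5 * (off_diag ** 2).sum()

# Usage sketch: add it to your task loss with a small weight.
h = torch.randn(32, 128, requires_grad=True)
loss = 0.1 * decov_penalty(h)
loss.backward()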

I hope some of these lessons are helpful to you if you're into cybersecurity, or pushing machine learning into new domains in general. We'll be presenting the paper this is all based on at the Artificial Intelligence for Cyber Security (AICS) workshop at AAAI in February, so if you're at AAAI then stop by and talk.


AI's "one trick pony" has a hell of a trick

The MIT Technology Review has a recent article by James Somers about error backpropagation, "Is AI Riding a One-Trick Pony?" Overall, I agree with the message in the article. We need to keep thinking of new paradigms because the SotA right now is very useful, but not correct in any rigorous way. However, as much as I agree with the thesis, I think Somers oversells it, especially in the beginning of the piece. For instance, the introductory segment concludes:

When you boil it down, AI today is deep learning, and deep learning is backprop — which is amazing, considering that backprop is more than 30 years old. It’s worth understanding how that happened—how a technique could lie in wait for so long and then cause such an explosion — because once you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one.

That's a bit like saying "When you boil it down, flight is airfoils, and airfoils are Bernoulli's principle — which is amazing, considering that Bernoulli's principle is almost 300 years old." I totally endorse the idea that we ought to understand backprop; I've spent a lot of effort in the last couple of months organizing training for some of my firm's senior leadership on neural networks, and EBP/gradient descent is the heart of my presentation. But I would be very, very careful about concluding that backprop is the entire show.

Backprop was also not "lying in wait." People had been working on it since it was introduced in 1986. The problem was that '86 was the height of the 2nd AI winter, which lasted another decade. Just like people should understand backprop to understand contemporary AI, they should learn about the history of AI to understand contemporary AI. Just because no one outside of CS (and precious few people in CS, for that matter) paid any attention to neural networks before 2015 doesn't mean they were completely dormant, only to spring up fully formed in some sort of intellectual Athenian birth.

I really don't want to be in the position of defending backprop. I took the trouble to write a dissertation about non-backprop neural nets for a reason, after all. ((That reason being, roughly put, that we're pretty sure the brain is not using backprop, and it seems ill-advised to ignore the mechanisms employed by the most intelligent thing we are aware of.)) But I also don't want to be in the position of letting sloppy arguments against neural nets go unremarked. That road leads to people mischaracterizing Minsky and Papert, abandoning neural nets for generations, and putting us epochs behind where we might have been. ((Plus sloppy arguments should be eschewed on the basis of the sloppiness alone, irrespective of their consequences.))


PS This is also worth a rejoinder:

Big patterns of neural activity, if you’re a mathematician, can be captured in a vector space, with each neuron’s activity corresponding to a number, and each number to a coordinate of a really big vector. In Hinton’s view, that’s what thought is: a dance of vectors.

That's not what thought is, that's how thought can be represented. Planets are not vectors, but their orbits can be profitably described that way, because "it behooves us to place the foundations of knowledge in mathematics." I'm sorry if that seems pedantic, but the distinction between a thing and its representation—besides giving semioticians something to talk about—underpins much of our interpretation of AI systems and cognitive science as well. Indeed, a huge chunk of data science work is figuring out the right representations. If you can get that, your problem is often largely solved. ((IIRC both Knuth and Torvalds have aphorisms to the effect that once you have chosen the correct data structures, the correct algorithms will naturally follow. I think AI and neuroscience are dealing with a lot of friction because we haven't been able to figure out the right representations/data structures. When we do, the right learning algorithms will follow much more easily.))

PPS This, on the other hand, I agree with entirely:

Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way. … What we know about intelligence is nothing against the vastness of what we still don’t know.

What I fear is that people read that and conclude that artificial neural networks are built on a shallow foundation, so we should give up on them as being unreliable. A much better conclusion would be that we need to keep working and build better, deeper foundations.


National AI Strategy

Some of my co-workers published a sponsored piece in the Atlantic calling for a national AI strategy, tied in with some discussions at the Washington Ideas event.

I'm 100% on board with the US having a strategy, but I want to offer one caveat: "comprehensive national strategies" are susceptible to becoming top-down, centralized plans, which I think is dangerous.

I'm generally disinclined to centralized planning, for both efficiency and philosophical reasons. I'm not going to take the time now to explain why; I doubt anything I could scratch out here would shift people very much along any kind of Keynes-Hayek spectrum.

So why am I bothering to bring this up? Mostly because I think it would be especially ill-conceived to adopt central planning when it comes to AI. The recent progress in AI has been largely a result of abandoning top-down techniques in favor of bottom-up ones. We've abandoned hand-coded visual feature detectors for convolutional neural networks. We've abandoned human-engineered grammar models for statistical machine translation. In one discipline after another emergent behavior has outpaced decades' worth of expert-designed techniques. To layer top-down policy-making on a field built of bottom-up science would be a waste, and an ironic one at that.
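(To make the contrast concrete, here is a toy sketch, mine alone and not from the Atlantic piece: a hand-engineered Sobel edge detector next to a kernel of the same shape whose entries are learned from examples by gradient descent. Real convolutional networks learn thousands of such kernels end to end, but the division of labor between engineering a feature and learning it is the same.)

import numpy as np

# Top-down: a hand-engineered edge detector (the classic Sobel kernel).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def filter2d(img, k):
    """Naive sliding-window filter (what deep-learning libraries call a 'valid' convolution)."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# Bottom-up: treat the 3x3 kernel as parameters and learn them from
# (input, desired output) pairs by gradient descent. The "desired output"
# here is produced by the Sobel filter itself, purely to keep the toy
# example self-contained.
rng = np.random.default_rng(0)
kernel = rng.normal(scale=0.1, size=(3, 3))
lr = 0.01
for _ in range(2000):
    img = rng.normal(size=(8, 8))
    target = filter2d(img, sobel_x)
    err = filter2d(img, kernel) - target
    grad = np.zeros((3, 3))
    for i in range(6):
        for j in range(6):
            grad += err[i, j] * img[i:i + 3, j:j + 3]  # d(squared error)/d(kernel)
    kernel -= lr * grad / err.size

print(np.round(kernel, 2))  # converges toward the Sobel weights without ever seeing them directly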


PS Having spoken to two of the three authors of this piece, I don't mean to imply that they support centralized planning of the AI industry. This is just something I would be on guard against.


What I've Been Reading

Banana: The Fate of the Fruit That Changed the World, Dan Koeppel

Not bad. I'm a sucker for this type of history of a single commodity or common household object. It did make me want to try to get my hands on one of the few non-Cavendish cultivars of bananas that ever make their way to America.


Very short summary/background info: all of the bananas at American grocery stores are genetic clones. This leaves them very susceptible to disease. This is not a theoretical concern: the variety which was previously ubiquitous was wiped out in the middle of the 20th century, and the current variety is being decimated in many growing regions. The fact that they're sterile fruits propagated clonally also makes it extremely difficult to breed other, more resistant varieties the way we do with the majority of our produce. Also important to the story is that banana production for the US and European markets has historically been very oligopolistic, leading to some … unsavory … business practices.

(Digression: The artificial banana flavor used in candies tastes nothing like our modern Cavendish bananas, but I have heard that it is a very good match for the flavor of the Gros Michel that the Cavendish replaced. I've never been able to find definitive confirmation of that, though, and I wish it had been addressed in this book. That was a minor disappointment. On the other hand, I was unreasonably amused by Koeppel's tongue-in-cheek translation of "Gros Michel" as "The Big Mike.")

There is a temptation, when people write books about subjects that have been overlooked, to swing the pendulum too hard the other way to make the audience realize how important the subject is. Koeppel mostly avoids that trap, but falls into it somewhat when discussing more recent history. There's a lot more to Latin American politics and the rise of Bolivarian Socialism than the treatment of workers on banana plantations, which Koeppel asserted as a primary cause. Similarly, he drew an analogy between the Clinton administration filing a complaint with the WTO about EU banana import duties and the active role that United Fruit played in shaping US foreign policy in Latin America between the beginning of the century and the middle of the Cold War. Both involve fruit companies in international relations, but that's where the similarities end. One of those is egregious cronyism, and the other is a rules-based order in action.

Koeppel was on the shakiest ground towards the end of the book, when he was discussing the future of the banana market. His discussion of Fair Trade could benefit from reading Amrita Narlikar and similar critiques. I do give Koeppel much credit for his recognition of Consumer Sovereignty. If the conditions of banana growers are going to improve, it won't be because Chiquita/Dole/etc. become kinder and gentler; it will be because consumers decide to spend more money buying bananas. Our stated preferences for better conditions for poor agricultural workers do not match our revealed preferences as consumers.

I also commend Koeppel for admitting that researching this book caused him to change his mind about transgenic food. He had previously been anti-GMO but became convinced that genetic manipulation is the only way to save the banana crop. I do wish he had extended the same enthusiastic acceptance of transgenics to other crops; he only went halfway there. Yes, bananas' sterility makes them somewhat of a special case, but only somewhat.

A couple of months back I read John Reader's Potato: A History of the Propitious Esculent. If you're going to pick up one book about a non-cereal staple crop (and why wouldn't you?), I liked Potato much better.


Pilot X, Tom Merritt

This is a time-travel adventure story. It seemed like it could have been a Doctor Who episode. Merritt handles the oddities that result from time travel with deftness and wit. (A couple of examples that make you stop and think. 1: "This will be the last time I meet you, but not the last time you meet me." 2: The main character spends twelve years training for a job, but only four calendar years elapse, because when he reaches the end of the four-year period he goes back in time and starts again, twice, so that three versions of him are operating in parallel.) Amusing, but not great.


The Grace of Kings, Ken Liu

I had previously read Liu's short story collection The Paper Menagerie and loved it. Grace of Kings didn't disappoint. Highly recommended. One of the blurbs on the back cover described it as "the Wuxia version of Game of Thrones," and that pretty much covers it.

One downside to the book is that characters experience rapid changes in fortune within the span of several pages. It's a nice, fast pace — most contemporary fantasy authors would lumberingly stretch out plot points like this for scores (hundreds?) of pages — but it does rob the story of some of the potential dramatic tension. One minute I've never even considered the possibility of a character rebelling against their overlord, and then within ten minutes they've plotted their rebellion, rebelled, been suppressed, and been punished. That doesn't give me much chance to savor the possibilities of what might happen. All in all, though, I prefer this pace to the prolix plodding so common in the genre. I appreciate GRRM-style world building as much as the next reader, but not every fantasy novel needs every minor character to have their entire dynastic history spelled out, complete with descriptions of their heraldry, the architecture of their family seat, their favorite meals, and their sexual peccadilloes.

I'm not actually sure 'fantasy' is the correct term for Grace of Kings, come to think of it. There's some minor divine intervention and a couple of fantastic beasts, but no outright magic. I suppose it's fantasy in a sort of Homeric way, rather than a 20th century way.

Anyway, I've got my hands on the sequel, The Wall of Storms, and will be starting it as soon as possible. Hopefully we don't have to wait too long before the third and final volume is published.


The Art of War, Sun Tzu, translated by the Denma Translation Group

I listened to this audio edition from Shambhala Press. I don't pay much attention to which publishers produce which books, but I've been quite happy with several volumes of theirs that I've bought.

I hadn't read The Art of War in probably 20 years, so this was a good refresher. The edition is structured as a straight reading of the text, followed by a second reading from the beginning with the commentary interspersed. That was a very good way to do it.

The text itself is short, and the commentary in this edition is good, so I'd recommend this even if you have read Art of War before.


The Map Thief, Michael Blanding

This is the story of an antiquities dealer specializing in rare maps, E. Forbes Smiley III, who turned to theft to build his inventory. I don't usually go for true crime stories, and indeed that was the least interesting aspect of this book. However, it was an interesting look at a little corner of the art/antiques market that I did not know about. There is also good information about the history of cartography, exploration, and printing. ((Everyone loves to hate on the Mercator projection, but this book does a good job of explaining how revolutionarily useful Mercator's cartography was in the 16th century. His projection is a tool that is remarkably good for its intended purpose, i.e. helping navigate over long sea voyages. It shouldn't be used the way it has been (hung on every classroom wall, making choropleth infographics, etc.), but that doesn't make it a bad tool per se, just one that is misused. The fitness of a technology, just like that of a biological organism, can only be usefully evaluated in the environment it is adapted for.))

Perhaps the most interesting part of the case for me came after Smiley confessed and the various libraries he stole from had to go about figuring out what was missing and from whom. In the case of normal thefts, or even art thefts, this is pretty straightforward, but the nature of the material — rare, poorly or partially catalogued, incompletely and idiosyncratically described, existing in various editions with only marginal differences, etc. — makes it quite a puzzle. Coming up with a good cataloging system for oddities like antique maps would make a good exercise for a library science/information systems/database project. (Indeed it was only thanks to the work of a former Army intelligence analyst that things got sorted out as well as they did.) Even something superficially simple like figuring out which copy of a printed map is which makes for a good computer vision challenge.
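(As a sketch of how even the "simple" version of that challenge might start, here is a crude perceptual-hash comparison in Python. The file names are hypothetical and the method is deliberately naive; real map identification would have to cope with registration, lighting, plate wear, marginalia, trimming, and so on.)

import numpy as np
from PIL import Image

def average_hash(path, size=8):
    """Downsample a scan to an 8x8 brightness grid and keep only above/below-average bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=float)
    return pixels > pixels.mean()   # boolean fingerprint of the image

def hamming(a, b):
    """Number of fingerprint bits that differ between two scans."""
    return int(np.count_nonzero(a != b))

# Hypothetical file names, purely for illustration:
# h1 = average_hash("scan_from_library_A.png")
# h2 = average_hash("scan_from_library_B.png")
# print(hamming(h1, h2))   # a small distance suggests the same printed image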

There are also game-theoretic concerns at work: libraries would benefit if they all cooperated to publicize thefts in order to track down stolen materials, but it is in every individual library's interest to cover up thefts so as not to besmirch its reputation and alienate donors, who expect that materials they contribute will be kept safe. The equilibrium is not straightforward, nor is it likely to be optimal.
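(To see why, here is the structure of that game with invented payoff numbers, not anything taken from the book: covering up is each library's best response no matter what the other library does, even though mutual disclosure would leave both better off.)

# Invented payoffs, purely to illustrate the structure of the disclosure game.
# Each library chooses to "disclose" a theft or "cover up"; entries are
# (payoff to Library A, payoff to Library B).
payoffs = {
    ("disclose", "disclose"): (3, 3),   # thefts publicized, materials recovered
    ("disclose", "cover up"): (0, 4),   # A takes the reputational hit alone
    ("cover up", "disclose"): (4, 0),
    ("cover up", "cover up"): (1, 1),   # little recovered, reputations intact
}
actions = ["disclose", "cover up"]

def best_response(opponent_action, player):
    """The action maximizing this player's payoff against a fixed opponent choice."""
    def payoff(a):
        profile = (a, opponent_action) if player == 0 else (opponent_action, a)
        return payoffs[profile][player]
    return max(actions, key=payoff)

for other in actions:
    print(f"If the other library chooses {other!r}, best response: {best_response(other, 0)!r}")
# Covering up is a dominant strategy, yet (disclose, disclose) beats (cover up, cover up).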
