MalConv: Lessons learned from Deep Learning on executables

I don't usually write up my technical work here, mostly because I spend enough hours as is doing technical writing. But a co-author, Jon Barker, recently wrote a post on the NVIDIA Parallel For All blog about one of our papers on neural networks for detecting malware, so I thought I'd link to it here. (You can read the paper itself, "Malware Detection by Eating a Whole EXE" here.) Plus it was on the front page of Hacker News earlier this week, which is not something I thought would ever happen to my work.

Rather than rehashing everything in Jon's Parallel for All post about our work, I want to highlight some of the lessons we learned from doing this about ML/neural nets/deep learning.

By way of background, I'll lift a few paragraphs from Jon's introduction:

The paper introduces an artificial neural network trained to differentiate between benign and malicious Windows executable files with only the raw byte sequence of the executable as input. This approach has several practical advantages:

  • No hand-crafted features or knowledge of the compiler used are required. This means the trained model is generalizable and robust to natural variations in malware.
  • The computational complexity is linearly dependent on the sequence length (binary size), which means inference is fast and scalable to very large files.
  • Important sub-regions of the binary can be identified for forensic analysis.
  • This approach is also adaptable to new file formats, compilers and instruction set architectures—all we need is training data.

We also hope this paper demonstrates that malware detection from raw byte sequences has unique and challenging properties that make it a fruitful research area for the larger machine learning community.

One of the big issues we were confronting with our approach, MalConv, is that executables are often millions of bytes in length. That's orders of magnitude more time steps than most sequence processing networks deal with. Big data usually refers to lots and lots of small data points, but for us each individual sample was big. Saying this was a non-trivial problem is a serious understatement.

The MalConv architecture
Architecture of the malware detection network. (Image copyright NVIDIA.)

Here are three lessons we learned, not about malware or cybersecurity, but about the process of building neural networks on such unusual data.

1. Deep learning != image processing

The large majority of the work in deep learning has been done in the image domain. Of the remainder, the large majority has been in either text or speech. Many of the lessons, best practices, rules of thumb, etc., that we think apply to deep learning may actually be specific to these domains.

For instance, the community has settled on narrow convolutional filters stacked with a lot of depth as generally the best way to go. And for images, narrow-and-deep absolutely seems to be the correct choice. But in order to get a network that processes two million time steps to fit in memory at all (on beefy 16GB cards, no less), we were forced to go wide-and-shallow.
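For a sense of what wide-and-shallow means in practice, here is a minimal PyTorch sketch of that shape: embed the raw bytes, run a single very wide, heavily strided gated convolution, and collapse the whole sequence with a global max pool. The layer sizes here are illustrative placeholders, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class WideShallowByteNet(nn.Module):
    """Illustrative wide-and-shallow gated convolution over raw byte sequences."""
    def __init__(self, embed_dim=8, channels=128, kernel=512, stride=512):
        super().__init__()
        self.embed = nn.Embedding(257, embed_dim, padding_idx=256)  # 256 byte values + a padding index
        self.conv = nn.Conv1d(embed_dim, channels, kernel, stride=stride)
        self.gate = nn.Conv1d(embed_dim, channels, kernel, stride=stride)
        self.fc = nn.Linear(channels, 1)

    def forward(self, x):                                  # x: (batch, length) byte indices
        e = self.embed(x).transpose(1, 2)                  # (batch, embed_dim, length)
        h = self.conv(e) * torch.sigmoid(self.gate(e))     # one wide, strided, gated layer
        h = h.max(dim=2).values                            # global max pool over the whole file
        return self.fc(h)                                  # single logit: malicious vs. benign

logits = WideShallowByteNet()(torch.randint(0, 256, (2, 4096)))  # toy batch of 4 KB "files"
```

The large kernel and matching stride are what keep the activation memory manageable when the input is millions of bytes long.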

With images, a pixel value is always a pixel value. 0x20 in a grayscale image is always darkish gray, no matter what. In an executable, a byte value is ridiculously polysemous: 0x20 may be part of an instruction, a string, a bit array, a compressed or encrypted value, an address, etc. You can't interpolate between values at all, so you can't resize or crop the way you would with images to make your data set smaller or introduce data augmentation. Binaries also play havoc with locality, since you can re-arrange functions in any order, among other things. You can't rely on any Tobler's Law1 relationship the way you can in images, text, or speech.

2. BatchNorm isn't pixie dust

Batch Normalization has this bippity-boppity-boo magic quality. Just sprinkle it on top of your network architecture, and things that didn't converge before now do, and things that did converge now converge faster. It's worked like that every time I've tried it — on images. When we tried it on binaries it actually had the opposite effect: networks that converged slowly now didn't at all, no matter what variety of architecture we tried. It's also had no effect at all on some other esoteric data sets that I've worked on.

We discuss this at more length in the paper (§5.3), but here's the relevant figure:

BatchNorm activations
KDE plots of the convolution response (pre-ReLU) for multiple architectures. Red and orange: two layers of ResNet; green: Inception-v4; blue: our network; black dashed: a true Gaussian distribution for reference.

This shows the pre-BN activations from MalConv (blue) and from ResNet (red and orange) and Inception-v4 (green). The purpose of BatchNorm is to output values following a standard normal distribution, and it implicitly expects inputs that are already relatively close to that. What we suspect is happening is that the input values from the other networks aren't Gaussian, but they're close-ish.2 The input values for MalConv display huge asperity, and aren't even unimodal. If BatchNorm is being wonky for you, I'd suggest plotting the pre-BN activations and checking that they're relatively smooth and unimodal.
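If you want to run that check on your own network, a rough sketch looks like the following. The toy model is a placeholder for whatever you're training, and a histogram stands in for the KDE plots above; the idea is just to hook the input of every BatchNorm layer on a representative batch and plot what gets captured.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Toy stand-in model; swap in your own network and a representative batch.
model = nn.Sequential(nn.Linear(64, 32), nn.BatchNorm1d(32), nn.ReLU(), nn.Linear(32, 2))

captured = {}
def make_hook(name):
    def hook(module, inputs, output):
        captured[name] = inputs[0].detach().flatten().cpu().numpy()  # the pre-BN activations
    return hook

for name, module in model.named_modules():
    if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
        module.register_forward_hook(make_hook(name))

model.eval()
with torch.no_grad():
    model(torch.randn(256, 64))          # run one representative batch through the network

for name, acts in captured.items():
    plt.hist(acts, bins=100, density=True, histtype="step", label=name)
plt.legend()
plt.title("Pre-BatchNorm activation distributions")
plt.show()
```

If what comes out looks jagged or multimodal rather than vaguely bell-shaped, that's the situation where we saw BatchNorm hurt rather than help.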

3. The Lump of Regularization Fallacy

If you're overfitting, you probably need more regularization. Simple advice, and easily executed. Every time I see this brought up, though, people treat regularization as if it's a monolithic thing. Implicitly, people talk as if you have some pile of regularization, and if you need to fight overfitting you just shovel more regularization on top. It doesn't matter what kind, just add more.

We ran into overfitting problems and tried every method we could think of: weight decay, dropout, regional dropout, gradient noise, activation noise, and on and on. The only one that had any impact was DeCov, which penalizes activations in the penultimate layer that are highly correlated with each other. I have no idea what will work on your data — especially if it's not images/speech/text — so try different types. Don't just treat regularization as a single knob that you crank up or down.
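For reference, a DeCov-style penalty is only a few lines. This is a hedged sketch rather than the exact implementation we used; the weight on the penalty and the layer you attach it to are up to you.

```python
import torch

def decov_loss(acts: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal covariance of a (batch, features) activation matrix."""
    centered = acts - acts.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / acts.shape[0]                 # (features, features) covariance
    return 0.5 * (cov.pow(2).sum() - cov.diagonal().pow(2).sum())

# Typical usage: total_loss = task_loss + decov_weight * decov_loss(penultimate_activations)
```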

I hope some of these lessons are helpful to you if you're into cybersecurity, or pushing machine learning into new domains in general. We'll be presenting the paper this is all based on at the Artificial Intelligence for Cyber Security (AICS) workshop at AAAI in February, so if you're at AAAI then stop by and talk.


  1. Everything is related, but near things are more related than far things. []
  2. I'd love to be able to quantify that closeness, but every test for normality I'm aware of doesn't apply when you have this many samples. If anyone knows of a more robust test please let me know. []

AI's "one trick pony" has a hell of a trick

The MIT Technology Review has a recent article by James Somers about error backpropagation, "Is AI Riding a One-Trick Pony?" Overall, I agree with the message in the article. We need to keep thinking of new paradigms because the SotA right now is very useful, but not correct in any rigorous way. However, as much as I agree with the thesis, I think Somers oversells it, especially in the beginning of the piece. For instance, the introductory segment concludes:

When you boil it down, AI today is deep learning, and deep learning is backprop — which is amazing, considering that backprop is more than 30 years old. It’s worth understanding how that happened—how a technique could lie in wait for so long and then cause such an explosion — because once you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one.

That's a bit like saying "When you boil it down, flight is airfoils, and airfoils are Bernoulli's principle — which is amazing, considering that Bernoulli's principle is almost 300 years old." I totally endorse the idea that we ought to understand backprop; I've spent a lot of effort in the last couple of months organizing training for some of my firm's senior leadership on neural networks, and EBP/gradient descent is the heart of my presentation. But I would be very, very careful about concluding that backprop is the entire show.

Backprop was also not "lying in wait." People had been working on it ever since it was introduced in 1986. The problem was that '86 was the height of the 2nd AI winter, which lasted another decade. Just like people should understand backprop to understand contemporary AI, they should learn about the history of AI to understand contemporary AI. Just because no one outside of CS (and precious few people in CS, for that matter) paid any attention to neural networks before 2015 doesn't mean they were completely dormant, only to spring up fully formed in some sort of intellectual Athenian birth.

I really don't want to be in the position of defending backprop. I took the trouble to write a dissertation about non-backprop neural nets for a reason, after all.1 But I also don't want to be in the position of letting sloppy arguments against neural nets go unremarked. That road leads to people mischaracterizing Minsky and Papert, abandoning neural nets for generations, and putting us epochs behind where we might have been.2


PS This is also worth a rejoinder:

Big patterns of neural activity, if you’re a mathematician, can be captured in a vector space, with each neuron’s activity corresponding to a number, and each number to a coordinate of a really big vector. In Hinton’s view, that’s what thought is: a dance of vectors.

That's not what thought is, that's how thought can be represented. Planets are not vectors, but their orbits can be profitably described that way, because "it behooves us to place the foundations of knowledge in mathematics." I'm sorry if that seems pedantic, but the distinction between a thing and its representation—besides giving semioticians something to talk about—underpins much of our interpretation of AI systems and cognitive science as well. Indeed, a huge chunk of data science work is figuring out the right representations. If you can get that, your problem is often largely solved.3

PPS This, on the other hand, I agree with entirely:

Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way. … What we know about intelligence is nothing against the vastness of what we still don’t know.

What I fear is that people read that and conclude that artificial neural networks are built on a shallow foundation, so we should give up on them as being unreliable. A much better conclusion would be that we need to keep working and build better, deeper foundations.


  1. That reason being, roughly put, that we're pretty sure the brain is not using backprop, and it seems ill-advised to ignore the mechanisms employed by the most intelligent thing we are aware of. []
  2. Plus sloppy arguments should be eschewed on the basis of the sloppiness alone, irrespective of their consequences. []
  3. IIRC both Knuth and Torvalds have aphorisms to the effect that once you have chosen the correct data structures, the correct algorithms will naturally follow. I think AI and neuroscience are dealing with a lot of friction because we haven't been able to figure out the right representations/data structures. When we do, the right learning algorithms will follow much more easily. []

National AI Strategy

Some of my co-workers published a sponsored piece in the Atlantic calling for a national AI strategy, which was tied in to some discussions at the Washington Ideas event.

I'm 100% on board with the US having a strategy, but I want to offer one caveat: "comprehensive national strategies" are susceptible to becoming top-down, centralized plans, which I think is dangerous.

I'm generally disinclined toward centralized planning, for both efficiency and philosophical reasons. I'm not going to take the time now to explain why; I doubt anything I could scratch out here would shift people very much along any kind of Keynes-Hayek spectrum.

So why am I bothering to bring this up? Mostly because I think it would be especially ill-conceived to adopt central planning when it comes to AI. The recent progress in AI has been largely a result of abandoning top-down techniques in favor of bottom-up ones. We've abandoned hand-coded visual feature detectors for convolutional neural networks. We've abandoned human-engineered grammar models for statistical machine translation. In one discipline after another, emergent behavior has outpaced decades' worth of expert-designed techniques. To layer top-down policy-making on a field built of bottom-up science would be a waste, and an ironic one at that.


PS Having spoken to two of the three authors of this piece, I don't mean to imply that they support centralized planning of the AI industry. This is just something I would be on guard against.


What I've Been Reading

Banana: The Fate of the Fruit That Changed the World, Dan Koeppel

Not bad. I'm a sucker for this type of history of a single commodity or common household object. It did make me want to try to get my hands on one of the few non-Cavendish cultivars of bananas that ever make their way to America.


Very short summary/background info: all of the bananas at American grocery stores are genetic clones. This leaves them very susceptible to disease. This is not a theoretical concern: the variety which was previously ubiquitous was wiped out in the middle of the 20th century, and the current variety is being decimated in many growing regions. The fact that they're sterile fruits propagated clonally also makes it extremely difficult to breed other, more resistant varieties the way we do with the majority of our produce. Also important to the story is that banana production for the US and European markets has historically been very oligopolistic, leading to some … unsavory … business practices.

(Digression: The artificial banana flavor used in candies tastes nothing like our modern Cavendish bananas, but I have heard that it is a very good match for the flavor of the Gros Michel that the Cavendish replaced. I've never been able to find definitive confirmation of that, though, and I wish it had been mentioned in this book. That was a minor disappointment. On the other hand, I was unreasonably amused by Koeppel's tongue-in-cheek translation of "Gros Michel" as "The Big Mike".)

There is a temptation, when people write books about subjects that have been overlooked, to swing the pendulum too hard the other way to make the audience realize how important the subject is. Koeppel mostly avoids that trap, but falls into it somewhat when discussing more recent history. There's a lot more to Latin American politics and the rise of Bolivarian Socialism than the treatment of workers on banana plantations, which Koeppel asserts as a primary cause. Similarly, he draws an analogy between the Clinton administration filing a complaint with the WTO about EU banana import duties and the active role that United Fruit played in shaping US foreign policy in Latin America between the beginning of the century and the middle of the Cold War. Both involve fruit companies in international relations, but that's where the similarities end. One of those is egregious cronyism, and one is a rules-based order in action.

Koeppel was on the shakiest ground towards the end of the book, when he was discussing the future of the banana market. His discussion of Fair Trade could benefit from reading Amrita Narlikar and similar critiques. I do give Koeppel much credit for his recognition of Consumer Sovereignty. If the conditions of banana growers are going to improve, it won't be because Chiquita/Dole/etc. become kinder and gentler; it will be because consumers decide to spend more money buying bananas. Our stated preferences for better conditions for poor agricultural workers do not match our revealed preferences as consumers.

I also commend Koeppel for admitting that researching this book caused him to change his mind about transgenic food. He had previously been anti-GMO but became convinced that genetic manipulation is the only way to save the banana crop. I do wish he had extended more of the same enthusiastic acceptance of transgenics to other crops, which he only went halfway to doing. Yes, bananas' sterility makes them somewhat of a special case, but only somewhat.

A couple of months back I read John Reader's Potato: A History of the Propitious Esculent. If you're going to pick up one book about a non-cereal staple crop (and why wouldn't you?), I liked Potato much better.


Pilot X, Tom Merritt

This is a time-travel adventure story. It seemed like it could have been a Doctor Who episode. Merritt handles the oddities that result from time travel with deftness and wit. (A couple of examples that make you stop and think. 1: "This will be the last time I meet you, but not the last time you meet me." 2: The main character spends twelve years in training for a job, but it takes only four years, because when he gets to the end of the four-year period he goes back in time and starts again twice, so that there are three of him all operating in parallel.) Amusing, but not great.


The Grace of Kings, Ken Liu


I had previously read Liu's short story collection The Paper Menagerie and loved it. Grace of Kings didn't disappoint. Highly recommended. One of the blurbs on the back cover described it as "the Wuxia version of Game of Thrones" and that pretty much covers it.

One downside to the book is that characters experience rapid changes in fortune within the span of several pages. It's a nice, fast pace — most contemporary fantasy authors would lumberingly stretch out plot points like this for scores (hundreds?) of pages — but it does rob the story of some of the potential dramatic tension. One minute I've never even considered the possibility of a character rebelling against their overlord, and then within ten minutes they've plotted their rebellion, rebelled, been suppressed, and then punished. That doesn't give me much chance to savor the possibilities of what might happen. All in all though, I prefer this pace to the prolix plodding common to so many contemporary fantasy authors. I appreciate GRRM-style world building as much as the next reader, but not every fantasy novel needs every minor character to have their entire dynastic history spelled out, complete with descriptions of their heraldry, the architecture of their family seat, their favorite meals, and their sexual peccadilloes.

I'm not actually sure 'fantasy' is the correct term for Grace of Kings, come to think of it. There's some minor divine intervention and a couple of fantastic beasts, but no outright magic. I suppose it's fantasy in a sort of Homeric way, rather than a 20th century way.

Anyway, I've got my hands on the sequel, The Wall of Storms, and will be starting it as soon as possible. Hopefully we don't have to wait too long before the third and final volume is published.


The Art of War, Sun Tzu, translated by the Denma Translation Group

I listened to this audio edition from Shambhala Press. I don't pay much attention to which publishers produce which books, but I've been quite happy with several volumes of theirs that I've bought.

I hadn't read The Art of War in probably 20 years, so this was a good refresher. The way they structured it was to first have a recording of the text being read, and then to start again at the beginning with another reading, this time with interspersed commentary. That was a very good way to do it.

The text itself is short, and the commentary in this edition is good, so I'd recommend this even if you have read Art of War before.


The Map Thief, Michael Blanding

This is the story of an antiquities dealer specializing in rare maps, E. Forbes Smiley III, who turned to theft to build his inventory. I don't usually go for true crime stories, and indeed that was the least interesting aspect of this book. However, it was an interesting look at a little corner of the art/antiques market that I did not know about. There is also good information about the history of cartography, exploration and printing. (Everyone loves to hate on the Mercator projection, but this book does a good job of explaining how revolutionarily useful Mercator's cartography was in the 16th century. His projection is a tool that is remarkably good for its intended purpose, i.e. helping navigate over long sea voyages. It shouldn't be used the way it has been (hung on every classroom wall, making choropleth infographics, etc.) but that doesn't make it a bad tool per se, just one that is misused. The fitness of a technology, just like that of a biological organism, can only be usefully evaluated in the environment it is adapted for.)

Perhaps the most interesting part of the case for me came after Smiley confessed and the various libraries he stole from had to go about figuring out what was missing and from whom. In the case of normal thefts, or even art thefts, this is pretty straightforward, but the nature of the material — rare, poorly or partially catalogued, incompletely and idiosyncratically described, existing in various editions with only marginal differences, etc. — makes it quite a puzzle. Coming up with a good cataloging system for oddities like antique maps would make a good exercise for a library science/information systems/database project. (Indeed, it was only thanks to the work of a former Army intelligence analyst that things got sorted out as well as they did.) Even something superficially simple like figuring out which copy of a printed map is which makes for a good computer vision challenge.

There are also Game Theoretic concerns at work: libraries would benefit if they all operated together to publicize thefts in order to track down stolen materials, but it is in every individual library's interest to cover up thefts so as not to besmirch their reputation and alienate donors, who expect that materials they contribute will be kept safe. The equilibrium is not straightforward, nor is it likely to be optimal.


Friston

Two of my favorite blogs — Slate Star Codex (topics: psychiatry, social commentary) and Marginal Revolution (topics: economics, everything else) — have both linked to Karl Friston papers in the last 24 hours. Since one of my bosses is a Friston enthusiast, and he's the only Friston devotee I've ever met, and neither of these blogs has anything to do with what I work on, this gave me a Worlds-Are-Colliding feeling.

A George divided against itself can not stand.

I haven't read either paper yet ("An aberrant precision account of autism" and "Predicting green: really radical (plant) predictive processing") but I do want to respond to SSC's commentary. Here's what he had to say:

A while ago I quoted a paper by Lawson, Rees & Friston about predictive-processing-based hypotheses of autism. They said:

This provides a simple explanation for the pronounced social-communication difficulties in autism; given that other agents are arguably the most difficult things to predict. In the complex world of social interactions, the many-to-one mappings between causes and sensory input are dramatically increased and difficult to learn; especially if one cannot contextualize the prediction errors that drive that learning.

And I was really struck by the phrase “arguably the most difficult thing to predict”. Really? People are harder to predict than, I don’t know, the weather? Weird little flying bugs? Political trends? M. Night Shyamalan movies? And of all the things about people that should be hard to predict, ordinary conversations?

I totally endorse the rest of his post, but here I need to disagree. Other people being the hardest thing to predict seems perfectly reasonable to me. The weather isn't that hard to predict decently well: just guess that the weather tomorrow will be like it is today and you'll be pretty damn accurate. Add in some basic seasonal trends — it's early summer, so tomorrow will be like today but a little warmer — and you'll get closer yet. This is obviously not perfect, but it's also not that much worse than what you can do with sophisticated meteorological modeling. Importantly, the space between the naive approach and the sophisticated approach doesn't leave a lot of room to evolve or learn better predictive ability.

Weird flying bugs aren't that hard to predict either; even dumb frogs manage to catch them enough to stay alive. I'm not trying to be mean to amphibians here, but on any scale of inter-species intelligence they're pretty stupid. The space between how well a frog can predict the flight of a mosquito and how well some advanced avionics system could do so is potentially large, but there's very little to be gained by closing that predictive gap.

Political trends are hard to predict, but only because you're predicting other human agents aggregated on a much larger scale. A scale that was completely unnecessary for us to predict, I might add, until the evolutionary eye-blink of ten thousand years or so ago.

Predicting movies is easier than predicting other human agents, because dramatic entertainments — produced by humans, depicting humans — are just a subset of interacting with other human agents. If you have a good model of how other people will behave, then you also have a good model of how other people will behave when they are acting as story tellers, or when they are characters. (If characters don't conform to the audience's model of human agents at least roughly, they aren't good characters.)

Maybe a better restatement of Friston et al. would be "people are arguably the most difficult things to predict from the domain of things we have needed to predict precisely and have any hope of predicting precisely."


Will AI steal our jobs?

As an AI researcher, I think I am required to have an opinion about this. Here's what I have to say to the various tribes.

AI-pessimists: please remember that the Luddites have been wrong about technology causing economic cataclysm every time so far. We're talking about several consecutive centuries of wrongness.1 Please revise your confidence estimates downwards.

AI-optimists: please remember that just because the pessimists have always been wrong in the past does not mean that they must always be wrong in the future. It is not a natural law that the optimists must be right. That labor markets have adapted in the long term does not mean that they must adapt, to say nothing of short-term dislocations. Please revise your confidence estimates downwards.

Everyone: many forms of technology are substitutes for labor. Many forms of technology are complements to labor. Often a single form of technology is both simultaneously. It is impossible to determine a priori which effect will dominate.2 This is true of everything from the mouldboard plough to a convolutional neural network. Don't casually assert AI/ML/robots are qualitatively different. (For example, why does Bill Gates think we need a special tax on robots that is distinct from a tax on any other capital equipment?)

As always, please exercise cognitive and epistemic humility.


  1. I am aware of the work of Gregory Clark and others related to Industrial Revolution era wage and consumption stagnation. If a disaster requires complicated statistical models to provide evidence it exists, I say its scale can not have been that disastrous. []
  2. Who correctly predicted that the introduction of ATMs would coincide with an increase in employment of bank tellers? Anyone? Anyone? Bueller? []

Self-Diagnosis and Government Contracting

Earlier this week Hadley Wickham, Chief Scientist at RStudio, gave a little talk at Booz Allen. He started out in med school, and one of the things that stuck out from his talk was a comparison between being a consulting statistician and taking a medical history. He tells a similar story in this interview:

One of the things I found most useful from med school was we got trained in how to take a medical history, like how to do an interview. Really, there’s a lot of similarities. When you’re a doctor, someone will come to you and say, “I’ve broken my arm. I need you to put a cast on it.”1 It’s the same thing when you’re a statistician, someone comes to you and says, “I’ve got this problem, I need you to fit a linear model and give me a p-value.” The first task of any consulting appointment is to think about what they actually need, not what they think they want. It’s the same thing in medicine, people self‑diagnose and you’ve got to try and break through that and figure out what they really need.

I think this problem comes up in any consulting or contracting environment. As a consultant, should I:

  (a) do what my client is asking me to do, or
  (b) figure out why they're asking me to do that, and then figure out what they should want me to do, and then convince them that's what they want to do, and then do that thing?

This is pretty routine, and no surprise to anyone who has worked in consulting. Here's why I'm sharing it though. This is from Megan McArdle's discussion of the CMS Inspector General's report on "How HealthCare.gov Went So, So Wrong."2

The federal government contracting process is insane. [...] A client is a long-term relationship; you want to preserve that. But the federal contracting system specifically discourages these sorts of relationships, because relationships might lead to something unfair happening. (In fact, the report specifically faults CMS for not documenting that one of the people involved in the process had previously worked for a firm that was bidding.) Instead the process tries to use rules and paperwork to substitute for reputation and trust. There’s a reason that private companies do not try to make this substitution, which is that it’s doomed.

Yes, you end up with some self-dealing; people with the authority to spend money on outside vendors dine very well, can count on a nice fruit basket or bottle of liquor at Christmas, and sometimes abuse their power in other less savory ways. But the alternative is worse, because relying entirely on rules kills trust, and trust is what helps you get the best out of your vendors.

Trust is open ended: You do your best for me, I do my best for you. That means that people will go above and beyond when necessary, because they hope you’ll be grateful and reciprocate in the future. Rules, by contrast, are both a floor and a ceiling; people do the minimum, which is also the maximum, because what do they get out of doing more?

Having everything spelled out exactly in a contract not only removes trust from the equation, it eliminates the contractor's ability to give you what you need instead of what you originally asked for. It precludes the consultant from exercising their expertise, even though that expertise is the very reason they were given a contract in the first place.

Granted, there are some advantages to a consultant only being able to do what they are initially asked to do. Unscrupulous contractors can't use that chain of logic in (b) above to convince the client to do a lot of unnecessary things. But if we don't trust government managers enough to resist that convincing, why should we trust them enough to write up the RFPs and judge proposals and oversee the performance of the contracts in the first place?

I've been consulting less than a year, and I've already been exposed to too many government agencies that are the equivalent of a hypochondriac who stays up all night reading WebMD. "Yes, yes, I understand you have a fever and your neck is stiff, but no, you do not have meningitis... no it's not SARS either... or bird flu." "Yes, I understand you hear everyone talking about 'The Cloud,' but no, not every process should be run via Amazon Web Services, and no, you don't need GPUs for that, and no no NO, there is no reason to run a bunch of graph algorithms on non-graph data."


  1. During his in-person talk, the patient had a cold and wanted antibiotics rather than a broken arm and a cast, which is a better example since a cold is caused by a virus and will be unaffected by the antibiotics. []
  2. Which incidentally is something I have spilled a lot of ink about previously. []