Friston

Two of my favorite blogs — Slate Star Codex (topics: psychiatry, social commentary) and Marginal Revolution (topics: economics, everything else) — have both linked to Karl Friston papers in the last 24 hours. Since one of my bosses is a Friston enthusiast, and he's the only Friston devotee I've ever met, and neither of these blogs has anything to do with what I work on, this gave me a Worlds-Are-Colliding feeling.

A George divided against itself can not stand.

I haven't read either paper yet ("An aberrant precision account of autism" and "Predicting green: really radical (plant) predictive processing") but I do want to respond to SSC's commentary. Here's what he had to say:

A while ago I quoted a paper by Lawson, Rees & Friston about predictive-processing-based hypotheses of autism. They said:

This provides a simple explanation for the pronounced social-communication difficulties in autism; given that other agents are arguably the most difficult things to predict. In the complex world of social interactions, the many-to-one mappings between causes and sensory input are dramatically increased and difficult to learn; especially if one cannot contextualize the prediction errors that drive that learning.

And I was really struck by the phrase “arguably the most difficult thing to predict”. Really? People are harder to predict than, I don’t know, the weather? Weird little flying bugs? Political trends? M. Night Shyamalan movies? And of all the things about people that should be hard to predict, ordinary conversations?

I totally endorse the rest of his post, but here I need to disagree. Other people being the hardest thing to predict seems perfectly reasonable to me. The weather isn't that hard to predict decently well: just guess that the weather tomorrow will be like it is today and you'll be pretty damn accurate. Add in some basic seasonal trends — it's early summer, so tomorrow will be like today but a little warmer — and you'll get closer yet. This is obviously not perfect, but it's also not that much worse than what you can do with sophisticated meteorological modeling. Importantly, the space between the naive approach and the sophisticated approach doesn't leave a lot of room to evolve or learn better predictive ability.

Weird flying bugs aren't that hard to predict either; even dumb frogs manage to catch them enough to stay alive. I'm not trying to be mean to amphibians here, but on any scale of inter-species intelligence they're pretty stupid. The space between how well a frog can predict the flight of a mosquito and how well some advanced avionics system could do so is potentially large, but there's very little to be gained by closing that predictive gap.

Political trends are hard to predict, but only because you're predicting other human agents aggregated on a much larger scale. A scale that was completely unnecessary for us to predict, I might add, until the evolutionary eye-blink of ten thousand years or so ago.

Predicting movies is easier than predicting other human agents, because dramatic entertainments — produced by humans, depicting humans — are just a subset of interacting with other human agents. If you have a good model of how other people will behave, then you also have a good model of how they will behave when they are acting as storytellers, or when they are characters. (If characters don't conform to the audience's model of human agents at least roughly, they aren't good characters.)

Maybe a better restatement of Friston et al. would be "people are arguably the most difficult things to predict, from the domain of things we have needed to predict precisely and have any hope of predicting precisely."


Will AI steal our jobs?

As an AI researcher, I think I am required to have an opinion about this. Here's what I have to say to the various tribes.

AI-pessimists: please remember that the Luddites have been wrong about technology causing economic cataclysm every time so far. We're talking about several consecutive centuries of wrongness.1 Please revise your confidence estimates downwards.

AI-optimists: please remember that just because the pessimists have always been wrong in the past does not mean that they must always be wrong in the future. It is not a natural law that the optimists must be right. That labor markets have adapted in the long term does not mean that they must adapt, to say nothing of short-term dislocations. Please revise your confidence estimates downwards.

Everyone: many forms of technology are substitutes for labor. Many forms of technology are complements to labor. Often a single form of technology is both simultaneously. It is impossible to determine a priori which effect will dominate.2 This is true of everything from the mouldboard plough to a convolutional neural network. Don't casually assert AI/ML/robots are qualitatively different. (For example, why does Bill Gates think we need a special tax on robots that is distinct from a tax on any other capital equipment?)

As always, please exercise cognitive and epistemic humility.


  1. I am aware of the work of Gregory Clark and others related to Industrial Revolution-era wage and consumption stagnation. If a disaster requires complicated statistical models to provide evidence it exists, I say its scale cannot have been that disastrous.
  2. Who correctly predicted that the introduction of ATMs would coincide with an increase in employment of bank tellers? Anyone? Anyone? Bueller?

Self-Diagnosis and Government Contracting

Earlier this week Hadley Wickham, Chief Scientist at RStudio, gave a little talk at Booz Allen. He started out in med school, and one of the things that stuck out from his talk was a comparison between being a consulting statistician and taking a medical history. He tells a similar story in this interview:

One of the things I found most useful from med school was we got trained in how to take a medical history, like how to do an interview. Really, there’s a lot of similarities. When you’re a doctor, someone will come to you and say, “I’ve broken my arm. I need you to put a cast on it.”1 It’s the same thing when you’re a statistician, someone comes to you and says, “I’ve got this problem, I need you to fit a linear model and give me a p-value.” The first task of any consulting appointment is to think about what they actually need, not what they think they want. It’s the same thing in medicine, people self‑diagnose and you’ve got to try and break through that and figure out what they really need.

I think this problem comes up in any consulting or contracting environment. As a consultant, should I:

  (a) do what my client is asking me to do, or
  (b) figure out why they're asking me to do that, and then figure out what they should want me to do, and then convince them that's what they want to do, and then do that thing?

This is pretty routine, and no surprise to anyone who has worked in consulting. Here's why I'm sharing it though. This is from Megan McArdle's discussion of the CMS Inspector General's report on "How HealthCare.gov Went So, So Wrong."2

The federal government contracting process is insane. [...] A client is a long-term relationship; you want to preserve that. But the federal contracting system specifically discourages these sorts of relationships, because relationships might lead to something unfair happening. (In fact, the report specifically faults CMS for not documenting that one of the people involved in the process had previously worked for a firm that was bidding.) Instead the process tries to use rules and paperwork to substitute for reputation and trust. There’s a reason that private companies do not try to make this substitution, which is that it’s doomed.

Yes, you end up with some self-dealing; people with the authority to spend money on outside vendors dine very well, can count on a nice fruit basket or bottle of liquor at Christmas, and sometimes abuse their power in other less savory ways. But the alternative is worse, because relying entirely on rules kills trust, and trust is what helps you get the best out of your vendors.

Trust is open ended: You do your best for me, I do my best for you. That means that people will go above and beyond when necessary, because they hope you’ll be grateful and reciprocate in the future. Rules, by contrast, are both a floor and a ceiling; people do the minimum, which is also the maximum, because what do they get out of doing more?

Having everything spelled out exactly in the contract not only removes trust from the equation, it also eliminates the contractor's ability to give you what you need instead of what you originally asked for. It precludes the consultant from exercising their expertise, even though that expertise is the very reason they were given a contract in the first place.

Granted, there are some advantages to a consultant only being able to do what they are initially asked to do. Unscrupulous contractors can't use that chain of logic in (b) above to convince the client to do a lot of unnecessary things. But if we don't trust government managers enough to resist that convincing, why should we trust them enough to write up the RFPs and judge proposals and oversee the performance of the contracts in the first place?

I've been consulting less than a year, and I've already been exposed to too many government agencies that are the equivalent of a hypochondriac who stays up all night reading WebMD. "Yes, yes, I understand you have a fever and your neck is stiff, but no, you do not have meningitis... no, it's not SARS either... or bird flu." "Yes, I understand you hear everyone talking about 'The Cloud,' but no, not every process should be run via Amazon Web Services, and no, you don't need GPUs for that, and no no NO, there is no reason to run a bunch of graph algorithms on non-graph data."


  1. During his in-person talk, the example was a patient who had a cold and wanted antibiotics, rather than a broken arm and a cast. That's a better example, since a cold is caused by a virus that antibiotics won't touch.
  2. Which, incidentally, is something I have spilled a lot of ink about previously.

Marketing to Algorithms?

Toby Gunton :: Computer says no – why brands might end up marketing to algorithms

I know plenty about algorithms, and enough about marketing.1 And despite that, I'm not sure what this headline actually means. It's eye-catching, to be sure, but what would marketing to an algorithm look like?

When you get down to it, marketing is applied psychology. Algorithms don't have psyches. Whatever "marketing to algorithms" means, I don't think it's going to be recognizable as marketing.

Would you call what spammers do to slip past your filters "marketing"? (That's not a rhetorical question.) Because that's pretty much what Gunton seems to be describing.

Setting aside the intriguing possibility of falling in love with an artificial intelligence, the film [Spike Jonze's Her] raises a potentially terrifying possibility for the marketing industry.

It suggests a world where an automated guardian manages our lives, taking away the awkward detail; the boring tasks of daily existence, leaving us with the bits we enjoy, or where we make a contribution. In this world our virtual assistants would quite naturally act as barriers between us and some brands and services.

Great swathes of brand relationships could become automated. Your energy bills and contracts, water, gas, car insurance, home insurance, bank, pension, life assurance, supermarket, home maintenance, transport solutions, IT and entertainment packages; all of these relationships could be managed by your beautiful personal OS.

If you're an electric company whose customers all interact with you via software daemons, do you even have a brand identity any more? Aren't we discussing a world in which more things will be commoditized? And isn't that a good thing for most of the categories listed?

What do we really care about: getting goods and services, or expressing ourselves through the brands we identify with? Both, to an extent. But if we can no longer do that through our supermarkets or banking, won't we simply shift that focus to other sectors: clothes, music, etc.?


Arnold Kling :: Another Proto-Libertarian

2. Consider that legislation may be an inferior form of law not just recently, or occasionally, but usually. Instead, consider the ideas of Bruno Leoni, which suggest that common law that emerges from individual cases represents a spontaneous order, while legislation represents an attempt at top-down control that works less well.

I'd draw a parallel to Paul Graham's writing on dealing with spam. Bayesian filtering is the bottom-up solution; blacklists and rule sets are the top-down.
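
To make the parallel concrete, here's a toy version of the bottom-up approach in Ruby. This is only a sketch of the word-counting idea, not Graham's actual filter; the ToySpamFilter name, the two training messages, and the add-one smoothing are all invented for illustration.

# A toy, bottom-up word-frequency filter, just to illustrate the idea.
class ToySpamFilter
  def initialize
    @spam_counts = Hash.new(0)
    @ham_counts  = Hash.new(0)
    @spam_total  = 0
    @ham_total   = 0
  end

  # Update word counts from one labeled message.
  def train(text, spam:)
    text.downcase.scan(/[a-z']+/).each do |word|
      if spam
        @spam_counts[word] += 1
        @spam_total += 1
      else
        @ham_counts[word] += 1
        @ham_total += 1
      end
    end
  end

  # Rough log-odds score; positive means "looks like the spam it has seen."
  def score(text)
    text.downcase.scan(/[a-z']+/).sum do |word|
      p_spam = (@spam_counts[word] + 1.0) / (@spam_total + 1.0)
      p_ham  = (@ham_counts[word]  + 1.0) / (@ham_total  + 1.0)
      Math.log(p_spam / p_ham)
    end
  end
end

filter = ToySpamFilter.new
filter.train("cheap pills buy now", spam: true)
filter.train("meeting notes attached for tomorrow", spam: false)
filter.score("buy cheap pills now")     # => positive (spam-ish)
filter.score("notes from the meeting")  # => negative (ham-ish)

Nothing in it was written as a rule; whatever "rules" it embodies emerge from the mail it has seen, which is exactly the contrast Leoni draws between common law and legislation.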


Both of these stories remind me of a couple of scenes in Greg Egan's excellent Permutation City. Egan describes a world where people have daemons answering their video phones, daemons that have learned (bottom-up) to mimic their owners' reactions well enough to screen automated messages out from personal calls. In turn, marketers have software that learns to recognize whether it's talking to a real person or one of these filtering systems. The two have entered an evolutionary arms race, to the point that people's filters are almost full-scale neurocognitive models of their personalities.


  1. Enough to draw a paycheck from a department of marketing for a few years, at least.

Latitude-Longitude Distance

I thought I would post some of the bite-sized coding pieces I've done recently. To lead off, here's a Ruby function to find the distance between two points given their latitude and longitude.

Latitude is given in degrees north of the equator (use negatives for the Southern Hemisphere) and longitude is given in degrees east of the Prime Meridian (optionally use negatives for the Western Hemisphere).

include Math
DEG2RAD = PI/180.0

# Great-circle distance in miles between two points given in degrees of
# latitude and longitude, assuming a spherical Earth.
def lldist(lat1, lon1, lat2, lon2)
  rho = 3960.0                  # radius of the Earth in miles
  theta1 = lon1*DEG2RAD         # longitudes in radians
  phi1 = (90.0-lat1)*DEG2RAD    # latitudes converted to colatitudes, in radians
  theta2 = lon2*DEG2RAD
  phi2 = (90.0-lat2)*DEG2RAD
  # cosine of the central angle between the two points
  val = sin(phi1)*sin(phi2)*cos(theta1-theta2)+cos(phi1)*cos(phi2)
  # clamp to [-1, 1] so floating-point error can't push us outside acos's domain
  val = [-1.0, val].max
  val = [ val, 1.0].min
  psi = acos(val)               # central angle in radians
  return psi*rho                # arc length = angle * radius
end

A couple of notes:

  1. Everything with val at the bottom is to deal with an edge case that can crop up when you try to get the distance between a point and itself. In that case val should be equal to 1.0, but on my systems some floating-point errors creep in and I get 1.0000000000000002, which is out of range for the acos() function.
  2. This returns the distance in miles. If you want some other unit, redefine rho with the appropriate value for the radius of the earth in your desired unit (6371 km, 1137 leagues, 4304730 passus, or what have you).
  3. This assumes the Earth is spherical, which is a decent first approximation, but is still just that: a first approximation.1
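
For a quick sanity check, here's what a call looks like. The coordinates below are rough values for New York City and Los Angeles that I picked for illustration; the great-circle distance between those cities is commonly quoted at around 2,450 miles, so the result should land in that neighborhood.

# Roughly NYC (40.71 N, 74.01 W) to LA (34.05 N, 118.24 W)
puts lldist(40.71, -74.01, 34.05, -118.24)   # => roughly 2,450 miles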

I am currently writing a second version that handles the difference between geographic and geocentric latitude, which should do a good job of accounting for the Earth's eccentricity. The math is not hard, but finding ground truth to validate my results against is, since the online calculators I've tried to check against do not make their assumptions clear. I did find a promising suite of tools for pilots, and I'd hope that if you're doing something as fraught with consequences as flying, you've accounted for these sorts of things.
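
In the meantime, here is a sketch of the conversion step I have in mind, nothing more. The relationship tan(geocentric latitude) = (1 - f)^2 * tan(geodetic latitude) is standard, but the WGS84_F constant, the geocentric_lat name, and the idea of feeding the converted latitudes back into lldist are choices I'm making for illustration, not validated code.

# WGS84 flattening; the geocentric latitude phi' satisfies
#   tan(phi') = (1 - f)**2 * tan(phi)
# where phi is the familiar geodetic (map/GPS) latitude.
WGS84_F = 1.0 / 298.257223563

# Convert a geodetic latitude in degrees to a geocentric latitude in degrees.
# Reuses the DEG2RAD constant defined alongside lldist above.
def geocentric_lat(lat)
  atan((1.0 - WGS84_F)**2 * tan(lat * DEG2RAD)) / DEG2RAD
end

Feeding geocentric latitudes into lldist with a mean radius should shrink the error from the spherical assumption, but I won't trust it until I can check it against independent ground truth.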

xkcd #1318 — "Protip: You can win every exchange just by being one level more precise than whoever talked last. Eventually, you'll defeat all conversational opponents and stand alone."

  1. As far as I'm concerned, this is my canonical example of the difference between a first and second approximation. The Earth isn't really an oblate spheroid either, but that makes a very good second approximation — about 100 m. (See John Cook here and here.)