
Book List: 2019Q1

I think I did less reading this quarter than at any point since I beat dyslexia. Certainly less than any point since I started keeping track in 2011, and that includes the period when I finished my dissertation and had two kids. I'm teaching a course at a local college this semester, and lesson prep and grading have not left a lot of time for reading. But enough complaining...


slide:ology: The Art and Science of Creating Great Presentations, Nancy Duarte

Despite giving a fairly large number of presentations, I'm definitely not the audience for this. It's not really about presentations, but about sales presentations. If, like me, you have mostly factual & technical information to impart, I'm not sure how much this will help. There's a decent amount of advice in here if you're a complete graphic design novice, but there are probably better places to get that knowledge.


Cover of "The Relaxed Mind" by Dza Kilung
"The Relaxed Mind," Dza Kilung

The Relaxed Mind, Dza Kilung Rinpoche

There is perhaps a bit too much "woo" in the later chapters of this meditation manual, but it is still a good book for practice. If nothing else, I like having some meditation-related book on my nightstand/iPod: even if that book itself is not the best, it serves as an encouragement to keep practicing. The first two or three of the seven practices described here seem concretely useful. Maybe the later practices will have more appeal to me as I become a "better" meditator?


Cover of "The Most Human Human," by Brian Christian
"The Most Human Human," Brian Christian

The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, Brian Christian

I loved this. Christian has a degree in computer science and an MFA in poetry. I can't think of a better background for writing about what the Turing Test tells us about talking with (and being) a human. There's good history of AI, exploration of psychology and epistemology, and tips for what makes a conversation interesting.

I'm recommending this as a great book for other technologists to learn something about "soft skills" and for non-technologists to learn about AI. I can't think of another book that comes close to providing both benefits.


Lies Sleeping, Ben Aaronovitch

This is the latest in Aaronovitch's "Rivers of London" series, which I still love. I should really write these recaps as soon as I finish reading, because it's been long enough now that I don't have anything specific to say about it. But this is the ninth volume in the series, so if you don't already have an opinion about the prior eight, there's really no need for you to have one about this.

My wife, who reads mysteries almost exclusively, has recently started this series after hearing me talk about it since 2014. It's one of the few book series we both equally enjoy.


The Labyrinth Index, Charles Stross

(1) Copy-and-paste what I said above about not waiting to write these comments. (2) Copy-and-paste what I said about already having opinions about the series since it's long running, but replace "ninth" with "twelfth."


Cover of "Gravity's Rainbow" by Thomas Pynchon
"Gravity's Rainbow," Thomas Pynchon

Gravity's Rainbow, Thomas Pynchon

I'll be honest: I did not understand this book. I enjoyed it a great deal, but I did not understand it.

I like Pynchon as a stylist even when the narrative has me completely befuddled. As a result, even the confusing passages make for very good audiobook listening because I can let the language just wash over me.


Cover of "Zen Mind, Beginner's Mind" by Shunryu Suzuki.
"Zen Mind, Beginner's Mind," Shunryu Suzuki.

Zen Mind, Beginner's Mind, Shunryu Suzuki

I also didn't fully understand this book, but I feel like I wasn't really meant to. ((Actually, now that I think about it, maybe Pynchon didn't really want people to understand him either.)) I'm not sure "understanding" is even a thing you're supposed to be able to do to Zen. I think I got a lot out of it regardless. It's definitely something I'm going to revisit in the future.


Cover of "House of Suns" by Alastair Reynolds
"House of Suns," Alastair Reynolds

House of Suns, Alastair Reynolds

This is another winner. I haven't had this much fun reading a sci-fi book in years. It has that wide-screen baroque space opera feel that I used to get from Iain Banks books. I can't think of another story that engages so well with the sheer scope — in time and distance — of the galaxy. Before I was halfway through I was already putting all of the library's other Reynolds books on my list.


The Sky-Blue Wolves, S. M. Stirling

I keep saying I'm going to stop reading this series, but then a new volume comes out just when I want a junk-food book and I read it anyway. Then I feel about as satisfied as I do after eating actual junk food. This is a fun world to mentally play around in, but Stirling is really phoning it in at this point. The Big Bad Guy that was supposed to require a world war to defeat just got knocked off in about a chapter of Dreamtime Ninja Shenanigans, and meanwhile two of our Intrepid Heroes (who happen to both be rightful heirs to continent-spanning empires) decided to have a love-child. Nice neat bow; everyone rides into the sunset.


Art in Space

Today I'm going to put on my Tyler Cowen hat and speculate about what artwork will be valuable when humans are space-faring.

That's a pretty big range of possibilities, so let's keep things to a realistic, near(-ish) future. That means ignoring Iain Banks-type, post-singularity futures in which people are molding entire continents on ring worlds for aesthetic value alone. For the sake of argument, let's imagine a future something like that laid out in James S.A. Corey's "The Expanse" world: large population centers on artificial habitats on Mars, the asteroid belt, Jovian moons, etc. with a stable-but-tenuous economic existence. (I've been watching the latest season of the TV adaptation recently, and the latest printed volume is on hold for me at the library, so it's on my mind.)

Concept art of an upscale district on Ceres for "The Expanse" TV adaptation, by Ryan Dening & Tim Warnock.

There are myriad ways that the technology of the next several centuries could change art, but I want to think about what effects living in space specifically will have, rather than what the generic sci-fi-ness of the future could have.

The art market itself is also extremely broad, so here I'm thinking mostly of the upper-middle of the market: not the sort of stuff at Art Basel or Gagosian, but what you might find in the off-world version of Canyon Road.


If living in space maintains the frontier aspect that I am picturing — and honestly, why wouldn't it, since it will be harder than living on a deep-sea oil platform, undersea habitat, or Antarctic base? — then I'd predict a significant shift toward crafts and folk/outsider art.

Most obviously, the volume of living space will be constrained in a way we're not used to. (Think of living permanently on a cruise ship, or perhaps even a submarine.) I'd think that the average size of visual art will decrease to match. On the other hand, the sheer size of a canvas will itself become a signal: a huge picture will be a prestige item simply because it makes a direct claim about how much living space you have.

Concept art for a rougher district on Ceres, also by Dening & Warnock.

This would not apply as directly to digital art, which I would expect to proliferate. For one thing, it is massless and volume-less — a nice feature when momentum, conservation of energy, and other orbital-mechanical constraints play such an important role in life. Furthermore, it can be swapped out with any other work of art in a display effortlessly, which allows for both variety and flexibility in the event that living in confined conditions changes our norms of privacy and personalization. (Hot-swapping living space would necessitate either bland, lowest-common-denominator, hotel-room artwork, or similarly hot-swappable artwork.) The supply of digital art may also increase: with millions more people relying directly and tangibly on computerized navigation, life support, logistics, etc., it seems reasonable to suspect that some people will take their digital skills into more creative roles.

"Table Piece CCLXVI," Anthony Caro, 1975.
"Table Piece CCLXVI," Anthony Caro, 1975.

Will there be more sculpture? A lot of the economy of space seems (in Expanse-world, that is) to be based on mining, drilling, fabrication, etc. Will having many more people able to work a plasma torch and MIG welder lead to a proliferation of Anthony Caros? I can see an increase in supply, but on the demand side not so much. I have had dealers tell me that there is already low demand on Earth for sculpture because it is perceived by potential buyers as being awkward to display. Buying a big chunk of decorative rock or metal or ceramic would be more prohibitive in space than it is in the here-and-now.

Sci-fi designers seem to love non-rectangular corridors with lots of protruding bulkheads and other features that seem intentionally wasteful of volume. I have no idea if real space ships will actually end up like this in a case of life imitating art, but let's assume they do. This will: (a) make it difficult to hang canvases because the walls are often inexplicably not vertical, and (b) leave you with a lot of little nooks and crannies into which you might be able to fit sculpture. How would your aesthetic sense change if you didn't have blank walls to cover, but instead had lots of interstitial space between all the assorted conduits and ducts that needed to be filled?

I would expect other traditional handcrafts to increase. A population of artisan workers — possibly with limited entertainment options due to being physically isolated from large population centers — may very well turn to crafts as a means of expression and a way to pass the time. Textiles, perhaps? Limited living space would also mean limited possessions. Would there be a resurgence in, for instance, needlework to personalize jumpsuits? ((Because if there's one thing the sci-fi of my youth universally agreed on, it's that people on space stations will wear jumpsuits.)) Perhaps jewelry would be another outlet, if there is access to machinist skills and tools. This form of wearable sculpture would bypass the limitations of size & weight mentioned above. Both jewelry and needlework might have added appeal if clothing becomes more standardized for safety or utilitarian reasons.


A yosegi jewelry chest by Affine Creations.

Materials will also be a limitation. There are no trees in space, so forget one of my hobbies, woodworking, along with carving and turning. However, I could see an increased demand for small wooden objects like boxes and small-scale cabinetry as semi-luxury items, both because they would act as a reminder of Earth, and because living on a moving vessel would create a practical need for things to be put in containers. (Again, think of being on a ship.) Perhaps there would be a big demand for intarsia or yosegi? These both become easier with CNC tools (even if that is sort of cheating), and I assume these tools would be well provided for in space. All sorts of veneer work could be in higher demand: it can be used to mask the metal or synthetic materials that habitats would be made out of without costing significant mass or volume.

Of course, paper also becomes expensive. I've seen arguments that cheap paper in the 14th century was a necessary condition for the emergence of more realistic painting styles in the Renaissance, since it allowed artists to do orders of magnitude more practice than before. I don't think this will be a limiting condition given the availability of digital tablets, but it is probably a safe assumption that print-making, calligraphy and other works on paper will not be common on Ganymede.

I would expect some advanced technology in terms of chemistry and materials science. What new possibilities for pigments, substrates, etc. will this open up? The exploration of hostile environments will require advances in sensor and processing technology. What effect will this have on computational photography or digital rendering? AR/VR as an artistic medium will probably be helped along.

Many beginners are tempted to paint from photographs as source material. This often leads to problems matching colors and values, since cameras and displays don't come close to capturing the full dynamic range of human vision. Will this mismatch between cameras and our retinas cause difficulty painting landscapes if the artist's vision has to be modulated through some variety of sensors or visors to protect them from radiation when observing the environment? Forget painting en plein air.

The skills and tools for ceramics and glasswork seem like they would be more common in a space ecosystem, but does the utility of materials that shatter easily go down if you live on a moving vessel? I would think so. Perhaps there is a divide between those living on moons & asteroids and those living on ships, with the former being interested in ceramics and the latter not. Perhaps one of the materials science advances is more durable ceramics, and this point becomes moot.

More broadly, will interior decoration be divided between spaces that are rigged for acceleration and those that aren't? Or those that have a definite up/down axis due to (pseudo)gravity and those that do not? Will visual artists adjust to create works that don't have a defined top or bottom so that they can be appreciated better in zero-g? Or will the opposite occur: the use of artwork and decorations to subliminally orient occupants of a space in a common direction when there is no proper "down"?

How do the aforementioned CNC and 3D printing technologies fit into purely aesthetic pursuits? Too early to tell for me.


Regarding performance arts, I have little opinion. Will scarcity of large, open spaces make theatre less common? Will live music proliferate for the same reasons I speculate that crafts might (i.e. isolated communities looking to make their own entertainment)? Or will people be sailing off into the deepness with such large digital entertainment libraries that this is unnecessary? If people become used to working and living in space suits, communicating via radio, and seeing others primarily through screens, will that increase or decrease the desire to see and hear unmediated, live performances? ((Will the choral traditions of mining communities like those in Wales or South Africa be replicated by miners of asteroids?)) I have no idea what the sign of the effect is, to say nothing of the magnitude. One thing we can be confident about is that zero or low gravity environments certainly open up huge possibilities for dance, acrobatics, etc.


To what degree are artworks made in space demanded on Earth? I once bought a (rather ugly) change dish/ashtray only because it was cast out of lava on Mount Etna in front of me. Would there be enough people on Earth who would want decorative paperweights made of chunks of Iapetus or Pallas? Will art produced in space demand a premium on Earth because of its exotic origin, or will it be seen as an inferior good from a cultural/economic backwater?

Going in the other direction, will art that is conspicuously from Earth have extra luxury status in space? "I paid to haul this dead weight up out of the gravity well just to look at it."

What themes will be explored in space-based art? My first guess is that desire for landscapes and other natural scenes of Earth would increase, to compensate for people not being able to be in "natural" environments personally. On the other hand, perhaps the early settlers in space are proud enough of their pioneer spirit that they turn their back aesthetically on Earth. ("Shrouded bards of other lands, you may rest, you've done your work..." etc.) Psychologically, it seems like the most salient themes of living in space would be isolation and danger; I would expect those to be explored. Are there any thematic elements linking the art of nomadic cultures or those living in very hostile conditions? I'm not well-versed enough to think of any, but perhaps they exist.


A common connection in my speculations is that the supply and demand may move in opposing directions (e.g. easier to make sculpture, but fewer people want them). Another is that there could be a more bifurcated art market, with higher demand for luxury items (e.g. made of wood) among a narrower portion of the space-faring population, but a broader demand for more folk-art and crafts.

I mostly don't have answers to any of the questions I raised. I think the only thing to do is hoist ourselves out of the gravity well and find out what happens.


PS I hope you read the title of this post in the same voice Mel Brooks used at the end of the History of the World Part I. I certainly did.


Edited: I just thought of another art form that could adapt to space well — bonsai. Agronomy will be critical not just for food, but for life-support systems in general. This could lead to increased prominence for horticultural pursuits.

Bonsai seems especially well-suited, given the volume constraints of space-living. I could sum up the entire goal of bonsai ((To the extent I understand it; my experience boils down to visiting a few arboretums and attempting to grow one juniper that almost immediately succumbed to a fungal infection.)) as "let me take untamed Nature, and form it into a bite-sized version to keep inside my home," which I can definitely see the appeal of if you're traveling away from Earth and out into The Beyond. I also think an overlooked aspect of space travel is how long it will take to get anywhere translunar, so a slow form of art creation may have an intrinsic appeal.

A fukinagashi (wind-swept) style bonsai tree.

I'm very curious how the forms of plants and human tastes would adapt to low- and zero-g, especially since many traditional bonsai styles are either about succumbing to gravity (cascades) or reacting strongly against it (upright). Simulating the effect of wind on a tree is also a common technique, but wind might be a somewhat foreign concept to people residing in space. ((Unless we build something like O'Neill cylinders that have Coriolis winds.))


Why we worry about the Ethics of Machine Intelligence

This essay was co-authored by Steve Mills and me.

We worry about the ethics of Machine Intelligence (MI) and we fear our community is completely unprepared for the power we now wield. Let us tell you why.

To be clear, we’re big believers in the far-reaching good MI can do. Every week there are new advances that will dramatically improve the world. In the past month we have seen research that could improve the way we control prosthetic devices, detect pneumonia, understand long-term patient trajectories, and monitor ocean health. That’s in the last 30 days. By the time you read this, there will be even more examples. We really do believe MI will transform the world around us for the better, which is why we are actively involved in researching and deploying new MI capabilities and products.

There is, however, a darker side. MI also has the potential to be used for evil. One illustrative example is a recent study by Stanford University researchers who developed an algorithm to predict sexual orientation from facial images. When you consider recent news of the detention and torture of more than 100 gay men in the Russian republic of Chechnya, you quickly see the cause for concern. This software and a few cameras positioned on busy street corners would allow the targeting of homosexuals at industrial scale: hundreds quickly become thousands. The potential for this isn't so far-fetched. China is already using CCTV and facial recognition software to catch jaywalkers. The researchers pointed out that their findings "expose[d] a threat to the privacy and safety of gay men and women." That disavowal does little to prevent outside groups from implementing the technology for mass targeting and persecution.

Many technologies have the potential to be applied for nefarious purposes. This is not new. What is new about MI is the scale and magnitude of impact it can achieve. This scope is what will allow it to do so much good, but also so much bad. It is like no other technology that has come before, with the notable exception of atomic weapons, a comparison others have already drawn. We hesitate to draw such a comparison for fear of perpetuating a sensationalistic narrative that distracts from this conversation about ethics. That said, it's the closest parallel we can think of in terms of scale (the potential to impact tens of millions of people) and magnitude (the potential to do physical harm).

None of this is why we worry so much about the ethics of MI. We worry because MI is unique in so many ways that we are left completely unprepared to have this discussion.

Ethics is not [yet] a core commitment in the MI field. Compare this with medicine, where a commitment to ethics has existed for centuries in the form of the Hippocratic Oath. Members of the physics community now pledge their intent to do no harm with their science. In other fields, ethics is part of the very ethos. Not so with MI. Compared to other disciplines, the field is so young that we haven't had time to mature and learn lessons from the past. We must look to these other fields and their hard-earned lessons to guide our own behavior.

Computer scientists and mathematicians have never before wielded this kind of power. The atomic bomb is one exception; cyber weapons may be another. Both of these, however, represent intentional applications of technology.  While the public was unaware of the Manhattan Project, the scientists involved knew the goal and made an informed decision to take part. The Stanford study described earlier has clear nefarious applications; many other research efforts in MI may not. Researchers run the risk of unwittingly conducting studies that have applications they never envisioned and do not condone. Furthermore, research into atomic weapons could only be implemented by a small number of nation-states with access to proper materials and expertise. Contrast that with MI, where a reasonably talented coder who has taken some open source machine learning classes can easily implement and effectively ‘weaponize’ published techniques. Within our field, we have never had to worry about this degree of power to do harm. We must reset our thinking and approach our work with a new degree of rigor, humility, and caution.

Ethical oversight bodies from other scientific fields seem ill-prepared for MI. Looking to existing ethical oversight bodies is a logical approach. Even we suggested that MI is a "grand experiment on all of humanity" and should follow principles borrowed from human subject research. The fact that Stanford's Institutional Review Board (IRB), a respected body within the research community, reviewed and approved research with questionable applications should give us all pause. Researchers have long raised questions about the broken IRB system. An IRB system designed to protect the interests of study participants may be unsuited for situations in which potential harm accrues not to the subjects but to society at large. It's clear that the standards that have served other scientific fields for decades or even centuries may not be prepared for MI's unique data and technology issues. These challenges are compounded even further by the general lack of MI expertise, or sometimes even technology expertise, among the members of these boards. We should continue to work with existing oversight bodies, but we must also take an active role in educating them and evolving their thinking about MI.

MI ethical concerns are often not obvious. This differs dramatically from other scientific fields where ethical dilemmas are self-evident. That’s not to say they are easy to navigate. A recent story about an unconscious emergency room patient with a “Do Not Resuscitate” tattoo is a perfect example. Medical staff had to decide whether they should administer life-saving treatment despite the presence of the tattoo. They were faced with a very complex, but very obvious, ethical dilemma. The same is rarely true in MI where unintended consequences may not be immediately apparent and issues like bias can be hidden in complex algorithms. We have a responsibility to ourselves and our peers to be on the lookout for ethical issues and raise concerns as soon as they emerge.  

MI technology is moving faster than our approach to ethics. Other scientific fields have had hundreds of years for their approach to ethics to evolve alongside the science. MI is still nascent, yet we are already moving technology from the 'lab' to full deployment. The speed at which that transition is happening has led to notable ethical issues, including potential racism in criminal sentencing and discrimination in job hiring. The ethics of MI needs to be studied as much as the core technology if we ever hope to catch up and avoid these issues in the future. We need to catalyze an ongoing conversation around ethics, much as we see in other fields like medicine, where there is active research and discussion within the community.

The issue that looms behind all of this, however, is the fact that we can’t ‘put the genie back in the bottle’ once it has been released. We can’t undo the Stanford research now that it’s been published. As a community, we will forever be accountable for the technology that we create.

In the age of MI, corporate and personal values take on entirely new importance. We have to decide what we stand for and use that as a measure to evaluate our decisions. We can’t wait for issues to present themselves. We must be proactive and think in hypotheticals to anticipate the situations we will inevitably face.

Be assured that every organization will be faced with hard choices related to MI. Choices that could hurt the bottom line or, worse, harm the well-being of people now or in the future. We will need to decide, for example, if and how we want to be involved in Government efforts to vet immigrants or create technology that could ultimately help hackers. If we fail to accept that these choices inevitably exist, we run the risk of compromising our values. We need to stand strong in our beliefs and live the values we espouse for ourselves, our organizations, and our field of study. Ethics, like many things, is a slippery slope. Compromising once almost always leads to compromising again.

We must also recognize that the values of others may not mirror our own. We should approach those situations without prejudice. Instead of anger or defensiveness we should use them as an opportunity to have a meaningful dialog around ethics and values. When others raise concerns about our own actions, we must approach those conversations with humility and civility. Only then can we move forward as a community.

Machines are neither moral nor immoral. We must work together to ensure they behave in a way that benefits, not harms, humanity. We don't purport to have the answers to these complex issues. We simply request that you keep asking questions and take part in the discussion.


This has been crossposted to Medium and to the Booz Allen website as well.

We’re not the only ones discussing these issues. Check out this Medium post by the NSF-funded group Pervasive Data Ethics for Computational Research, Kate Crawford’s amazing NIPS keynote, Mustafa Suleyman’s recent essay in Wired UK, and Bryor Snefjella’s recent piece in BuzzFeed.


AIES 2018

Last week I attended the first annual conference on AI, Ethics & Society where I presented some work on a Decision Tree/Random Forest algorithm that makes decisions that are less biased or discriminatory. ((In the colloquial rather than technical sense)) You can read all the juicy details in our paper. This isn't a summary of our paper, although that blog post is coming soon. Instead I want to use this space to post some reaction to the conference itself. I was going to put this on a twitter thread, but it quickly grew out of control. So, in no particular order, here goes nothing:

Many of the talks people gave were applicable to GOFAI but don't fit with contemporary approaches. Approaches to improving/limiting/regulating/policing rule-based or expert systems won't work well (if at all) with emergent systems.

Many, many people are making the mistake of thinking that all machine learning is black box. Decision trees are ML but also some of the most transparent models possible. Everyone involved in this AI ethics discussion should learn a rudimentary taxonomy of AI systems. It would avoid mistakes and conflations like this, and it would take maybe an hour of time.
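The transparency point is easy to demonstrate: a small decision tree's entire decision process can be dumped as human-readable rules. Here's a minimal sketch (this assumes scikit-learn and its bundled iris dataset; it is an illustration of the general point, not the model from our paper):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a deliberately small tree so the whole model fits on screen.
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# Print the entire model as nested if/else rules. No black box here:
# every prediction can be traced by hand through these thresholds.
rules = export_text(clf, feature_names=list(iris.feature_names))
print(rules)
```

That printout *is* the model, in its entirety, which is exactly the property that "all ML is opaque" claims miss.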

Now that I think of it, it would be great if next year's program included some tutorials. A crash course in AI taxonomy would be useful, as would a walk-through of what an AI programmer does day-to-day. (I think it would help people to understand what kinds of control we can have over AI behavior if they knew a little more about what went in to getting any sort of behavior at all.) I'd be interested in some lessons on liability law and engineering, or how standards organization operate.

Lots of people are letting the perfect be the enemy of the good. I heard plenty of complaints about solutions that alleviate problems but don't eliminate them completely, or work in a majority of situations but don't cover every possible sub-case.

Some of that was the standard posturing that happens at academic conferences ("well, sure, but have you ever thought of this??!") but that's a poor excuse for this kind of gotcha-ism.

Any academic conference has people who ask questions to show off how intelligent they are. This one had the added scourge of people asking questions to show off how intelligent and righteous they are. If ever there was a time to enforce concise Q&A rules, this is it.

We’re starting from near scratch here and working on a big problem. Adding any new tool to the toolbox should be welcome. Taking any small step towards the goal should be welcome.

People were in that room because they care about these problems. I heard too much grumbly backbiting about presenters that care about ethics, but don't care about it exactly the right way.

We can solve problems, or we can enforce orthodoxy, but I doubt we can do both.

It didn't occur to me at the time, but in retrospect I'm surprised how circumscribed the ethical scenarios being discussed were. There was very little talk of privacy, for instance, and not much about social networks/filter bubbles/"fake news"/etc. that has been such a part of the zeitgeist.

Speaking of zeitgeist, I didn't have to hear the word "blockchain" even one single time, for which I am thankful.

If I had to give a rough breakdown of topics, it would be 30% AV/trolley problems, 20% discrimination, 45% meta-discussion, and 5% everything else.

One questioner brought up Jonathan Haidt's Moral Foundations Theory at the very end of the last day. I think he slightly misinterpreted Haidt (but I'm not sure since the questioner was laudably concise), but I was waiting all weekend for someone to bring him up at all.

If any audience would recognize the difference between “bias” in the colloquial sense and “bias” in the technical, ML/stats sense, I would have hoped it was here. No such luck. This wasn't a huge problem in practice, but it’s still annoying.
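For the record, technical bias is a property of an estimator, not a moral failing. A quick standard-library sketch: the plug-in variance estimator (divide by n) is biased downward, while Bessel's correction (divide by n − 1) removes that bias, and neither has anything to do with discrimination:

```python
import random
import statistics

random.seed(0)
TRIALS, N = 20_000, 5  # many small samples from a population with variance 1.0

biased, unbiased = [], []
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]
    biased.append(statistics.pvariance(sample))   # divides by n
    unbiased.append(statistics.variance(sample))  # divides by n - 1

avg_biased = statistics.fmean(biased)
avg_unbiased = statistics.fmean(unbiased)
# The /n estimator systematically underestimates the true variance (1.0)
# by a factor of (n-1)/n = 0.8 on average; the corrected one does not.
print(avg_biased, avg_unbiased)
```

That systematic underestimate is "bias" in the stats/ML sense, and conflating it with the colloquial sense muddies exactly the conversations this conference was trying to have.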

There’s a ton of hand-waving about whether the policies being proposed for ethical AI will actually work at the implementation level. “Hand-waving” is even too generous a term. It’s one thing to propose rules, but how do you make them work when fingers are hitting keyboards?

I’ll give people some slack here because most talks were very short, but “we’ll figure out what we want, and then tell the engineers to go make it happen somehow” is not really a plan. The plan needs to be grounded in what's possible starting at its conception, not left as an implementation detail for the technicians to figure out later.

"We'll figure out what to do, and then tell the geeks to do it" is not an effective plan. One of the ways it can fail is because it is tinged with elitism. (I don't think participants intended to be elitist, but that's how some of these talks could be read.) I fully endorse working with experts in ethics, sociology, law, psychology, etc. But if the technicians involved interpret what those experts say — accurately or not — as "we, the appointed high priesthood of ethics, will tell you, the dirty code morlocks, what right and wrong is, and you will make our vision reality" then the technicians will not be well inclined to listen to those experts.

Everyone wants to 'Do The Right Thing'. Let's work together to help each other do that, and refrain as much as possible from pointing fingers at people who are 'Doing It Wrong.' Berating people who have fallen short of your ethical standards (even those who have fallen way, way short) feels immensely satisfying and is a reliable way to solidify your in-group, but it's not productive in the long run. That doesn't mean we need to equivocate or let people off the hook for substandard behavior, but it does mean that the response should be to lead people away from their errors as much as possible rather than punishing for the sake of punishing.

I wish the policy & philosophy people here knew more about how AI is actually created.

(I’m sure the non-tech people wish I knew more about how moral philosophy, law, etc. works.)

Nonetheless, engineers are going to keep building AI systems whether or not philosophers etc. get on board. If the latter want to help drive development there is some onus on them to better learn the lay of the land. That’s not just, but they have the weaker bargaining position, so I think it's how things will have to be.

Of course I'm an engineer, so this is admittedly a self-serving opinion. I still think it's accurate though.

Even if every corporation, university, and government lab stopped working on AI because of ethical concerns, the research would slow but not stop. I cannot emphasize enough how low the barriers to entry in this space are. Anyone with access to arXiv, GitHub, and a $2000 gaming computer or some AWS credits can get in the game.

I was always happy to hear participants recognize that while AI decision making can be unethical/amoral, human decision making is also often terrible. It’s not enough to say the machine is bad if you don’t ask “bad compared to what alternative?”. Analyze on the right margin! Okay, the AI recidivism model has non-zero bias. How biased is the parole board? Don't compare real machines to ideal humans.
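As a sketch of what "analyze on the right margin" means in practice, with entirely made-up numbers: compare the model's error rate to the error rate of the human process it would replace, not to zero.

```python
# Hypothetical, purely illustrative numbers: judge a recidivism model
# against the parole board it would replace, not against perfection.
def false_positive_rate(decisions):
    """decisions: list of (predicted_reoffend, actually_reoffended) pairs."""
    fp = sum(1 for pred, actual in decisions if pred and not actual)
    negatives = sum(1 for _, actual in decisions if not actual)
    return fp / negatives

# (predicted, actual) outcomes for 100 hypothetical cases each
model_decisions = ([(True, True)] * 30 + [(True, False)] * 12 +
                   [(False, True)] * 10 + [(False, False)] * 48)
board_decisions = ([(True, True)] * 25 + [(True, False)] * 20 +
                   [(False, True)] * 15 + [(False, False)] * 40)

print(f"model FPR: {false_positive_rate(model_decisions):.2f}")  # 0.20
print(f"board FPR: {false_positive_rate(board_decisions):.2f}")  # 0.33
```

In this invented scenario the model wrongly flags 20% of non-reoffenders, which is bad, and also a third less often than the human board does, which is the comparison that actually matters.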

Similarly, don't compare real-world AI systems with ideal regulations or standards. Consider how regulations will end up in the real world. Say what you will about the Public Choice folks, but their central axiom is hard to dispute: actors in the public sector aren't angels either.

One poster explicitly mentioned Hume and the Induction Problem, which I would love to see taught in all Data Science classes.

Several commenters brought up the very important point that datasets are not reality. This map-is-not-the-territory point also deserves to be repeated in every Data Science classroom far more often.

That said, I still put more trust in quantitative analysis than in qualitative. But let's be humble. A data set is not the world, it is a lens with which we view the world, and with it we see but through a glass darkly.

I'm afraid that overall this post makes me seem much more negative on AIES than I really am. Complaining is easier than complimenting. Sorry. I think this was a good conference full of good people trying to do a good job. It was also a very friendly crowd, so as someone with a not insignificant amount of social anxiety, thank you to all the attendees.

Posted in CS / Science / Tech / Coding

AI's "one trick pony" has a hell of a trick

The MIT Technology Review has a recent article by James Somers about error backpropagation, "Is AI Riding a One-Trick Pony?" Overall, I agree with the message in the article. We need to keep thinking of new paradigms because the SotA right now is very useful, but not correct in any rigorous way. However, as much as I agree with the thesis, I think Somers oversells it, especially in the beginning of the piece. For instance, the introductory segment concludes:

When you boil it down, AI today is deep learning, and deep learning is backprop — which is amazing, considering that backprop is more than 30 years old. It’s worth understanding how that happened—how a technique could lie in wait for so long and then cause such an explosion — because once you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one.

That's a bit like saying "When you boil it down, flight is airfoils, and airfoils are Bernoulli's principle — which is amazing, considering that Bernoulli's principle is almost 300 years old." I totally endorse the idea that we ought to understand backprop; I've spent a lot of effort in the last couple of months organizing training for some of my firm's senior leadership on neural networks, and EBP/gradient descent is the heart of my presentation. But I would be very, very careful about concluding that backprop is the entire show.
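For the curious, the whole mechanism Somers is writing about fits in a few lines. This is a generic textbook sketch of backprop and gradient descent (a tiny network learning XOR), not anything from my training materials:

```python
import numpy as np

# A one-hidden-layer network learning XOR via error backpropagation.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: apply the chain rule layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;  b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

That loop really is the "one trick": run the data forward, push the error backward, nudge the weights downhill, repeat. Everything else in deep learning is elaboration on top of it.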

Backprop was also not "lying in wait." People had been working on it ever since it was introduced in 1986. The problem was that '86 was the height of the second AI winter, which lasted another decade. Just like people should understand backprop to understand contemporary AI, they should learn about the history of AI to understand contemporary AI. Just because no one outside of CS (and precious few people in CS, for that matter) paid any attention to neural networks before 2015 doesn't mean they were completely dormant, only to spring up fully formed in some sort of intellectual Athenian birth.

I really don't want to be in the position of defending backprop. I took the trouble to write a dissertation about non-backprop neural nets for a reason, after all. ((That reason being, roughly put, that we're pretty sure the brain is not using backprop, and it seems ill-advised to ignore the mechanisms employed by the most intelligent thing we are aware of.)) But I also don't want to be in the position of letting sloppy arguments against neural nets go unremarked. That road leads to people mischaracterizing Minsky and Papert, abandoning neural nets for generations, and putting us epochs behind where we might have been. ((Plus sloppy arguments should be eschewed on the basis of the sloppiness alone, irrespective of their consequences.))


PS This is also worth a rejoinder:

Big patterns of neural activity, if you’re a mathematician, can be captured in a vector space, with each neuron’s activity corresponding to a number, and each number to a coordinate of a really big vector. In Hinton’s view, that’s what thought is: a dance of vectors.

That's not what thought is; that's how thought can be represented. Planets are not vectors, but their orbits can be profitably described that way, because "it behooves us to place the foundations of knowledge in mathematics." I'm sorry if that seems pedantic, but the distinction between a thing and its representation—besides giving semioticians something to talk about—underpins much of our interpretation of AI systems and cognitive science as well. Indeed, a huge chunk of data science work is figuring out the right representations. If you can get that, your problem is often largely solved. ((IIRC both Knuth and Torvalds have aphorisms to the effect that once you have chosen the correct data structures, the correct algorithms will naturally follow. I think AI and neuroscience are dealing with a lot of friction because we haven't been able to figure out the right representations/data structures. When we do, the right learning algorithms will follow much more easily.))

PPS This, on the other hand, I agree with entirely:

Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way. … What we know about intelligence is nothing against the vastness of what we still don’t know.

What I fear is that people read that and conclude that artificial neural networks are built on a shallow foundation, so we should give up on them as being unreliable. A much better conclusion would be that we need to keep working and build better, deeper foundations.

Posted in CS / Science / Tech / Coding

Will AI steal our jobs?

As an AI researcher, I think I am required to have an opinion about this. Here's what I have to say to the various tribes.

AI-pessimists: please remember that the Luddites have been wrong about technology causing economic cataclysm every time so far. We're talking about several consecutive centuries of wrongness. ((I am aware of the work of Gregory Clark and others related to Industrial Revolution era wage and consumption stagnation. If a disaster requires complicated statistical models to provide evidence it exists, I say its scale can not have been that disastrous.)) Please revise your confidence estimates downwards.

AI-optimists: please remember that just because the pessimists have always been wrong in the past does not mean that they must always be wrong in the future. It is not a natural law that the optimists must be right. That labor markets have adapted in the long term does not mean that they must adapt, to say nothing of short-term dislocations. Please revise your confidence estimates downwards.

Everyone: many forms of technology are substitutes for labor. Many forms of technology are complements to labor. Often a single form of technology is both simultaneously. It is impossible to determine a priori which effect will dominate. ((Who correctly predicted that the introduction of ATMs would coincide with an increase in employment of bank tellers? Anyone? Anyone? Bueller?)) This is true of everything from the mouldboard plough to a convolutional neural network. Don't casually assert AI/ML/robots are qualitatively different. (For example, why does Bill Gates think we need a special tax on robots that is distinct from a tax on any other capital equipment?)

As always, please exercise cognitive and epistemic humility.

Posted in Business / Economics, CS / Science / Tech / Coding

Marketing to Algorithms?

Toby Gunton :: Computer says no – why brands might end up marketing to algorithms

I know plenty about algorithms, and enough about marketing. ((Enough to draw a paycheck from a department of marketing for a few years, at least.)) And despite that, I'm not sure what this headline actually means. It's eye-catching, to be sure, but what would marketing to an algorithm look like?

When you get down to it, marketing is applied psychology. Algorithms don't have psyches. Whatever "marketing to algorithms" means, I don't think it's going to be recognizable as marketing.

Would you call what spammers do to slip past your filters "marketing"? (That's not rhetorical.) Because that's pretty much what Gunton seems to be describing.

Setting aside the intriguing possibility of falling in love with an artificial intelligence, the film [Spike Jonez's Her] raises a potentially terrifying possibility for the marketing industry.

It suggests a world where an automated guardian manages our lives, taking away the awkward detail; the boring tasks of daily existence, leaving us with the bits we enjoy, or where we make a contribution. In this world our virtual assistants would quite naturally act as barriers between us and some brands and services.

Great swathes of brand relationships could become automated. Your energy bills and contracts, water, gas, car insurance, home insurance, bank, pension, life assurance, supermarket, home maintenance, transport solutions, IT and entertainment packages; all of these relationships could be managed by your beautiful personal OS.

If you're an electric company whose customers all interact with you via software daemons, do you even have a brand identity any more? Aren't we discussing a world in which more things will be commoditized? And isn't that a good thing for most of the categories listed?

What do we really care about: getting goods and services, or expressing ourselves through the brands we identify with? Both, to an extent. But if we can no longer do that through our supermarkets or banks, won't we simply shift that focus to other sectors: clothes, music, etc.?


Arnold Kling :: Another Proto-Libertarian

2. Consider that legislation may be an inferior form of law not just recently, or occasionally, but usually. Instead, consider the ideas of Bruno Leoni, which suggest that common law that emerges from individual cases represents a spontaneous order, while legislation represents an attempt at top-down control that works less well.

I'd draw a parallel to Paul Graham's writing on dealing with spam. Bayesian filtering is the bottom-up solution; blacklists and rule sets are the top-down.
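To make the parallel concrete, here is a toy sketch of the bottom-up approach, loosely in the spirit of Graham's Bayesian filter. The training messages and the smoothing are made up for illustration; a real filter would use far more data and a proper combination rule across words:

```python
from collections import Counter

# Spamminess is learned from examples (bottom-up),
# not decreed by a blacklist or rule set (top-down).
spam = ["buy cheap pills now", "cheap pills cheap deals"]
ham = ["lunch meeting moved to noon", "draft of the paper attached"]

spam_counts = Counter(w for msg in spam for w in msg.split())
ham_counts = Counter(w for msg in ham for w in msg.split())

def spam_probability(word):
    # Laplace-smoothed per-word estimate of P(spam | word).
    s = (spam_counts[word] + 1) / (sum(spam_counts.values()) + 2)
    h = (ham_counts[word] + 1) / (sum(ham_counts.values()) + 2)
    return s / (s + h)

print(round(spam_probability("cheap"), 2))    # high: seen only in spam
print(round(spam_probability("meeting"), 2))  # low: seen only in ham
```

The key property is that nobody ever wrote a rule saying "cheap" is spammy; the evidence accumulated from examples, the same way common law accumulates from cases.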


Both of these stories remind me of a couple of scenes in Greg Egan's excellent Permutation City. Egan describes a situation where people have daemons to answer their video phones that have learned (bottom-up) how to mimic your reactions well enough to screen out personal calls from automated messages. In turn marketers have software that learns how to recognize if they're talking to a real person or one of these filtering systems. The two have entered an evolutionary race to the point that people's filters are almost full-scale neurocognitive models of their personalities.

Posted in Business / Economics, CS / Science / Tech / Coding

Reading List for 23 September 2013

Arnold Kling :: Big Gods

Here is a question to think about. If religions help to create social capital by allowing people to signal conscientiousness, conformity, and trustworthiness [as Norenzayan claims], how does this relate to Bryan Caplan’s view that obtaining a college degree performs that function?

That might explain why the credentialist societies of Han China were relatively irreligious. Kling likes to use the Vickies/Thetes metaphor from Neal Stephenson's Diamond Age, and I think this dichotomy could play well with that. Wouldn't the tests required by the Reformed Distributed Republic fill this role, for instance?

Ariel Procaccia :: Alien journals

Steve Landsburg :: RIP, Ronald Coase

This is by far the best, simplest explanation of Coase's insights that I have read. Having read plenty of Landsburg, that should not — indeed does not — surprise me.

His final 'graph is a digression, but a good point:

Coase’s Nobel Prize winning paper is surely one of the landmark papers of 20th century economics. It’s also entirely non-technical (which is fine), and (in my opinion) ridiculously verbose (which is annoying). It’s littered with numerical examples intended to illustrate several different but related points, but the points and the examples are so jumbled together that it’s often difficult to tell what point is being illustrated... Pioneering work is rarely presented cleanly, and Coase was a true pioneer.

And this is why I put little stock in "primary sources" when it comes to STEM. The intersection between people/publications who originate profound ideas and people/publications which explain profound ideas well is a narrow one. If what you want is the latter, don't automatically mistake it for the former. The best researchers are not the best teachers, and this is true as much for papers as it is for people.

That said, sometimes the originals are very good. Here are two other opinions on this, from Federico Pereiro and John Cook.

Prosthetic Knowledge :: Prototypo.io

Start a font by tweaking all glyphs at once. With more than twenty parameters, design custom classical or experimental shapes. Once prototyping of the font is done, each point and curve of a glyph can be easily modified. Explore, modify, compare, export with infinite variations.

I liked this better when it was called Metafont.

Sorry, I couldn't resist some snark. I actually do like this project. I love both Processing and typography, so why wouldn't I? Speaking of which...

Hoefler & Frere-Jones :: Pilcrow & Capitulum

Some sample pilcrows from the H&FJ foundry.

Eric Pement :: Using SED to make indexes for books

That's some impressive SED-fu.

Mike Duncan :: Revolutions Podcast

(Okay, so technically this may not belong on a "reading list.") Duncan previously created The History of Rome podcast, which is one of my favorites. Revolutions is his new project, and it just launched. Get on board now.

Kenneth Moreland :: Diverging Color Maps for Scientific Visualization [pdf]

Ardi, Tan & Yim :: Color Palette Generation for Nominal Encodings [pdf]

These two have been really helpful in the new visualization project I'm working on.

Andrew Shikiar :: Predicting Kiva Loan Defaults

Brett Victor :: Up and Down the Ladder of Abstraction: A Systematic Approach to Interactive Visualization

This would be a great starting place for high-school or freshmen STEM curricula. As a bonus, it has this nice epigraph from Richard Hamming:

"In science, if you know what you are doing, you should not be doing it. In engineering, if you do not know what you are doing, you should not be doing it. Of course, you seldom, if ever, see either pure state."

Megan McArdle :: 13 Tips for Jobless Grads on Surviving the Basement Years

I'm at the tail end of a doctoral program and going on the job market. This is good advice. What's disappointing is that this would have been equally good and applicable advice for people going on the job market back when I started grad school. The fact that we're five years (!!) down the road and we still have need of these sorts of "surviving in horrid job markets" pieces is bleak.

Posted in Reading Lists

Command line history

Jude Robinson :: The single most useful thing in bash

Create ~/.inputrc and fill it with this:

"\e[A": history-search-backward
"\e[B": history-search-forward
set show-all-if-ambiguous on
set completion-ignore-case on

This allows you to search through your history using the up and down arrows … i.e. type cd / and press the up arrow and you'll search through everything in your history that starts with cd /.

Wow. That is not an exaggeration at all: the most useful thing. I am so thrilled to finally be able to search my shell history the same way I can search my MATLAB history. I've been able to do this there for ages, and my mind still hasn't caught up with not being able to do it in the shell.

If it's not clear to you why this is useful or why it pleases me, I don't think there's anything I can do to explain it. Sorry.


PS Anyone have first-hand experience with the fish shell? The autocompletions and inline, automatic syntax highlighting seem clever. I need to get around to giving it a try on one of my boxes.

Posted in CS / Science / Tech / Coding

Pi

The Economist :: Babbage Blog :: Humble Pi

The Raspberry Pi is the brainchild of a couple of computer scientists at Cambridge University. Back in 2006, they lamented the decline in programming skills among applicants for computer-science courses. ... Over the past ten years, computer-science students have gone from arriving at university with a knowledge of several assembly and high-level programming languages to having little more than a working knowledge of HTML, Javascript and perhaps PHP—simple tools for constructing web sites. To learn a computer language, “you’ve got to put in your 10,000 hours,” says Dr Upton. “And it’s a lot easier if you start when you’re 18.” Some would say it is even better to start at 14.

The problem is not a lack of interest, but the lack of cheap, programmable hardware for teenagers to cut their teeth on. For typical youngsters, computers have become too complicated, too difficult to open (laptops especially) and alter their settings, and way too expensive to tinker with and risk voiding their warranty by frying their innards.

I don't see the connection between learning to code and having super-cheap hardware. Back when I was a kid learning to program I actually had to pay real money for a compiler. (Anyone else remember Borland Turbo C++?) Now you're tripping over free languages and environments to use, including many that run entirely through your browser so there's zero risk to your machine.

Honestly, how many teens are going to go full David Lightman and be doing hacking serious enough to put their hardware at risk? Is the goal to make sure teens have the opportunity to start learning to code before college, or to give them hardware to tinker with? Those are both fine goals. Being a software guy I'd put more weight on the former, but the important point is that the ways to accomplish them are completely different.

The Pi is a great way to meet the goal of giving people cheap hardware to experiment with. But if the goal is to give kids an opportunity to start racking up their 10k hours in front of an interpreter or compiler, then projects like Repl.it are a lot better. (Repl.it has in-browser interpreters for JavaScript, Ruby, Python, Scheme, and a dozen other languages.)

For starters, [your correspondent] plans to turn his existing Raspberry Pi into a media centre. By all accounts, Raspbmc—a Linux-based operating system derived from the Xbox game-player’s media centre—is a pretty powerful media player. The first task, then, is to rig the Raspberry Pi up so it can pluck video off the internet, via a nearby WiFi router, and stream it direct to a TV in the living room. Finding out not whether, but just how well, this minimalist little computer manages such a feat will be all part of the fun.

I did this exact project about a month ago, and couldn't be more pleased with either the results or the fun of getting it to work. I still have to tinker with some things: the Vimeo plugin won't log into my account, and I need to build a case. Other than that, I wish I had done this a long time ago.

Posted in CS / Science / Tech / Coding