Black Boxes (Full of Lemons?)

“The great question of the 21st century is going to be: Whose black box do you trust?” That question was relayed to Tim O’Reilly by John Mattison, the chief medical information officer of Kaiser Permanente.

It’s a chilling question, partly because of what a black box is. O’Reilly defines a black box as “a system whose inputs and outputs are known, but the system by which one is transformed to the other is unknown.”

We see what we put in and what we get out. But do we see how that transformation happens? Not really. This creates what economists call an asymmetry of information.

This particular kind of asymmetry was described by economist George Akerlof in his landmark 1970 paper “The Market for ‘Lemons.'” Akerlof uses the automobile market as an example. New and used cars can be, for the sake of simplicity, either good or bad (a ‘lemon’). “After owning a specific car […] for a length of time,” writes Akerlof, “the car owner can form a good idea of the quality of this machine; i.e., the owner assigns a new probability to the event that his car is a lemon. This estimate is more accurate than the original estimate.”

In this moment, asymmetry is created. Someone selling that specific car knows more about its quality than any would-be purchaser. The buyer can only make an educated guess at what the seller knows for certain. The seller can withhold this information, leading to the risk of someone buying, well, a lemon.

But asymmetry gets even more tangled when we deal with black boxes. Let’s refresh with O’Reilly’s definition. A black box is “a system whose inputs and outputs are known, but the system by which one is transformed to the other is unknown.” Again, the asymmetry lies in the process between input and output. Someone using a black box service has little to no idea how that process happens. The developers of the black box do.

Or do they? That is where things get complicated. Even the developers can run into some asymmetry. The algorithms that run these services can become incredibly complex — so much so that even the developers don’t understand the inner workings.
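That “even the developers don’t know” problem can be sketched in miniature. The toy model below is my own illustration, not anything from O’Reilly or Akerlof: a tiny random-weight network whose code is fully visible, yet whose individual numbers explain nothing about why a given input produces a given output.

```python
import random

# A toy "black box": callers see only inputs and outputs.
# The transformation is a small random-weight network -- every line
# of it inspectable, yet its weights offer no human-readable reason
# *why* a particular input yields a particular score. Real services
# hide far more than this, behind an API boundary.

random.seed(42)  # fixed seed, so at least the opacity is reproducible

_WEIGHTS = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

def black_box(inputs):
    """Known input (four numbers) -> known output (one score).
    The mapping between them is the part nobody can easily interpret."""
    hidden = [sum(w * x for w, x in zip(row, inputs)) for row in _WEIGHTS]
    activated = [max(0.0, h) for h in hidden]  # ReLU-style nonlinearity
    return sum(activated)

score = black_box([1.0, 0.5, -0.2, 2.0])
# The developer can print _WEIGHTS, but "weight[1][3] = 0.22" is not
# an explanation that either the seller or the buyer can act on.
```

Scale those three rows of weights up to millions, and you get the situation the developers of real black-box services find themselves in.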

The output might be crap, but there’s no clear way of understanding how or why — not without lots and lots and lots of time. It’s as if the car salesman couldn’t be entirely sure whether the car he was selling was a lemon or not. His information might be off, but he won’t be sharing it with the buyer any time soon.

This creates a compounded asymmetry of information. Now it’s not just the customer who doesn’t know, but the service provider as well. This kind of asymmetry has probably always existed in some form. But with the dawn of black-box services and master algorithms, compounded asymmetry is happening now more than ever.

It makes me wonder how to operate with such uncertainty, especially when IT professionals rely on recommending such black-box products to their clients. Goodness, then the asymmetry compounds even further: the IT expert recommends a service that performs a task (which neither he nor the developers fully understand) for a client (who understands neither how the service works nor that the expert and the providers don’t fully understand it either).

If that is to be our future, we must not only be okay with asymmetry but thrive within it. We have to be okay if there are some lemons in them black boxes.

1 comment
  1. Theodore J Burkhardt said:

    What is an example of a “black box”? A social media sorting algorithm?

    Like Wendell Berry recommends, a technology should be embraced when it improves life holistically. That means it doesn’t diminish the good things about life that make life worthwhile [what some moral philosophers call “incommensurate” goods]. That technology shouldn’t undermine community, the integrity of the earth, inner peace, our ability to freely produce what we need from our surroundings, etc. So, does a black box avoid that? I’d say, on principle, if I’m understanding the definition, that because we are ignorant about its processes, we are then ignorant about its effects, and so we can’t really know if it’s good or not. It’s like, we may have a pizza that is super delicious, but the recipe is withheld from us. We may find it odd when people start craving that one pizza over every other pizza. Is there some chemical in the recipe that is addictive? To find out, we’d have to ask the cook. If it turned out he himself didn’t even know what was in the pizza, we’d have a public health issue. It isn’t that different from computers and their compounded systems.

    Interesting boss!
