We need technology that doesn’t feel like technology.

Huh?

Darius Kazemi, the internet artist who expressed this at a cool talk (here), used his glasses to illustrate the point:

“My glasses don’t feel like technology, even though they are literally the most important piece of technology I own […] [T]hey don’t feel like technology because they’re not work. I didn’t have to learn to use them. Maybe once every five years I notice that they’re broken or I need to adjust my prescription. Then I take them to the optometrist and, for a couple hours, they become technology.”

Kazemi’s point reminded me of one that Ivan Illich emphatically makes in Tools for Conviviality. Therein he writes that “the means for the satisfaction of […] needs are abundant so long as they depend primarily on what people can do for themselves, with only marginal dependence on commodities.”

The fine line of marginal dependence. That is the tightrope good technology treads (or at least tries to). The goal is to help people with what they can do for themselves and for others.

There is a story in Kazemi’s talk about a well-meaning developer who asked what his team could build for activists to use. To Kazemi, this was asking the wrong question. It’s not about what you can build but how you can empower people. Because, as Kazemi mentions, sometimes that means less technology, not more of it. Maybe empowerment will come from building less.

As someone entering information technology, that is a sobering thought.

It’s empowering as well.

Read about the world of baseball statistics for just a while and its parallel to our data-driven present will jump out at you. Forget parallel: baseball stats ride on the stream of information technology.

I am particularly interested in the conflict between what’s quantifiable and what isn’t. Nate Silver describes the struggle in The Signal and The Noise. “Statheads can have their biases too”, he writes of the data-obsessed. “One of the most pernicious ones is to assume that if something cannot be easily quantified, it doesn’t matter.”

So what can we do to solve that?

One way is to make the unquantifiable quantifiable through technological escalation. Take Pitch f/x, “a system of three-dimensional cameras that have now been installed at every major-league stadium. Pitch f/x can measure not just how fast a pitch travels – that has been possible for years with radar guns – but how much it moves, horizontally and vertically, before reaching the plate.”

Pitchers can be measured in a way that was once thought impossible to quantify. Imagine how this could be used in practice, honing a pitcher’s skills in such minute, microscopic ways.

It makes one wonder where all this escalation leads. According to Silver, as far as baseball goes, “we’re not far from a point where we might have a complete three-dimensional recording of everything that takes place on a baseball field.” Silver further mentions how “new technology will not kill scouting”, the talent evaluation of prospective players, “but it may change its emphasis toward”, again, “the things that are even harder to quantify and where the information is more exclusive, like a player’s mental tools.”

But there’s another way to get at the unquantifiable, a way that does not rely on a technological arms race to read people’s minds. “The key”, Nate Silver writes with regards to baseball statistics and forecasting, “is to develop tools and habits so that you are more often looking for ideas and information in the right places.”

The same can be said of technological ventures.

Companies like Couchsurfing and e180 are focused on looking in the right places, not on technological escalation, to expand what we can quantify with our data. Let’s take a look at each.

Couchsurfing is a hospitality and social networking service where people can coordinate travel and lodgings. How do they measure how effective their product is? Tristan Harris, co-founder of The Center for Humane Technology, describes how they do so in various speaking engagements. Below is from a transcript of one of his TED talks (here):

“Let’s say you take two people who meet up, and they take the number of days those two people spent together, and then they estimate how many hours were in those days — how many hours did those two people spend together? And then after they spend that time together, they ask both of them: How positive was your experience? Did you have a good experience with this person that you met? And they subtract from those positive hours the amount of time people spent on the website, because that’s a cost to people’s lives. Why should we value that as success? And what you were left with is something they refer to as ‘net orchestrated conviviality,’ or, really, just a net ‘Good Times’ created. The net hours that would have never existed, had Couchsurfing not existed.”

Positive hours spent together – Hours spent on site = net ‘Good Times’. Such a formula does not require much but speaks volumes. It is a testament to Couchsurfing adjusting themselves to search for ideas and information in the right places.
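If it helps to see the metric as code, here is a minimal sketch in Python. The function name, the `positivity` fraction, and the numbers are illustrative assumptions of mine, not Couchsurfing’s actual implementation.

```python
def net_good_times(hours_together: float,
                   positivity: float,
                   hours_on_site: float) -> float:
    """Net 'Good Times': positive hours together minus hours on the site.

    `positivity` (0.0 to 1.0) is an assumed stand-in for the survey
    answers -- the fraction of shared hours both people rated positive.
    """
    positive_hours = hours_together * positivity
    return positive_hours - hours_on_site

# Example: 20 hours together, 90% of them rated positive, after
# 3 hours spent on the website coordinating the meetup.
print(net_good_times(20, 0.9, 3))  # 15.0 net hours that would not otherwise exist
```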

Then there’s e180, an eLearning and event-based company. Their main product is Braindates. As Startupfest describes it, Braindates is an “in-person peer-to-peer learning and networking experience”, a “platform and method for collaborative learning and networking to help you seek and find connections with others in a variety of interests and areas of expertise.” (Watching a video of it unfolding conveys it just as well.)

Their product connects people in person through a technological interface. The power of their product is in those connections, which makes measurement quite simple – how many braindates occurred? (In 2015, for instance, they recorded over 1500 braindates.) It also allows them to collect heartfelt stories about how Braindates helped people. Many such testimonials are on their site. Like Couchsurfing, e180 searches for ideas and information in the right places, where sincere human connection takes place.

This is not to denigrate technical innovation. Fields like deep learning are full of admirable pursuits undertaken by admirable people. We cannot forget, however, that an equally attainable and effective avenue exists.

You just have to look in the right places.

Eric Beinhocker, author of The Origin of Wealth, makes a distinction between physical technologies and social technologies. Physical technologies cover everything from stone tablets to the iPad. Social technologies cover everything from laws to economic markets.

The distinction is a useful one, but one question kept pestering me. Where does information technology play into the two? Does it fall under physical technology or social technology? A large majority of IT work pertains to physical technology, whether one is installing software on a computer or working with servers. At the same time, IT work can help keep social technologies running. Like what exactly? A company’s financial structure? (payroll, accounting) What about helping clients keep up with public policy? (GDPR, for instance) That could be another.

So is information technology both physical and social technologies? An “all of the above” sort of answer? Maybe. However, I want to key in on something Beinhocker mentions in The Origin of Wealth with regards to the mutual relationship between physical and social technologies:

“Physical technologies and social technologies coevolve. Physical technology innovations make new social technologies possible, like fossil fuel technologies made mass production possible, smartphones make the sharing economy possible. And vice versa, social technologies make new physical technologies possible – Steve Jobs couldn’t have made the smartphone without a global supply chain.”

There is coevolution going on. Both act upon the other in continual growth. So if that is the case, what if information technology is not physical technology? What if information technology is not social technology either?

What if information technology were in fact the area in between physical and social technology, where the two interact with each other in the constant act of coevolution? Could that truly be information technology?

“In the discussion of the relation between man and powerful agencies controlled by man, the gnomic wisdom of the folk tales has a value far beyond the books of our sociologists.”

-Norbert Wiener, “The Machine Age” (excerpts here)

This statement struck me coming from one of the grand theorizers of modern (and future) information technology. Here is an instance within the same article where he channels that gnomic wisdom, referring to a short story by W.W. Jacobs and mentioning One Thousand and One Nights:

“[T]he machines will do what we ask them to do and not what we ought to ask them to do[…]

There is general agreement among the sages of the peoples of the past ages, that if we are granted power commensurate with our will, we are more likely to use it wrongly than to use it rightly, more likely to use it stupidly than to use it intelligently. [W. W. Jacobs’s] terrible story of the “Monkey’s Paw” is a modern example of this — the father wishes for money and gets it as a compensation for the death of his son in a factory accident, then wishes for the return of his son. The son comes back as a ghost, and the father wishes him gone. This is the outcome of his three wishes.

Moreover, if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.”

When we read about tech, we gravitate towards non-fiction. If we want something fictional about technology, science fiction will hit the spot. But there’s something to be said for the fantastical tales that Wiener recommends here.

Because our technology now resembles magic more than we like to realize. The world is porous with possibility, flowing invisibly through our lives at an unbelievable rate. It cannot help but feel like magic.

With that said, why can we not use fairy tales to better understand our relationship with the magic that is technology? They might prove more instructive now than they ever were before.

“The great question of the 21st century is going to be: Whose black box do you trust?” That question was relayed to Tim O’Reilly by John Mattison, the chief medical information officer of Kaiser Permanente.

It’s a chilling question, partially because of what a black box is. O’Reilly defines a black box as “a system whose inputs and outputs are known, but the system by which one is transformed to the other is unknown.”

We see what we put in and what we get out. But how one becomes the other? That we don’t see. This creates what’s called an asymmetry of information.
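A toy sketch makes the definition concrete. Everything below is invented for illustration – no real service works this way, and the hash simply stands in for an inscrutable pile of learned parameters.

```python
import hashlib

def black_box(applicant_id: str) -> float:
    """Return a 'risk score' between 0 and 1.

    The caller sees the input and the output. The transformation in
    between is deliberately inscrutable -- here a hash, standing in
    for the millions of parameters a real model would hide.
    """
    digest = hashlib.sha256(applicant_id.encode()).digest()
    return digest[0] / 255

print(black_box("applicant-42"))  # a score comes out -- but how? and why?
```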

This particular kind of asymmetry has been written about by economist George Akerlof in his landmark paper “The Market for ‘Lemons’” (here). Akerlof uses the automobile market as an example. New and used cars can be, for the sake of simplicity, either good or bad (a ‘lemon’). “After owning a specific car […] for a length of time,” writes Akerlof, “the car owner can form a good idea of the quality of this machine; i.e., the owner assigns a new probability to the event that his car is a lemon. This estimate is more accurate than the original estimate.”

In this moment, asymmetry is created. Someone selling that specific car knows more about the car’s quality than any would-be purchaser. The buyer can only make an educated guess at what the seller knows for certain. The seller can withhold this information, leading to the risk of someone buying, well, a lemon.
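To put rough numbers on that educated guess (the prices and lemon rate below are invented, not from Akerlof’s paper):

```python
# The seller knows which car they have; the buyer knows only base rates.
p_lemon = 0.3                     # buyer's prior: 30% of used cars are lemons
value_good, value_lemon = 8000, 2000

# Unable to tell cars apart, the buyer values any given car at the average:
buyer_value = (1 - p_lemon) * value_good + p_lemon * value_lemon
print(buyer_value)  # 6200.0 -- a guess at quality the seller knows for certain
```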

But asymmetry gets even more tangled when we deal with black boxes. Let’s refresh with O’Reilly’s definition. A black box is “a system whose inputs and outputs are known, but the system by which one is transformed to the other is unknown.” Again, the asymmetry lies in the process between input and output. Someone using a black box service has little to no idea how that process happens. The developers of the black box do.

Or do they? That is where things get complicated. Even the developers can run into some asymmetry. The algorithms that run these services can become incredibly complex. So much so that even the developers don’t understand the inner workings.

The output might be crap, but there’s no clear way of understanding how or why. Not without lots and lots and lots of time. It’s as if the car salesman couldn’t be entirely sure whether what he was selling was a lemon or not. His information might be off, but he won’t be giving it to the buyer any time soon.

This creates a compounded asymmetry of information. Now it’s not just the customer who doesn’t know but the service provider as well. This kind of asymmetry may have always been around in some form; I am sure there are prior examples. But with the dawn of black box services and master algorithms, compounded asymmetry is happening now more than ever.

It makes me wonder how to operate with such uncertainty, especially when IT relies on recommending such black box products to clients. Goodness, then the asymmetry compounds even further – the IT expert recommends a service that performs a task (which neither he nor the developers fully understand) for a client (who understands neither how the service works nor that the IT expert and the service providers do not fully understand its workings either).

If that is to be our future, we must not only be okay with asymmetry but thrive within it. We have to be okay if there are some lemons in them black boxes.

“We only labour to stuff the memory and leave the conscience and the understanding unfurnished and void. Like birds who fly abroad to forage for grain, and bring it home in the beak, without tasting it themselves, to feed their young; so our pedants go picking knowledge here and there, out of books, and hold it at the tongue’s end, only to spit it out and distribute it abroad.”

-Montaigne, Of Pedantry (trans. Charles Cotton)

I love the analogy of knowledge being picked like grain to feed young birds. It takes on a peculiar spin when connected to the cognitive systems at the heart of modern (and future) computing, utilized in everything from winning Jeopardy! to looking for signs of cancer. What does cognitive computing mean anyway? The definition will help with the analogy. John E. Kelly III, Senior VP & Director of IBM Research, explains in “Computing, Cognition, and the Future of Knowing”:

Cognitive systems are “probabilistic, meaning they are designed to adapt and make sense of the complexity and unpredictability of unstructured information. They can ‘read’ text, ‘see’ images and ‘hear’ natural speech. And they interpret that information, organize it and offer explanations of what it means, along with the rationale for their conclusions. They do not offer definitive answers. In fact, they do not ‘know’ the answer. Rather they are designed to weigh information and ideas from multiple sources, to reason, and then offer hypotheses for consideration.”

A key part of cognitive computing is feeding the machine this unstructured information. Tons of it. “If you give the computer enough examples of what is right and what is wrong,” Thomas Friedman declares in Thank You For Being Late, “the computer will figure out how to properly weight answers, and learn by doing.”
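A toy supervised-learning sketch shows the shape of this. The tool (scikit-learn) and the four ‘grains’ of labeled text are my own choices for illustration, not IBM’s pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A handful of "grains": examples of what is right and what is wrong.
examples = ["great seats, loved it", "worked perfectly",
            "total junk", "broke after a day"]
labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(examples), labels)

# As in Kelly's description, the output is not a definitive answer but
# weighted hypotheses: a probability for each label.
test = vectorizer.transform(["it broke, total junk"])
for label, prob in zip(model.classes_, model.predict_proba(test)[0]):
    print(f"{label}: {prob:.2f}")
```

A real cognitive system does this with millions of grains rather than four, which is exactly what makes tasting them all so hard.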

Are we in this instance, then, the bird feeding grain to our young? Are we giving our cognitive systems batches of data without tasting it ourselves? What does it mean to taste this data anyway? How do we do so when dealing with millions upon millions of grains? How much of it should we taste before giving it off to something like IBM’s Watson?

What does it mean if we, like Montaigne’s pedants, do not taste the data for ourselves?

A college professor I once had enforced a zero-tolerance policy for late homework. This included the woes of technological mishaps: “My printer ran out of ink”, “My computer crashed”, “I lost my internet connection and could not access the link to turn the paper in”. Murphy’s Law, he told us, especially lords over technology. Acknowledge the likelihood of technical hiccups and prepare accordingly.

That memory stays with me. But even as I venture forth into the world of information technology, the excuse remains tempting. When something goes wrong, blame technology. Why?

“When you make a prediction that goes so badly,” Nate Silver explains in The Signal and The Noise, “you have a choice of how to explain it. One path is to blame external circumstances – what we might think of as ‘bad luck.’” Silver uses an example with weather forecasts: “When the National Weather Service says there is a 90 percent chance of clear skies, but it rains instead and spoils your golf outing, you can’t really blame them.”

What’s tempting about the excuse is that bad luck can happen with technology. The program runs 90 percent of the time and, well, now is the one time out of ten that it doesn’t. These kinds of things can happen.
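A quick simulation makes the point: even a perfectly calibrated 90-percent success rate fails about one run in ten. The numbers below are illustrative only.

```python
import random

random.seed(42)
trials = 10_000
failures = sum(random.random() < 0.10 for _ in range(trials))
print(f"{failures} failures in {trials} runs (~{failures / trials:.0%}) -- "
      "bad luck by construction, not a bad program")
```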

But there’s a danger to that line of reasoning. We can hide behind the veil of probabilities. How were we to know that the half-a-percent chance this thing would break would actually come to pass? As Silver quips, “[w]hen you can’t state your innocence, proclaim your ignorance.” How could I have known that was going to happen? I had no idea that would happen. The shelter ignorance provides takes us out of the picture, exactly where we want to be.

Danny Kahneman has a line from Thinking, Fast and Slow that I throw around often. We have “an almost unlimited ability to ignore our ignorance.” I find it almost ridiculous to type that we also have the propensity to ignore our ignorance of throwing ignorance around as an excuse. Getting around that is the real challenge. How can we know when something is legitimately out of our hands, and what can we do about it?

Well, in the meantime, I’ll go back to what that college professor told us.