
Astro

A rendering of Teller’s graph (Source: Embedded Computing Designs)

Astro Teller, CEO of Google X, created this graph to illustrate the pace of technological change compared to human adaptability (I saw it originally in Thomas Friedman’s Thank You for Being Late). The dotted line is our potential as humans – to rise to the occasion as the pace of technological advances escalates.

Nate Silver puts it another way in The Signal and The Noise:

“In many ways, we are our greatest technological constraint. The slow and steady march of human evolution has fallen out of step with technological progress: evolution occurs on millennial time scales, whereas processing power doubles roughly every other year.” (see Moore’s Law)

The question we ask is how to keep up with technology. That is what the dotted line on Teller’s graph is all about. How do we rise to technology’s demand?

“It has become difficult for contemporary man to imagine development and modernization in terms of lower rather than higher energy use,” Ivan Illich noted in Tools for Conviviality.

With that drive towards constant adaptation to new technologies, can there be time to slow down and contemplate the alternative? Can we have time to ask whether there is an approach other than proceeding up the dotted line of technological escalation? Can we find a way to bring technology back down to our level?

L.M. Sacasas puts it eloquently in “The Tech Backlash We Really Need” (here):

“We fail to ask, on a more fundamental level, if there are limits appropriate to the human condition, a scale conducive to our flourishing as the sorts of creatures we are. Modern technology tends to encourage users to assume that such limits do not exist; indeed, it is often marketed as a means to transcend such limits. We find it hard to accept limits to what can or ought to be known, to the scale of the communities that will sustain abiding and satisfying relationships, or to the power that we can harness and wield over nature. We rely upon ever more complex networks that, in their totality, elude our understanding, and that increasingly require either human conformity or the elimination of certain human elements altogether. But we have convinced ourselves that prosperity and happiness lie in the direction of limitlessness.”


We’ve all heard of self-fulfilling prophecies. What about self-canceling ones? Nate Silver describes one such example in The Signal and The Noise:

“There are two major north-to-south routes through Manhattan: the West Side Highway, which borders the Hudson River, and the FDR Drive, which is on Manhattan’s east side. Depending on her destination, a driver may not strongly prefer either thoroughfare. However, her GPS system will tell her which one to take, depending on which has less traffic – it is predicting which route will make for the shorter commute. The problem comes when a lot of other drivers are using the same navigation systems – all of a sudden, the route will be flooded with traffic and the ‘faster’ route will turn out to be the slower one.”

One part I left out mentioned how GPS units were just coming into vogue. The book was published in 2012; Google Maps and Waze are now household names. Adoption has only grown since the book’s publication, and the self-canceling predictions steadily chug along. One could argue they are getting worse.

I found a piece written earlier this year by Atlantic writer Alexis Madrigal that explored this problem of mapping app congestion in California. (here) Madrigal laid out a point I hadn’t thought of when dealing with this issue. It is outlined in the opening of the article:

“In the pre-mobile-app days, drivers’ selfishness was limited by their knowledge of the road network. In those conditions, both simulation and real-world experience showed that most people stuck to the freeways and arterial roads. Sure, there were always people who knew the crazy, back-road route, but the bulk of people just stuck to the routes that transportation planners had designated as the preferred way to get from A to B.

Now, however, a new information layer is destroying the nudging infrastructure that traffic planners built into cities. Commuters armed with mobile mapping apps, route-following Lyft and Uber drivers, and software-optimized truckers can all act with a more perfect selfishness.

In some happy universe, this would lead to socially optimal outcomes, too. But a new body of research at the University of California’s Institute of Transportation Studies suggests that the reality is far more complicated. In some scenarios, traffic-beating apps might work for an individual, but make congestion worse overall. And autonomous vehicles, touted as an answer to traffic-y streets, could deepen the problem.”

A more perfect selfishness. If you think about what navigational apps do, it makes perfect sense: get me to where I want to go as fast as possible. When you have a mass of people operating from this line of thinking with app in tow, congestion is bound to escalate. Finding the quickest route becomes a self-canceling prophecy.

According to Madrigal, Alexandre Bayen, the director of UC Berkeley’s Institute of Transportation Studies, suggested that “the apps should spread out drivers on different routes intentionally, which would require collaboration among the mapping apps.” Such a solution seems counterintuitive, not only for business but on an individual level. Why should I take the longer way to help collective traffic flow? Just give me the quickest route already.
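To make the self-canceling dynamic concrete, here is a minimal sketch in Python – my own toy model, not anything from Silver’s book or the Berkeley research, and the route names and constants are invented for illustration. Two routes share a fixed pool of drivers, and travel time grows with load. When every driver obeys the same prediction (take whichever route was faster last time), the “faster” route flips back and forth and stays clogged; when drivers are spread deliberately across both routes, everyone ends up with the same, better time.

```python
# Toy model of the self-canceling routing prophecy. Illustrative only;
# route names and constants are made up.

DRIVERS = 1000
# route: (free-flow minutes, extra minutes per car on the route)
ROUTES = {"west_side": (20, 0.02), "fdr": (25, 0.01)}

def travel_time(route, load):
    base, per_car = ROUTES[route]
    return base + per_car * load

def herd_round(prev_times):
    """Every driver follows the same prediction: take the route that was faster last round."""
    predicted_faster = min(prev_times, key=prev_times.get)
    loads = {r: (DRIVERS if r == predicted_faster else 0) for r in ROUTES}
    return {r: travel_time(r, loads[r]) for r in ROUTES}

def coordinated_split():
    """Spread drivers so the slower of the two routes is as fast as possible."""
    best, best_worst = None, float("inf")
    for on_west in range(DRIVERS + 1):
        times = {"west_side": travel_time("west_side", on_west),
                 "fdr": travel_time("fdr", DRIVERS - on_west)}
        if max(times.values()) < best_worst:
            best, best_worst = times, max(times.values())
    return best

times = {r: travel_time(r, DRIVERS // 2) for r in ROUTES}  # start from an even split
for day in range(1, 5):
    times = herd_round(times)
    print(f"day {day} (everyone herds onto the 'faster' route): {times}")

print("coordinated split:", coordinated_split())
```

Run it and the herding days oscillate – whichever route was predicted to be faster ends up the slower one – while the coordinated split settles both routes at the same travel time, which is lower for everyone. That is the counterintuitive trade Bayen is pointing at: a slightly “worse” individual assignment produces a better outcome for all.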

But underpinning this solution is a different perspective on societal and individual behavior – one that could apply to other aspects of our lives beyond navigating traffic. Eric Liu and Nick Hanauer expound upon such a perspective in their wonderful book The Gardens of Democracy:

“So, for instance, when you are cut off in traffic and feel the chemical rush of road rage, play out two scenarios. The first is the commonly expected one, in which the rest of your drive is dedicated to exacting revenge against the offending driver or to paying his ruthlessness forward and cutting off another driver.

The alternative scenario is one in which you catch yourself and choose not to compound one person’s discourtesy with your own. Here, you recognize that if you make the small decision to let drivers into traffic, even if it feels like an affront to your dignity, then other people will do the same.

Because the first scenario is indeed the common one, and everyone assumes its rules are the rules of the freeway, gridlock and awful traffic jams are the inevitable result. But when we let the second scenario play out, traffic flows more smoothly. Gridlock does not occur. We get where we want to get faster.

This is not just parable. It is hard science. People who study complex adaptive systems – using computer models of traffic going along two axes (north-south and east-west) – can demonstrate and compare the effects of these two scenarios. Lesson one: others will act the way you act. Lesson two: when you act in a pro-social way, the net result for you and everyone else is better.

This may seem counterintuitive, the notion that slowing down gets you there faster, that to yield now is to advance later. The reason, again, is our ingrained and too-narrow idea about what constitutes our self-interest. In a one-time transaction with someone who won’t exist after the transaction (and here, we are describing the parameters of neoclassical economics), you might rightly think that screwing that person is the best way to achieve your own interest. At a minimum, you’d be safe to think you could get away with it. You would think that someone else’s problem is someone else’s problem.

If, however, we allow for the possibility that the other person in the transaction may still exist after the transaction, then we think differently. If we allow for the possibility that the other person will not only reciprocate […] but will also carry your behavior virally to others, then we must act differently. If we allow for the possibility that someone else’s problem is eventually your problem too, then we must act differently.

This possibility is called real life.”

If you look up a notable figure on Google, a list of quotations appears under their biographical information. When I searched for the French polymath Henri Poincaré, such a list appeared, and one quip of his stood out to me:

“Science is built up of facts, as a house is built of stones; but an accumulation of facts is no more a science than a heap of stones is a house.”

This practically drips with relevance today, especially amid the movement toward what is called “Big Data.” Something about the term has always irked me. What is the “Big” in “Big Data” anyway? What does it refer to?

The technology surrounding Big Data would imply it is about more data. Sensors on smart cars collect more data. Chess-playing AI stores more data about the possibilities in chess. The more data that can be collected, the better these systems can work for us. A spiritual antecedent to this line of thinking is a school of statistics called frequentism. I first heard about it in Nate Silver’s The Signal and The Noise, and the parallels jumped off the page. He describes it as follows:

“Essentially, the frequentist approach toward statistics seeks to wash its hands of the reason that predictions most often go wrong: human error. It views uncertainty as something intrinsic to the experiment rather than something intrinsic to our ability to understand the real world. The frequentist method also implies that as you collect more data, your error will eventually approach zero: this will be both necessary and sufficient to solve any problems.”

More data does not correct for the fact that we could be wrong in the assumptions we make when building these systems. What benefit are more stones if the design of the house is already compromised? This is the sort of question The Center for Humane Technology (here) asks: what good is technology if it manipulates us in unethical ways?
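A toy example makes the point. This is my own sketch, not anything from Silver’s book: imagine a sensor with a small systematic bias that our model knows nothing about. Averaging ever more readings drives the random scatter toward zero, exactly as the frequentist picture promises, but the answer converges, ever more confidently, to the wrong value.

```python
# My own toy illustration: more data shrinks random noise, not a wrong assumption.
import random
import statistics

random.seed(42)

TRUE_VALUE = 10.0   # the quantity we are actually trying to measure
BIAS = 0.5          # systematic error our model does not account for

def measure():
    # Each reading = truth + unmodeled bias + random noise
    return TRUE_VALUE + BIAS + random.gauss(0, 2.0)

for n in (10, 1_000, 100_000):
    estimate = statistics.mean(measure() for _ in range(n))
    print(f"n = {n:>7}: estimate = {estimate:.3f}, error = {estimate - TRUE_VALUE:+.3f}")

# The scatter fades as n grows, but the estimate settles near 10.5, not 10.0.
# More stones; same flawed house.
```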

Sure enough, my pulling a random quote from Poincaré plays into the heap. The quote is another stone. I have pried it out of the structure it was part of. Now it is just a stone.

Are we building houses or heaps of stones? We have to ask this as we move forward with “Big Data.”

One of the first truly autonomous chess-playing machines was built in 1912 by Spanish engineer Leonardo Torres y Quevedo. He dubbed it El Ajedrecista (“the chess player”).

Judged by today’s standards, El Ajedrecista didn’t play the kind of chess that Deep Blue, or any human for that matter, does. “[I]t was extremely limited in its functionality,” Nate Silver writes in The Signal and the Noise, “restricted to determining positions in an endgame in which there are just three pieces left on the board.” There is actual footage of the automaton in action if you care to look. (here)

Left at that, El Ajedrecista is a whimsical footnote, a cocktail party factoid. But if you go further, there is something underneath this extremely limited chess automaton that is both revealing and disturbing.

In an episode of his podcast Akimbo (here), Seth Godin riffs on learning to juggle. When people learn to juggle, they misdirect their attention. They focus on the whole process, the catching in particular. It overwhelms and intimidates, stopping many in their tracks. Instead, Godin argues that it is actually all in the throw. Focus just on that and nothing else for a long while. Once you master that siloed skill, catching becomes easier, and juggling with it.

This process serves as an analogy. When you are learning to juggle in this manner, you are not juggling at all. You are doing something else. To learn something, whether it be riding a bike or entrepreneurship, is to break it down into an abstracted form of that thing (starting on a Strider bike or selling lemonade in your neighborhood) until, incremental step by incremental step, you can ride a bike or run a business.

Which brings us back to El Ajedrecista. Leonardo Torres y Quevedo tackled the chess-playing problem much the way Godin talks about juggling. He created a machine that played an abstract version of chess. Not controlling 16 pieces but 2. Not competing against 16 other pieces but 1. Not a full game, but an endgame. Not any endgame, but an endgame with three pieces left. To engineer a machine to play chess, you start with an abstracted form of chess – something not like real chess at all. Then, little by little, you get to something like Deep Blue playing real chess and beating grandmasters.
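Just to put a rough number on how much that abstraction shrinks the problem, here is a quick back-of-the-envelope count – my own sketch, not Torres y Quevedo’s mechanism: enumerate every placement of a white king, a white rook, and a lone black king on distinct squares, throwing out only the placements where the two kings touch. The whole space comes to a couple hundred thousand positions, nothing like the astronomically large space of full chess.

```python
# Back-of-the-envelope count of the three-piece endgame Torres y Quevedo
# restricted his machine to: white king + white rook vs. lone black king.
# A rough sketch of the arithmetic, not a model of his device.

def kings_touch(a, b):
    """True if squares a and b (numbered 0..63) are the same or adjacent."""
    file_a, rank_a = a % 8, a // 8
    file_b, rank_b = b % 8, b // 8
    return max(abs(file_a - file_b), abs(rank_a - rank_b)) <= 1

count = 0
for white_king in range(64):
    for black_king in range(64):
        if kings_touch(white_king, black_king):   # kings can never stand next to each other
            continue
        for rook in range(64):
            if rook in (white_king, black_king):  # pieces occupy distinct squares
                continue
            count += 1

print(count)  # prints 223944 - a few hundred thousand placements, before finer legality rules
```

Full chess, by comparison, has a number of legal positions usually estimated at well over 10^40. The restriction, not any general cleverness about chess, is what put the problem within reach of an electromechanical device.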

Think about self-driving cars, even. Those CAPTCHA programs seem like an odd way to teach a computer to drive. Within this perspective of siloed and abstracted learning, however, it makes sense. Recognizing road signs and storefronts is part of navigating in any vehicle. It was a big part of how we all learned to drive. Get a computer to master this, and driving comes with it.

The disturbing aspect of this is that it plays into the fear that machines aided by deep learning will take our livelihoods. If AI can learn like us to a point that is uncanny, who’s to say it cannot learn our jobs, be it as a mechanic or a journalist?

I am not in a position to answer that (feel free to chime in), but I feel like it begs us to keep the conversation going, the one that has gone on for many centuries, about what humans can do that machines cannot.

Distilled even further: What does it mean to be human anyway?

There is a passage a good friend shared with me that I cannot stop thinking about. It is from Msgr. Robert Sokolowski’s Phenomenology of the Human Person. The passage in question is Sokolowski quoting Derek Bickerton:

“the problems animals solve, the problems we solve, are our own problems . . . But the problems computers solve are not problems for computers. If I have a problem, it’s my problem. If my computer has a problem, it’s still my problem. Nothing is a problem for it, because it doesn’t interact with the world. It just sits there and waits for me to give it my problems.”

Solving a technical problem, be it front-end development or hardware, feels solely like a technical problem. That is all I can think of as I work on a website or install software. But software does not have problems in and of itself. It is the people who need the software to work that do. Our work has to acknowledge that. Walter Russell Mead put it best in “The Jobs Crisis: Bigger Than You Think” (here):

“We are going to have to discover the inherent dignity of work that is people to people rather than people to things. We are going to have to realize that engaging with other people, understanding their hopes and their needs, and using our own skills, knowledge, and talent to give them what they want at a price they can afford is honest work.”

This is a variation on what Bickerton is expressing. Deep down, all work is people to people. The thing doesn’t have the problem; the people do.

I am reminded of a story I’ve alluded to previously. Internet artist / developer Darius Kazemi spoke of a well-meaning developer who asked what her team could build for activists to use.

The developer was focusing on what stuff her team could build, what they could create. She wasn’t looking at the work as people to people. She was looking at the work as people to thing.

From prior experience with activists, Kazemi mentions that if the work was understood as people to people, the developer would first learn as much as she could from the activists rather than focusing on a product. What are the problems the activists are hoping to solve?

Soon enough, she would gather that these activists didn’t need more technology. All they needed was enough to work quickly and efficiently. She wouldn’t have to build much. The thing was less important compared to the activists’ needs.

The ongoing question, then, is how to constantly think of our work in tech as people to people, and how not to get caught in the vacuum of working on the thing in and of itself.

If you drive a car for long enough, something interesting happens. All of a sudden, a situation arises where you have to drive your friend’s car. You do so with no problem. How? It’s not your car to begin with. You don’t know it like you know your car. Ah, but you knew enough about cars from your own experience to drive this one.

Tim O’Reilly refers to this as structural learning. We learn how something works and can generalize it to other things under that umbrella. Cars are a great example of structural learning. Then again, most technology is. I’ve had to operate many a friend’s smartphone before. Even though many had different phones, I was able to perform the task at hand.

That brings up another point. In both examples of structural learning, the end result is not pretty. You had to get used to the reactive brakes and awkward steering. I had to adjust to an iPhone’s interface. But through and through, we got the job done.

We were effective.

Nick Hanauer and Eric Liu discuss a shift from an efficiency-based to an effectiveness-based world in The Gardens of Democracy, a book I’ve started to read with much interest. The portion in question is worth quoting in full:

“The metaphors of the Enlightenment, taken to scale during the Industrial Age, led us to conceptualize markets as running with ‘machine-like efficiency’ and frictionless alignment of supply and demand. But in fact, complex systems are tuned not for efficiency but for effectiveness – not for perfect solutions but for adaptive, resilient, good-enough solutions. This, as Rafe Sagarin depicts in the interdisciplinary survey Natural Security, is how nature works. It is how social and economic systems work too. Evolution relentlessly churns out effective, good-enough-for-now solutions in an ever-changing landscape of challenges. Effectiveness is often inefficient, usually messy, and always short-lived, such that a system that works for one era may not work for another.”

Many, myself included, are wondering how to learn, how to be adaptive really, in a world as complex as this one. The programming I am learning about now may be irrelevant soon. As soon as you learn about AI, something new might come into the fold. How can one keep up?

Structural learning seemed like a necessary step for me – to adopt a bird’s-eye view of information technology. But I have slowly come to see the trade-off. As Hanauer and Liu write, the effectiveness that stems from structural learning is usually messy. We want to master whatever field we get into the way we master our car and phone. We want it to be beautiful, not messy.

But I wonder if this new mastery can be just as compelling as the one we’re used to. Will we be okay with being ever adaptive and ever good-enough-for-now?

“The NWS (National Weather Service) keeps two different sets of books: one that shows how well the computers are doing by themselves and another that accounts for how much value the humans are contributing.”

-Nate Silver, The Signal and The Noise

The most interesting book to me is the second one. How much value are humans contributing to the forecast? As far as the NWS’ second book goes, our contribution looks promising:

“According to the agency’s statistics,” Silver writes, “humans improve the accuracy of precipitation forecasts by about 25 percent over the computer guidance alone, and temperature forecasts by about 10 percent. Moreover, according to [Jim] Hoke,” director of the NWS’ Hydrometeorological Prediction Center, “these ratios have been relatively constant over time: as much progress as the computers have made, his forecasters continue to add value on top of it.”

These results won’t mean much unless we delve into the semantics at play.

The typical discourse is that humans are augmented by computers. Humans are the starting point. First book: how humans are doing. Second book: how humans are doing with computer aid. We see this everywhere. Whether optimistic or pessimistic, humans are the measure.

What the NWS does is the complete opposite. Start with the computer instead. The second book, then, is how computers do when augmented by humans. We don’t see this everywhere. Such a thought emphasizes servility to technology.

Like Carl Jung’s notion of ideas having people instead of people having ideas, this paradigm takes a common assumption and flips it. While the turn can be disorienting, we can now observe the relationship between two things from a different perspective.

And who knows, perhaps we can glean something we couldn’t see right-side up.