What it Means to Be Human: Chess, Juggling, and Machine Learning

One of the first truly autonomous chess-playing machines was built in 1912 by the Spanish engineer Leonardo Torres y Quevedo. He dubbed it El Ajedrecista ('the chess player').

Judged by today’s standards, El Ajedrecista didn’t play the kind of chess that Deep Blue, or any human for that matter, does. “[I]t was extremely limited in its functionality,” Nate Silver writes in The Signal and the Noise, “restricted to determining positions in an endgame in which there are just three pieces left on the board.” There is actual footage of the automaton in action if you care to look (here).

Left at that, El Ajedrecista is a whimsical footnote, a cocktail-party factoid. But if you go further, there is something underneath this extremely limited chess automaton that is both revealing and disturbing.

In an episode of his podcast Akimbo (here), Seth Godin riffs on learning to juggle. When people learn to juggle, they misdirect their attention. They focus on the whole process, the catching in particular. It overwhelms and intimidates, stopping many in their tracks. Godin argues instead that it is actually all in the throw. Focus on just that and nothing else for a long while. Once you master that siloed skill, catching becomes easier, and juggling with it.

This process serves as an analogy. When you are learning to juggle in this manner, you are not juggling at all. You are doing something else. To learn something, whether it be riding a bike or entrepreneurship, is to break it down into an abstracted form of that thing (starting on a Strider bike, or selling lemonade in your neighborhood) until, incremental step by incremental step, you can ride a bike or run a business.

Which brings us back to El Ajedrecista. Torres y Quevedo tackled the chess-playing problem much the way Godin talks about juggling. He created a machine that played an abstract version of chess. Not controlling 16 pieces but two. Not competing against 16 other pieces but one. Not a full game, but an endgame. Not any endgame, but an endgame with three pieces left. To engineer a machine to play chess, you start with an abstracted form of chess, something not like real chess at all. Then, little by little, you get to something like Deep Blue playing real chess and beating grandmasters.
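To make the "abstracted chess" idea concrete, here is a toy sketch in the spirit of a rule-based endgame machine: white has a king and a rook against a lone black king, and white's move is chosen by a handful of hard-coded rules rather than any search or evaluation. These rules are purely illustrative and are not Torres y Quevedo's actual ones.

```python
def rook_endgame_move(white_king, white_rook, black_king):
    """Pick white's next move from a few fixed rules.

    Squares are (file, rank) tuples, 0-7, with rank 0 at white's edge;
    the black king is driven toward rank 7. Returns (piece, new_square).
    Illustrative only -- not Torres y Quevedo's published rule set.
    """
    wk_file, wk_rank = white_king
    wr_file, wr_rank = white_rook
    bk_file, bk_rank = black_king

    # Rule 1: keep the rook one rank below the black king, cutting the
    # board and confining the black king to the upper ranks.
    if wr_rank < bk_rank - 1:
        return ("rook", (wr_file, bk_rank - 1))

    # Rule 2: if the black king attacks the rook, slide the rook to the
    # far side of the board along its rank.
    if abs(wr_file - bk_file) <= 1 and abs(wr_rank - bk_rank) <= 1:
        new_file = 0 if bk_file >= 4 else 7
        return ("rook", (new_file, wr_rank))

    # Rule 3: otherwise, march the white king up to support the rook.
    if wk_rank < wr_rank - 1:
        return ("king", (wk_file, wk_rank + 1))

    # Rule 4: shuffle the king sideways toward the black king's file.
    step = 1 if bk_file > wk_file else -1
    return ("king", (wk_file + step, wk_rank))
```

Note what this is: a machine that handles exactly one narrow situation with a few rules, not a chess player. That narrowness is the whole point of the abstraction.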

Or think about self-driving cars. Those CAPTCHA programs seem like an odd way to teach a computer to drive. Within this perspective of siloed and abstracted learning, however, it makes sense. Recognizing road signs and storefronts is part of navigating any vehicle. It was a big part of how we all learned to drive. Get a computer to master this, and driving comes with it.

The disturbing aspect is that this plays into the fear that machines aided by deep learning will take our livelihoods. If AI can learn like us to a point that is uncanny, who’s to say it cannot learn our jobs, be it as a mechanic or a journalist?

I am not in a position to answer that (feel free to chime in), but I feel it demands that we keep the conversation going, the one that has gone on for centuries, about what humans can do that machines cannot.

Distilled even further: What does it mean to be human anyway?
