Music is a medium associated with sound, so its dominant sense is hearing. Even so, we do not listen to music in a black void: the visual sense creeps in. Nor is music performed only in one’s head: a sense of touch with an instrument is key. Music, then, engages an array of senses with hearing at the helm. This sensory combination forms what media theorist Marshall McLuhan calls a sense ratio.
In his 1962 book The Gutenberg Galaxy, McLuhan notes that “computers can now be programmed for every possible variety of sense ratio.” Today this is an elementary fact. What keeps it poignant is how the Internet presents every possible variety of sense ratio for music. You can read about music, explore scores, watch music videos and documentaries, follow a band on social media, and chat about music on a message board.
All of these options displace the dominant sense of hearing. They foreground the other senses: the tactile sense of typing, clicking, and dragging links; the visual sense of reading and watching what is on the screen. In some of these cases hearing becomes inner. We imagine what an artist sounds like when a writer describes them as somewhere between Nirvana and Portishead. We hear a familiar piece in our minds as we analyze a passage of its score. What’s more, all of these activities usually happen as we listen to music cued up on our laptops.
An apparatus that can program any sense ratio holds none to be greater than the others. Does it follow that music has no objective sense ratio? Is the ratio, rather, dispersed in the multiple: the various contexts and people that engage with music on computers?