6 days in Shanghai, now at Pudong Airport, waiting to go to Hong Kong. Although I’ve been to this airport about 5 or 6 times in the last few years, this is the first time I’ve noticed that it’s right by the sea. The picture looks out past the loading area for planes to the open sea beyond. Who knew? Various hearing adventures this trip, which I’ll spend this post talking about.
I was here to attend an Electroacoustic Music Studies Network (EMS) conference, and I really shouldn’t react the way I do. I should probably just accept what I find at academic conferences like this, but I have a hard time containing myself. Basically, we have a bunch of European and American composers of a very specialized type of music, talking about each other’s music and the music of their teachers. And no one but me seems to be bothered by how insular and self-serving the whole thing is. Although there were several pieces on the concerts which were quite lovely, there were also quite a few which sounded completely hackneyed and like they were written according to a formula of how this particular music should be written.
Even worse, the papers were often talking about a kind of orthodoxy which says that one must follow the prescriptions of the founder of acousmatic music, Pierre Schaeffer, in order to do things “properly.” And I just don’t get it. What does this have to do with creativity and the act of composing music and presenting it to an audience? I spoke with Robert Normandeau, a composer who seems to buy into the orthodoxy, but also writes some quite beautiful music, about trying to strike up a dialog about these issues, and even though we didn’t really begin anything at the conference, perhaps we will over email, or elsewhere.
I chide myself for these complaints … why should I complain about these insular attitudes if it is a comfortable environment for the people who subscribe to it? Inasmuch as it seems to me a little bit like a religious belief for some of the people at the conference, you’d think I’d leave it alone, just as I wouldn’t think of arguing with anyone about their religious beliefs, regardless of what I think about them. But this is a little different, because it’s talking about music and music technologies, which I feel very engaged in, and very passionate about. So maybe the answer for me is just to avoid these kinds of events, which is what I’ve actually done for many years …
One way that this is strange for me is that one of the features of “electroacoustic” music as defined by EMS seems to be an avoidance of melody, harmony and metrical sense. So, the entire theoretical focus is on how to classify and organize the traditionally “extra musical” sound features which remain. Which ends up with a strong focus on the “morphology” of “space” and “spectral” identity. Which means, how do sounds move in a 2 or 3 dimensional space, and how do you classify timbres. All this in order to permit analysis of the work in an academic setting.
But for me, the question comes up: what do I do with this spatial morphology if I can’t hear the location of a sound? Is this music something I’m specifically excluded from because of my limitations? Or, is there some way for a listener with my limitations to hear and appreciate the music even if the spatial information is inaccessible? If I listen to music by Gabrieli or Henry Brant, it isn’t necessarily inaccessible because I can’t hear antiphonal effects. I do of course miss something, but the music still contains enough emotional content that it can reach me and move me even if I miss that spatial parameter of location. Somehow, this doesn’t seem to be the intent of the “acousmatic” composers.
There’s also a kind of catechismic espousal of one of Pierre Schaeffer’s dicta that “acousmatic listening” involves divorcing a sound from its source, from the physical object which creates the sound. It makes the music more “pure.” It also seems, and has always seemed, patently stupid to me. The primary interest I find in sampled and processed sounds is in the relationship to the physical source of the sound, and the complex layers of meaning that arise from this recognition. Why on earth would I want to make something so “pure” that it loses this dimension? There’s enough about music which is abstract, that I always want to hold on to those things which make the meaning more tangible and concrete (pun intended).
Although by being diligent about attending the conference I didn’t really get to make any new explorations of Shanghai this time, I did at least get a chance to play a concert at a club with my young Shanghai buddies, WANG Changcun, Mai Mai, and Xu Cheng. It was really refreshing, after several days of totally cerebral discussions about an extraordinarily conservative and prescribed musical genre, to hear each of their solo sets: Wang Changcun played some very rhythmic deconstructed samples, Mai Mai played a drone e-bow guitar, sort of a la David First in NYC. And Xu Cheng did another sort of sample deconstruction piece, noisy and interesting. And quite different from Wang Changcun’s. A bit of a breath of fresh air for me.
One last interesting thing … I sort of didn’t notice my ear. Maybe it’s because the sound system was mixed to mono, like most club systems. At least that’s what the sound guy told me, since I couldn’t tell. But basically, I just set up, and played. I was really into the music, and the 45 minutes or so of playing just went by, and I hardly noticed it at all. Felt like it went well, and I just didn’t notice the distortion in my ear or anything. Not that it wasn’t there, but it just didn’t matter.
At the MacDowell Colony for four weeks … arrived Tuesday, now it’s Friday night. A studio of my own in the woods, with nothing to do but work. I’m fed well, can ride my bike to town if I need anything, and have to walk to a little library to use the internet. Quite heavenly. Have immediately finished editing together a version of Numb, and have begun writing more musical material for the rest of MONO … and hope to have the piece structured and plotted out before I’m done here. Also, of course, I need to prepare for concerts in China next month and write a paper to deliver in Shanghai … but I think MONO is my main job for the month. Tonight or tomorrow I’ll post another call for collaboration in the form of stories, after I post links to Numb and MONO Prelude.
One of the things one does here is talk to other resident artists about your project, so now people are aware of my history with hearing loss, and what I’m doing with it, on some level. This is what we talk about at breakfast and dinner. But this evening, back in the studio, I had a little reminder of my reality.
I’m in the studio, listening to the recording of Numb I’ve been editing together (picture of the ProTools file above). I suddenly become completely paranoid that the stereo imaging isn’t happening. I spend about a half hour sending tones through one speaker then the other … and it doesn’t really matter which one I’m sending it through, I really can’t tell which one it’s coming from. I can see it on the meters on the computer and in the mixer, but it sounds to me like all the sound comes out of both speakers. I finally calm down enough and do some simple diagnostics to convince myself that the sound I’m sending through each individual speaker is what I mean to send there, but it’s an intellectual exercise in debugging. I can’t hear it.
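For anyone curious what that half hour of diagnostics amounts to, it’s essentially this (a generic Python sketch of the test, not my actual ProTools session): write a stereo file with a tone in only one channel, play it, and check the meters to confirm which side is carrying signal.

```python
import numpy as np
import wave

def make_channel_test(path, channel="left", freq=440.0, dur=1.0, rate=44100):
    """Write a stereo WAV with a test tone in only one channel."""
    t = np.arange(int(dur * rate)) / rate
    tone = (0.5 * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)
    silence = np.zeros_like(tone)
    left, right = (tone, silence) if channel == "left" else (silence, tone)
    stereo = np.column_stack((left, right)).ravel()  # interleave L/R frames
    with wave.open(path, "wb") as w:
        w.setnchannels(2)       # stereo
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(rate)
        w.writeframes(stereo.tobytes())

# e.g. make_channel_test("left_only.wav", "left")
```

The meters will show signal on one side only; whether your ears agree is, of course, the whole point of the exercise.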
So, if I can’t hear it, why do I care if the stereo is working for other people? On the one hand, it seems antithetical to everything I believe about how I write music: for years I’ve striven to write what I really hear, not what I think I’m supposed to hear. But this seems different. In fact, the rest of the world takes great pleasure in the spatial movement of sound, the separation of sources and their isolation and distinct identity. I can imagine it, but I can’t hear it. That is, I hear it internally, but not with my ears. So trying to realize it seems as important as it did when I had two ears, but now I can’t just deal with it intuitively and with my senses. I now have to treat it as a kind of intellectual task, something I need to do and to trust that it will work as I imagine it.
Last week I had the opportunity to hear a graduating percussion student at NYU perform my piece Ever-livin’ rhythm in his senior recital. The student, Garrett Lanzet, did a great job, though he was a little shaken by syncing up at the end … makes me think that I should really go back to the piece and put a click track on it, so that the player can really know where he is all the time. But that never crossed my mind when I wrote the piece.
What was interesting for me, though, was how similar and how different the piece is from what I write now. It was my first piece for computer sound … and then, as now, I rejected the idea of writing “for loud speakers” and felt compelled to include a performer, so that the piece could be a performance.
I wrote the piece in 1977 … so that’s 33 years ago. I can hear in the piece places where I tried to make the music a little more abstract than I would if I were writing it now. I also am very aware that the musical themes are directly taken from the old Folkways LP recordings of the Ba Benzele pygmies which I was listening to a lot at the time, trying to understand and get a handle on their polyrhythmic world. And I was fascinated then, as now, with the idea of using the computer to create a kind of magic in the realm of timbre and sound and coordination which the player alone can’t do.
I recall that the electronic crashes in the piece, which seem to echo and complement the cymbal crashes, are actually made up of a chaotic pile of the little rhythmic motive, but at a speed of quarter note equals about 3000 beats per minute. I don’t think anyone will ever hear that little compositional tidbit. And today I might try something like that, but if I couldn’t really hear its significance, I’d probably chuck it in a minute.
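Just for fun, the back-of-the-envelope arithmetic on that tempo marking (my note, not anything from the score):

```python
# At quarter note = 3000 BPM, each note of the motive lasts
# 60 / 3000 = 0.02 s, i.e. 20 milliseconds -- far too fast for the
# ear to follow as rhythm, which is why the pile reads as a crash.
bpm = 3000
note_sec = 60 / bpm        # duration of one quarter note, in seconds
notes_per_sec = bpm / 60   # attacks per second
print(note_sec, notes_per_sec)  # 0.02 50.0
```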
More successfully, the ways the bowed vibes and the FM-synth melody interact in the middle are really lovely, as is the complex poly-rhythm built by the combination of percussion & tape at the end. However, I feel as though I know a lot more about writing for performers today than I did then … and either through the use of real interaction, or with a click track, I’d have been more careful to help the player find their way in the piece, so that they’re not searching for the sync pulse, as Garrett was at the end of the piece last week.
And of course … I remember that the piece actually made some interesting use of stereo effects, with the sounds swirling about the percussionist some. Unfortunately, I can’t hear that anymore … I presume it was at the performance last week, but I have no way of telling. THAT is very strange, still.
What’s also a bit strange is that there’s an element of my musical language which has persisted in all this time. Even in 1977, the music was focused on melody. And there are real harmonic and rhythmic anchors. I spoke afterward with Jonathan Haas, Garrett’s teacher, and the one who turned him on to the piece. Jonathan and I hung out at Aspen in 1975, and again I think in ’78 when Ever-livin’ Rhythm was played at the festival there. We haven’t really seen each other in the many years since. He praised the piece, and said he tried to get all his students to play the piece, but Garrett was only the second who had really taken it on, because it’s so difficult. He compared it to Berio’s Circles or Stockhausen’s Zyklus … classic 20th Century percussion tours de force using a huge set up for a single player, as does Ever-livin’ Rhythm. My response was that, indeed, those had both been models for me, but that I needed mine to groove as well. That wasn’t an issue for either Berio or Stockhausen. But it still is for me, even after all these years.
Played both MONO Prelude and Numb on Sunday, and seemed to get an enthusiastic response. After the show, there was a brief Meet-The-Composer discussion with the composers on the program, led by Cornelius Dufallo. When he got to me, Cornelius commented that he’d now played several of my pieces in different contexts and was struck by the references to events in my life in all the pieces. He asked how I thought about the autobiographical nature of my work.
It was a great question, and caught me a bit off guard. My answer was that starting about a decade ago I had stopped “editing” myself when I wrote, and that this is what has emerged. While that’s right, it’s not very clear, and doesn’t really address the heart of what he was asking about. So I’ll try again here.
Much of what I hear and read about music theory talks about process, about the analysis and organization of the materials of music, about the exploration of unusual sounds, about use of improvisation, about the use of algorithms to generate structures and sounds, etc. All very interesting, and all certainly stuff I’ve spent time studying and thinking about. But ultimately, not things I find terribly relevant when I sit down to make music. What really happened about 2002, when we moved to the City full time, and I dropped work on the musical theater piece I’d been working on for about 5 years (The Rise & Fall of Isabella Rico), was that I re-thought why I write music.
In some way, having spent 5 years working on something as un-hip, un-avant-garde as a musical theater piece, which had been aimed at Broadway or Off-Broadway, had a huge liberating effect on me. For the decades from the mid-1970s through that time, I think I had been snared by my history at IRCAM, and then by my position directing iEAR Studios and my growing engagement with the engineering culture at Rensselaer. Pre-Rico I was really thinking about everything I did both as music and as some kind of research. Research could mean using game algorithms for composition in real time, or exploring new techniques for processing or synthesis, exploring new interface devices, finding ways to use computers to direct improvising players, working with network-distributed performance ensembles, etc. All very interesting … but requiring me to spend more time thinking about technical issues than focusing on why I write music.
What I learned working on Rico was to write songs which were meant to express the specific feelings of a specific character in a specific situation. More than that, a song in a musical theater piece has to mark a change or transformation in a character: the character enters the song with a conflict or question, and the song provides a way to resolve it. Of course, what was wrong with the work on Rico was that I needed to dumb down the music over and over again, in order to meet the demands of production meetings with producers, writer, music director, director, dramaturg and anyone else who was around. I’m very good at churning out those songs, but not so good at accepting the aesthetic limitations which the musical theater medium imposed on me. Which is why I eventually walked away from the project, with a resolve to never again put myself in a position where I am not in control of decisions about music.
Which put me in the position of asking myself: if I’m in control, what is the music about? And the answer I have been coming up with for the last 8 years or so has been that it’s about translating what I’m experiencing in my life into sound, into music. When I was young, despite numerous attempts to do something with my life other than write music, I always ended up with long hours where I’d get completely lost in just playing the piano. Either playing music by others, or just wandering through improvisations alone, wherever they took me. And I did that because it seemed to express something about how I was feeling … and somehow it transformed me and healed me in ways nothing else did. So if that’s what music does for me, then the best I can do to make music I can believe in is to get in touch with my feelings over the time I’m writing a piece, and transcribe them. So the Shadow Quartet was about my father’s passing, and Extended Family is about the time of my mother’s passing. The iFiddle Concerto is about my first grandson Jake’s birth, and Uptown Jump is about Jake and his family moving from Brooklyn to my neighborhood in northern Manhattan, making the extended family.
I’d have a hard time pointing out specific programmatic points in any of these pieces which describe particular events in a narrative. But I think they each do depict the emotional state I was in over a given time, focused on a particular series of events. So, in response to Cornelius’ question, autobiography is very key to my sense of my music. In fact, on some level, I think that all I can really offer as an artist is the chance to hear the world through my ears … which is a bit more ironic now that I only have one working ear! Nonetheless, the focus on making what I write expressive in some concrete way which relates to my experience feels like it’s breathed new life into my music. I’m still interested in using technology, in thinking about different sounds, working with different ways to instruct players or give them freedom to improvise … but all of that seems important or useful only in as much as it helps me get at the expressive goal. Which, at least for the moment, is very personal, if not autobiographical.
And, as I dive into expanding my thoughts about the entire MONO project, I’m now going back to some of the ideas I had in Rico, about using music to explore the emotional profiles of other characters, since MONO isn’t just about me. It needs to contain its own world of characters with their own unique perceptual limitations.
So, there’s a dream I have, whenever I have a concert after a period of time … like, two weeks, or a month, or even if I’m just doing different programs spread not so far apart. I dream I’m going to play a concert, and I’ve forgotten something really important. Sometimes, I’ve forgotten to finish writing the piece. Sometimes I’ve forgotten to bring the computer I’m going to perform with. Often I’ve forgotten the score. Even more often, I’ve got no cables to hook things up, no way to transport my gear, no sound system. Occasionally, I realize I can’t go on stage because I’ve forgotten my pants.
This dream always happens at about the same time in preparing a new program. It’s the time when I need to put aside whatever else I’m working on, and make sure that I really am prepared, that I know how to play what I need to play, that I’ve organized what needs to be organized for me to get to the gig and be confident that I can show up and have everything I need, and that I can give the best performance I can give.
Ran into this dream about a week ago, in relation to the concert coming up on Sunday. I’d been spending all my time refining and revising things for Numb, which has really been consuming me for the last couple of months. The dream said it was time to make sure I could still play MONO Prelude, which I’ll also have to do in the concert, and which puts me in a completely different role. In Numb I’m sort of a support player, processing and playing back files. In MONO Prelude, I’m the whole show. I speak, and play the computer, I process my voice, and I hope I’m convincing. And I hadn’t looked at it since we did the recording for the new CD back in February.
Although I always wake up in a fright from these dreams, I’ve come to appreciate them. Since I returned a week ago, I’ve managed to revive MONO Prelude, made a few changes and adjustments, and feel pretty good about being able to make it work on Sunday. Will even have a video track from Luke for it this time … So as long as I’m ill prepared in my dreams, it seems to keep me on track to make sure I have time to be prepared for the real deal.
Thinking about what I’m writing, what I’ve been writing. Concert coming up next weekend with the two parts of MONO which I’ve done so far … the Prelude and Numb, nearly half an hour of music. And then I dive into finishing up the next CD, hopefully done by the middle of May, when I head off to MacDowell for a month.
What I’ve been thinking about is my language. Tonal, rhythmic, melodic, accessible. Recently I’ve heard a number of things which have seemed to me exactly the opposite of what I like, but which have generated great audience enthusiasm. Specifically, in the concert last weekend, my piece was preceded by a work from Matthias Pintscher. Which sounded very derivative of work by Helmut Lachenmann. I heard a full evening of Lachenmann’s music a few weeks ago at Miller Theater. It’s all whispering strings, in the range of ppppp to pp, entirely composed of “extended techniques”, scrapings, playing on or near the bridge, all very quiet. Both Lachenmann and Pintscher are clearly skilled composers. But the work strikes me as similar to the big Mark Rothko paintings, with slowly morphing monochromatic canvases. What’s wrong with the Rothko canvases? They’re from the ’50s, that’s what. Let me explain. It has a lot to do with why I write the way I do.
Mid to late 20th century music and art seemed to be a time to test boundaries. Visual arts needed to explore how you made a language without representation. Composers needed to explore how you could make structure and musical sense out of all the full universe of sounds. This was the ultimate lesson from Cage, and from Stockhausen, and even from Pierre Schaeffer. Good lessons. Both in the visual and musical arts, our minds are more accepting and our ability to form a language draws from a much wider range of choices. But this is a lesson I learned in the 1960s and 1970s, when I was a student. And my reaction to the concert of Lachenmann’s music was that I’d walked into a fossilized version of my 1970s grad school composition seminar. This music sounds very old to me.
So it’s surprising to me to run into young musicians who seem to hear this as something new and exciting. And I’m talking about really wonderful young players whom I’ve worked with and respect. And while I can relate to the “gee whiz” factor of figuring out how to make unusual sounds from your instrument, I find myself unable to find any excitement in music which seems to only focus on the novelty of the sounds. The novelty, for me at least, wears off. Then I want the music to say something to me, to make me feel something, to take me somewhere emotionally and intellectually. Ideally, to take me somewhere I might not otherwise visit. For that, extended techniques, the use of all manner of unusual sounds from acoustic, electronic and environmental sources, are certainly useful tools. It would be foolish to act as though they weren’t available as part of our music. But, at least to me, it’s also very limiting to refuse to include the elements which speak to people in all cultures: melodies, rhythmic patterns, pulses, harmonic movement. The things that allow us to remember a piece of music.
An example in my work is my fascination with using electronics to transform acoustic sounds in concert. For me, what makes the noise of the transformation meaningful is the reference to the acoustic sound it comes from. This is just the opposite of the standard dogma of electroacoustic music, formulated by Pierre Schaeffer in the 1950s, which says the sound should be completely divorced from its source. For me, this is just wrong. What is fascinating, and what carries meaning, is hearing how the transformation happens. Because transformations carry stories, and stories carry meaning.
So, the Pintscher piece I heard last week was a lovely extended moment … but it was a static, quiet, unmoving and very blurred snapshot. Not unattractive, but missing anything which will really make me care about it. And using sounds which seemed very old hat to me, but which the audience seemed to be discovering for the first time. Go figure.
So, I think it’s time to re-start this blog, but this time with a broader focus. In previous entries I tried to talk mostly about the actual experience of my hearing loss. From here on out, I think I’m going to focus more on the music I’m writing now, which is very much in response to the situation I chronicled in earlier entries.
Last week I did the first trial performance of the 2nd piece for MONO: Numb. It’s based on a text by an anonymous contributor to the project who lost the sense of touch on the skin of her breasts and belly after cancer surgery. The preparation for the performance was pretty dicey. The way the piece is set up, the text begins scrolling across a video screen while a string trio with digital processing plays. About a third of the way through, a soprano starts speaking parts of the text as they go by, and the text loops and is combined with or processed by the music of the trio. Eventually more and more of the text is sung, until a real “song” emerges for the last couple of minutes.
I planned to use a kind of processor called a vocoder, which effectively superimposes the artifacts of speech on a carrier signal – in this case, the carrier signal is the string trio, often playing in rhythmic unison with the speaker/singer. The effect is to make the strings seem to talk or sing. As I usually do, I got this all worked out in the studio, making “virtual” string parts on the computer, and recording the singer. The first two rehearsals, one with strings alone, and one with strings and singer, just didn’t work. The players were fine, but I couldn’t hear the processing at all. The second rehearsal disintegrated when I ended up with the microphones and processors feeding back uncontrollably, and the players said they couldn’t take it any more and split. What a nightmare! And no matter what I did, I couldn’t duplicate the effects I had in the studio in a rehearsal with live instruments.
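For the curious, a channel vocoder boils down to something like this (a generic textbook sketch in Python, not my actual performance patch): split both signals into frequency bands, track the loudness envelope of the speech in each band, and use those envelopes to shape the matching bands of the carrier.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vocode(modulator, carrier, rate=44100, n_bands=16,
           fmin=100.0, fmax=8000.0, env_cut=30.0):
    """Channel vocoder sketch: impose the modulator's (speech) band
    envelopes onto the carrier (here, a string trio)."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)          # log-spaced band edges
    env_sos = butter(2, env_cut, btype="low", fs=rate, output="sos")
    n = min(len(modulator), len(carrier))
    m, c = modulator[:n], carrier[:n]
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=rate, output="sos")
        env = sosfilt(env_sos, np.abs(sosfilt(sos, m)))     # speech envelope per band
        out += sosfilt(sos, c) * env                        # shape carrier band with it
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

The strings “talk” because the consonants and vowels of the voice open and close the bands of the string sound; in performance, the parameters Jody pointed me at (band count, envelope speed, input compression) are exactly what make or break the effect.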
I spoke (via email) with my friend and incredible sound engineer Jody Elff, who was on tour in Seoul, South Korea. Back and forth, it seems I was doing everything right, but Jody responded that what I was trying to do was difficult, and that monitoring and balance, as well as adjustment of the parameters of the vocoder and compression of the incoming signals from strings and voice were key elements which I’d need to get right.
The reality of performance, though, is that there’s never enough time in rehearsal to get it all right, at least not for one-off performances like this one. And when we were rehearsing, I just didn’t trust what I heard. I had to ask the players what was coming out of the speakers, because I can’t tell what’s coming from the speakers and what’s coming from the instruments. It’s all just coming from the same place for me. There are a few players who are close friends and long time collaborators, with whom this might work. But not in this situation, where the musicians expect things to roll out as planned. It was the first time most of these players had played my work, and they don’t have a long term commitment to it or investment in it, other than as professionals who are playing what they’re asked to play (and who play spectacularly, I might add). But dealing with my hearing limitations isn’t what they signed on for. This was another situation where I should have hired a sound person to make the necessary adjustments and tunings of the processing for me, someone who knows my work and whose ears I can trust. But there was no budget or time for that with this gig.
The solution was to go back to the studio, where I have more or less unlimited time, up to 24 hrs a day, and use recordings of the players to make a separate track of the processing, generated by the interaction of the strings and the voice as I’ve recorded them. This way I can minimize the problems with my hearing. I can monitor just the processing, or just the live recording, and I can take everything apart to listen to it, and to make sure that the sounds I want to have happen are happening. What a weird way to make music! But it works. The fact remains that I have a very clear aural image in my head (or somewhere within my body) of what the music should sound like, including what the digital processing should sound like. In performance, I just don’t trust what I hear in terms of processing, so I don’t have any reliable instincts on how to tune it in real time … which is something that I have counted on, and assumed, for years.
Ultimately, of course, this isn’t about me being able to do what I do in real time in performance. It’s about making the music work, and sound the way I want it to sound. This “pre-recorded effects” solution worked like a charm. The sound guy on the gig was able to do a great job of keeping my effects-track in balance with the sound from the live players, and the audience had no inkling that the effects weren’t happening in real time. And the nice thing for me was that I seemed to get great feedback from the audience about the piece, which many people said they found moving. Which was, after all, the main idea.
Next week I do a repeat performance, with the full crew of singer, strings, video and two dancers. It’ll be preceded by MONO Prelude, which by now feels like an old friend. And in which I do the processing live. I’m eager to see how these two work together in order to put together ideas about how the whole piece will go. The fact of starting with the focus on me and my senses, and then expanding to other people and their sensory challenges is really the direction I want to move in with the piece. We’ll see how it goes.
This picture is of pianist Vicky Chow and me rehearsing FAITH for the final concert on our tour of China. (Thanks to Vicky’s brother Johnny for the picture, via Vicky’s Facebook page.) With his back to the camera is composer/saxophonist Demetrius Spaneas, who joined us on the Beijing/Hong Kong legs of the tour. Besides playing FAITH in Shanghai, twice in Beijing and finally in Hong Kong, the tour also gave me an initial chance to try out the Prelude to MONO in all three cities. It was very interesting and instructive for me.
Particularly at the Central Conservatory in Beijing, where the text was projected in Chinese in coordination with my speaking in English, people seemed to be genuinely moved by the piece. My concerns … that the musical materials would seem too simple, or would somehow not work with the spoken text … didn’t seem to be a problem.
What I was aware of, and perhaps can fix in the next few days, is that the way that I handle voice processing in the piece is not really as refined as I’d like. Basically, I’d like to make each looped segment of recorded speech have a unique and somehow meaningful type of processing. At this point, it feels like most of the loops just have various delays, echoes, pitch shifting … but that it’s never really relevant to what is being described, nor does it necessarily reflect the condition I’m experiencing. I think particularly of the place where I mention tinnitus … and right now I have a multilayered delay. Wrong. It would be interesting if I could really ease in a sound which would convey the white noise aspect of what I really hear, without just overwhelming the audience with a blast of white noise. Similarly, when I talk about sounds on my left side seeming to come from a kazoo being played across the street, I should take some time and embody that state of hearing.
I’m due to perform the piece again in New York at the Cornelia Street Cafe on Monday night. I have some time this evening in my hotel in Hong Kong. I wonder if I’ll be able to make an initial pass at fixing these things tonight or Sunday in NYC, in order to have a revised version on Monday.
Another issue which came up repeatedly for me was the problem posed by performing with my disabled ear. Specifically, do I use my hearing aid when I perform (which means I hear everything with a patina of heavy distortion on the left side), or do I turn off the hearing aid (which means I just don’t hear what’s on the left side)? For the most part, I’ve always tried to just put the stage monitor on my right side, and turn off the hearing aid. For several concerts during this trip, though, that wasn’t possible. Last night I had to monitor myself from the house speakers, and I could really only hear the one on stage-right, which was by my left ear. So I kept my hearing aid on, and just dealt with the distortion.
There’s a certain sense in which all of this is very strange. Before the SSNHL, performing was all about listening closely to the SOUND I produced. Now, it’s more about imagining the sound, and monitoring what I can to make sure that the sound is what I want it to be, even if I can’t really hear it. It’s very odd, since my internal sense of what I want is so very clear … the timbres, the way the sounds move in space, the fullness of the stereo image … but I can’t hear any of this clearly with my ears. So I’m sort of working off of cues as to what is going out, hoping that I’m interpreting what I do hear accurately. I imagine it’s what it must be like to do surgery with a remote-controlled robotic arm. You don’t have the real sensory input that you’d expect touching flesh with your fingers, but you hope you’ve got enough feedback from the mechanism to do as good a job as if you were there in person.
A few words about touring in China again. While the person who arranged the Beijing and Hong Kong legs of the tour showed an amazing incompetence … didn’t seem to realize that it was necessary to arrange for equipment or figure out the repertoire to be played by the various artists he’d brought, nor take responsibility for coordination of publicity … the composers and performers on the tour managed to fill in the blanks, and the support from the institutions in China was really wonderful. In Beijing Tammy Huang of the Pearl Shell International Cultural Exchange was a warm, professional and thoughtful host, and the club D-22 and the Central Conservatory both provided great support for our concerts. In addition to the concerts, I ended up doing a surprise recording session at the Beijing Film Academy with Demetrius and flutist/cilia performer Bruce Gremo, at the instigation of recording engineer Jürgen Frenz, who heard us improvise for an hour or so at D-22. Hopefully this will become another CD, the first improvisational recording I’ve done since Fish Love That in 2001. In Hong Kong, the Chinese University of HK provided us with the support we needed, even though they hadn’t received any information from the producer about the gig, including tech needs and publicity info, until we sent it 2 days before the concert, when we figured out that the producer had dropped the ball.
The really great part of the trip, as usual, was the interaction with the various people I got to see and meet and work with. And at this point, there are some continuing friendships developing with folks in China. The American and Canadian musicians in Beijing and Hong Kong were a pleasure to work with, and performing with Vicky is always a treat. And meeting new friends and contacts in all three cities makes me hope I’ll be able to find ways to continue coming back here.
So, this is my view from the hotel room in Shanghai. My last day here I am finally losing some of the jet lag. For lunch, I headed out of the hotel, dreading going into a who-knows-what restaurant and spending too much for something not so good, because I don’t know what to order. Or more dreadful yet, I had thoughts of heading up to Huaihai Lu to get McDonald’s, because it would be easy. Instead I walked down Fuxing Lu and found a noodle shop, where I walked in despite them not knowing any English and me being able to say little more than “I am alone” and “I want to eat something.” I got settled at a little table on the sidewalk with a couple of Chinese guys who seemed to find me very funny, and who helped me figure out how to season and stir up my noodle ramen (which is what it sounded like they called it), with hand-pulled noodles being made by a lady on the sidewalk next to me. Incredibly delicious. I’m brought back to the taste of “muslim food” in China, funky tables and stools blocking the sidewalk. Yum. Oh yes, and while I don’t know how much the McDonald’s would have cost, this was a total of 11 kwai, or just under $2. And then I spent another 2 kwai on a delicious sesame-covered sticky rice ball from a little hole-in-the-wall bakery vendor for dessert.
Dinner last night was with an ex-student now living in Pudong (the new eastern extension of Shanghai), and I found that his wife has had a similar experience with hearing loss, but as the result of a tumor. As we sat down to supper, she said “I can’t sit here” and moved to the opposite side of the table. I suspected at that point that something was going on with her hearing, and the story came out in our discussion. Then I received a very moving story from another ex-student via email about her hearing loss, resulting from a fall, and her ongoing recovery through the application of traditional Chinese medicine, qi gong, etc. I’ll begin performing MONO for real next week, and am now really thinking about how I move from my solo prelude into the body of the work.
Final hearing issue: I got a recording of the concert from last Thursday, and trimmed it down and posted it yesterday. Vicky texted me, saying it was low & slow. I went back and listened. Sure enough, the file was somehow playing at the wrong sampling rate, which made it sound like a slightly slowed-down tape. Took a big chunk of the morning to fix. But the lesson is that even when I CAN hear, it doesn’t mean much if I don’t use my brain. Don’t know how I didn’t notice that it was at the wrong pitch on my first listen …
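For the record, that “low & slow” effect is just arithmetic: if a file recorded at one sample rate is played back at a lower one, everything stretches in time and drops in pitch by the same ratio. A quick back-of-the-envelope check, assuming the common 48 kHz / 44.1 kHz mismatch (my guess at what happened, not something I verified):

```python
import math

recorded_rate = 48000   # assumed original rate
playback_rate = 44100   # assumed (wrong) playback rate

# Playback duration stretches by this factor...
stretch = recorded_rate / playback_rate
# ...and pitch drops by the same ratio, measured in semitones:
drop_semitones = 12 * math.log2(stretch)

print(round(stretch, 3), round(drop_semitones, 2))  # → 1.088 1.47
```

About a quarter-tone and a half flat, and roughly 9% longer: noticeable once pointed out, but easy to miss on a casual listen.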
So, here is my healed foot. It’s 2 1/2 months since the last blog entry, my foot is pretty much healed (you can see the scar if you look closely). It’s on a window sill in my hotel room overlooking Shanghai, where I landed day before yesterday. And this does, surprisingly, relate to my hearing, and to MONO.
I was to premiere the prelude to MONO, which I’ve been working on feverishly for the last month or so, this past Sunday. What happened instead was … 10pm concert scheduled on a Sunday night, in an out of the way East Village club, on the first cold and rainy weekend of the Fall … no one showed up. Imagine that. I couldn’t imagine it, and was incredibly stunned & depressed by the whole (lack of) event. Two weeks after a full house at the Smithsonian, I end up canceling a concert that couldn’t attract an audience to hear a new piece. What goes up, must come down.
What I did notice during the sound check was that the pianist commented on the interesting use of stereo in the opening of my excerpt of MONO. Which I’d thought about, and planned for … but of course hadn’t noticed because I can’t hear it. So, I walked away from it with an awareness of two things: first, I need to really go back and make sure that I have made sense of the movement from stereo to mono in the piece as it stands now, and second, that in the long run I’m going to need to hire someone to work with me on making an effective use of spatialization in this piece as it develops, since I really can’t hear it … and I’ve been living with this long enough that I forget that I can’t hear it.
I tackled the first of these issues, probably in a temporary way, by doing a little bit of reprogramming on the plane from NY to Shanghai on Monday-Tuesday. Now, in the final couple of minutes of the piece, a droning background and all the speech shift fully to the right side, so everyone else gets the sense of one-sided mono sound. On the other issue I’ve made a first contact with an audio person via email, and we’ll see where that goes. Meanwhile, the concert that didn’t happen will have its most important parts … the NYC premiere of the MONO Prelude, and a performance of Hammer & Hair by Kathy Supove and Ana Milosavljevic, on my Monday night concert on Nov 9 at the Cornelia St Cafe. So, pick yourself up, dust yourself off, and start all over again.
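That on-plane reprogramming amounts to a pan automation. A minimal sketch of the idea, assuming a constant-power pan law sweeping from center to hard right over the final stretch (the function and parameters are illustrative, not my actual patch):

```python
import numpy as np

def pan_to_right(mono, sr, shift_seconds=120.0):
    """Pan a mono signal from center to hard right using a
    constant-power law, so the ending sits entirely in one ear."""
    n = len(mono)
    # Pan position: 0.0 = center, 1.0 = hard right.
    pos = np.minimum(np.arange(n) / (shift_seconds * sr), 1.0)
    theta = (1.0 + pos) * np.pi / 4.0  # pi/4 at center, pi/2 at hard right
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)

sr = 44100
drone = np.ones(sr * 3)  # stand-in for the droning background
stereo = pan_to_right(drone, sr, shift_seconds=2.0)
```

The constant-power law keeps the overall loudness steady while the image slides over, so the shift reads as movement rather than a fade.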
Then, of course, there’s Shanghai. I’m still suffering a bit of jet lag. Well, a lot of jet lag. Was on my way out to dinner last night with some friends, and felt it hit me. Had to bail out of a wonderful looking Hunan dinner, come back to the hotel and crash. Only to wake up at 4:45am, when I get to look out the window at this interesting “Shanghai Education Activities Center” across the alley from my hotel. What you can’t see in the picture is how it’s framed against the 30 story+ towers of apartments and office buildings which make up Shanghai’s skyline, nor the shady streets of plane trees which form the French Concession area where I’m staying, near the Conservatory.
Yesterday I did the real first performance of the MONO Prelude for a group of students at the Shanghai Theater Institute. I’m not sure why I was asked to give them a 2 hr lecture, or what they got out of it, though they did have quite a few questions and responses at the end. Then came back to the Conservatory where I got to do a run-through of FAITH with Vicky Chow. Was very encouraging that we were able to get through it OK, and she played it with real feeling … which makes me feel great. We have another run through in about an hour, at 10am, then perform it as the opening of a Bang On A Can All-Stars show at the Festival this evening. Which will be the China premiere of that piece. And which I presume will attract a crowd, and won’t be canceled because no one shows up. Then Sunday Vicky and I head off to Beijing for at least 3 more performances of both FAITH and MONO, and then to Hong Kong.
And finally … it’s my birthday today. 62 years old, and here I am off on the wrong side of the world. I got to have a nice skype with Wendy this morning, and will again this evening. It’s actually more connected than another birthday I remember being away on, must have been 1978, with a crew from IRCAM, performing at the Donaueschingen Festival in Germany, while Wendy and 3-yr old Chloe were back in Paris. Then I was sick, terribly angry at being labeled part of the tech crew rather than being acknowledged as a performer by the IRCAM hierarchy, and miserable beyond reason. Now I’m a little lonely, but glad to have seen Wendy this morning on skype, and excited about the performance this evening. So hopefully, not a bad birthday.