Acoustic Learning, Inc.
Absolute Pitch research, ear training and more


Phase 9:  Examining the Process


February 12 - Seven ten split

Sound begins as a disturbance in the air.  The "waves" of sound enter the ear and are transformed into nerve impulses via the cochlear mechanism, which separates the sound energy into its component frequencies.  These component frequencies are then relayed, by way of the "tonotopic map", to the brain.  The sound is handed over to a particular side of the brain depending on what we think we hear.  If we think we're hearing language, it ends up on the left; if we believe we're hearing musical or non-speech sound, it ends up on the right.  This process is illustrated in my highly unscientific diagram below:

In other words, if you pay attention to music, you hear musical sound; if you pay attention to language, you hear words.  This decision isn't necessarily a conscious one; for example, I find it particularly difficult to hear lyrics in songs.  Unless I make a special effort, the words register as musical sound and are lost to me.  I can listen to any pop song dozens of times without having the slightest idea what the words are; when I've tried listening to rap I've found it so difficult to extract meaning from the sound-- just to hear it as words, much less understand what's being said-- that I've pretty much given up on the genre.  I have to be constantly, actively attentive, or the words vanish.  The other year I discovered the song "The Children of St Monica" by the Windupwatchband and I wanted to learn the words; each time my tape spun round to it, I'd remind myself to pay attention to the lyrics, but my attention would always wander during the musical interlude after the second verse.  I followed the music, and hardly noticed when the words resumed.  This happened many, many times, until finally I sat down and typed out all the lyrics.  Only then did I finally know the words of the third verse.

This week I was wondering: how do our minds make that decision?  Why left, why right?  I've speculated that this is because music is spatially oriented, and the right brain handles spatial relationships; but I began to wonder-- what about solfege pitch, which doesn't seem to involve literal space and distance?  And, if color and pitch are analogous, where would color be processed?  Since color has no spatial features, why would it be processed on the right side?  If color isn't language, why would it be processed on the left?  A quick search showed that naming a color involves both sides of the brain.  The color sensation itself ends up on the right, while the verbal label for the color activates the left side.  Although the left side of the brain is generally referred to as the "logical" side, and the right side is usually acknowledged as the "creative" side, I think other designations make the issue far clearer.  I've labeled the diagram with those designations; from what I'm finding, it seems that the left brain handles symbolic information and the right brain sensory input.

I've mentioned the Stroop effect before, in these pages, but it was finding this page which strengthened my certainty that perfect-pitch training has to take an entirely different form.  Trying to rapidly connect a sense experience to its verbal label is a difficult task-- so difficult that there are dozens (maybe hundreds) of websites which offer this color-word challenge.  I encourage you to follow this link and check out this example in particular.  Their "Trial #2" bears a striking resemblance to standard perfect-pitch training-- receive the sensations and produce the abstract verbal labels, in rapid succession-- but this page exists because they want to show you how incredibly hard it is to do this!  Regardless of whether you try to label tones with pitch names or trigger words, surely this can't be the best way to go.

For some time, I had thought that listening for vowel sounds was the answer, and to some extent it may still be; however, vowel-listening as I have come to understand it seems to be a relative perception based on illusory effects.  Sean pointed out that vowel sounds were a great way for him to begin recognizing each of the pitches, and long ago Peter described how vowel listening was for him a non-melodic experience-- but as I demonstrated at the close of Phase 8, when your key-sensation changes, the vowel sounds you hear can change too.  Vowel listening, although definitely a "good start", is not completely reliable as an absolute sensation.

Perhaps what has to be accomplished is convincing the mind that we are listening to coded, symbolic information-- not spatial or sensory information.  Diana Deutsch's latest CD, "Phantom Words", contains quite a few tricks to convince the listener that they are hearing musical sound instead of language sound.  How do we turn that around and trick the mind into hearing language instead of music?  It's easy enough to do this with non-musical sounds; over the last Christmas holiday, I was operating a small coffee grinder, and after a few whirls I was surprised to hear the machine speaking to me "Green, green, green," as plainly as any human conversation.  I called my brother over to listen to the device, and I revved it a few times; then I told him to listen for the word "green" and he was immediately startled to hear it so clearly where he had previously heard only noise.

What presentation makes it impossible to hear a pitch sound as music?  With the coffee-grinder sound, all it took was the suggestion "this is language" for that shift to occur; conversely, on Deutsch's CD, the mere repetition of language inescapably creates musicality.  I have figured out how to record and edit my own speech so that it is unidentifiable as anything but musical sound; that's easy.  But if the goal is to hear pitches "phonemically", then there must be a situation in which I can place the sound and prompt the listener "this is symbolic sound" and their mind will be forced to agree.  The suggestion needn't be language; sounds can be representative without being language.  But what?

Is this the correct solution, to force pitch sounds into the left side of the brain?  There are scientists who believe that the left planum temporale merely labels tones rather than processing them, and color identification, as noted above, leaves the raw color sensation on the right side with its verbal label on the left.  However, phonemes are their own "label", and I tend to suspect that phonemic pitches should also be their own label.  In such a case, the sound simply remains itself, and the associative task is not with some additional, unrelated sensation, but with a printed symbol-- just like in reading language.  Can you imagine what it would be like trying to learn the letters of the alphabet if each made a sound completely unlike its name?  And it would be interesting, wouldn't it, if the current musical lettering system (C-D-E, etc) were dropped in favor of letting the note sounds represent themselves?

February 16 - On second thought

Have you ever noticed that the "w" sound doesn't actually exist?  Try sustaining one; you won't be able to do it.  Invariably you end up with an "oo" sound, and only when you introduce another vowel does the w appear-- briefly-- in the space between the vowels.  In fact, if you think of any word that has an "oo" sound followed by a vowel (such as "tuition") you'll notice there's a distinct w sound which we hear but don't acknowledge.  Tricky things, those phonemes, and every last one of them is relative.  Vowels are double-pitch combinations, and consonants are more complicated combinations which, as the w exemplifies, can depend on what precedes or follows them.

Can single pitch frequencies really be "phonemic"?  Probably not.  I was recently transcribing a song from Can Hieronymus Merkin Ever Forget Mercy Humppe and Find True Happiness?; and in order to better hear some of the chords, I played them on instant infinite loop.  Their jackhammer noises resembled Diana Deutsch's "Phantom Words", and indeed, I began to hear different words in them depending on how I listened.  Although this type of playback had inadvertently convinced my brain that these were language sounds, this was certainly not absolute sensation.

If not for pitch, I'd be very pleased right now to have learned that language and music are indeed identical.  Last year, Levitin conducted an experiment which demonstrated that musical structure and linguistic structure are processed by the same areas of the brain.  Aside from "low-level sensory information", pitch didn't seem to be a significant part of Levitin's experiment, so I wrote to him, asking if any of his subjects had absolute pitch.  He replied no.  I wondered, mightn't absolute listeners process pitch information in the linguistic part of their brain-- did he know of any such research?  He again told me no, and he suggested I look for Robert Zatorre's publications.  So I found an older article by Zatorre, "Functional Anatomy of Musical Processing in Listeners with Absolute Pitch and Relative Pitch".  This research indicates that absolute listeners use language areas of the brain in addition to, not instead of, those areas which process pitch sensation.  That is, pitch-as-phoneme remains merely a neat idea of mine; and that idea is not supported by these scientists' papers, or even by my own recent observations.

Language and pitch exist simultaneously.  I've been impressed lately by the similarity between playing Chordsweeper and listening to people talk.  When I play Chordsweeper (I'm on level 6, round 4), I never mistake the type of chord.  I always know that it's a I, IV, or V, and I'm always right.  However, I often confuse its absolute position when I decide too quickly.  I often have to wait that extra moment to allow my mind to accept the structural information-- the "language" sound-- before I can attend to the separate sensory distinction.

This experience is directly comparable to Deutsch's Track 22, which forces the listener to hear the pitch information in spoken words.  I successfully re-created her illusion, recording my own voice into the computer and looping it; in the process, I found that the illusion seems not to work if you attempt to repeat more than one phrase.  That is, taking the words from her CD, I can loop "you often hear phantom words rather than real ones," and it never stops sounding like speech sound; but if I loop either "you often hear phantom words" or "rather than real ones", that phrase becomes musical.  I correlate this with a recent comment by my Voice & Movement instructor, who claimed that our minds are only capable of holding on to one idea at a time; if that's so, then the double-phrasing fails to become musical because we're actively switching back and forth between the ideas in each phrase.  As long as our active attention is drawn to the language sound, the pitch information is subsumed.  When there's only one phrase, the linguistic idea becomes fixed, and our minds become free to explore additional meaning in the sound sensation.

But that "additional meaning" is always there in our everyday conversation, as the qualitative aspect of speech.  Listen to anyone speaking-- anyone at all-- and you'll quickly recognize that the pitch information in their speech follows the same conventions as musical harmony.  Most obviously, you'll hear their tonic pitch, which they'll return to when they have completed a thought.  If they finish a sentence but wish to keep speaking, they'll speak its last word on some unresolved middle interval.  A question rises in pitch because that introduces musical tension, which directly implies the need for resolution-- the more urgent the question, the more dissonant the interval.  An answering statement releases that tension as it returns to the tonic.  These are just samples; if you stay alert you'll hear all kinds of remarkable demonstrations.  For a quick example, just imagine the words "But mom!" spoken as a major seventh; as an interval, it creates the greatest amount of scalar tension, and it demands immediate resolution (to the tonic)-- precisely the effect which would be desired by a petulant child.
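
[For anyone who wants to see the arithmetic behind those intervals, here's a minimal sketch-- assuming an equal-tempered scale and an arbitrary tonic of A3 (220 Hz), chosen purely for illustration and not measured from anyone's speech:]

```python
# Equal temperament: a pitch n semitones above the tonic has frequency
#   f = tonic * 2**(n / 12)
# The tonic below is an arbitrary assumption, used only to put numbers on the idea.
TONIC = 220.0  # A3, in Hz

INTERVALS = {
    "unison (resolution)": 0,
    "perfect fifth": 7,
    'major seventh ("But mom!")': 11,
    "octave": 12,
}

for name, semitones in INTERVALS.items():
    freq = TONIC * 2 ** (semitones / 12)
    print(f"{name:30s} {freq:6.1f} Hz")
```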

I was further intrigued to be reminded that a composition is "interpreted" by a musician.  One of the fundamental concepts I teach in my Oral Interpretation class is that "interpretation" is the act of placing your ideas into someone else's words; if you apply the same definition to musicianship, then the "ideas" must be emotional, rather than intellectual, such that you're placing your emotions into someone else's pitches.  If pitch information is emotional, then it's not intellectual.  If it's sensory, it's not symbolic.  Perhaps the left planum temporale does label tones, and not process them.  Perhaps, like the Stroop color-word demonstrations, a pitch sensation is irreconcilably distinct from any cognitive label.

What all this leads back to is the fact that, to the ordinary mind, pitch information is undifferentiated sensation.  But even to the absolute mind, pitch information is still sensory input.  In a way this is a relief, because it means that I don't have to figure out how to shove pitch information into the other side of the brain.  Instead, I can focus on applying Gibson's principles so that each musical pitch becomes distinctly recognizable; and then, having been made recognizable, it can be recognized.

February 22 - A side aside

Well, after 86 full games of Level 6, Round 4 (and more than a few that I restarted), I've made it to Level 7 in Chordsweeper.  This is partly because I stopped letting myself get frustrated and tense when I clicked the wrong tiles, which had made it more difficult to sense the chords; but, more importantly, I watched Head again this afternoon and realized that "The Porpoise Song" began on an E-major triad.  Thanks to this, I no longer confused the duck and frog, which eliminated many errors.  My most common mixups at this point are squirrel-terrier, rabbit-brown dog, and parrot-basilisk.  I almost never mistake the cats or the monkeys-- the cats (A-major) are the mellowest sound and the monkeys (D-major) the most obnoxious.  I find that the birds (E-major) are thin and calm, the rodents (B-major) are serious and thoughtful, the dogs (C-major) are solid but light, and the reptiles (F-major) are unnerving and wobbly.  I had also had trouble between the eagle and the dragon, until I discovered that the dragon's sound quality seemed to resemble the spaniel; today I was intrigued to see that the F-major V-chord has the same pitches as the C-major I-chord... in a different order.

If I keep up this pace I'll need to write in additional levels-- to include the accidental keys, multiple octaves, and multiple timbres.  Multiple timbres shouldn't be too much of a change; the other day, when I had played Interval Loader up to level 13, I switched over to Chordsweeper and discovered that the non-piano instrument was still chosen, but I was able to recognize the chords the same as before.  Octaves will be trickier because part of what makes a tomato recognizable is that, in addition to being a different chordal structure, it's outside of the normal octave range; once the I-IV-V chords are in the same octaves the tomato could be confusing.  Accidentals, though, should be a real bear... which would mean I'd need to draw fifteen more little animal heads, too.  I hope the game would still be playable with so many animals; part of the reason it took me so long to pass Level 6 was that it was difficult to avoid leaving tiles isolated on the game board.

Back to Ear Training Companion v2.1, I recently heard from Jon, who has been reading the previous research:

Wow. I just got to the section where you spoke of the guy who said the sounds were vowels... Ever since I started Burge's course (about 4 years ago, then I stopped), I thought I could hear a difference in these pitches, like where they are positioned (I am a vocal person) in the mouth, but I could never put my finger on it. Some pitches sounded more ... inside. Others were totally in the front. Well anyway, I went through ETC tonight listening to the vowels ... It is night and day. They totally sound distinct. Thanks for your research. It has been very much worth the 30 dollars.  ...Yes you can quote me. I think this has been the coolest experience. Today I could hear the vowels of everything around me. The lights buzzing, the high pitch frequency of the tv, the oboe, violin, harp. Wow. If this is just the beginning I am excited for more.

His experience raises a point I hadn't considered.  It's well known that there are front vowels and back vowels.  Isn't it possible that a trained singer will have learned that certain pitches will be optimally resonant when placed in specific areas of the mouth-- perhaps in those same areas where the pitches' matching vowels are formed?  I will have to explore that later.  In the short term, it's clear that vowel listening definitely has value, if some people can find it so powerful despite potential relative effects; but if I'm going to use vowel listening in the curriculum, I'll need to figure out how to make the process this obvious and easy for everyone.

Addendum 2/23:

I've also been continuing to play Interval Loader-- I'm on "Two Notes Advanced", where "Advanced" is movable-do.  When I began the game at this level, I couldn't even get past level 1, identifying only the octave and tritone, because I kept hearing the mellower or more dissonant pitches and thinking that that was the sound of the tritone.  Chordsweeper is maintaining my pitch sense; and as I've continued to play Interval Loader, my mind is gradually learning to separate pitch information from an interval's harmonic sound.  After 28 games, I can now easily and quickly run up to level 5 (P8, TT, M2, M6, M7) and sometimes continue to level 6 and 7 (P4 and m7, respectively).  I completed a new benchmark today which, by comparison to the "before" benchmark, demonstrates precisely what I'd intended when I designed the game-- that progressing further in the game causes you to learn the intervals as a matter of course.

You can see that the intervals for the levels I've reached have become thoroughly familiar.  Of course, there is still some bias here-- in mis-identifying the unfamiliar intervals, I probably guessed them to be the familiar ones, and you can see that I recognize the m2 as "dissonant, but not the M2".  However, at the start, I didn't recognize the M2 at all-- furthermore, as my high scores continue to rise, I'm adequately convinced that the benchmark is representative of actual learning, especially since I've retained my familiarity with the fixed-do sounds without going back to play that game (here's a new "Two Notes" benchmark I blazed through just now).

I'm intrigued by the fact that, unlike Chordsweeper, I can play Interval Loader without actually paying attention-- I can read a news article, or think about what I'm going to discuss in class tomorrow, or any of a number of things, and I barely even notice that my fingers are responding to the game's interval sounds.  I'm not yet certain what that implies about the process, but it seems to be a phenomenon worth noting.

February 29 - Leapin' lizards

My brother visited me this past weekend-- and, like the astrophysicist he is, he told me that my suggestion of musical objects as "three-dimensional" is, literally, wrong.  I had introduced the discussion with a question about uncertainty; as the conversation turned to particles and waves, I described my model to him, and he pointed out that even though I'd created an interesting and seemingly valid analogy, the analogy was unscientific (as analogies normally are).  The problem, as he explained it, is that sound energy doesn't literally exist as a spatial object.

Once he made that clear to me, I realized there's no need to imagine that the "dimensions" I was describing-- amplitude, frequency, and duration-- are spatial characteristics.  Gibson described an object as a collection of characteristics which bear consistent relationships to each other; once we understand the relationships, we recognize the object.  Almost all of the objects that we can imagine do possess spatial characteristics, but musical sounds don't; nonetheless, musical sounds have characteristic relationships which make them legitimately "musical objects".  Although the spatial analogy may still prove conceptually useful for explaining how sounds can be objects, no longer thinking of musical sounds as "spatial" reaffirms, to my mind, the difference between vision and hearing:  vision perceives spatial objects, and hearing perceives temporal objects.

This could be why musical structure gets shunted into the "symbolic" side of the brain.  As I mentioned a couple of entries ago, the physical work of the ear is that of delivering component frequencies to the brain's auditory cortex, after which the brain attempts to meaningfully re-assemble the components according to its existing knowledge.  But the relationships between those components are intellectually learned, and not inherently part of the sensation.  The left side analyzes the relationships and identifies the object while the right side of the brain responds unconsciously to the raw pitch-sensory information.  The object is symbolic because it's a "higher-order" representation of the raw frequency data, and doesn't actually exist except in our minds.

Of course, we're taught that musical sounds are relationships, not sensory experiences.  This may mean that the right side of the brain is deliberately ignored, deliberately given to unconscious awareness.  When I began playing Interval Loader with movable-do, I couldn't even distinguish a tritone from an octave because of the pitch information.  Once I concentrated fully on the relationships, actively disregarding the contribution of the pitch, I started to improve.

However, familiarizing myself with structure is heightening my pitch awareness.  When I'm listening to music, I can hear a harmonic interval sound and my mind begins searching for the component pitches.  Also, now that I'm on Level 7 of Chordsweeper, I've found that the higher V-chords (dragon, eagle, and elephant) don't play with a separate ba-bum like the lower chords do-- but I still recognize the V-chords because when I hear them my mind almost immediately plays back all three notes from bottom to top.  I don't hear the separate pitches, but I know that they're there and I know what they sound like.  That's how I can distinguish the C-major I-chord (C-E-G) from the inverted F-major V-chord (E-G-C); when I hear the I-chord, the sound just soaks in, but the V-chord mentally replays itself.

But what about sound and musical contour as motion?  I'll have to take another look at Levitin and Zatorre's articles about what gets processed where.  As I mentioned, Levitin recently demonstrated that musical structures are processed in the same areas as linguistic structures, on the left side; yet it's long been common knowledge that music is generally processed on the right side.  There's an apparent contradiction here, but I think the data exists to resolve it.

March 5 - Breaking the ice

As I've been winding down this week, heading into Spring Break, I've relaxed between assignments by playing Chordsweeper and Interval Loader.  The results are intriguing.  Specifically, I've been noticing more about the V-chord effect that I mentioned in my last entry.  What I'm discovering as I listen is that although some of the V-chords break up automatically, if I pause a moment after any chord (I, IV, or V), it's usually very easy to "replay" the chord's component pitches in my head.  The peculiar aspect of this is that I don't actually hear the pitches when I play the chord.  I hear the total chordal sound alone.  I don't listen for the pitches, I don't have to pick out the pitches before the sound fades away, and I don't have to replay the chord.  I simply know that the pitches I'm thinking of are the correct ones.  And if I want to, before the chord fades away, I can draw out any of those individual pitch sounds merely by focusing on it.  It's kind of freaky to listen to any of the chords and then suddenly find its pitches just sitting lazily in my mind, waiting to be noticed.

If our minds do indeed process syllables (chords) before inferring phonemic (pitch) information, then this is exactly as it should be.  I came across a forum thread in which a music teacher commented about what happens with some of his ear-training students who have perfect pitch:

[M]ost [of them] hear individual notes in isolation-- even when they hear a chord. So when you and I hear a C major chord, they must first hear C-E-G in isolation before they can make sense of it.

If you consider what it would be like to try to understand conversation by putting together letters, instead of paying attention to syllables and whole words, it seems evident that starting with isolated pitches puts perfect-pitch training on an erroneous footing.  I've had data suggesting this ever since reading The Language Instinct, which noted that adults can study specific phonemic sounds for years and still not be able to recognize them when they appear in words-- and now my own experience supports it.  Although familiarity with specific pitch sounds will ultimately be necessary to recognize and identify those sounds in music, it appears that structural comprehension is what will allow us to draw the individual sounds out of the music in the first place.

March 11 - A dollar short

Is it too late to learn?

When I look at a piece of text, something unusual happens.  My mind assimilates its grammatical structure, infers the author's literal intent, and extracts the key ideas-- all in the space of an eyeblink.  I had no idea this was unusual until just a few months ago, in acting class.  We were each given a monologue and, after ten minutes to read it over, we presented our pieces to the class.  When I'd finished mine, I was ready to return to my seat, but one of my classmates stopped me with the question, "How the hell did you do that?"  I was flattered, of course, and pleased to have done a good job, but in a moment I was startled to recognize that the entire class was sitting in a kind of stunned silence.  I was further surprised when the instructor, who has been teaching at this school for nearly three decades, expressed his amazement and insisted that I explain: "how did you do that?"  At the time, I had no answer.  I was astonished by my classmates' astonishment.  Since then, I've been figuring it out.

What it essentially comes down to is that I began reading at age two, and they didn't.

Two to four years of age seems to be the most critical period for linguistic development.  The further you progress from that formative age, the less likely it is that your knowledge will be instinctive.  Taneda's book emphatically asserts that five years is already too old to be certain of learning absolute pitch; existing research seems to support that, if indirectly.  Transferring that concept from musical to linguistic literacy, is it any surprise that in this country, where children usually begin to read at age five or six, we have such contempt for the written word?  For the vast majority of our population, it seems, reading isn't instinctive.  It's a chore-- an endless repetition of effort.  How frustrating to look up every fifth word!  How annoying to puzzle out the meaning of each new sentence!  How tedious to learn grammar as hundreds of rules and vocabulary terms-- and how humiliating to write a paper that violates all the rules you overlooked!  Easier to just give up and watch television.

As I observe my classmates' reading and my students' writing, I wonder how I could teach them to know the language, to feel it the way that I do.  Is it even possible?  What would it take?  This isn't an idle question, because as I've stated it, the ultimate goal of perfect-pitch training is total musical literacy.  The principles I could imagine for language development would be directly applicable to musical training.  If the source of my own ability is solely the early start-- if, after a certain age, the information can't be internalized-- then, Houston, we have a problem.

But when I was young, I didn't just learn how to read-- I read.  Avidly.  I remember how, in fourth grade, I brought to school wagonloads of books I'd read for the MS Read-a-Thon.  And our theater-literature professor here at the school, a native German, knows the English language better than I do, mainly because he reads obsessively.  It seems that reading, like any skill, follows either a virtuous or a vicious cycle; the more you read, the easier it becomes, but the less you read, the harder it gets.  And although the adult brain becomes decreasingly malleable with age, I wonder what would happen if you took away the "adult" texts and put people back on children's books, which they would be able to read effortlessly; would that become a foundation for further development?  I tend to think it would.  Even without Gibson's theories of perceptual development, I have plenty of anecdotal examples of learning through task-oriented exposure (with reading, the task is simply to appreciate the story), and it's demonstrably true that if you accelerate a student past their level of competence they will fail to learn.

I suspect that it's largely a matter of motivation.  If my classmates believe they know how to read, and dislike what they know, what could convince them to read even one book, much less a wagonload?  How could they understand the difference between how they and I perceive language-- understand it well enough to desperately want to change?  If, to have true perfect pitch-- a total, instinctive comprehension of musical sound and its structural forms-- it's necessary to read thousands upon thousands of pages of sheet music, after having learned to truly read sheet music (instead of kinesthetic memorization), what would persuade someone to make the effort?  How can they truly understand what they're missing, to desire it strongly enough to make it happen?

Additionally, it's a matter of principle.  As I've been observing my classmates and my students, it has become crystal clear that, when approaching a piece of text, most people are firmly convinced that the process of reading is external, something that is "not me"; consequently, when "reading", they abandon everything they instinctively know about talking.  I've been gratified that, merely by my illustrating this fact, some of my students have accepted their instincts and given truly impressive readings-- effortlessly.  This seems curiously similar to what Sean was describing; he said that an inexplicably effective way to hear absolute sounds was to imagine that he already could.  I don't consider this to be some kind of feel-good you-can-do-it hypnosis-like affirmational garbage.  Rather, science has repeatedly demonstrated that our minds are musically competent in ways we are utterly unaware of (Norman Weinberger has an article or two about this on-line).  Sean may be, in some unsophisticated way, allowing that instinct to take over.  The encouraging notion is that, to some extent, we may already know what we think we need to learn.

I doubt that it's too late to learn.  Although my early start at reading enables me to instantly comprehend and reproduce a piece of text (which is a nice party trick, and explains why I've always loved auditioning) the central issue seems to be that, because I am so comfortable with written language, I approach reading as "me" instead of "not me".  With some diligence, my students are able to reproduce my results.  If, merely by articulating and exploiting the principles of instinctive speaking which my students already possess, I can create in them a natural relationship to their texts, then it seems probable that by exposing principles of instinctive musicianship (as I discover them) I'll be able to create a natural relationship to musical compositions.

March 13 - Deep fried claims

Time for a reality check, I think.

I've been reading the comments archive of James Randi, professional skeptic, and one particular article seemed rather interesting because of its illustration of the difference between anecdotal and scientific evidence.  Anecdotal evidence is a lot like an analogy; it will guide you in directions for further inquiry, but it doesn't prove anything, and if inappropriately framed it can be terribly misleading.

The main failing of anecdotal evidence is that it is specific to the particular storyteller; there's no assurance that it will transfer to another person or situation.  Scientific evidence, by contrast, can be consistently reproduced by following identical methods.  As an example of anecdotal evidence, I've discovered that using Chordsweeper has enabled me to automatically hear the pitches of major chords.  Based on the theory from which I designed the game, this was a result I'd expected; but neither theory nor my own success proves that every user will have this experience.  It simply suggests that if I tested a collection of users I might get the same results for each of them-- and if that happened, then that would become scientific evidence.

I mention this now just to remind you that, as of this moment, there is still no legitimate, documented, scientific evidence anywhere of anyone gaining perfect pitch.*  You can talk to dozens of people all over the place and they will tell you that their pitch comprehension has improved in some way-- I myself can now unfailingly recall a middle C or G--  but, to date, no one has been proven to actually gain true, full perfect pitch.  The closest thing that we have to scientific proof merely shows that pitch recognition and recall can be improved; the few apparent successes in those studies, who were (after training) able to pass a note-naming test with high scores, were flagged by the researchers as unusual cases.  Unusual cases don't prove a scientific hypothesis; they merely demonstrate that the test subjects were unusual, and should ideally be disregarded as "outlying" data.  Furthermore, despite the popular definition, note naming and perfect pitch are decidedly not the same thing.

If you've been reading through the archives in this site you've undoubtedly recognized for yourself which sources are scientific evidence, which sources are anecdotal evidence, and which thoughts are my own speculations and fancies.  I probably wouldn't bother with this entry except that I'm selling a product, and it's important to know what claims that product can make.  Specifically:  Ear Training Companion v2.1 will definitely improve your ability to recognize and recall musical tones, and it will improve your ability to perceive and produce musical sound.  I can say that this much has been proven.  Will it "give you perfect pitch"?  I can't promise that.  At this point, no product can promise that, despite what the advertising seems to tell you.

I've detailed my own experiences with Chordsweeper, and it seems to be fulfilling its theoretical goals.  My experience with Interval Loader shows it to be more powerful than I'd anticipated as a training device for harmonic relative pitch-- apparently just as effective as other products out there which cost $100 or more.

The fabled "third component" of ETCv3, which is currently vaporware (I'm hoping to begin its design once the semester's over), will be based on Gibson's principles and will, theoretically, be the proper foundation for learning true perfect pitch.  In the meantime, ETCv2.1 is still a worthwhile tool (saves you about $100 and the irritating bother of finding a partner who'll practice with you every day), the two games are fun and seem to be effective, and upgrades will continue to be free as long as I continue to write 'em.

As the Randi article suggests, it's important simply to realize that anecdotal evidence represents unique experiences.  Anecdotal evidence is a suggestion of a probable result.  Each person who writes to me about their own experience helps me-- and, by extension, you-- to understand likely causes and effects as I continue my work.

*as an adult.  Taneda's method for teaching children seems bewilderingly effective-- bewildering because I'm utterly baffled why this method has not been more widely disseminated.  It is so simple and clear, without a whiff of quackery, with no double-talk whatsoever, that I don't see how it couldn't work.

March 23 - I coulda had a P8

I have a challenge for you.  It's one I've mentioned before, but it's relevant again today.  First, whisper a vowel.  By "whisper" I don't just mean "talk quietly"; I mean breathe the air past your throat, without using your vocal cords to create vibration.  Notice the pitch created by the aspirated vowel.  Now try to whisper the same vowel at a different pitch.  You won't be able to do it-- not without changing the vowel.  This is because the fundamental frequency of your voice and the "formant" frequencies of the vowel are separate events; the vowel's frequencies remain the same no matter what pitch your vocal cords may create.  If you change the pitch at which you're whispering, you're changing the formants, thus changing the vowel.
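
[If you'd like a rough, objective look at what your whisper is doing, here's a minimal sketch, assuming a short mono recording of the whisper saved to a WAV file (the filename is a placeholder) and NumPy/SciPy on hand.  It simply reports the strongest spectral peak, which for a whisper comes from the vowel's formants rather than from the vocal cords:]

```python
# Rough first look at a whispered vowel: find the strongest spectral component.
# My own illustration, not part of ETC; the filename is hypothetical.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("whispered_ah.wav")  # short mono whisper
samples = samples.astype(float)
samples -= samples.mean()                         # remove any DC offset

spectrum = np.abs(np.fft.rfft(samples))           # magnitude spectrum
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

# ignore low-frequency rumble and very high hiss, then take the peak
band = (freqs > 200) & (freqs < 4000)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"strongest component near {peak:.0f} Hz")
```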

Philip, recognizing this constancy, has written with an elegant observation:

I pitched my own English vowels yesterday: whispered them one by one, produced a tone for each based on the whisper, and they did add up to a full chromatic scale. I was able to use them with amazing results: I can now "fake" the aural recall of people with perfect pitch by singing any tone (within that octave) correctly at will. Did it all day yesterday and much of today. That's like wow!

This goes right along with what Jon said about feeling the pitches in different parts of his mouth.  I had thought that what Jon was experiencing was a kind of optimal resonance, where the fundamental pitch was most harmonious with its formants; Philip's report suggests that that's true, and adds another level-- we're our own natural pitch pipe, and we didn't even know it.  Now, it still isn't the easiest thing in the world to recognize what pitch you're whispering at; we're so accustomed to hearing a vowel sound that it's difficult to even hear the pitch sound, much less match it to the sound of an instrument.  I also wonder how wide a range we speak our own vowels within.  Philip followed up with additional experimentation:

I woke up early today, started experimenting again with vowels and they were all WAY off -- that was the first impression. My throat felt kind of coarse (morning), and both the whispers and the pitches were coming out way low. I was very disappointed at first, but soon figured out the problem. The thing is, focusing on vowels must have suspended my sensation of relative pitch so thoroughly that it took me a while to notice that they were exactly a whole octave lower! At first I thought I was singing "wrong notes," but I was in fact singing the correct tones between the G of the great octave and the F# of the small.

This blew my mind. I instantly found a way to correct myself. It turns out there are different kinds of whispering! OK, whisper something in your normal whisper, and try to whisper loudly (in a kind of emphatic whisper you would want others to overhear). That's how I was whispering yesterday and the day before. And now whisper something very softly, and try whispering in a low voice. I think the low voice that will come will not be arbitrarily pitched. To preserve the vowels, it will have to fall an octave lower than your normal voice. Then I started whispering in a cartoonish chipmunk voice and got all the tones between the G of the first and the F# of the second octave spot on! So I can now fake aural recall in three octaves and am beginning to experiment with taking it beyond. (Of course, identifying pitches by hearing is a whole different matter).

Philip adds one concern:

What puzzles me, though, is that my vowel pitches are a full fourth lower than yours: my "board" is the G of the small octave. You say somewhere in your research that the formant overtones of the vowels do vary from person to person, but within half a tone (unless I misunderstood). Am I doing something wrong? I have noticed that in your latest release of ETC the pitch to vowel key is all hard-labelled with your own vowels for the different tones. What am I supposed to do with mine?

The vowel formants vary considerably more than that.  If you look at the formant chart, you'll see that the "i" (that's ee-as-in-beet, to English speakers) can be recognized anywhere between (about) 2000 and 3700 Hz.  But this range represents a selection of different speakers; any given speaker will usually place each of their vowels in roughly the same spot every time.  The vowels you create with your voice have different pitches than the musical-note mapping from ETC simply because your vowels fall into a different place on the chart.  Philip was able to achieve different octaves by speaking the same vowels with different intensities-- which seems very similar to what I think when I sing an octave (the same note, but "brighter").  Perhaps they are the same frequencies, but the emphasis shifts from one overtone to another?  Hard to say without testing; but I can whisper "ah" now, both quietly and loudly, and hear exactly what Philip is talking about.
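
[And if you want to see where your own "board" sits, here's a small sketch for converting any measured frequency to its nearest equal-tempered note, taking A4 = 440 Hz.  The sample frequencies at the bottom are placeholders, not anyone's actual formant measurements:]

```python
# Convert a frequency in Hz to the nearest equal-tempered note name.
# Uses the standard MIDI relation: note = 69 + 12 * log2(f / 440).
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz: float) -> str:
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1          # MIDI 60 is middle C (C4)
    return f"{name}{octave}"

for f in (196.0, 247.0, 2600.0):     # placeholder measurements, in Hz
    print(f"{f:7.1f} Hz  ->  {nearest_note(f)}")
```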

Vowel inconsistency between different speakers gives me some pause, and the special attention required to create these associations might make you balk, but the consistency with which any one of us will speak the same vowel makes me think that Philip has definitely hit on something anybody could do-- and, furthermore, makes me wonder if the effect couldn't be leveraged as a kind of check-your-answer for note identification.  It still might be highly susceptible to relative effects, and you surely need to be precise in speaking your vowels, but the strategy seems to have its potential use, and I'm pleased to know of it.

[In other news, I've made it to Level 8, Round 3 of Chordsweeper-- what fun to now have the accidental chords included!  I'm intrigued that I almost never mistake any of the I-chords, but I very frequently confuse all the different IV-chords with each other, and I often mix up the V-chords in B, C, and D (dog, bunny, and monkey) but not usually the other keys.  I don't know exactly what that might mean, but there it is.]

March 28 - Like it or clump it

Playing Chordsweeper is an intriguing experience.  I made it to Round 4 of Level 8 (after about twenty-five games of Round 3), and now the animals are all evenly weighted again, so I can't rely on the new sounds to get me through.  Although there are a number of effects which the game seems to have caused in my brain, the one I'm noticing today is point-and-feel.  Sometimes, when a chord sounds, my mind doesn't immediately make an identification.  When that happens, one of two things occurs.  Most often, I begin considering the sound analytically-- "hm, that sounds sort of harsh, so it's probably a monkey," or "Let's see, that sounds like Star Wars, but it's a V-chord, so it's the dragon and not the spaniel."  Other times, though, my thoughts remain blank as my eyes scan the tiles; and when my eyes happen to pass over the animal which matches the sound, some switch closes in my mind and I know that it's the one.  There's no thought involved, and not even any "searching".  Often, I feel as though I'm being drawn to the proper tile by a magnet; if, after finding the target tile this way, I try pointing at a different tile, my hand invariably pulls back to the first tile, because it just feels more comfortable.  And when this happens, on those occasions when it does happen, it's never wrong.  Freaky, but cool.

A couple of days ago I was singing slow and sustained scales.  My mind wandered, and I inadvertently went to a higher note than I'd intended, and mentally prepared to switch direction.  Curiously, the note I was singing then seemed to alter its own sound, jockeying itself to be higher or lower than the next note, even though I had made no physical change in my vocal production.  This event reminded me of what I've been distracted from writing about lately-- our various perceptions of a musical tone, and what we generally mistake for pitch.  I think I've been stalling because I know it's mostly going to be summarizing what's come before, but a revisitation does seem appropriate (and necessary).  Here's a list of some perceptual effects that I can think of off the top of my head:

Pitch
Scale degree
Distance
Expectation
Grouping
Comparison
Direction
Height
Fusion
Motion
Contour
Closure
Sequence

This is by no means an exhaustive list, but it's a start.  And, starting at the top of the list:  the premise of my exploration is that "pitch" is, as a single sound frequency, like a thread embedded in the middle of any musical tone.  When a musical tone is sounded, various effects of context, harmony, attention, and expectation clump themselves around the thread-- sometimes on one side, sometimes another, sometimes equally around-- and, in doing so, lay claim to the identity of that tone.  Consequently, any given musical tone has multiple representations, both dynamic and static, all of which are typically imprecisely identified as "the pitch".

April 4 - Brought to the four

I've been paying more attention to the Taneda method over the past couple of weeks.  I've been examining the process more carefully, relating it to my own research-- and, lately, comparing it to the Suzuki method.

Taneda's philosophy and method are directly supported by Gibson's book.  Taneda's teaching philosophy is that the student should never be corrected, but only praised.  Of course, this is an instruction mainly to the parents who will want their child to do well, and will try to urge the youngster towards correct answers; although they may mean well, all they're doing is telling the child "you've failed."  These constant negative messages will gradually cause the child to think of the training process as a negative experience, and they'll reject it.  What Gibson's book would add is the fact that, in perceptual learning, the mind is self-correcting.  She provides example after example of researchers who found out that people got better at perceptual tasks merely by repeated practice; and in most cases, giving feedback of any kind, positive or negative, generated worse results.  Taneda's instruction to only praise the child is sensible because, since neither positive nor negative feedback actually affects the learning process, the sole purpose of feedback is to strengthen the teacher/student relationship and to motivate the child to continue the training.

I've been frustrated trying to find the support for his method, which I know I've read but can't seem to track down again.  It's an experiment in which the researchers tested three groups of people:  small children (age 3-5), older children (age 9-12), and adults.  They showed each subject a series of target shapes, each with a unique and distinct contour, and asked the subjects to identify new shapes as same or different.  After the testing trials, the experimenters asked, what color were the shapes?  Only the youngest children could answer this accurately.  The minds of the older subjects had been too efficient, remembering only the contour information that was necessary to recognize the shapes.  The minds of the younger subjects, not knowing what wasn't important, instinctively absorbed everything.  Taneda's method takes advantage of this tendency of the child's brain-- and this is why his method won't work for adults.  But I can't find the reference to the experiment!  All I can find is Gibson's more generic statement that as we get older, our minds become more efficient at identifying objects without wasting brainpower on irrelevant information.

[As I thumb through Gibson's book, again, I notice a couple times where I marked one idea with a pencil-- that perceptual learning is achieved through differentiation of features within the same class.  When comparing features of one musical tone to another, I'll have to figure out which ones are maximally distinct within comparable categories.  Do I compare a scale degree to another degree, or to a chordal sound, or to a short melody?  I'll have to experiment with that.]

So far, the look I've taken at the Suzuki method isn't deep, but it's enough to show me that I learned it wrongly.  The basic principle of the Suzuki method is listening and imitation; from this, Suzuki intends the child to learn music by the same natural process with which they learn their mother tongue.  Although I have vague memories of my instructor providing my parents with cassettes of the musical pieces, I know that their importance was never impressed on us, and I'm not sure that I ever listened to them.  But at least now I have the answer I needed about why the Suzuki method can and does succeed after age 10.  This age is Taneda's projected drop-off point for those students (like me) who are taught music via kinesthetic memory alone.  If a student is learning the Suzuki method correctly, then their internal representation of the music will not be "rendered unnecessary" by motor memory; they'll hear the music in their head and their fingers will follow.  In this way, the goals of the Suzuki and Taneda methods are similar.

If I'm reading the Suzuki sources correctly, where the two methods seem to differ most broadly is in reading music.  Becoming able to read music fluently is the main point of Taneda's method.  In the Suzuki method it's considered a nice bonus, but not really necessary, as long as the student is able to perform well.  In fact, I remember very clearly from my early piano lessons that I was always urged to get away from looking at the printed music as quickly as possible; I wasn't actually performing the piece well, I was told, until I could perform it entirely from memory.  By contrast, Taneda says that a student should always be looking at the printed music, even when the piece has been fully memorized.

Looking at the "Volume 1" piano books for each method kind of makes the point, too.  The first songs in Suzuki are "Twinkle Twinkle Little Star" and all its variations.  After the first few Suzuki pieces, the child is playing multiple notes and chords with both hands, including melodies and harmonies.  Now, the first piece in Taneda?  B.  Yep.  Just B.  One note.  Takes up a whole page.  B.  And then the second piece is D.  Which takes up its entire page.  These are followed by all the other notes surrounding middle C, one after another, and then by short sequences of only two and three notes-- it's not until the one-hundred-eleventh piece that the student even plays two notes simultaneously.  I only just this moment realized (although I'm sure I should've seen it before!) that this is essentially parallel to a child's reading primer, which begins with the letters alone, proceeds into short three-letter words, and then uses those words in short sentences.

I wonder what percentage of Suzuki students gain absolute pitch?  I've seen and cited the statistics that demonstrate how most people who currently have perfect pitch began their musical training between ages 2-5; but what if you turn that around?  What percentage of people who begin their musical training between ages 2-5-- especially following the Suzuki method as it's intended to be taught (unlike how I learned it)-- will inadvertently get perfect pitch?  It'd have to be a side effect, because Suzuki doesn't deliberately teach it, but I would like to know how often it happens.

April 11 - Particle man

Tonight, as I watched the "random visualization" of the Windows Media Player, I considered how the display was being generated from the frequency analysis of the song it was playing (Ain't Gonna Lie, by Keith, again).  I idly wondered whether raw frequencies might be the simplest way to represent the sound wave, and then I thought again-- because I remembered that any recording is, ultimately, a single vibration, a summation of all the available sounds.  Yes, yes, I know that I'm glossing over a bunch of complex mathematics here, but when it comes down to it, I look at Cool Edit Pro and this is what I see:

Unless I'm grossly misunderstanding the process, each instantaneous point represents a single mathematical value.  It doesn't matter how many sound sources created the noise; it all boils down to one point at a time.  I don't know which is the most fantastic fact-- that the entirety of a noise recording can be summed into a single jagged line, that the frequencies which composed that line can be instantly inferred during playback, or that our mind interprets the combination of individual frequencies as sounding precisely like what created them.  A frequency is a frequency is a frequency; individually, they're entirely indistinct, but put them together in a patterned bunch and suddenly, unmistakably, you have a unique, recognizable voice.
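
[For the programming-minded, here's a minimal sketch of exactly that point: three sine waves summed into one stream of single values-- one point at a time-- from which the component frequencies can still be read back out with a Fourier transform.  The chord and sample rate are arbitrary choices for illustration:]

```python
# Sum three sine waves into a single sample stream, then recover the components.
import numpy as np

rate = 44100
t = np.arange(rate) / rate                     # one second of time points

chord = sum(np.sin(2 * np.pi * f * t)          # roughly C4 + E4 + G4
            for f in (261.63, 329.63, 392.00))
print(chord.shape)                             # (44100,) -- one value per instant

spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), d=1.0 / rate)
peaks = freqs[np.argsort(spectrum)[-3:]]       # the three strongest components
print(sorted(int(round(p)) for p in peaks))    # approximately [262, 330, 392]
```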

If your ears, and your aural channels, aren't physically defective, then the process of hearing (as I understand it) is exactly the same for everyone, from the pinnae down to the auditory cortex:  the sound enters the ear, it gets broken up into frequencies, it arrives at the brain's tonotopic map.  What we hear, or rather, what we think we hear, depends on how our minds choose to put those frequencies back together again.  [I'm not sure if I've mentioned before that I did hear from someone, last year, who said that his mind didn't recombine them completely; when he heard musical sound it seemed "fuzzy" to him, a frenzy of unrelated sounds-- "like an oscilloscope display," he said.]

I realized tonight that I've been making a careless assumption about the mental process-- imagining that when a frequency was recombined, it was literally mixed together with something, and was no longer available for other interpretations.  But that can't be so; Levitin's recent research shows that musical sound activates both sensory and symbolic parts of the brain's interpretive mechanism, even in the ordinary non-perfect-pitch mind.  If the sound were being "used up" by sensory perception-- being literally sent to that side of the brain, like a package through a tube-- then no symbolic understanding could exist.

Perhaps, then, it would be more conceptually accurate to think of the frequency information being projected on the tonotopic map as a kind of Rorschach, and various mental observers pick out and report what they perceive in the "blot".  Practicing ear training increases your quantity of internal observers.  In this scenario, the number of interpretations you can apply to a single sound event is theoretically limitless.  Although this would necessarily be limited by (and is, in reality, defined by) your prior learning, it still implies that the only limitation on what you hear is what you learn to interpret.

And, for increasing our ability to interpret, I was reminded this week that I must not overlook the significance of reading in perfect pitch training.  Remember the research from The Language Instinct, which explained how adult subjects couldn't recognize unfamiliar phonemes in words of the phonemes' original languages, even after thousands of trials, over more than a year?  Well, other research shows that we don't recognize phonemes, period.  We recognize syllables, and infer the phonemes.  The main reason we think we recognize the phonemes is because we know how the words look on paper.  This afternoon, over Easter lunch discussion with family, I found myself explaining that we often don't recognize the phonemes in our own language-- when they're in somebody's name.  "How do you spell that?" happens all the time with names, because they're not ordinary words that we have previously learned by reading.  We hear the syllables, but we don't know what specific sounds the syllables are composed of.  It's been demonstrated that illiterate (or dyslexic) adults can't hear phonemes, so I find myself wondering-- what if the adults in Pinker's study hadn't only listened to the phonemes, but had studied the written language as well?

How, then, to introduce adults to the written language of music?  Unless you're a practicing musician, every note looks pretty much the same as every other note-- an indistinguishable black dot.  And, even if you are a musician, if you look at a note and don't hear its sound in your head (without having to play it on your instrument) you can't read music.  I think I'll have to take Taneda's cue and use colored notation-- not because I've suddenly become a convert to the "chromasthetic association" theory, where the color is supposed to evoke the sound, but because coloration may be the best way to make the printed notes individually distinct without requiring spatial comprehension ("Let's see... is that the third or fourth line?") or attaching labels that are themselves unrelated, interfering sounds ("C", "F-sharp", etc).
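
[To make that concrete, here's a hypothetical sketch of how a program might assign such colors.  The particular color choices are mine, invented for illustration-- they are not Taneda's actual scheme:]

```python
# Map each (diatonic) pitch class to its own display color, so a printed note
# can be told apart by sight alone, without counting staff lines or attaching
# a letter name.  The colors are arbitrary placeholders.
PITCH_CLASS_COLORS = {
    0: "red",       # C
    2: "orange",    # D
    4: "yellow",    # E
    5: "green",     # F
    7: "blue",      # G
    9: "indigo",    # A
    11: "violet",   # B
}

def note_color(midi_note: int) -> str:
    """Return the display color for a MIDI note, or black if none is assigned."""
    return PITCH_CLASS_COLORS.get(midi_note % 12, "black")

print(note_color(60))  # middle C -> red
print(note_color(71))  # B4       -> violet
```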

In other news, Peter wondered about Taneda's method of always praising the child, and never correcting errors.  Sounds like some kind of touchy-feely 1970s pop-psychology nonsense, he said.  I replied that Taneda isn't referring to parental discipline; rather, as I put it to him, "Taneda's exercises are presented as games. Although a game has rules, it's still play. Isn't it annoying when you're playing a game-- any game-- and someone tries to tell you that you're not playing it right? It makes you want to not play any more. Likewise this."  He still wondered, though:

But is it true that "Feedback doesn't affect the actual learning -- at all!"???? When students do their homework we (the students and I) correct their work. We look at the work they did and I explain why an answer is right or wrong. When they understand why an answer is right or wrong they have learned something......or didn't they??  ...[In] my opinion the best way to learn is trial and error. The chance that it will "stick" is the biggest.

I could speak only for perceptual learning, since I'm relying solely on the studies in Gibson's book to support my statement, but I offered this response.

Your thoughts are right on target again, as usual. Trial and error is the thing. I should've been more specific in saying that feedback which is external to the learning system-- such as a parent, computer, or instructor who says "no, that's not right"-- none of that makes any difference. If they try again, and in their new attempt, they get a better answer, they know they made a mistake last time... so why bother to point this out? Let them have the satisfaction of making a right choice, instead of emphasizing their failed trial. Or, to put it another way, the point is to guide the child's exploration of their own trial and error process, without interference (which would damage the child's morale). The job as teacher or parent is to keep the child enthusiastic and encouraged to continue their own trial-and-error process.

I am curious about whether the principle extends to other forms of learning.  Which is more helpful-- marking a big red X and then figuring out what went wrong, or revisiting the process and letting the student surprise themselves with a better answer of their own?  Now that I think about it, this is the process I use in my Oral Interpretation class; I don't bother to say "that's bad", but I actively guide towards a model of what I discern to be "good".  In fact, there were two instances where I did, with the best of intentions, deliberately communicate the message "that's wrong", and in each case it took about two months for the student's confidence to recover.

April 23 - Blowin' your mind

I couldn't wait another moment.  I don't usually trust optical character recognition-- typically the result is so bad that it takes me longer to fix up than it would have to simply retype it-- but I thought I'd take a chance, and I am delighted that the scan was near-flawless on the first go.  So now I'll tell you why Evelyn Copp has me so tremendously pleased.

Recently I found myself explaining to a person (off-line) that absolute-pitch training methods have been exclusively, inexplicably, aimed at adults.  The correlation between early musical training and perfect pitch is well-known; I told this fellow that I have no answer as to why all these methods-- and so many scientific experiments-- waste so much time on the wrong age group.  However, having said that, I remembered that I was not sure of my facts.

I couldn't immediately bring to mind all the methods that Mark Rush compiled in his Ohio State dissertation.  So I couldn't be certain that every method had only ever been aimed at adults.  So I went back to Rush's list.  I flipped through the first few, and nodded grimly at the different failures-- unsurprisingly, they tried to teach adults to memorize sounds.  But the fourth method Rush listed was for children, and I was startled to read this sentence in his description's first paragraph:  "Part of the evidence Copp used to support her ideas was her success in teaching absolute pitch to young children."

Success?  I couldn't believe it.  How could this be?  I suddenly remembered that Rush must have cited this source; one bibliographic reference and one University of Florida search later, I was on my way to the Science Library to pick up the Journal of Heredity, Volume 7.

I picked it up.  I checked it out.  I read it.  I was stunned.

How could this woman have known what she knew, and proven what she proved, in 1916, and still have left us in the same nonsensical morass of genetic-vs-training?  How could her training method possibly have died out without trace or whisper?  It makes no sense at all.  Could it have been the lack of mass media--?  Bad business decisions?  How could such plain and evident truth be so totally obscured?

I'll give you one quote from her article that answers so many questions:  "The value of learning music is not in the number of pieces one may play, but in the musical thoughts one can think."  But don't stop there.  Read her article.  Read it now.  Read it all.  I would gladly challenge anyone to understand the contents of this article and then dare to tell me that absolute pitch isn't worth learning.

I am eagerly looking forward to discovering Copp's method, because I want to integrate her concepts and games with Taneda's.  Between Copp and Taneda I can't see why every child everywhere shouldn't have absolute pitch.

Of course, I'm still going to pursue adult training, by developing the next component(s?) of ETC v3.  In case you hadn't guessed, a big part of the point here is that I want absolute pitch too.

April 27 - Wool you, won't you

Tonight I attended a short series of plays which had been selected as "Teen Playwright Festival" winners.  In one of the shows, the climactic scene took place at a hospital bedside, and I became curious about the heart-monitor sound-effect beeping.  I couldn't tell what pitch it was, but figured that with my Interval Loader practice, surely I could identify the scale degree!  A few moments later I had satisfied myself that it was a major sixth.

However, almost as soon as I made that decision, it occurred to me-- the beeping was a constant, isolated tone.  There was no tonic, and thus no scale, for the pitch to be a degree of.  I reconsidered the beep, and I was able to adjust my perception of it to make it a third, a fourth, a seventh, an octave; but these proved difficult, and without my active attention the pitch quickly and firmly drew itself back into the major-sixth sound.

Scale degrees, as I've mentioned before, have specific and (seemingly) universal emotional effects.  I've suggested that an actor could speak in musical intervals, to convey precise emotional messages, such as a major seventh being spoken to achieve an urgent demand ("But Mom!")-- and a major sixth to express a hopeful longing, such as the famous "My Bonnie" melody.  In this case, once I found myself unable to wrestle the beeping sound away from the feeling of a major sixth, I realized that, in the story, the reason we were at the hospital bedside was because the main character was making a forlorn appeal to his abusive (now-comatose) father, wishing hopefully for the love he was never given.  That's major-sixth territory, no question.

I suppose it's possible that the timbre of the beep had an unusually strong thirteenth partial, or that there's some other mundane explanation; but this makes me think of the bus-signal effect from a while ago:  it is definitely possible to hear a musical sound that our mind then conforms to our emotional expectation of that sound.  In the case of the bus signal, it was the rhythmic downbeat which provided the emotional "juice" to a higher pitch; in this case it was the mood of the scene which caused me to interpret the major-sixth feeling in the sound I heard.  I wonder (idly) if the entire scene was somehow in the key signature of that sixth?

I'm a bit too weary to explore this right now, especially since the evidence is so thoroughly inconclusive-- so I'll simply remark that this appears to support my closure theory for vowel-pitch listening, and it adds still another peculiarity to the tonotopic "inkblot".  Our internal reference needn't be a pitch or a key signature; it could be a simple emotional state.

May 1 - Some of my tangerine

Well, now that I've read the informational pamphlet and taken a quick glance at What Is the Fletcher Music Method?, I confess I'm even more impressed than I had been.  The method was endorsed and supported by a host of luminaries, including no less a personage than John Philip Sousa.  The concepts, the philosophy, the pedagogy, the practice, the results-- all of it way ahead of its time.  It's efficient, it's effective, it's engaging, it's enjoyable, for teachers and children alike.  And, even though I would hope that many of the teaching methods that she speaks against are no longer in common use (as one example, it was apparently very popular to verbally abuse your students for making mistakes), it should be evident that the Fletcher Method, just by itself, is a tremendous accomplishment.

So... why did it drop off the face of the earth?  That 1916 article was published in the Journal of Heredity because people thought musicians were born, not made, and that children could not be taught music if they didn't already have the talent.  Copp proved otherwise as early as 1899-- but then how did Suzuki, who didn't even get started until years after Copp had passed away, assume the mantle of the revolutionary and achieve worldwide fame by making exactly the claim she had already proven?

Bad business.

The death sentence of the Fletcher Method is stamped all over her writing.  Call it professional suicide, if you like, but here's the practice that did her in (quoting from the "Important" section of the informational pamphlet):

The materials, invented by Mrs. Fletcher-Copp, are thoroughly protected by patents, and these, with her certificate of authorization to teach her system, can not be obtained by correspondence any more than a university degree could be obtained in that way. Nor is any person authorized to give the Normal course of instruction but the originator.

In other words, the only way to become a Fletcher Method instructor was to travel to Boston and study with Mrs. Copp herself-- and remember, this was before the airplane was even invented.  In her writings, Copp repeatedly reminds the reader that she refuses to teach "by correspondence."  Furthermore, she explains that the book What is the Fletcher Music Method? is not intended to help anyone to teach the method, but was written merely to explain the method-- because people kept asking!  The informational pamphlet is not a pitch to the parent, but to the instructor, trying to get them to come take the Fletcher Method training in Boston; note how the teacher is told how many more pupils they'll have, and how much more money they'll earn.  The business model of the Fletcher Music Method is to get revenue from teachers who come to learn the Method from Mrs. Copp, and from no one else.  It's pretty difficult to expand the reach of your business internationally, to the hundreds of thousands of instructors worldwide who could be teaching your method, if you won't even let them buy your materials unless they come train with you personally.  Plus, it overlooks the potential of parents and schools who want to institute the program but don't have a single spokesperson to become the Official Fletcher Method Instructor.  Certainly, that's a way to maintain quality control, but how could any business survive such a bottleneck?  Answer:  it can't.  I studied at least three such typical cases when I was at UCI (Mrs. Fields' Cookies being one of them.  As I recall, the entire chain almost went down in flames from growing too big, because Mrs. Fields refused to delegate; but the franchisees forced her to give up control, and as a result the company survived).

And, besides, well... you might have noticed that Mrs. Copp died in 1944.  That couldn't have helped.

Suzuki, by contrast, did it right.  Although there are teacher training classes and workshops for those who wish to become certified Suzuki instructors, anybody can buy the materials and begin training the Suzuki method.  Does this mean that quality control will suffer, and some teachers will teach it wrongly?  You bet!  I've become increasingly convinced that my "Suzuki" piano instructor was one of the uncertified.  I've mentioned that neither I nor my parents understood that the point of the Suzuki training was listen, listen, listen; what I didn't know until yesterday is that the instructor lent the tapes to my parents, "to get Chris started," and then took them back again.  I can't imagine that any trained, certified Suzuki instructor would deprive their student of the single most necessary tool for their successful education.  And yet, despite its failings, I did learn something from my piano education.  I gave it up at age 10, just like Taneda would've predicted, because the method was faulty, but would it have been better not to have been trained at all?  And, more important to the success of the business, my instructor was a Suzuki instructor.  Whether or not she taught the method well, or taught it correctly, she was a living proponent of its messages and philosophy-- and its catalog of instructional material.

I'm still looking forward to receiving The Fletcher Music Method, and I'm trying to find a copy of The Education of the Children In Music, which Fletcher-Copp seems to have published in 1899, and which I can't seem to find in any available library database or antiquarian bookshop.  If anyone happens across it please let me know.  I suspect the latter book will be less important, though; it's The Fletcher Music Method which I hope will be the textbook I'm looking for-- so I can learn step-by-step how to train children using the Fletcher method, in addition to knowing the various activities and why they work (that's the function of What is the Fletcher Music Method?).  It certainly seems entirely possible that, with her insistence on not teaching "by correspondence", Copp would've avoided writing any kind of textbook that could leak out and fall into the hands of someone who wanted to teach her method without her personal stamp of approval (yet another reason for the method's ultimate disappearance)... although I see that I could piece it together from what she's already written in What Is...? I'm hoping that she wasn't so completely short-sighted as that.

Regardless, I'm thrilled about the Fletcher work-- and not solely because it failed due to business, not quality (Betamax, anyone?).  Nope, I'm delighted because not only are Copp's methods totally compatible with Taneda's, in every significant philosophical respect, but they are meant to be implemented at age 5 or older-- which is precisely when Taneda leaves off.  Taneda takes the child between 2-5, and then essentially says "okay, you're on your own, kid," and I had been wondering what to do after that, to avoid the usual handicaps that a child with perfect pitch could experience.  Well, with the Copp method, the child's musical education is covered well into the early teens, including transposition, modulation, and all kinds of other groovy stuff.  There's even a section in What Is...? where Copp explicitly criticizes other musical instructors for not knowing how to help a student with "positive pitch" (her term for perfect pitch) take full advantage of their ability.

I'm going to enjoy transforming the Copp work into a direct continuation of the Taneda method.  It's exactly what I was looking for.  I discovered that I wasn't wrong in identifying the most significant drawback of the Suzuki method, even when it's taught correctly; I recently encountered an aural-skills instructor here at the music school who assured me, "when I get a Suzuki student, they can't read music at all!"  Not to say that Suzuki isn't an excellent program with many truths and benefits to the musician; but it isn't perfect.  The field is still open.

May 2 - Variations on a pheme

Tonight I had an interesting conversation with Pam, who's about to enter a doctoral program in speech and communication.  We talked specifically about music as language, and in our direct comparison of their structures I found out that I may have made a significant conceptual oversight.  I'd thought that a chord was directly parallel to a syllable-- and, if you define a syllable as a collection of phonemes, you can see what I was after.  Tonight, however, it was pointed out to me that a syllable isn't a linguistic unit, but a rhythmic division. "In none of my classes," she said, "did we ever use the word 'syllable'."  Syllable is a function of prosody, she told me, and the word you're looking for is morpheme-- the smallest division of language that still carries some kind of meaning.

Of course, as soon as she pointed this out, it seemed appallingly obvious.  A chord can last for more than a single beat; a syllable can't, because a syllable is a beat.  Furthermore, you can have multiple distinct structures within a single chordal "idea"-- both hands playing two different things at once, perhaps, or pretty much any arpeggio.  In an arpeggio-- a melodic sequence of notes-- any note taken individually isn't meaningful, but the sequence and structure of the combinations becomes so.  This does seem morphemic rather than syllabic.

I'm not just flogging semantics, here.  I received this note today from Gerald (edited for brevity), which makes me wonder why I always seem to receive reader mail which relates to precisely the topics I've been thinking about.

Most people can "remember" a large number of songs, i.e. melodies, whether or not they have any particular interest in music... I am not at all even vaguely aware of the intervals in such familiar melodies, and in fact until working with a few ear training programs was rather poor at naming random intervals (an exercise I find of little or no musical value.) Why is this? What makes a series of melodic intervals in a song one has not heard for dozens of years so easily accessible, but a single interval heard a few minutes ago much harder to recognize or identify without deliberate training? What is going on here that is different from what happens when learning relative pitch?  ...I suppose one factor is that pitches in a song imply some kind of coherent harmony, but I suspect it is much more complicated than that.

I'm sure the entire process is terrifically complicated, and I know I have a few articles on my shelf about pitch and music memory (thanks to Diana Deutsch) and others about the mental grouping of various musical structures (from Thinking In Sound and The Psychology of Music, as well as a few others).  However, I think perhaps the specific question about melodies versus intervals can be answered, theoretically, by morphemes and contour.

Contour is the more obvious factor-- it's been shown that ordinarily we perceive music as an "up-and-down ride" (to use Mathieu's phrase), and as such, it doesn't always matter how far we go up or down, as long as the direction is correct and the pitch thus landed on fulfills its harmonic function.  Because harmony need not be perceived as distance, the specific interval leaps become irrelevant to the overall "shape" of the sound.  Just go up and down, and don't go "off-key", and you're singing the melody.  That would make it possible to remember and reproduce a melody without being aware of the intervals.
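
If it helps, here's a minimal sketch of what it means to keep only the "up-and-down ride"-- my own illustration, with made-up MIDI note numbers, not anything from the research I've cited.  Two versions of the same tune with different leaps reduce to the same contour.

```python
# A toy reduction of a melody to its contour: keep only the direction of each
# step, ignoring the exact interval sizes.  Melodies are MIDI note numbers.

def contour(melody):
    """Return '+', '-', or '=' for each successive pair of notes."""
    symbols = []
    for prev, nxt in zip(melody, melody[1:]):
        if nxt > prev:
            symbols.append("+")
        elif nxt < prev:
            symbols.append("-")
        else:
            symbols.append("=")
    return "".join(symbols)

# One version with the "correct" intervals, one with different leaps that
# keep the directions.  The contour is identical even though the intervals
# differ-- which is why a casual singer can get away with it.
exact  = [60, 64, 62, 65, 64]   # C E D F E
sloppy = [60, 65, 62, 67, 64]   # C F D G E

print(contour(exact))    # "+-+-"
print(contour(sloppy))   # "+-+-"
```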

But this only answers half the question.  What is actually different between this and "learning relative pitch"?  I'll assume that "learning relative pitch" is as Gerald means it-- learning to identify random intervals.  Why would it be so easy to remember a melody and not an interval?  I think we can find the solution by understanding the morpheme.  A morpheme is a linguistic idea.  As demonstrated on that morpheme web page, this idea can have more than one syllable, such as the word "lady".  If you break this up into "la" and "dy", neither pair of phonemes generates any kind of meaning at all; they become merely random, arbitrary, self-referential sounds.  Similarly, a melody is a musical idea.  When you pull out the individual intervals, suddenly they lose their connection with the musical idea (or any human idea, for that matter) and thus lose their psychological significance.  Taking it a little further, the web page points out that you can have two morphemes in a single syllable, like "dogs", where the "s" is itself a morpheme acting as a pluralizer.  Just by itself, "s" isn't an idea; it requires context to gain meaning.  Why do you suppose it's such an effective tactic to remember intervals by associating them with some part of a melody?  It is possible to learn to recognize any disassociated sound, but it's easier when the sound actually means something-- and also, of course, this example shows how two or more distinct ideas can appear in a single rhythmic beat.

Additionally, the distinction between syllable and morpheme helps me to understand why Taneda and Copp both insist that rhythmic training is critical, absolutely essential, for success in musical education.  "What light through yonder window breaks", spoken in its natural rhythm, makes sense; the same words with the stresses misplaced do not.  Rhythmic training isn't merely necessary for "keeping time"; rhythmic interpretation can completely change the musical idea.

Update 5/3:

I'm being imprecise in saying that, to sing a song without interval awareness, you avoid going "off-key".  Gerald reminded me that the phrase "on-key" implies nothing more than choosing notes from the correct scale.  I mean, more specifically, that the person singing doesn't have to be aware of the relationship between the previous note and the next one; they just have to ensure that the current note fulfills its harmonic function within the melody-- that is, that it "sounds right".  To a very large extent, this is what most of us do anyway, I'd imagine.

May 4 - On Copp of the world

The Fletcher Music Method arrived today.  I'm relieved that it is the textbook I hoped it would be, with complete explanations of what to do and how to do it-- however, structurally, the thing is a disaster.  If only she had simply expanded the chapters in her other book it would've been much easier to follow.  As it is, it's clearly intended to be used as an occasional reference by the person who's already been personally trained by Mrs. Fletcher-Copp; such a reader would know exactly what they were looking for, and wouldn't be bothered by having to flip around for it.  I'm going to have to pull this apart and put it back together again so I can make sense of it for you.

In the meantime, the philosophical treatise (a manifesto, perhaps), What Is the Fletcher Music Method?, is now available, and I think it's a fun read.  I decided to go a slightly different route with this publication; I retyped it and reformatted it and made it available in e-book form.  This book will give you some worthwhile insights into musical education which I believe are still unusual even today.  I know that I recognize my own frustrations (from my childhood lessons) in some of the teaching methods she criticizes.  It won't train you to become a Fletcher Method instructor-- but if that is ultimately what you want to do, this is a good primer to find out if you like what she's saying.

It's interesting, too, to see how she prevented her own work from being widely successful; the book is littered with warnings against attempting to teach it without her stamp of approval.  She repeatedly refers to the patents on her materials, and cautions the would-be "infringer".  She even tells a "parable" which says, in essence, that she'd been approached by various school systems who wanted to adopt her method, and she turned them down cold because they wanted the materials and the instructions without her personal involvement-- or they refused to fire their existing music teachers and replace them with Fletcher-certified-and-approved instructors.  In her parable, contrary to the proverb, Copp insists that half a loaf is not better than none, and takes an all-or-nothing stance; and, as history has proven, the ultimate result of that attitude was, indeed, nothing.  That's too bad; if she had been smarter maybe you and I would have learned perfect pitch in school.

Anyway, I think you'll enjoy the book; and, if you've appreciated my work on this site, this is an inexpensive way to show your support.  In any case, thanks for reading!

May 8 - The long and winding road

I have a lot of work to do.

No, really.

I've taken a second look through The Fletcher Music Method.  In my initial assessment of it, I judged that it was written for the teacher who had already trained with Mrs. Fletcher-Copp; this time through, I discovered that she also assumes the reader will have all her toys and will have received instruction in how to use them.  So she doesn't bother to describe them, illustrate them, or explain them in any detail.

This isn't a big problem, for the most part.  Since form follows function, I can infer most of the toys' designs from the games that use them.  Others are not so easy to figure; their purposes and uses are explained, but they will undoubtedly need some field-testing to see if I have understood them correctly.  For example, "Interval Sticks" may or may not be wooden blocks held between a child's fingers to give them the feel for each interval's distance on the keyboard.

There is one item which is going to need some serious puzzling over, and that's the modulating pegboard (a photograph of which can be seen in Musical Ability).  In the book, she explains what you're supposed to do with the board; she just doesn't tell you how to do it.  I'm hoping that this is because of another assumption:  that the instructor will be familiar with music theory.  If that's so, then it should be possible to divine the pegboard's function with the help of a more experienced musician.  If not... well, let's hope for the best.  It's such a critical toy that I hate to think it couldn't be understood.

Regardless of the modulating board, though, her method has at least ten different toys and as many as twenty-five different games for each of the toys, plus plenty of games which do not require the toys.  And, in this book, they are not presented in a progressive order, but grouped by subject; and even then, she keeps referring back and forth to ideas in other chapters.  So there's a lot of re-organizing and reverse engineering I need to do just to make sense of what's here.  Then there's the matter of arranging the games and materials into a meaningful curriculum that can be followed over the course of a child's education from ages five through fifteen.  Still, the games are so clever, it should be entertaining just to put it together.  Here's one example which uses the break-apart keyboard for an absolute-pitch ear-training game.

2. Crying Children.  Let each child remove a key. The key is her baby and the teacher lets her hear how it sounds on the piano. When each child knows the voice of her baby and how it looks on the keyboard and staff, the teacher goes to the piano and plays these different keys, and as each child recognizes the voice of her note she takes it out of the keyboard, and if it sounds again puts it back, and so on, pretending that the keyboard is its cradle, and that it cries to come out or to go in. The boys pretend that when their notes are sounded their little chums are whistling outside the window for them, etc.

What fun this would be for a group of students, each of them waiting for "my note" so they can take their turn!  And this is just one of dozens of games.  This is good stuff.  It's going to be quite some work to resurrect this method, but it should be worth it.

I suppose I'll probably do most of that off-line.  If What Is the Fletcher Music Method? is any indication, her methods should be equally effective for adults and children; since the method is designed for older children, I'm optimistic that they will be.  All the same, I'll probably return to cognitive research (and thoughts on adult learning) here on-line.

As an aside, I'm now up to Level 10, round 4 in Chordsweeper.  I had made it to level 9 previously, but needed to start over again at Level 1 because I resized the Round 2 grid to be 7x7, to make it more difficult.  Now I'm on Level 10 with the walruses, seals, and otters.  I'm looking forward to reaching Level 13 because that's when it breaks out into multiple octaves!

May 9 - Chapels and door hinges

Let's see if I can summarize this business about differentiation.

An "object" is a thing which has multiple characteristics.  A single characteristic alone cannot define an object ("five feet long").  Objects become recognizable when their characteristics have consistent relationships to each other ("blue square").  Any relationship between characteristics is a feature ("long nose").  The more distinctive the feature, the easier it is to recognize the object ("the red-haired guy in our class").  Plus, the more features we identify, the more clearly recognizable an object becomes ("the red Impala with the bent license plate and the fuzzy dice on the broken rear-view mirror").

We learn to recognize any object by studying it until we identify its distinctive features.  Our minds do this by comparing it to other objects.  Two screwdrivers may seem identical, but comparing them can show one to be a "Phillips-head" and the other "flat-head".  Thus we learn that the "head shape" feature is critical to the object's identity.  Once we know an object's critical features, our minds define the object by fusing those features into a single concept (for more about "unitization" check out Robert Goldstone's site). 
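
As a toy illustration of that comparison process-- my own sketch, with made-up characteristics, not anything from Gibson or Goldstone-- here are two "screwdrivers" represented as bundles of characteristics, and a function that finds the feature which distinguishes them.

```python
# Finding a "critical feature" by comparison: two objects are represented as
# dictionaries of characteristics, and the characteristics whose values differ
# are the ones that define each object's identity.

def critical_features(obj_a, obj_b):
    """Return the characteristics whose values differ between the two objects."""
    return {key: (obj_a[key], obj_b[key])
            for key in obj_a
            if key in obj_b and obj_a[key] != obj_b[key]}

screwdriver_1 = {"length": "six inches", "handle": "plastic", "head": "Phillips"}
screwdriver_2 = {"length": "six inches", "handle": "plastic", "head": "flat"}

print(critical_features(screwdriver_1, screwdriver_2))
# {'head': ('Phillips', 'flat')}  -- head shape turns out to be the critical feature
```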

A musical sound is an object, because it is a thing with multiple characteristics (duration, timbre, pitch, volume, etc). Traditional ear training teaches us to recognize a sound's relationships within music (scale degree, chordal position, harmonic function, etc) so that the distinctive features of each particular musical object become known to us.  The better we know its features (relationships), the more clearly recognizable each musical sound becomes.

A single musical tone ("B-flat") is not a pitch.  It is an object, of which pitch is a single characteristic.

Perfect-pitch training methods for adults have always failed.  This could be because their creators have consistently assumed that absolute pitch is "recognizing or recalling any musical tone," and they attempt to reproduce that ability.  Because a tone is an object, and our minds learn to recognize objects by their features, I would suggest that any method whose goal is to recognize, name, identify, or recall musical tones will fail to teach perfect pitch.  Since single tones are objects, our minds will necessarily learn to recognize each one by discovering the relationships which define it.

Thousands of people, perhaps tens of thousands of people, have learned to recognize and recall any tone quickly, perfectly, and consistently, in isolation, using tone-identification training; what I'd say has probably happened there is that, through repeated practice, the discovered features of each tone object were unitized into an efficient mental model.  Once the same tones are placed into a musical context, their relationships change (or disappear); lacking their essential features, the tones become instantly unrecognizable.

How do you learn to identify a characteristic?   How can your mind separate the characteristic from its relationship to the object?

It seems your mind does this by comparing many objects which all possess the target characteristic. With repeated practice, our minds automatically "extract the invariant" from among the different objects. Although I normally shy away from analogies, I can say that this process would be sort of like looking at three dozen shirts each of completely different style, color, and size, and being told to figure out what makes them all the same (besides being a shirt). If you studied the shirts long enough, eventually you would realize that whatever their differences, every shirt has a little alligator sewn onto it somewhere. No one would have to tell you what to look for, or even how to look; you would just have to be aware that something makes them all the same, and keep studying the various shirts, and in time your mind would automatically find the answer for you.
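
Here's the alligator-shirt idea as a toy sketch-- again my own illustration, with made-up shirts: the "invariant" is simply whatever survives when you intersect the features of every object.

```python
# "Extracting the invariant": given many objects that all differ, find the
# feature(s) every one of them shares.
from functools import reduce

def shared_features(objects):
    """Intersect the (characteristic, value) pairs of all the objects."""
    feature_sets = [set(obj.items()) for obj in objects]
    return dict(reduce(lambda a, b: a & b, feature_sets))

shirts = [
    {"style": "polo",    "color": "red",   "size": "M", "logo": "alligator"},
    {"style": "dress",   "color": "blue",  "size": "L", "logo": "alligator"},
    {"style": "t-shirt", "color": "green", "size": "S", "logo": "alligator"},
]

print(shared_features(shirts))
# {'logo': 'alligator'} -- the one thing that makes them "all the same"
```

No one has to tell the function which characteristic to look for; the comparison itself surfaces the answer, which is the point of the analogy.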

Chordsweeper offers compelling support for this theory. When I first began, on level 1, I was only trying to identify the C-major I, IV, and V chords, and I kept confusing them with each other. However, as I advanced in the game, I was presented with additional musical objects which had the same chord structure, and soon I was able to recognize the chord structures as an invariant characteristic.  In fact, the game represents two layers of the process: by comparing the I-chord objects to each other, I learned the structural characteristic of "chord"; then, comparing the I, IV, and V sounds made each unique.  So now I identify I, IV, or V structure instantly and flawlessly. [If I go back to level 1 I make no mistakes at all.]  This is sort of working for pitch as well, but the comparison between pitch sounds isn't as thoroughly reinforced as the chord-structure characteristic.

But wait, you might wonder-- isn't chordal structure a feature, not a characteristic? That is, doesn't a chordal structure represent a relationship between sounds? Well, it can be that-- but it seems that a feature can be a characteristic as long as its components are separately meaningless. Until the relationship can be broken down into meaningful pieces, the relationship between those pieces cannot be understood separately from the pieces themselves.  And, from the other direction, meaningful features can become unitized into a single characteristic (like "Impala").

There appear to be two primary steps towards perceptual learning through differentiation:  extracting the invariant characteristic, then comparing it to make it distinct.  You can add a third part, too, which is its association with a visual symbol (in Chordsweeper, the I-chords are yellow animals, the IV are blue, and the V brown).  Now that I've framed it this way, I'm re-evaluating how the "ö is for öffnen" thing factors into it.  When I was learning the new phoneme, this word-association strategy worked because if the vowel was wrong (or right), the word would sound wrong (or right). This seems identical to the success that some people have reported in recognizing pitches by associating melodies with those pitches-- but now that I think about it, this isn't how they recognize the pitch, any more than that was how I recognized the phoneme. No; that was how I successfully produced the phoneme, both in my mind and in my mouth. So even though C-is-for-Cat will be an important step, at some point, it will be a step towards production. I don't think it will be necessary for perception.

Addendum 5/10

Here's a little additional food for thought, from a person who has absolute pitch:

Hello Chris,

I agree completely that for people with absolute pitch a note is an object, and not a position along a continuum. For me a note has a pitch, a timbre, a loudness, a duration, a spatial position, and many different associations-- in short, it is a thing. I believe this is why, for example, absolute pitch is often tied to timbre, and why some notes are better recognized than others by some people (for example, white keys are recognized better than black keys by some pianists)-- and I believe that this also fits well with the hypothesis that, for tone language speakers, a word is a bundle of features which includes its tone-- so that speakers of one tone language have to learn the tones of a different tone language, in the same way as intonation language speakers (as well as tone language speakers) have to learn the vowels and consonants of different languages.

Cheers,
Diana Deutsch

In other news, I'm up to Level 11, Round 4 in Chordsweeper.  Looking at the board, full of little animal heads, is dizzying; 11 levels means 33 different chords!  I'm going to have to write an entry soon about what happens in my mind while I'm playing the game, because it's peculiar and hard to describe.

May 15 - Kenya dig it?

When I was at Bowling Green State University, in a production of Spoon River Anthology, our music director Tom Scott informed us, "Practice doesn't make perfect.  Practice makes permanent."

Tom's point was clear-- when you repeat something, you reinforce it.  Both good and bad habits become more ingrained.  Indeed, I've heard that a definition of insanity is "doing the same thing over and over again and expecting a different result."  But if you read the literature (including Gibson's book), you see undeniable scientific evidence that a person's ability to accomplish a task does improve with practice.  How can your ability improve if you're just doing the same thing over and over again?  You might be able to do something faster, but why would you expect to be any better (unless faster is better)?

An answer hit me while I was playing a maddeningly addictive* little whack-a-mole game.**  Each time I played the game, the earlier levels became less challenging.  I used to take a deep breath before level 7; now I don't even get worried until level 10.  However, I've been stuck on Level 10 for the past few days, which made me begin to wonder what I could do differently to get past that level.  As I reflected on how I learned to beat the other levels, I suddenly understood what "practice" actually is.  Practice-- good practice-- is never going to be repetition.  Effective practice is discovering how to do something differently.

Many performers will tell you that the surest way to kill a performance is to "over-rehearse"-- too many repetitions and it gets stale.  I've often had scene partners tell me that they want to stop before they lose whatever it is they think they've just captured.  But that decision is based on the misunderstanding that practicing (for anything, not just acting) is repeating and repeating until you "get it right", rather than finding new ways to do it better.

How many people are trying to teach themselves perfect pitch by listening to the same pitches over and over again, expecting something different to happen?  How many people are trying to teach themselves relative pitch by listening to the same intervals repeatedly, guessing the names again and again?  How many people try to teach themselves a second language by using flash cards***, repeating the same nonsense sounds over and over?  Is this practice?  Is this learning?

The value of using games to teach music has become clearer to me.  It's not merely that the games are more fun, and encourage the player to return to their "studies" (although that is important).  Rather, the games are purposeful.  Proceeding in a game requires you to improve your skills.  Drilling questions-and-answers doesn't.  You get them right, you get them wrong, and gradually you might get more of them right than you did before, but no amount of repetition can improve anything except your ability to drill.

This brings me to Interval Loader.  For drilling intervals, Interval Loader does have its theoretical strong points: it uses visual instead of verbal labels, the keyboard shortcuts are the same as piano fingering, the game is more entertaining than traditional drills, and the game levels structure the drills and pace the student.  But underneath the hood, it is a drilling machine, which means that its results will, ultimately, be the same as either Functional Ear Trainer or Bruce Arnold's relative-pitch course.

Maybe this is partly why I've been stuck on Chordsweeper's Level 11, Round 4.  There are now 33 different animal heads (plus a tomato) scattered around a board of 104 tiles.  Since the tiles are placed randomly, and the target chords are played randomly, this many animals means that I have very little choice in my responses.  There's still enough choice that I can attempt to click out certain patterns-- and, interestingly, if I don't do that, but simply try to match the chords as I hear them, I lose interest very quickly.  Hm... indeed, I just realized that the reason I've only completed 17 full games out of my many, many attempts is because I always restart the game the moment it's clear that I no longer have any control over the board.

Question-and-answer drilling is not how you learn something; it's how you reinforce what you've already learned.  It is a learning tool, because you "get better" by this kind of drilling, but the drilling itself isn't what taught you.  Using Interval Loader, I made myself able to instantly recognize any scale degree of the C-major scale-- but did Interval Loader teach me those intervals?  With the significant exception of the tritone, no, it did not!  I learned each of the others by creating a harmonic or melodic association (m3 with "My Heart Belongs to Daddy", m7 with "Home on the Range", etc) and then used Interval Loader to repeat repeat repeat and reinforce what I had taught myself.  Practice makes permanent.  A method for perfect pitch training, if it's going to work for everyone, must teach the student.

*If anyone is aware of any books which explain what makes a game addictive, or what makes a song catchy, please let me know.

** I had already identified a couple of reasons why this game was relevant to my research.  One reason is the mental conflict introduced in Level 4, where the target's visual-verbal label doesn't match its spatial position.  Another reason is that all the targets, good and bad, share the same prominent feature:  a spike on top.  I don't know if the designer explicitly intended it-- maybe he just wanted a unified design concept-- but as any object rises into position, the first thing you see is this spike.  This makes the game much harder, and demonstrates how our minds learn to recognize objects by identifying their features.  The spike is the most obvious feature, and responding reflexively to that feature almost guarantees you're going to hit the bad targets.  The next most obvious feature is color, but if you swing indiscriminately at blue or green you're done for.  You have to really look at green and blue targets, and identify a different feature, before you can respond.  As the game speeds up, that becomes more difficult to do quickly-- and in spite of itself, your mind keeps trying to use the objects' most obvious features (spike and color) to make quick judgments.  You'll respond to the feature before you realize you shouldn't, and by the time you're at level 10, one mistake can (and does) end your game immediately.

***And how do vocabulary words have any significance or relevance unless you're using them to say something?

Addendum:

You may have noticed that this is largely a speculative entry; in such writings, I'm not proving anything, just clarifying thoughts and establishing hypotheses.  Future entries are then spurred by the need to explore and provide support for what I've suggested.  It's also fun in the short term, because making theoretical statements naturally invites challenge, so I get more responses.

Nonetheless-- if my information is incomplete, I should be cautious when mentioning others' work, to avoid misrepresenting it.  Here's Alain's comment about his Functional Ear Trainer.

That's the big difference between Functional Ear Trainer and Bruce Arnold's relative-pitch course. In my program I teach the user how to learn to recognize notes in the context of the key. He should listen how the note resolves to the nearest tonic. First by singing the tones between the random note and the tonic. The next step is to hear them in your mind, and the last step is to recognize the tone immediately.

As a help, there is the button "Listen to correct answer" that plays that resolution.

Bruce Arnold, on the other hand, says you have to try and guess and it will come through practicing over and over. Functional Ear Trainer eliminates the "guessing" part of the game.

May 20 - Efficient chips

According to Gibson's theories, our minds learn to identify features by comparing objects.  For example, this shape alone has a singular identity,

but if you place it into this series,

your mind will quickly see the shared "circle" and the unique "point".  And, subsequently, your mind will be able to separate out the unique feature when it appears as part of a new object.

This is the theory that guided Chordsweeper's design.  I-IV and I-V each have one shared pitch, which I theorized would force the listener's mind into hearing the shared pitch as a separate feature.  Perhaps, I thought, once do and sol had been recognized as separate, mi would automatically pop out from between them in the I-chord.  Plus, do and sol being separated, comparing IV-V could reinforce the fa-la and ti-re combinations as separate features.
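
Just to make those overlaps explicit, here's a quick pitch-class check in Python-- my own illustration, not the game's actual code.

```python
# The shared-pitch idea behind the I-IV / I-V comparisons, as pitch-class sets
# in C major.

I_chord  = {"do", "mi", "sol"}   # C E G
IV_chord = {"fa", "la", "do"}    # F A C
V_chord  = {"sol", "ti", "re"}   # G B D

print(I_chord & IV_chord)    # {'do'}  -- the one tone I and IV share
print(I_chord & V_chord)     # {'sol'} -- the one tone I and V share
print(IV_chord & V_chord)    # set()   -- IV and V share nothing

# Once do and sol have been pulled out as separate features, what's left of
# each chord is exactly the combinations described above:
print(I_chord - {"do", "sol"})   # {'mi'}
print(IV_chord - {"do"})         # {'fa', 'la'}
print(V_chord - {"sol"})         # {'ti', 're'}
```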

After playing the game for a short while, I began to hear a single tone "ring out" from each chord.  Soon I was able to identify a V-chord because of its re-ti feature.  Eventually, gradually, I was able to separate the individual tones from every chord I heard.  I can either play them back in my head after hearing the chord, or sing them, or pick them out of the chord as it dies away.  Unsurprisingly, attending to the separate tones is a different experience from listening to the chordal sound; if I choose to listen to the tones instead of the chord, I don't recognize the chord.  I say "unsurprisingly" because we all know the old trick of spelling out words so that a little child won't know what you're talking about.  If you know how to read, you can put it together; if you don't, then there's no connection between the individual and combined sounds.

As I continued to listen to chords, both on the computer and on the piano, I seemed able to hear and sing back the tones in any major chord; if this is so, it would suggest that the game is developing the broader skills of being able to hear single tones in complex sounds, and of inferring the structure of a major chord.  I do find myself better able to separate pitch sounds out of music-- and out of everyday sounds too (my computer fan, for example, hums a pleasant fourth).

What Chordsweeper is not doing is functioning as an absolute-pitch training tool.  Although it was never intended to be that, I found myself wondering-- especially when I reached Level 11 and was dealing with 11 of the 12 chromatic scale degrees-- how could it not be teaching me absolute pitch?  Wasn't I identifying chords absolutely?  If each chord reinforces the tonic sound of its "key feeling", and I was successfully recognizing these chords, why wouldn't this give me pitch recognition ability?  If you get better at identifying chords absolutely, why wouldn't you expect to gain pitch recognition ability?  I mean, you're doing it!

But, though sometimes I could just sit down and begin matching tiles, other times I felt myself being oriented to the sounds as I began.  I couldn't tell whether the orientation was through some fixed internal tonic, or through my memory being jogged as each chord was re-introduced, or what-- but it was obvious within a few turns that something had clicked into place.  It may be that, when I was able to just jump right in, I had already achieved that orientation (whatever it was) without knowing it.

It also drove me bonkers that I kept making the same mistakes.  I always guessed wrongly for the terrier and the squirrel, which had been introduced in levels one and three, respectively!  I was amazed and baffled that I kept confusing these two so consistently.  In fact, by the time I was at level 9, I recognized these pairs as my mortal enemies:

[images of the confused animal-tile pairs]

What made the terrier-squirrel and parrot-basilisk matches particularly frustrating was their consistency.  If I thought I heard a terrier, and clicked a terrier, it would turn into a squirrel.  But if I thought I heard a terrier, and decided ah ha, you won't fool me THIS time, and clicked a squirrel, it would be a terrier after all.  Same with the parrot-- I kept making this mistake, and it drove me nuts.  Although I did meet a musician a couple of weeks ago who told me that, in ear training, IV chords are easily confused (blue animals are IV), I was not mollified.  By Level 11, Round 4, I should have these down, dang it.

So I began to analyze my mistakes.  The first thing I noticed, which I had not realized before, was that these pairs were all semitone errors!  Dog = C, rodent = B; bird = E, reptile = F.  If I were going to mix up any scale tones, it'd be these.  I found that mildly reassuring, but a semitone error still suggested a lack of absolute judgment.  Why would I make a semitone error?  In listening to the chords, even when "oriented", I never felt as though I was making choices based on a chord being higher or lower than the previous one; if I wasn't aiming for distance, how could I miss by a semitone?  I began to investigate my mind's decision-making process, and discovered the following procedure:

1.  Listen for the absolute quality of the chord.
2.  Choose between I, IV, and V.
3.  Listen again for an absolute quality that matches the new I/IV/V judgment.
4.  Visually scan the field to see if a tile feels right.

Steps 1 and 2 were instant and automatic; by Step 3 I often had the answer.  If I got to Step 4, I'd randomly drag the mouse around the board while I contemplated what I had just heard, asking myself questions like these:

Did I overlook an absolute quality which I can find in my memory?
Does the chord trigger a certain song?  (I have melodic associations for five of the chords.)
Is it one of the new chords, or a more familiar sound?
What relative sound area is it in?

In listing these steps, I realized that my mind was automatically trying to make this process as efficient as possible.  First my mind looked for the one choice which would end the search (the absolute sound).  Failing that, it eliminated 2/3 of the other options in one stroke by identifying the chord structure; then it again sought a single feature which would stop the search-- either from sound memory or by finding the tile that felt right.  Then the questions I began to ask myself were ones which continued to narrow and limit the available choices.  For example, recognizing a chord as new turned 11 choices into 4.  Eventually the choices combined into an answer:  a new chord in the middle sound area that sounds bouncy is the raccoon (where new but sharp is the horse, or bouncy but familiar is the monkey).  So even if I didn't immediately recognize a chord absolutely, I used the available cues to figure it out by an efficient process of elimination.  I continued to make semitone errors-- that's what made it impossible to pass Level 11, Round 4-- not because I was making only "high" or "low" judgments of relative distance, but because the process of elimination would always narrow the search down to two adjacent candidates, and then I'd sometimes pick the wrong one.
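
Here's a rough sketch of that narrowing procedure as code-- my own invention; the animal names and feature values are made up for illustration and are not the game's real assignments.

```python
# A toy "process of elimination": start from all the candidate chords and
# apply each cue in turn, stopping as soon as only one candidate remains.

candidates = {
    "spaniel": {"structure": "I",  "area": "low",    "familiar": True},
    "terrier": {"structure": "I",  "area": "middle", "familiar": True},
    "raccoon": {"structure": "I",  "area": "middle", "familiar": False},
    "horse":   {"structure": "I",  "area": "high",   "familiar": False},
    "monkey":  {"structure": "IV", "area": "middle", "familiar": True},
}

def narrow(candidates, cues):
    """Apply (feature, value) cues one at a time until one candidate remains."""
    remaining = dict(candidates)
    for key, value in cues:
        remaining = {name: feats for name, feats in remaining.items()
                     if feats[key] == value}
        if len(remaining) <= 1:
            break
    return list(remaining)

# "A new chord in the middle sound area" narrows five choices down to one.
print(narrow(candidates, [("structure", "I"),
                          ("familiar", False),
                          ("area", "middle")]))   # ['raccoon']
```

The trouble, of course, is that when two candidates differ only by a semitone and no remaining cue separates them, the procedure bottoms out in a guess-- which is exactly the error pattern described above.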

That's why I confused B/C and E/F more often than anything else:  all of them were categorically familiar chords.  I could more easily distinguish C-C# than B-C, because even though C# was indeed twangier than B or C, C# was more obviously new.  That advantage might not have lasted, though; as I continued to play, I felt as though I was getting worse at identifying the brown bear versus the hippo (F#-G).  Perhaps this was because F# was the first accidental to be introduced, so it gradually became familiar, too.

I think we can trust that this is the way our minds will interpret objects.  Here's a quick example.  I'm going to ask you to find every occurrence of this shape

in the following group of shapes.  You may make the identification very quickly, but try to notice the steps your mind takes to do it.

Unless you are a highly unusual human being (and some of you are, let's admit it), your mind would've eliminated the upper figures because they were all square, and crossed out the other two circles because they had too many points, leaving you with one occurrence of the target shape.  That's simply the most efficient way to have made that decision.  If I had asked you to find the shape instead of find all occurrences of the shape, you might have immediately seen the correct shape and stopped there without any further effort.  Our minds love efficiency!

But this brings me back to the question-- why doesn't Chordsweeper teach absolute pitch?  The drive to efficiency must be why Chordsweeper does instill some absolute perception, even though the game isn't designed to teach it; somehow, our mind recognizes that absolute identification is the quickest way to an answer, and tries to find an absolute sound in each chord.  But the feeling isn't stable, nor is it reliable.  Sometimes it's there, and sometimes it's not, and if I'm not actively paying attention it's definitely not.

By contrast, the I-IV-V judgment is automatic, instant, unconscious, and unfailing.  In teaching these three chord structures, Chordsweeper worked.  Although there may be some inherent mental advantage to a chord being a system of sound relationships (and thus more easily recognized by our relationship-seeking, language-wired brain), I perceive three critical factors in this learning process.  The first is that every new chord gives the opportunity to make the I-IV-V comparison.  The second is that, aside from an absolute judgment, chord structure is the most efficient feature for this task-- and thus the most desirable to know.  The third is that, by presenting the chords in multiple contexts (different key signatures), the game satisfies the condition of same characteristic, different objects.

This is most likely why Chordsweeper doesn't teach absolute pitch.  Even though absolute sensation is the most efficient characteristic, more so than the chord structure, each new chord is different characteristic, different object.  Although the three key-chords do share a similar quality-- I began to learn to identify the brown dog by mentally "fitting" it to the spaniel-- that's not the same as an actual sounded pitch.  Every one of the chords sounds with a different root tone.  There's no opportunity for the mind to pull out pitch as an invariant sensation from these chords because the pitch always varies.

This gives me some essential points for developing the pitch-training module.  That module is still vaporware; I'm disappointed to find that I may not have it finished by the end of the summer as I'd hoped, but in the short term I rewrote Chordsweeper.  ETC v3.08-- e-mail me if you're using 3.07 or earlier and want an upgrade-- eliminates the five accidental key signatures in favor of multiple octaves.  That is, Level 8 doesn't add F#; instead, it takes you back to hearing only C-major, but in multiple octaves.  I don't know how effective this will be in helping absolute pitch sense-- like I said, Chordsweeper was designed as a structural-hearing tool-- but it can't hurt.  The change will share the same pitch characteristics among four different chord objects (one for each octave).  Level 15 then introduces multiple timbres; level 22 is both multiple octaves and timbres.  I haven't decided whether I'll try to re-introduce the accidental keys after that, but I suspect I might do that just because the animal heads are so cute.
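
As a sanity check on the "same characteristic, different objects" condition, here's a small sketch-- my own illustration, assuming the common MIDI convention where C4 = 60-- showing that the I chord built in four different octaves gives four different objects with one invariant set of pitch classes.

```python
# Same characteristic, different objects: the C-major I chord in four octaves
# produces four distinct chord objects whose pitch classes never change.

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def c_major_I(octave):
    """Return the MIDI note numbers of the C-major I chord in the given octave."""
    root = 12 * (octave + 1)            # C4 = MIDI 60 under this convention
    return [root, root + 4, root + 7]   # root, major third, perfect fifth

for octave in range(3, 7):
    chord = c_major_I(octave)
    pitch_classes = [NAMES[n % 12] for n in chord]
    print(octave, chord, pitch_classes)

# Four different objects, one invariant set of pitch classes: ['C', 'E', 'G']
```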

There's one additional factor whose significance I do not yet entirely understand:  what makes it possible for one of the tiles to feel right?  I drag the mouse around randomly because often, when it points at the right one, I simply know it without consciously deciding.  I think there may be at least two things going on-- one could be the need to create a symbolic visual association with an abstract sound (like a phoneme with a grapheme) in order to "fix" the sound's identity.  Also, even though I can't sing back a chord just by looking at an animal head, I suspect there may be some kind of internal production occurring.  When I first reached Level 11, Round 4, looking at the board full of 33 different animal heads actually felt like I was hearing a cacophony of sound.  I don't know precisely what the association is between sound and symbol, but it appears that an unconscious association is indeed being made.

May 22 - Fletch tones

I added Chapter 1 of Fletcher's textbook to the sample page.  She talks a good game.

And here's what she has to say about perfect pitch.  The emphasis is mine.

By absolute pitch, some mean the ability to recognize any note heard or to sing the pitch of any note given. Others go further, and claim that you must be able to immediately recognize the slightest difference in the pitch of the same note on different pianos, in different countries, etc. That this ability will be as great a test of acute memory as of the ear, can be readily seen. Others say that an absolute knowledge of the pitch of a note is not necessary, that the relative is all that is necessary. This is equivalent to saying that I need never know that G is the fourth ledger line above the treble staff if I know that the ledger lines above are A-C-E-G; that is all that is necessary, provided that I get some one to give me the first sound— I can always climb up. Yes, we can all climb the tree if we can get the “boost,” but what shall we do if there is no one there to give us this? Surely, the fact that there is an individuality attached to the sound of each note which makes it necessary to the whole, should make it possible for us to recognize it absolutely, if attention enough were paid to its voice— especially in the beginning. When we think of our ability to recognize the tiniest differences in words, or in the pronunciation of the same word, it seems strange that there should be so many musicians who cannot at all distinguish any one tone of the scale absolutely. The recognizing of a tone relatively is more a mental process than an aural— for much depends upon the memory and upon the mind, to reason from one sound to another.

Continue reading with Phase 10

