Strategies

Music and language

  • Some of the attributes of music are particularly memorable, and can be used to assist learning.
  • Music and language are both important in helping humans form large social groups, and one can argue that they co-evolved on the back of this function*.
  • There is growing evidence that the same brain structures are involved in music and language processing.
  • A rare disorder suggests a genetic link between social skills, language skills, and musical skills.
  • These connections between music and language processing support recent evidence that music training can improve children's language skills.

The role of melody in helping recall

The most obvious connection between language and music is that music can be used to help us remember words. It has been convincingly shown that words are better recalled when they are learned as a song rather than as speech, at least under particular conditions.

Melody is what is important. Rhythm is obviously part of that. We are all aware of the power of rhythm in helping make something memorable. But melody, it seems, has quite a lot of attributes, apart from rhythm, that we can use as cues to help our recall. And what seems to be crucial is the simplicity and predictability of the melody.

But the connection between language and music is much more profound than this.

The evolution of language

One of my favorite books is Robin Dunbar's Grooming, gossip and the evolution of language. In it he moves on from the fact that monkeys and apes are intensely social and that grooming each other is a major social bonding mechanism, to the theory that in humans language (particularly the sort of social language we call gossip) has taken the place of grooming. The size of human social groups, he argues cogently, was able to increase (to our species' benefit) because of the advantages language has over grooming. For example, it's hard to groom more than one individual at a time, but you can talk to several at once.

Language, music, and emotion

I mention this now because he also suggests that both music and language helped humans knit together in social groups, and maybe music was first. We are all familiar with the extraordinary power of music to not only evoke emotion, but also to bind us into a group. Think of your feelings at times of group singing - the singing of the national anthem, singing 'Auld Lang Syne' at New Year's Eve, singing in church, campfire singing, carol singing ... fill in your own experience.

Dunbar also observes that, while skilled oratory has its place of course, language is fairly inadequate at the emotional level - something we all have occasion to notice when we wish to offer comfort and support to those in emotional pain. At times like these, we tend to fall back on the tried and true methods of our forebears - touch.

So, while language is unrivalled in its ability to convey "the facts", there is a point at which it fails. At this point, other facilities need to step in. At an individual level, we have touch, and "body language". At the social level, we have music.

Language and music, then, may well have developed together, not entirely independently. More evidence for this comes from recent neurological studies.

The neural substrates of language and music

Language is a very important and complex function in humans, and unsurprisingly it involves a number of brain regions, the most famous being Broca's area. Recent research into the neurological aspects of music has held some surprises. Imaging studies have revealed that, while the same area (the planum temporale) was active in all subjects listening to music, in non-musicians it was the right planum temporale that was most active, while in musicians the left side dominated. The left planum temporale is thought to control language processing. It has been suggested that musicians process music as a language. This left-brain activity was most pronounced in people who had started musical training at an early age.

Moreover, several studies have now demonstrated significant differences in the distribution of gray matter in the brain between professional musicians trained at an early age and non-musicians. In particular, musicians have an increased volume of gray matter in Broca's area. The extent of this increase appears to depend on the number of years devoted to musical training. There also appears to be a very significant increase in the amount of gray matter in the part of the auditory cortex called Heschl's gyrus (also involved in the categorical perception of speech sounds).

An imaging study1 investigating the neural correlates of music processing found that "unexpected musical events" activated Broca's and Wernicke's areas, the superior temporal sulcus, Heschl's gyrus, both planum polare and planum temporale, as well as the anterior superior insular cortices. The important thing about this is that, while some of these regions were already known to be involved in music processing, the cortical network comprising all these structures had until now been thought to be domain-specific for language processing.

People are sensitive to acoustic cues used to distinguish both different musicians and different speakers

Another study2 has found that people remember music in the same way that they remember speech. Both musicians and non-musicians were equally accurate in distinguishing changes in musical sequences when those changes were in the length and loudness of certain tones. This discrimination also appeared to be within the capabilities of ten-month-old babies, suggesting that the facility is built into us and does not require training.

These acoustic characteristics are what make two musicians sound different when they are playing the same music, and make two speakers sound different when they are saying the same sentence.

So, if this facility is innate, what do our genes tell us?

Williams syndrome

Williams syndrome is a rare genetic disorder. Those with this syndrome have characteristic facial and physical features, certain cardiovascular problems and mild to moderate mental retardation.

They are also markedly social, and have greater language capabilities than you would expect from their general cognitive ability. They score significantly higher on tests measuring behavior in social situations, including their ability to remember names and faces, eagerness to please others, empathy with others' emotions and tendency to approach strangers.

This connection, between sociability, language skills, and memory for names and faces, is what makes Williams syndrome interesting in this context. And then, of course, there is the final characteristic: an extraordinary connection with music (see http://www.the-scientist.com/yr2001/nov/research_011126.html).

Mozart effect

A Canadian study is now underway to look at whether musical training gives children an edge over non-musical counterparts in verbal and writing skills (as well as perhaps giving the elderly an edge in preserving cognitive function for as long as possible). In view of the factors discussed here, the idea that music training benefits verbal skills is certainly plausible. I discuss this in more detail in my discussion of the much-hyped Mozart effect.


* I'm sorry, I know this is expressed somewhat clumsily. More colloquially, many people would say they co-evolved for this purpose. But functions don't evolve purposively - the eye didn't evolve because one day an organism thought it would be a really good idea to be able to see. We know this, but it is ... oh so much easier ... to talk about evolution as if it were purposeful. Unfortunately, what starts simply as a sloppy shorthand way of saying something becomes how people think of it. I don't want to perpetuate this myself, so, I'm sorry, we have to go with the clumsy.

References: 

  1. Koelsch, S., Gunter, T.C., von Cramon, D.Y., Zysset, S., Lohmann, G. & Friederici, A.D. 2002. Bach Speaks: A Cortical "Language-Network" Serves the Processing of Music. NeuroImage, 17(2), 956-966.
  2. Palmer, C., Jungers, M.K. & Jusczyk, P.W. 2001. Episodic Memory for Musical Prosody. Journal of Memory and Language, 45, 526-545. http://www.eurekalert.org/pub_releases/2002-01/osu-lrn010902.htm
  3. Dunbar, R. 1996. Grooming, gossip, and the evolution of language. Cambridge, Mass.: Harvard University Press.
  4. Wallace, W.T. 1994. Memory for music: effect of melody on recall of text. Journal of Experimental Psychology: Learning, Memory & Cognition, 20, 1471-85.


Singing For Memory

Song is a wonderful way to remember information, although some songs are better than others. Songs that help you remember need to have simple tunes, with a lot of repetition -- although a more complex tune can be used if it is very familiar. Most importantly, the words should be closely tied to the tune, so that the tune provides information about the text, such as line and syllable length. You can read more about this in my article on Music as a mnemonic aid, but here I simply want to mention a few specific songs designed for teaching facts.

I was always impressed by Flanders & Swann’s song describing the First and Second Laws of Thermodynamics, and Tom Lehrer’s song of the Periodic Table.

The Thermodynamics song, I think, is much easier to remember than the Periodic Table, but the latter is an interesting demonstration of how much you can improve memorability simply by setting the information to music.

You can find some more “science songs” at http://www.haverford.edu/physics-astro/songs/links.html (this is actually designed for instruction: you can hear some of the songs, there are associated lesson plans, etc).

Drug Discovery Today also has an article presenting the lyrics of various songs by scientists celebrating various science subjects, which you can read at www.mnstate.edu/malott/Molecular04/SingaSongofScience.pdf (it's in pdf format).

Songs are in fact such a popular means of learning science facts that in the U.S. there is a Science Songwriters' Association!

Songs are also a great way to learn poems or prose texts. Many well-known texts have been put to music (for example, The Lied and Art Song Texts site has 87 listed for Shakespeare), or you can of course (bearing in mind the need to find a melody that "fits" the text) match texts to music yourself.

Part of this article originally appeared in the August 2004 newsletter.


Remembering names & faces

There are two well-established strategies for remembering people’s names. The simplest basically involves paying attention. Most of the time our memory for someone’s name fails because we never created an effective memory code for it.

An easy strategy for improving your memory for names

We can dramatically improve our memory for names simply by:

  • paying attention to the information
  • elaborating the information (e.g., “Everett? Is that with two t’s?”; “Rankin? Any relation to the writer?”; “Nielson? What nationality is that?”)
  • repeating the information at appropriate times.

The mnemonic strategy for remembering names and faces

The other method, of proven effectiveness but considerably more complicated, is a mnemonic strategy called the face-name association method.

You can find details of this strategy in most memory-improvement books, including my own. It is one of the most widely known and used mnemonic strategies, and it is undoubtedly effective when done properly. Like all mnemonic strategies however, it requires considerable effort to master. And as with most mnemonic strategies, imagery is the cornerstone. However, physical features are not necessarily the best means of categorizing a face.

What research tells us

Specific physical features (such as size of nose) are of less value in helping us remember a person than more global physical features (such as heaviness) or personality judgments (such as friendliness, confidence, intelligence). Rather than concentrating on specific features, we’d be better occupied in asking ourselves this sort of question: “Would I buy a used car from this person?”

However, searching for a distinctive feature (as opposed to answering a question about a specific feature, such as “does he have a big nose?”) is as effective as making a personality judgment. It seems clear that it is the thinking that is so important.

To remember better, think about what you want to remember.

Specifically, make a judgment (“she looks like a lawyer”), or a connection (“she’s got a nose like Barbra Streisand”). The connection can be a visual image, as in the face-name association strategy.

References: 

McCarty, D.L. 1980. Investigation of a visual imagery mnemonic device for acquiring face-name associations. Journal of Experimental Psychology: Human Learning and Memory, 6, 145-155.


Metamemory

Research has found that people are most likely to successfully apply appropriate learning and remembering strategies when they have also been taught general information about how the mind works.

The more you understand about how memory works, the more likely you are to benefit from instruction in particular memory skills.

When you have a good general understanding of how memory works, different learning strategies make much more sense. You will remember them more easily, because they are part of your general understanding. You will be able to adapt them to different situations, because you understand why they work and which aspects are important. You will be able to recognize which skills are useful in different situations. Not least important, because you understand why the strategies work, you will have much greater confidence in them.

[taken from The Memory Key]

Knowledge about memory is called "metamemory". There are four broad aspects of this kind of knowledge:

  • Factual knowledge about memory tasks and processes (that is, knowledge about both how memory works and about strategic behaviors)
  • Memory monitoring (that is, both awareness of how you typically use your memory as well as awareness of the current state of your memory)
  • Memory self-efficacy (that is, your sense of how well you use memory in demanding situations)
  • Memory-related affect (emotional states that may be related to or generated by memory demanding situations)

[taken from Hertzog, 1992]

Metamemory is assumed to play a significant role in the development of children's learning and memory performance. It's also — more surprisingly — now thought to play some part in the decline in cognitive performance with age.

Part of the reason for this is, of course, the widespread perception that memory does decline with age, and accordingly, when older adults experience memory failure, they are more inclined to simply attribute it to age, rather than attempt to improve their performance. Relatedly, older adults are less inclined to use new strategies, partly because they don't believe it makes a difference.

But, whatever your age, old or young, your memory can be improved by mastering and using effective strategies. The main obstacle, for both old and young, is in fact convincing people that the fault lies not in themselves but in what they're doing, and that they can learn to do things better.

References: 

  • Hertzog, C. 1992. Improving memory: The possible roles of metamemory. In D. Herrmann, H. Weingartner, A. Searleman & C. McEvoy (eds.) Memory Improvement: Implications for Memory Theory. New York: Springer-Verlag. pp 61-78.
  • McPherson, F. 2000. The Memory Key. Franklin Lakes, NJ: Career Press.


The most effective way of spacing your learning

We don’t deliberately practice our memories of events — not as a rule, anyway. But we don’t need to — because just living our life is sufficient to bring about the practice. We remember happy, or unpleasant, events to ourselves, and we recount our memories to other people. Some will become familiar stories that we re-tell again and again. But facts, the sort of information we learn in formal settings such as school and university, these are not something we tend to repeatedly recount to ourselves or others — not for pleasure anyway! (Unless you’re a teacher, and that’s part of the reason teaching is such a good way of learning!)

So, this is one of the big issues in learning: how to get the repetition we need to fix something in our brain. Simple repetition — the sort of drill we deplore in pre-modern schools — is not a great answer. Not simply because it’s boring, but because boring tasks are not particularly effective means of getting the brain to do things. Our brains respond much better to the surprising, the novel, the emotional, the interesting.

Teachers today are of course aware of this, and do try (or I hope they do!) to provide as much variety, and interest, as they can. But there is another aspect to repetition that is less widely understood, and that is the spacing between repetitions. Now the basic principle has been known for some time: spaced repetition is better than massed practice. But research has been somewhat lacking as to what constitutes the optimal spacing for learning. Studies have tended to use quite short intervals. But now a new study has finally given us something to work with.

For a start, the study was much bigger than the usual such study — over 1350 people took part — increasing the faith we can have in the findings. And, crucially, the interval between the initial learning session and the second review session ranged from several minutes to 3.5 months (specifically: 3 minutes; one day; 2 days; 4 days; 7 days; 11 days; 14 days; 21 days; 35 days; 70 days; 105 days). The time until test also covered more ground — up to nearly a year (more specifically: 7 days; 35 days; 70 days; 350 days). The initial learning session involved the participants learning 32 obscure facts to a criterion level of one perfect recall for each fact. The review session involved the participants being tested twice on each fact; they were then shown the correct answer. Testing included both a recall test and a recognition (multiple-choice) test. The participants, by the way, ranged in age from 18 to 72 years, with an average of 34 (the study was done using the internet; so nice to get away from the usual undergraduate fodder).

So there we are, a very systematic study, made possible by having such a large pool of participants (the benefits of the internet!). What was found? Well, first of all, the benefits of spacing review were quite significant, much larger than had been seen in earlier research when shorter intervals had been used. Given a fixed amount of study time, the optimal gap, compared to no gap (i.e. 3 minutes), improved recall by 64% and recognition by 26%.

Secondly, at any given test delay, longer intervals between initial study session and review session first improved test performance, then gradually reduced it. In other words, there was an optimal interval between study and review. This optimal gap increased as test delay increased — that is, the longer you want to remember the information, the more you should spread the gap between study and review (this simplifies the situation of course — if you’re serious about study, you’re going to review it more than once!). So, for those remembering for a week, the optimal gap was one day; for remembering for a month, it was 11 days; for 2 months (70 days) it was 3 weeks, and similarly for remembering for a year. Extrapolating, it seems likely that if you’re wanting to remember information for several years, you should review it over several months.

Note that the general rule is absolute rather than relative: when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. In other words, although the optimal gap between study and review increases as the length of time you want to remember for increases, the ratio of gap to that length of time will decrease. Which seems very commonsensical.
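The data points quoted above can be turned into a rough rule of thumb. The sketch below is my own illustrative construction, not part of the study: it stores the quoted (test delay → optimal gap) pairs and log-linearly interpolates between them for delays the study didn't measure. The function name and the interpolation scheme are assumptions for illustration only.

```python
import bisect
import math

# (test delay in days, optimal study-review gap in days), as quoted in this
# article: 1 week -> 1 day; ~1 month -> 11 days; 70 days -> ~3 weeks;
# ~1 year -> similar (about 3 weeks).
OPTIMAL_GAP = [(7, 1), (35, 11), (70, 21), (350, 21)]

def optimal_review_gap(test_delay_days: float) -> float:
    """Estimate the best gap (days) between first study and review,
    for a desired retention interval. Illustrative interpolation only."""
    delays = [d for d, _ in OPTIMAL_GAP]
    if test_delay_days <= delays[0]:
        return OPTIMAL_GAP[0][1]
    if test_delay_days >= delays[-1]:
        return OPTIMAL_GAP[-1][1]
    i = bisect.bisect_left(delays, test_delay_days)
    (d0, g0), (d1, g1) = OPTIMAL_GAP[i - 1], OPTIMAL_GAP[i]
    # log-linear interpolation between the two nearest measured delays
    t = (math.log(test_delay_days) - math.log(d0)) / (math.log(d1) - math.log(d0))
    return g0 + t * (g1 - g0)

for delay in (7, 35, 70, 350):
    gap = optimal_review_gap(delay)
    print(f"retain for {delay:>3} days -> review after ~{gap:.0f} days "
          f"({100 * gap / delay:.0f}% of the delay)")
```

Running this also makes the "absolute rather than relative" point visible: the gap itself grows with the retention interval, while the gap as a percentage of that interval shrinks.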

As the researchers point out (and as has been said before), “the interaction of gap and test delay implies that many educational practices are highly inefficient”, concentrating topics tightly into short periods of time. This practice is likely to give misleadingly high levels of immediate mastery (as shown in tests given at the end of this time) — performance which is unlikely to be sustained over longer periods of time.

It’s also worth noting that the costs of using a gap that is longer than the optimal gap are decidedly less than the costs of using a shorter gap — in other words, better to space your learning longer than too short.

This article first appeared in the Memory Key Newsletter for December 2008


Subliminal & sleep learning

Subliminal learning achieved notoriety back in 1957, when James Vicary claimed moviegoers could be induced to buy popcorn and Coca-Cola through the use of messages that flashed on the screen too quickly to be seen. The claim was later shown to be false, but though the idea that people can be brainwashed by the use of such techniques has been disproven (there was quite a bit of hysteria about the notion at the time), that doesn’t mean the idea of subliminal learning is crazy.

Ten years ago, researchers demonstrated that subliminal messages do indeed affect human cognition — and showed the limits of that influence [1]. The study demonstrated that, to have an effect on a person’s decision, the subliminal message had to be received very very soon before that decision (a tenth of a second or less), and the person had to be forced to make the decision very quickly. Moreover, there was no memory trace detectable, indicating no permanent record was stored in memory.

But even such brief, low-level learning seems to require some level of attention. A study [2] found that subliminal learning doesn’t occur if the subliminal stimuli are presented during what has been termed an "attentional blink". You may recall that when I’ve discussed multi-tasking, I’ve said that we can’t do two things at the same time — that tasks have to "queue" for attention. When a bottleneck occurs in the system, this attentional "blink" occurs.

But low-level sensory processing, which is an automatic process, isn’t affected by the attentional blink, so the finding that subliminal learning is affected by the blink indicates that subliminal stimuli require some high-level cognitive processing.

This finding has been confirmed by other studies. One such study [3] also has implications for reading. Participants in the study were shown either words or pronounceable nonwords and asked to perform either a lexical task (to identify whether the word they saw was a real word or a nonsense word) or a pronunciation task on the words. Unbeknownst to the participants, however, they had first been presented with a subliminal word that either matched or didn't match the target word. People performed the tasks faster when the subliminal word was identical to the target word. However (and this is the interesting bit), the researchers then applied a magnetic pulse (transcranial magnetic stimulation) to key regions of the brain before presenting the subliminal information. By applying TMS to one brain area or the other, they found they could selectively disrupt the subliminal effect for either the lexical or the pronunciation task. In other words, it seems that, even when the stimulus is subliminal, the brain takes into account the conscious task instructions. Our expectations shape our processing of subliminal stimuli.

Another study [4] suggests that motivation is important, and also, perhaps, that some stimuli are more suitable than others. The study found that thirsty people could be encouraged to drink more, and also pay more for their drink, after being exposed to subliminal smiling faces. Subliminal frowning faces had the opposite effect. However, how much, and whether, the faces had an effect on drinking, depended on the person’s thirst. Those who weren’t thirsty weren’t affected at all. Smiles and frowns are of course stimuli to which we are very responsive.

So clearly, although it is possible to be unconsciously affected by stimuli that can’t be consciously detected, the effect is both small and fleeting. However, that doesn’t mean more long-term effects can’t be experienced as a result of information we’re not conscious of.

Psychologists make a distinction between explicit memory and implicit memory. Explicit memory is what you’re using when you remember or recognize something — it’s what we tend to think of as "memory". Implicit memory, on the other hand, is a concept that reflects the fact that sometimes people act in ways that are clearly affected by earlier experiences they have had, even though they are not consciously recalling such experiences.

Another study [5] that used erotic images (because, like smiling and frowning faces, these are particularly potent stimuli, making it easy to see a response) found that when your eyes are presented with erotic images in a way that keeps you from becoming aware of them, your brain can still detect them — evidenced by the way people respond to the images according to their gender and sexual orientation.

The study is more evidence that the brain processes more visual information than we are conscious of — an important part of the process of determining what we’ll pay attention to. But the researchers believe that the information is probably destroyed at an early stage of processing — in other words, as with subliminal stimuli, there is probably no permanent record of the experience.

Which leads me to sleep learning. This was a big idea when I was young, in the science fiction I read — the idea that you could easily master new languages by being instructed while you were asleep.

Well, the question of whether learning can take place during sleep (and I’m not talking about the consolidation of learning that’s occurred earlier) is one that has been looked at in animal studies. It has been shown that simple forms of learning are indeed possible during sleep. However, the way in which associations are formed is clearly altered even for simple learning, and complex forms of learning do not appear to be possible.[6]

As far as humans are concerned, the evidence is that any learning during sleep must occur during the lightest stage of sleep, when you still have some awareness of the world around you, and that what you are learning must be already familiar (presented previously while you were awake and paying attention) and not requiring any understanding.

All the evidence suggests that, although information can be processed without conscious awareness, there are severe limitations on that information. If you want to "know" something in the proper meaning of the word — be able to recall it, think about it — you need to actively engage with the information. No free lunches, I’m afraid!

But that doesn’t mean unconscious influences don’t have important implications for learning and memory. A paper provided online in the Scientific American Mind Matters blog describes how a single, 15-minute intervention erased almost half the racial achievement gap between African American and white students. And this is entirely consistent with a number of studies showing how our cognitive performance is affected by what we think of ourselves (which is affected by what others think of us).

This article first appeared in the Memory Key Newsletter for March 2007

References: 

  1. Greenwald, A.G., Draine, S.C. & Abrams, R.L. 1996. Three Cognitive Markers of Unconscious Semantic Activation. Science, 273 (5282), 1699-1702.
  2. Seitz, A. et al. 2005. Requirement for High Level Processing in Subliminal Learning. Current Biology, 15, R753-R755, September 20, 2005.
  3. Nakamura, K. et al. 2006. Task-Guided Selection of the Dual Neural Pathways for Reading. Neuron, 52, 557-564.
  4. Winkielman, P. 2005. Paper presented at the American Psychological Society annual convention in Los Angeles, May 26-29. Press release
  5. Jiang, Y. et al. 2006. A gender- and sexual orientation-dependent spatial attentional effect of invisible images. PNAS, 103 (45), 17048-17052.
  6. Coenen, A.M. & Drinkenburg, W.H. 2002. Animal models for information processing during sleep. International Journal of Psychophysiology, 46(3), 163-175.


Acquiring expertise through deliberate practice

K. Anders Ericsson, the guru of research into expertise, makes a very convincing case for the absolutely critical importance of what he terms “deliberate practice”, and the minimal role of what is commonly termed “talent”. I have written about this question of talent and also about the principles of expertise. Here I would like to talk briefly about Ericsson’s concept of deliberate practice.

Most people, he suggests, spend very little (if any) time engaging in deliberate practice even in those areas in which they wish to achieve some level of expertise. Experts, on the other hand, only achieve their expertise after several years (at least ten, in general) of maintaining high levels of regular deliberate practice.

What distinguishes deliberate practice from less productive practice? Ericsson suggests several factors are of importance:

  • The acquisition of expert performance needs to be broken down into a sequence of attainable training tasks.
  • Each of these tasks requires a well-defined goal.
  • Feedback for each step must be provided.
  • Repetition is needed — but that repetition is not simple; rather, the student should be provided with opportunities that gradually refine his performance.
  • Attention is absolutely necessary — it is not enough to simply mechanically “go through the motions”.
  • The aspiring expert must constantly and attentively monitor her progress, adjusting and correcting her performance as required.

For these last two reasons, deliberate practice is limited in duration. Whatever the particular field of endeavor, there seems a remarkable consistency in the habits of elite performers that suggests 4 to 5 hours of deliberate practice per day is the maximum that can be maintained. This, of course, cannot all be done at one time without resting. When concentration flags, it is time to rest — most probably after about an hour. But the student must train himself up to this level; the length of time he can concentrate will increase with practice.

Higher levels of concentration are often associated with longer sleeping, in particular in the form of day-time naps.

Not all practice is, or should be, deliberate practice. Deliberate practice is effortful and rarely enjoyable. Some practice is, however, what Ericsson terms “playful interaction”, and presumably provides a motivational force — it should not be despised!

In general, experts reduce the amount of time they spend on deliberate practice as they age. It seems that, once a certain level of expertise has been achieved, it is not necessary to force yourself to continue the practice at the same level in order to maintain your skill. However, as long as you wish to improve, a high level of deliberate practice is required.

This article first appeared in the Memory Key Newsletter for November 2005

References: 

Ericsson, K.A. 1996. The acquisition of expert performance: An introduction to some of the issues. In K. Anders Ericsson (ed.), The Road to Excellence: The acquisition of expert performance in the arts and sciences, sports, and games. Mahwah, NJ: Lawrence Erlbaum.


Everyday memory strategies

Common everyday memory strategies

The most frequently used everyday memory strategies are:

  • writing calendar or diary notes
  • putting things in a special place
  • writing reminder notes
  • writing shopping lists
  • using face-name associations
  • mentally rehearsing information
  • using a timer
  • asking someone else to help

Of these, all but two are external memory aids. With the exception of face-name associations, mnemonic strategies (the foundation of most memory-improvement courses) are little used.

How effective are these strategies?

In general, external aids are regarded as easier to use, more accurate, and more dependable. In particular, external aids are preferred for reminding oneself to do things (planning memory). Mental strategies, however, are equally preferred as retrieval cues for stored information. The preferred strategies are mentally retracing (for retrieving stored information) and mentally rehearsing (for storing information for later retrieval).

Note that these preferred strategies are not those that are most effective, but those that are least effortful. The popularity of asking someone to help you remember has surprised researchers, but in this context it is readily understandable — asking someone is the easiest strategy of all! It is not, however, particularly effective.

Older people, in particular, are less inclined to use a strategy merely because it is effective. For them it is far more important that a strategy be familiar and easy to use.

Learning effective strategies does require effort, but once you have mastered them, the effort involved in using them is not great. The reason most people fail to use effective strategies is that they haven’t mastered them properly. A properly mastered skill is executed automatically, with little effort. (see Skill learning)

Successful remembering requires effective self-monitoring

We forget someone’s name, and our response might be: “Oh I’ve always been terrible at remembering names!” Or: “I’m getting old; I really can’t remember things anymore.” Or: nothing — we shrug it off without thought. What our response might be depends on our age and our personality, but that response has nothing to do with the reason we forgot.

We forget things for a number of short-term reasons: we’re tired; we’re distracted by other thoughts; we’re feeling emotional. But underneath all that, at all ages and in all situations, there is one fundamental reason why we fail to remember something: we didn’t encode it well enough at the time we learned/experienced it. And, yes, that is a strategy failure, and possibly also a reflection of those same factors (tired, distracted, emotional), but again, at bottom there is one fundamental reason: we didn’t realize what we needed to do to ensure we would remember it. This is a failure of self-monitoring, and self-monitoring is a crucial, and under-appreciated, strategy.

I’ve written about self-monitoring as a study skill, but self-monitoring is a far broader strategy than that. It applies to children and to seniors; it applies to remembering names and intentions and facts and experiences and skills. And it has a lot to do with cognitive fluency.

Cognitive fluency is as simple a concept as it sounds: it’s about how easy it is to think about something. We use this ease as a measure of familiarity — if it’s easy, we assume we’ve met it before. The easier it is, the more familiar we assume it is. Things that are familiar are (rule of thumb) assumed to be safe, are seen as more attractive, and make us feel more confident.

And are assumed to be known — that is, we don’t need to put any effort into encoding this information, because clearly we already know it.

Familiarity is a heuristic (rule of thumb) for several attributes. Fluency is a heuristic for familiarity.

Heuristics are vital — without these, we literally couldn’t function. The world is far too complex a place for us to deal with it without a whole heap of these rules of thumb. But the problem with them is that they are not rules, they are rules of thumb — guidelines, indicators. Meaning that a lot of the time, they’re wrong.

That’s why it’s not enough to unthinkingly rely on fluency as a guide to whether or not you need to make a deliberate effort to encode/learn something.

The secret to getting around the weaknesses of fluency is effective testing.

Notice I said effective.

If you intend to buy some bread on the way home from work, does the fact that you reminded yourself when you got to work constitute an effective test? Not in itself. If you are introduced to someone and you remember their name long enough to use it when you say goodbye, does this constitute an effective test? Again, not in itself. If you’re learning the periodic table and at the end of your study session are able to reel off all the elements in the right order, can you say you have learned this, and move on to something else? Not yet.

Effective testing has three elements: time, context, and feedback.

The feedback component should be self-evident, but apparently is not. It’s no good being tested or testing yourself, if your answer is wrong and you don’t know it! Of course, it’s not always possible to get feedback — and we don’t need feedback if we really are right. But how do we know if we’re right? Again, we use fluency to tell us. If the answer comes easily, we assume it’s correct. Most of the time it will be — but not always. So if you do have some means of checking your answer, you should take it.

[A brief aside to teachers and parents of school-aged students: Here in New Zealand we have a national qualifying exam (actually a series of exams) for our older secondary school students. The NCEA is quite innovative in many ways (you can read about it here if you’re curious), and since its introduction a few years ago there has been a great deal of controversy about it. As a parent of students who have gone through and are going through this process, I have had many criticisms about it myself. However, there are a number of good things about it, and one of these (which has nothing to do with the nature of the exams) is a process which I believe is extremely rare in the world (for a national exam): every exam paper is returned to the student. This is quite a logistical nightmare of course, when you consider each subject has several different papers (as an example, my younger son, sitting Level 2 this year, did 18 papers) and every paper has a different marker. But I believe the feedback really is worth it. Every test, whatever its ostensible purpose, should also be a learning experience. And to be a good learning experience, the student needs feedback.]

But time and context are the important, and under-appreciated, elements. A major reason why people fail to realize they haven’t properly encoded/learned something, is that they retrieve it easily soon after encoding, as in my examples above. But at this point, the information is still floating around in an accessible state. It hasn’t been consolidated; it hasn’t been properly filed in long-term memory. Retrieval this soon after encoding tells you (almost) nothing (obviously, if you did fail to retrieve it at this point, that would tell you something!).

So effective testing requires a certain amount of time to pass. And as I discussed when I talked about retrieval practice, it really requires quite a lot of time to pass before you can draw a line under it and say, ok, this is now done.

The third element is the least obvious. Context.

Why do we recognize the librarian when we see her at the library, but don’t recognize her at the supermarket? She’s out of context. Why is remembering that we need to buy bread on the way home no good if we remember it when we arrive at work? Because successful intention remembering is all about remembering at the right time and in the right place.

Effective encoding means that we will be able to remember when we need the information. In some cases (like intention memory), that means tying the information to a particular context — so effective testing involves trying to retrieve the information in response to the right contextual cue.

In most cases, it means testing across a variety of contexts, to ensure you have multiple access points to the information.

Successful remembering requires effective monitoring at the time of encoding (when you encounter the information). Effective monitoring requires you not to be fooled by easy fluency, but to test yourself effectively, across time and context. These principles apply to all memory situations and across all ages.

 

Additional resources:

If you want to know more about cognitive fluency and its effect on the mind (rather than memory specifically), there's a nice article in the Boston Globe. As an addendum (I'd read the more general and in-depth article in the Globe first), Miller-McCune have a brief article on one particular aspect of cognitive fluency -- the effect of names.

Miller-McCune have a good article on the value of testing and the motivating benefits of failure.

Rhyme & rhythm

As we all know, rhyme and rhythm help make information more memorable. Here are a few ideas that may help you use them more effectively.

Rhythm and rhyme are of course quite separate things, and are processed in different regions of the brain. However, they do share some commonalities in why and how they benefit memory. Rhyme and rhythm impose pattern. For that reason, rhyme and rhythm are particularly valuable when information is not inherently meaningful.

Remember that organization is the key to memory. If information cannot be meaningfully organized, it must be organized by other means.

Imposing a pattern, by using, for example, rhyme and/or rhythm, is one of those means.

Patterns are remembered because they are orderly. An important aspect of order is that it is predictable. When we can anticipate the next part of a sequence or pattern, we encode that information better, probably because our attention has been focused on structurally important points.

There is another aspect to patterns, and to rhyme and rhythm in particular. They help recall by limiting the possible solutions. In the same way that being told the name you want to remember starts with “B” helps you search your memory, so knowing that the next word rhymes with “time” will help your search. Of course, knowing the sound ending of a word helps far more than simply knowing the initial letter, and when this is in the context of a verse, you are usually also constrained by meaning, reducing the possibilities immensely.

Rhythm isn’t quite so helpful, yet it too helps constrain the possibilities by specifying the number of syllables you are searching for.

It is clear from this that for rhyme in particular, it is most effective if the rhyming words are significant words. For example, “In fourteen hundred and ninety two, Columbus sailed the ocean blue” is pretty good (not brilliant), because “two” is a significant word, and “blue” is sufficiently strongly associated with the ocean (another significant word, since it suggests why we remember him). On the other hand, this verse for remembering England’s kings and queens is not particularly good:

“Willie, Willie, Harry, Steve,
Harry, Dick, John, Harry Three,
Edward One, Two, Three, Dick Two,
Henry Four, Five, Six, then who?
Edward Four, Five, Dick the Bad,
Harrys twain and Ned, the lad.
Mary, Lizzie, James the Vain,
Charlie, Charlie, James again.
William and Mary, Anne o'Gloria,
Four Georges, William and Victoria.
Edward Seven, Georgie Five,
Edward, George and Liz (alive)”

The fact that it is in verse, providing rhyme and rhythm as mnemonic aids, is obviously helpful, but its effectiveness is lessened by the fact that the rhyming words are forced, with little significance to them.

Rhythm has another function, one it doesn’t share with rhyme. Rhythm groups information.

Grouping is of course another fundamental means of making something easy to remember. We can only hold a very limited number of bits of information in our mind at one time, so grouping is necessary for this alone. But in addition, grouping information into a meaningful cluster, or at least one where all bits are closely related, is what organization (the key to memory — can I say it too often?) is all about.

Studies indicate that groups of three are most effective. The gap between such groups can be quite tiny, provided it is discernible by the listener. The way we customarily group phone numbers is a reflection of that.

If you can’t group the information entirely in threes, twos are apparently better than fours (i.e., a 7 figure number would be broken into 3-2-2: 982 34 67). Having said that, I would add that I would imagine that meaningfulness might override this preference; if a four-digit number had meaning in itself, say a famous date, I would group it that way rather than breaking it into smaller chunks and losing the meaning.
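The grouping preference described above — threes where possible, with any remainder split into twos rather than fours — can be sketched as a small function. This is my own illustration of the rule; the function name and the greedy splitting approach are not from the article:

```python
def chunk_digits(digits: str) -> str:
    """Split a digit string into spaced groups, preferring groups
    of three and using twos (never fours) for the remainder."""
    n = len(digits)
    if n <= 3:
        return digits
    remainder = n % 3
    if remainder == 0:
        sizes = [3] * (n // 3)
    elif remainder == 2:
        sizes = [3] * (n // 3) + [2]
    else:  # remainder 1: trade one group of three for two groups of two
        sizes = [3] * (n // 3 - 1) + [2, 2]
    groups, i = [], 0
    for size in sizes:
        groups.append(digits[i:i + size])
        i += size
    return " ".join(groups)

print(chunk_digits("9823467"))  # the article's 7-digit example: 982 34 67
```

Run on the article's example, this produces the 3-2-2 split (982 34 67); as the article notes, a meaningful four-digit chunk such as a famous date might reasonably override this mechanical rule.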

But let us never forget the importance of individual difference. Baddeley[1] cites the case of a Scottish professor who had amazing memory abilities. One of his feats was to recall the value of pi to the first thousand decimal places — a feat he would not have bothered to perform if it had not been “so easy”! Apparently, he found that simply arranging the digits in rows of 50, with each row grouped in lots of 5 digits, and reciting them in a particular rhythm, made them very easy (for him) to memorize: “rather like learning a Bach fugue”. The psychologist who observed him doing this feat (Ian Hunter, known for his book, “Memory”) said he did the whole thing in 150 seconds, pausing only (for breath) after the first 500. The rhythm and tempo were basically 5 digits per second, with half a second between each group.

There’s also some evidence to suggest those with musical abilities may benefit more from rhythm, and even rhyme (musically trained people tend to have better verbal skills, and, intriguingly, a 1993 study[2] found a positive correlation between pitch discrimination and an understanding of rhyme and alliteration in children).

The “3 Rs” — rhyme, rhythm, and repetition. It’s not a fair analogy, because these differ considerably in their importance, but I couldn’t resist it.

I want to repeat something I’ve said before — because it is absolutely fundamental. Repetition is essential to memory.

There is sometimes a feeling among novice learners that mnemonic strategies “do away” with the need for repetition. They do not. Nothing does. What memory strategies of all kinds do is reduce the need for repetition. Nothing eliminates the need for repetition.

Even experiences that seem to be examples of “one-trial” learning (i.e., the single experience is enough to remember it forever) are probably re-experienced mentally a number of times. Can you think of any single experience you had, or fact you learned, that you experienced/heard/saw only once, and NEVER thought about again for a long time, until something recalled it to mind?

It’s a difficult thing to prove or disprove, of course.

However, for practical purposes, it is enough to note that, yes, if we want to remember something, we must repeat it. If we’re using a mnemonic strategy to help us remember, we must include the mnemonic cue in our remembering. Thus, if you’re trying to remember that the man with a nose like a beak was called Bill Taylor, don’t omit any of your associative links in your remembering until they’re firmly cemented. I say that because if the “answer” (nose like a beak → Bill Taylor) pops up readily, it’s easy to not bother with remembering the linking information (beak = bill; pay the tailor’s bill). However, if you want the information to stick, you want to make sure those associations are all firmly embedded.

Rhyme and rhythm are mnemonic cues of a different sort, but however effectively you might use them (and if you use them wisely they can be very effective), you still can’t avoid the need for repetition.

Always remember the essential rules of repetition:

  • space it out
  • space it at increasing intervals

(see my article on practice for more on this)
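The two rules above can be illustrated with a tiny sketch of an expanding review schedule. The starting gap and the doubling factor here are arbitrary illustrative choices, not prescriptions from the article:

```python
def review_days(first_gap: int = 1, factor: int = 2, reviews: int = 5) -> list:
    """Days (counted from initial learning) on which to review,
    with each gap between reviews a fixed multiple of the last."""
    day, gap, days = 0, first_gap, []
    for _ in range(reviews):
        day += gap          # space it out
        days.append(day)
        gap *= factor       # space it at increasing intervals
    return days

print(review_days())  # e.g. review on days 1, 3, 7, 15, 31
```

With the defaults, each review is spaced twice as far from the previous one as the last gap was, which captures the "increasing intervals" rule in its simplest form.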

Interesting resource:

The Omnificent English Dictionary In Limerick Form: A wonderful idea for remembering those difficult or rare words, if you’re learning English as a second language or simply want to expand your vocabulary.

This article first appeared in the Memory Key Newsletter for June 2005

References: 

  1. Baddeley, A. 1994. Your memory: A user’s guide. Penguin
  2. Lamb, S. & Gregory, A. 1993. The relationship between music and reading in beginning readers. Educational Psychology, 13, 19-28.
