

Desirable difficulty for effective learning

When we are presented with new information, we try to connect it to information we already hold. This is automatic. Sometimes the new information fits in easily; other times the fit is more difficult — perhaps because some of our old information is wrong, or perhaps because we lack some of the knowledge needed to fit the old and the new together.

When we're confronted by contradictory information, our first reaction is usually surprise. But if the surprise continues, with the contradictions perhaps increasing, or at any rate coming no closer to resolution, then our emotional reaction turns to confusion.

Confusion is very common in the learning process, even though most educators think that effective teaching is all about minimizing, if not eliminating, confusion.

But recent research has suggested that confusion is not necessarily a bad thing. Indeed, in some circumstances, it may be desirable.

I see this as an example of the broader notion of ‘desirable difficulty’, which is the subject of my current post. But let’s look first at this recent study on confusion for learning.

In the study, students engaged in ‘trialogues’ involving themselves and two animated agents. The trialogues discussed possible flaws in a scientific study, and the animated agents took the roles of a tutor and a student peer. To get the student thinking about what makes a good scientific study, the agents disagreed with each other on certain points, and the student had to decide who was right. On some occasions, the agents made incorrect or contradictory statements about the study.

In the first experiment, involving 64 students, there were four opportunities for contradictions during the discussion of each research study. Because the overall levels of student confusion were quite low, a second experiment, involving 76 students, used a delayed manipulation, where the animated agents initially agreed with each other but eventually started to express divergent views. In this condition, students were sometimes then given a text to read to help them resolve their confusion. It was thought that, given their confusion, students would read the text with particular attention, and so improve their learning.

In both experiments, students who were genuinely confused by the contradiction between the two agents did significantly better on the test at the end.

A side-note: self-reports of confusion were not very sensitive; students’ responses to forced-choice questions following the contradictions were a more sensitive gauge of confusion. This is a reminder that students are not necessarily good judges of their own confusion!

The idea behind all this is that, when there’s a mismatch between new information and prior knowledge, we have to explore the contradictions more deeply — make an effort to explain the contradictions. Such deeper processing should result in more durable and accessible memory codes.

Such a mismatch can occur in many, quite diverse contexts — not simply in the study situation. For example, unexpected feedback, anomalous events, obstacles to goals, or interruptions of familiar action sequences, all create some sort of mismatch between incoming information and prior knowledge.

However, not all instances of confusion are useful for learning and memory. The confusion needs to be relevant to the activity, and of course the individual needs to have the means to resolve it.

As I said, I see a relationship between this idea of the right level and type of confusion enhancing learning, and the idea of desirable difficulty. I’ve talked before about the ‘desirable difficulty’ effect (see, for example, Using 'hard to read' fonts may help you remember more). Both of these ideas, of course, connect to a much older and more fundamental idea: that of levels of processing. The idea that we can process information at varying levels, and that deeper levels of processing improve memory and learning, dates back to a paper written in 1972 by Craik and Lockhart (although it has been developed and modified over the years), and underpins (usually implicitly) much educational thinking.

But what interests me is not so much the fundamental notion that deeper processing helps memory and learning, and that certain desirable difficulties encourage deeper processing, as the idea of getting the level right.

Too much confusion is usually counterproductive; so is too much difficulty.

Getting the difficulty level right is something I have talked about in connection with flow. On the face of it, confusion would seem to be counterproductive for achieving flow, and yet ... it rather depends on the level of confusion, don't you think? If the student has clear paths to follow to resolve the confusion, the information flow doesn't need to stop.

This idea also, perhaps, has connections to effective practice principles — specifically, what I call the ‘Just-in-time rule’. This is the principle that the optimal spacing for your retrieval practice depends on you retrieving the information just before you would have forgotten it. (That’s not as occult as it sounds! But I’m not here to discuss that today.)

It seems to me that another way of thinking about this is that you want to find that moment when retrieval of that information is at the ‘right’ level of difficulty — neither too easy, nor too hard.
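
To make that concrete, here is a minimal sketch in Python of how such a schedule could be computed. This is purely my own illustration, not anything from the research or from the rule itself: it assumes a simple exponential forgetting curve whose decay slows after each successful retrieval, and all names and numbers are hypothetical.

    import math

    def recall_probability(days_since_review: float, stability: float) -> float:
        """Modelled chance of recalling an item after a given delay."""
        return math.exp(-days_since_review / stability)

    def next_review_in_days(stability: float, threshold: float = 0.5) -> float:
        """Schedule the next retrieval just before modelled recall falls to the threshold."""
        return -stability * math.log(threshold)

    stability = 2.0  # in days; assumed to grow as retrievals succeed
    for attempt in range(1, 5):
        interval = next_review_in_days(stability)
        p = recall_probability(interval, stability)  # about 0.5 by construction
        print(f"review {attempt}: wait ~{interval:.1f} days (modelled recall {p:.2f})")
        stability *= 2.5  # assume each successful retrieval slows forgetting

The only point of the sketch is that the interval expands: each retrieval is made just hard enough, and the next one can wait a little longer.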

Successful teaching is about shaping the information flow so that the student experiences it — moment by moment — at the right level of difficulty. This is, of course, impossible in a factory-model classroom, but the mechanics of tailoring the information flow to the individual are now made possible by technology.

But technology isn't the answer on its own. To achieve optimal results, it helps if the individual student is aware that the success of their learning depends on managing the information flow (or at least that their learning will be more effective for it — some will be successful regardless of the inadequacy of the instruction). Which means they need to provide honest feedback, they need to be able to monitor their learning and recognize when they have ‘got’ something and when they haven’t, and they need to understand that if one approach to a subject isn’t working for them, they need to try a different one.

Perhaps this provides a different perspective for some of you. I'd love to hear of any thoughts or experiences teachers and students have had that bear on these issues.

References

D’Mello, S., Lehman, B., Pekrun, R., & Graesser, A. (submitted). Confusion can be beneficial for learning. Learning and Instruction.

Benefits from fixed quiet points in the day

On my walk today, I listened to a downloaded interview from the On Being website. The interview was with ‘vocal magician and conductor’ Bobby McFerrin, and something he said early on in the interview really caught my attention.

In response to a question about why he’d once (in his teens) contemplated joining a monastic order, he said that the quiet really appealed to him, and also ‘the discipline of the hours … there’s a rhythm to the day. I liked the fact that you stopped whatever you were doing at a particular time and you reminded yourself, you brought yourself back to your calling’.

Those words resonated with me, and they made me think of the Muslim practice of daily prayer. Of the idea of having specified times during the day when you stop your ‘ordinary’ life, and touch base, as it were, with something that is central to your being.

I don’t think you need to be a monk or a Muslim to find value in such an activity! Nor does the activity need to be overtly religious.

Because this idea struck another echo in me — some time ago I wrote a brief report on how even a short ‘quiet time’ can help you consolidate your memories. It strikes me that developing the habit of having fixed points in the day when (if at all possible) you engage in some regular activity that helps relax you and center your thoughts would help maintain your focus during the day, and give you a mental space in which to consolidate any new information that has come your way.

Appropriate activities could include:

  • meditating on your breath;
  • performing a t’ai chi routine;
  • observing nature;
  • listening to certain types of music;
  • singing/chanting some song/verse (e.g., the Psalms; the Iliad; the Tao te Ching)

Regarding the last two suggestions, as I reported in my book on mnemonics, there’s some evidence that reciting the Iliad has physiological effects, synchronizing heartbeat and breath, that are beneficial for both mood and cognitive functioning. It’s speculated that the critical factor might be the hexametric pace (dum-diddy, dum-diddy, dum-diddy, dum-diddy, dum-diddy, dum-dum). Dactylic hexameter, the rhythm of classical epic, has a musical counterpart: 6/8 time.

Similarly, another small study found that singing Ave Maria in Latin, or chanting a yoga mantra, likewise affects brain blood flow, and the crucial factor appeared to be a rhythm that involved breathing at the rate of six breaths a minute.

Something to think about!

Neglect your senses at your cognitive peril!

Impaired vision is common in old age and even more so in Alzheimer’s disease, and this results not only from damage in the association areas of the brain but also from problems in lower-level areas. A major factor in whether visual impairment impacts everyday function is contrast sensitivity.

Poor contrast sensitivity not only slows down your perceiving and encoding, it also interacts with higher-order processing, such as decision-making. These effects may be behind the established interactions between age, perceptual ability, and cognitive ability. Such interactions are not restricted to sight — they’ve been reported for several senses.

In fact, it’s been suggested that much of what we regard as ‘normal’ cognitive decline in aging is simply a consequence of having senses that don’t work as well as they used to.

The effects in Alzheimer’s disease are, I think, particularly interesting, because we tend to regard any cognitive impairment here as inevitable and a product of pathological brain damage we can’t do anything much about. But what if some of the cognitive impairment could be removed, simply by improving the perceptual input?

That’s what some recent studies have shown, and I think it’s noteworthy not only because of what it means for those with Alzheimer’s and mild cognitive impairment, but also because of the implications for any normally aging person.

So let’s look at some of this research.

Let’s start with the connection between visual and cognitive impairment.

Analysis of data from the Health and Retirement Study and Medicare files, involving 625 older adults, found that those with very good or excellent vision at baseline had a 63% lower risk of developing dementia over a mean follow-up period of 8.5 years. Those with poorer vision who didn’t visit an ophthalmologist had a 9.5-fold increased risk of Alzheimer’s disease and a 5-fold increased risk of mild cognitive impairment; poorer vision without a previous eye procedure increased the risk of Alzheimer’s 5-fold. Among Americans aged 90 years or older, 78% of those who kept their cognitive skills had received at least one previous eye procedure, compared with 52% of those with Alzheimer’s disease.

In other words, if you leave poor vision untreated, you greatly increase your risk of cognitive impairment and dementia.

Similarly, cognitive testing of nearly 3000 older adults with age-related macular degeneration found that cognitive function declined with increased macular abnormalities and reduced visual acuity. This remained true after factors such as age, education, smoking status, diabetes, hypertension, and depression were accounted for.

And a study comparing the performance of 135 patients with probable Alzheimer’s and 97 matched normal controls on a test of perceptual organization ability (Hooper Visual Organization Test) found that the VOT was sensitive to severity of dementia in the Alzheimer’s patients.

So let’s move on to what we can do about it. Treatment for impaired vision is of course one necessary aspect, but there is also the matter of trying to improve the perceptual environment. Let’s look at this research in a bit more detail.

A 2007 study compared the performance of 35 older adults with probable Alzheimer’s, 35 healthy older adults, and 58 young adults. They were all screened to exclude those with visual disorders, such as cataracts, glaucoma, or macular degeneration. There were significant visual acuity differences between all 3 groups (median scores: 20/16 for young adults; 20/25 for healthy older adults; 20/32 for Alzheimer’s patients).

Contrast sensitivity was also significantly different between the groups, although this was moderated by spatial frequency (normal contrast sensitivity varies according to spatial frequency, so this is not unexpected). Also unsurprisingly, the young adults outperformed both older groups at every spatial frequency except the lowest, where their performance was matched by that of the healthy older adults. Similarly, healthy older adults outperformed Alzheimer’s patients at every frequency bar one — the highest.

For Alzheimer’s patients, there was a significant correlation between contrast sensitivity and their cognitive (MMSE) score (except at the lowest frequency of course).

Participants carried out a number of cognitive/perceptual tasks: letter identification; word reading; unfamiliar-face matching; picture naming; pattern completion. Stimuli varied in their perceptual strength (contrast with background).

Letter reading: there were no significant differences between groups in terms of accuracy, but stimulus strength affected reaction time for all participants, and this was different for the groups. In particular, older adults benefited most from having the greatest contrast, with the Alzheimer’s group benefiting more than the healthy older group. Moreover, Alzheimer’s patients seeing the letters at medium strength were not significantly different from healthy older adults seeing the letters at low strength.

Word reading: here there were significant differences between all groups in accuracy as well as reaction time. There was also a significant effect of stimulus strength, which again interacted with group. While young adults’ accuracy wasn’t affected by stimulus strength, both older groups were. Again, there were no differences between the Alzheimer’s group and healthy older adults when the former group was at high stimulus strength and the latter at medium, or at medium vs low. That was true for both accuracy and reaction time.

Picture naming: By and large all groups, even the Alzheimer’s one, found this task easy. Nevertheless, there were effects of stimulus strength, and once again, the performance of the Alzheimer’s group when the stimuli were at medium strength matched that of healthy older adults with low strength stimuli.

Raven’s Matrices and Benton Faces: Here the differences between all groups could not in general be ameliorated by manipulating stimulus strength. The exception was with the Benton Faces, where Alzheimer’s patients seeing the medium strength stimuli matched the performance of healthy older adults seeing low strength stimuli.

In summary, then, for letter reading (reaction time), word reading (identification accuracy and reaction time), picture naming, and face discrimination, manipulating stimulus strength in terms of contrast was sufficient to bring the performance of individuals with Alzheimer’s to a level equal to that of their healthy age-matched counterparts.

It may be that the failure of this manipulation to affect performance on the Raven’s Matrices reflects the greater complexity of these stimuli or the greater demands of the task. However, the success of the manipulation in the case of the Benton Faces — a similar task with stimuli of apparently similar complexity — contradicts this. It may be that the stimulus manipulation simply requires some more appropriate tweaking to be effective.

It might be thought that these effects are a simple product of making stimuli easier to see, but the findings are a little more complex than I’ve rendered them. The precise effect of the manipulation varied depending on the type of stimuli. For example, in some cases there was no difference between low and medium stimuli, in others no difference between medium and high; in some, the low contrast stimuli were the most difficult, in others the low and medium strength stimuli were equally difficult, and on one occasion high strength stimuli were more difficult than medium.

The finding that Alzheimer’s individuals can perform as well as healthy older adults on letter and word reading tasks when the contrast is raised suggests that the reading difficulties that are common in Alzheimer’s are not solely due to cognitive impairment, but are partly perceptual. Similarly, naming errors may not be solely due to semantic processing problems, but also to perceptual problems.

Alzheimer’s individuals have been shown to do better recognizing stimuli the closer the representation is to the real-world object. Perhaps it is this that underlies the effect of stimulus strength — the representation of the stimulus when presented at a lower strength is too weak for the compromised Alzheimer’s visual system.

All this is not to say that there are not very real semantic and cognitive problems! But they are not the sole issue.

I said before that for Alzheimer’s patients there was a significant correlation between contrast sensitivity and their MMSE score. This is consistent with several studies, which have found that dementia severity is correlated with contrast sensitivity at some spatial frequencies. Together with these experimental findings, this suggests that contrast sensitivity is in itself an important variable in cognitive performance, and that contrast sensitivity and dementia severity have a common substrate.

It’s also important to note that the manipulations of contrast were the same for everyone. It may well be that individualized manipulations would have even greater benefits.

Another recent study compared the performance of healthy older and younger adults and individuals with Alzheimer's disease and Parkinson's disease on the digit cancellation test (a visual search task used in the diagnosis of Alzheimer’s). Increased contrast brought the healthy older adults and those with Parkinson’s up to the level of the younger adults, and significantly benefited Alzheimer’s individuals — without, however, overcoming all their impairment.

There were two healthy older adult control groups: one age-matched to the Alzheimer’s group, and one age-matched to the Parkinson’s group. The former were some 10.5 years older than the latter. Interestingly, the younger control group (average age 64) performed at the same level as the young adults (average age 20), while the older control group performed significantly worse. As expected, both the Parkinson’s group and the Alzheimer’s group performed worse than their age-matched controls.

However, when contrast was individually tailored to the level at which the person correctly identified a digit appearing for 35.5 ms 80% of the time, there were no significant performance differences between any of the three control groups or the Parkinson’s group. Only the Alzheimer’s group still showed impaired performance.

The idea of this “critical contrast” comparison was to produce stimuli that would be equally challenging for all participants. It was not about finding the optimal level for each individual (and indeed, young controls and the younger old controls both performed better at higher contrast levels). The findings indicate that poorer performance by older adults and those with Parkinson’s is due largely to their weaker contrast sensitivity, but those with Alzheimer’s are also hampered by their impaired ability to conduct a visual search.

The same researchers demonstrated this in a real-world setting, using Bingo cards. Bingo is a popular activity in nursing homes, senior centers and assisted-living facilities, and has both social and cognitive benefits.

When cards were varied in terms of contrast, size, and visual complexity, all groups benefited from increased stimulus size and decreased complexity. Those with mild Alzheimer’s were able to perform at levels comparable to their healthy peers, although those with more severe dementia gained little benefit.

Contrast boosting has also been shown to work in everyday environments: people with dementia can navigate more safely around their homes when objects in them have more contrast (e.g. a black sofa in a white room), and they eat more if they use a white plate and tableware on a dark tablecloth, or are served food that contrasts with the color of the plate.

There’s a third possible approach that might also be employed to some benefit, although this is more speculative. A study recently reported at the American Association for the Advancement of Science annual conference revealed that visual deficits found in individuals born with cataracts in both eyes who have had their vision corrected can be overcome through video game playing.

After playing an action video game for just 40 hours over four weeks, the patients were better at seeing small print, the direction of moving dots, and the identity of faces.

The small study (this is not, after all, a common condition) involved six people aged 19 to 31 who were born with dense cataracts in each eye. Despite these cataracts being removed early in life, such individuals still grow up with poorer vision, because normal development of the visual cortex has been disrupted.

The game required players to respond to action directly ahead of them and in the periphery of their vision, and to track objects that are sometimes faint and moving in different directions. Best results were achieved when players were engaged at the highest skill level they could manage.

Now this is quite a different circumstance from that of individuals whose visual system developed normally but is now degrading. However, if vision worsens for some time before being corrected, or if relevant activities and stimulation have been allowed to decline, it may be that some of the deficit is due not to damage as such, but to more malleable effects. In the same way that we now say that cognitive abilities need to be kept in use if they are not to be lost, perceptual abilities (to the extent that they are cognitive, which is a great extent) may benefit from active use and training.

In other words, if you have perceptual deficits, whether in sight, hearing, smell, or taste, you should give some thought to dealing with them. While I don’t know of any research to do with taste, I have reported on several studies associating hearing loss with age-related cognitive impairment or dementia, and likewise olfactory impairment. Of particular interest is the research on reviving a failing sense of smell, which suggested that one road to olfactory impairment is neglect, and that the sense could be restored through training (in an animal model). Similarly, I have reported, more than once, on the evidence that music training can help protect against hearing loss in old age. (You can find more research on perception, training, and old age on the Perception aggregated news page.)

 

For more on the:

Bingo study: https://www.eurekalert.org/pub_releases/2012-01/cwru-gh010312.php

Video game study:

https://www.guardian.co.uk/science/2012/feb/17/videogames-eyesight-rare-eye-disorder

https://medicalxpress.com/news/2012-02-gaming-eyesight.html

References

(In order of mention)

Rogers, M. A., & Langa, K. M. (2010). Untreated poor vision: A contributing factor to late-life dementia. American Journal of Epidemiology, 171(6), 728-735.

Clemons, T. E., Rankin, M. W., McBee, W. L., & the Age-Related Eye Disease Study Research Group. (2006). Cognitive impairment in the Age-Related Eye Disease Study: AREDS report no. 16. Archives of Ophthalmology, 124(4), 537-543.

Paxton, J. L., Peavy, G. M., Jenkins, C., Rice, V. A., Heindel, W. C., & Salmon, D. P. (2007). Deterioration of visual-perceptual organization ability in Alzheimer's disease. Cortex, 43(7), 967-975.

Cronin-Golomb, A., Gilmore, G. C., Neargarder, S., Morrison, S. R., & Laudate, T. M. (2007). Enhanced stimulus strength improves visual cognition in aging and Alzheimer’s disease. Cortex, 43, 952-966.

Toner, C. K., Reese, B. E., Neargarder, S., Riedel, T. M., Gilmore, G. C., & Cronin-Golomb, A. (2011). Vision-fair neuropsychological assessment in normal aging, Parkinson's disease and Alzheimer's disease. Psychology and Aging. Published online December 26.

Laudate, T. M., Neargarder, S., Dunne, T. E., Sullivan, K. D., Joshi, P., Gilmore, G. C., et al. (2011). Bingo! Externally supported performance intervention for deficient visual search in normal aging, Parkinson's disease, and Alzheimer's disease. Aging, Neuropsychology, and Cognition, 19(1-2), 102-121.

Event boundaries and working memory capacity

In a recent news report, I talked about how walking through doorways creates event boundaries, requiring us to update our awareness of current events and making information about the previous location less available. I commented that we should be aware of the consequences of event boundaries for our memory, and how these contextual factors are important elements of our filing system. I want to talk a bit more about that.

One of the hardest, and most important, things to understand about memory is how the various types of memory relate to each other. Of course, the biggest problem here is that we don’t really know! But we do have a much greater understanding than we used to, so let’s see if I can pull out some salient points and draw a useful picture.

Let’s start with episodic memory. Now episodic memory is sometimes called memory for events, and that is reasonable enough, but it perhaps gives an inaccurate impression because of the common usage of the term ‘event’. The fact is, everything you experience is an event; to put it another way, a lifetime is one long event, broken into many, many episodes.

Similarly, we break continuous events into segments. This was demonstrated in a study ten years ago, which found that when people watched movies of everyday events, such as making the bed or ironing a shirt, brain activity showed that the event was automatically parsed into smaller segments. Moreover, changes in brain activity were larger at large boundaries (that is, the boundaries of large, superordinate units) and smaller at small boundaries (the boundaries of small, subordinate units).

Indeed, following research showing the same phenomenon when people merely read about everyday activities, this is thought to reflect a more general disposition to impose a segmented structure on events and activities (“event structure perception”).

Event Segmentation Theory postulates that perceptual systems segment activity as a side effect of trying to predict what’s going to happen. Changes in the activity make prediction more difficult and cause errors. So these are the points when we update our memory representations to keep them effective.

Such changes cover a wide gamut, from changes in movement to changes in goals.

If you’ve been following my blog, the term ‘updating’ will hopefully bring to mind another type of memory — working memory. In my article How working memory works: What you need to know, I talked about the updating component of working memory at some length. I mentioned that updating may be the crucial component behind the strong correlation between working memory capacity and intelligence, and that updating deficits might underlie poor comprehension. I distinguished between three components of updating (retrieval; transformation; substitution), and noted that transformation was the most important for deciding how accurately and how quickly you can update the contents of working memory. And I discussed how the most important element in determining your working memory ‘capacity’ seems to be your ability to keep irrelevant information out of your memory codes.

So this event segmentation research suggests that working memory updating occurs at event boundaries. This means that information before the boundary becomes less accessible (hence the findings from the walking through doorways studies). But event boundaries relate not only to working memory (keeping yourself updated to what’s going on) but also to long-term storage (we’re back to episodic memory now). This is because those boundaries are encoded particularly strongly — they are anchors.

Event boundaries are beginnings and endings, and we have always known that beginnings and endings are better remembered than middles. In psychology this is known formally as the primacy and recency effects. In a list of ten words (that favorite subject of psychology experiments), the first two or three items and the last two or three items are the best remembered. The idea of event boundaries gives us a new perspective on this well-established phenomenon.

Studies of reading have shown that readers slow down at event boundaries, when they are hypothesized to construct a new mental model. Such boundaries occur when the action moves to a new place, or a new time, or new characters enter the action, or a new causal sequence is begun. The most important of these is changes in characters and their goals, and changes in time.

As I’ve mentioned before, goals are thought to play a major role (probably the major role) in organizing our memories, particularly activities. Goals produce hierarchies — any task can be divided into progressively smaller elements. Research suggests that higher-order events (coarse-grained, to use the terminology of temporal grains) and lower-order events (fine-grained) are sensitive to different features. For example, in movie studies, coarse-grained events were found to focus on objects, using more precise nouns and less precise verbs. Finer-grained events, on the other hand, focused on actions on those objects, using more precise verbs but less precise nouns.

The idea that these are separate tasks is supported by the finding of selective impairments of coarse-grained segmentation in patients with frontal lobe lesions and patients with schizophrenia.

While we automatically organize events hierarchically (even infants seem to be sensitive to hierarchical organization of behavior), that doesn’t mean our organization is always effortlessly optimal. It’s been found that we can learn new procedures more easily if the hierarchical structure is laid out explicitly — contrariwise, we can make it harder to learn a new procedure by describing or constructing the wrong structure.

Using these hierarchical structures helps us because it lets us use knowledge we already have in memory. We can co-opt chunks of other events and activities and plug them in. (You can see how this relates to transfer — the more chunks a new activity shares with a familiar one, the more quickly you can learn it.)
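
Purely as an illustration of what such a hierarchy looks like (this sketch is my own, not drawn from the research), here is a toy Python representation of coarse-grained segments containing fine-grained ones, with one chunk shared between two activities:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Segment:
        label: str
        subsegments: List["Segment"] = field(default_factory=list)

    # a fine-grained chunk that can be co-opted by more than one activity
    boil_water = Segment("boil water", [Segment("fill kettle"), Segment("switch kettle on")])

    make_tea = Segment("make tea", [
        boil_water,                       # shared chunk
        Segment("put teabag in cup"),
        Segment("pour water"),
    ])

    make_coffee = Segment("make coffee", [
        boil_water,                       # the same chunk, reused
        Segment("spoon coffee into cup"),
        Segment("pour water"),
    ])

    def outline(segment: Segment, depth: int = 0):
        """Print the hierarchy, indenting finer-grained segments."""
        print("  " * depth + segment.label)
        for sub in segment.subsegments:
            outline(sub, depth + 1)

    outline(make_tea)

The shared ‘boil water’ chunk is the kind of already-encoded unit that makes a related new activity quicker to learn.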

Support for the idea that event boundaries serve as anchors comes from several studies. For example, when people watched feature films with or without commercials, their recall of the film was better when there were no commercials or the commercials occurred at event boundaries. Similarly, when people watched movies of everyday events with or without bits removed, their recall was better if there were no deletions or the deletions occurred well within event segments, preserving the boundaries.

It’s also been found that we remember details better if we’ve segmented finely rather than coarsely, and some evidence supports the idea that people who segment effectively remember the activity better.

Segmentation, theory suggests, helps us anticipate what’s going to happen. This in turn helps us adaptively create memory codes that best reflect the structure of events. And by simplifying the event stream into a number of chunks, many if not most of which are already encoded in your database, you save on processing resources while also improving your understanding of what’s going on (because those already-coded chunks have already been processed).

All this emphasizes the importance of segmenting well, which means you need to be able to pinpoint the correct units of activity. One way we do that is by error monitoring. If we are monitoring our ongoing understanding of events, we will notice when that understanding begins to falter. We also need to pay attention to the ordering of events and the relationships between sequences of events.

The last type of memory I want to mention is semantic memory. Semantic memory refers to what we commonly think of as ‘knowledge’. This is our memory of facts, of language, of generic information that is untethered from specific events. But all this information first started out as episodic memory — before you ‘knew’ the word for cow, you had to experience it (repeatedly); before you ‘knew’ what happens when you go to the dentist, you had to (repeatedly) go to the dentist; before you ‘knew’ that the earth goes around the sun, there were a number of events in which you heard or read that fact. To get to episodic memory (your memory for specific events), you must pass through working memory (the place where you put incoming information together into some sort of meaningful chunk). To get to semantic memory, the information must pass through episodic memory.

How does information in episodic memory become information in semantic memory? Here we come to the process of reconstruction, of which I have often spoken (see my article on memory consolidation for some background on this). The crucial point here is that memories are rewritten every time they are retrieved.

Remember, too, that neurons are continually being reused — memories are held in patterns of activity, that is, networks of neurons, not individual neurons. Individual neurons may be involved in any number of networks. Here’s a new analogy for the brain: think of a manuscript, one of those old parchments, so precious that it must be re-used repeatedly. Modern technology can reveal those imperfectly erased hidden layers. So the neural networks that are memory codes may be thought of as imposed one on top of each other, none of them matching, as different patterns re-use the same individual neurons. The strongest patterns are the most accessible; patterns that are most similar (use more of the same neurons) will provide the most competition.

So, say you are told by your teacher that the earth goes around the sun. That’s the first episode, and there’ll be lots of contextual detail that relates to that particular event. Then on another occasion, you read a book showing how the earth goes around the sun. Again, lots of episodic detail, of which some will be shared with the first incident, and some will be different. Another episode, more detail, some shared, some not. And so on, again and again, until the extraneous details, irrelevant to the fact and always different, are lost, while those details that are common to all the episodes grow strong, and form a new, tight chunk of information in semantic memory.

Event boundaries start off with an advantage — they are beginnings or endings, to which we are predisposed to attend (for obvious reasons). So they start off stronger than other bits of information, and by their nature are more likely to be vital elements, that will always co-occur with the event. So — if you have chosen your boundaries well (i.e., they truly are vital elements) they will become stronger with each episode, and will end up as vital parts of the chunk in semantic memory.

Let’s connect that thought back to my comment that the most important difference between those with ‘low’ working memory capacity and those with ‘high’ capacity is the ability to select the ‘right’ information and disregard the irrelevant. Segmenting your events well would seem to be another way of saying that you are good at selecting the changes that are most relevant, that will be common to any such events on other occasions.

And that, although some people are clearly ‘naturally’ better at it, is surely something that people can learn.

References

Culham, J. (2001). The brain as film director. Trends in Cognitive Sciences, 5(9), 376-377.

Kurby, C. A., & Zacks, J. M. (2008). Segmentation in the perception and memory of events. Trends in Cognitive Sciences, 12(2), 72-79. doi:10.1016/j.tics.2007.11.004

Speer, N. K., Zacks, J. M., & Reynolds, J. R. (2007). Human Brain Activity Time-Locked to Narrative Event Boundaries. Psychological Science, 18(5), 449–455. doi:10.1111/j.1467-9280.2007.01920.x

Choosing when to think fast & when to think slow

I recently read an interesting article in the Smithsonian about procrastination and why it’s good for you. Frank Partnoy, author of a new book on the subject, pointed out that procrastination only began to be regarded as a bad thing by the Puritans — earlier (among the Greeks and Romans, for example), it was regarded more as a sign of wisdom.

The examples given about the perils of deciding too quickly made me think about the assumed connection between intelligence and processing speed. We equate intelligence with quick thinking, and time to get the correct answer is part of many tests. So, regardless of the excellence of a person’s cognitive product, the time it takes them to produce it is vital (in tests, at least).

Similarly, one of the main aspects of cognition impacted by age is processing speed, and one of the principal reasons for people to feel that they are ‘losing it’ is because their thinking is becoming noticeably slower.

But here’s the question: does it matter?

Certainly in a life-or-death, climb-the-tree-fast-or-be-eaten scenario, speed is critical. But in today’s world, the major reason for emphasizing speed is the pace of life. Too much to do and not enough time to do it in. So, naturally, we want to do everything fast.

There is certainly a place for thinking fast. I recently looked through a short book entitled "Speed Thinking" by Ken Hudson. The author’s strategy for speed thinking was basically to give yourself a very brief window — 2 minutes — in which to come up with 9 thoughts (the nature of those thoughts depends on the task before you — I’m just generalizing the strategy here). The essential elements are the tight time limit and the lack of a content limit — to accomplish this feat of 9 relevant thoughts in 2 minutes, you need to lose your inner censor and accept any idea that occurs to you.

If you’ve been reading my last couple of posts on flow, it won’t surprise you that this strategy is one likely to produce that state of consciousness (at least, once you’re in the way of it).

So, I certainly think there’s a place for fast thinking. Short bouts like this can re-energize you and direct your focus. But life is a marathon, not a sprint, and of course we can’t maintain such a pace or level of concentration. Nor should we want to, because sometimes it’s better to let things simmer. But how do we decide when it’s best to think fast or best to think slow? (shades of Daniel Kahneman’s wonderful book Thinking, Fast and Slow here!)

In the same way that achieving flow depends on the match between your skill and the task demands, the best speed for processing depends on your level of expertise, the demands of the task, and the demands of the situation.

For example, Sian Beilock (whose work on math anxiety I have reported on) led a study that demonstrated that, while novice golfers putted better when they could concentrate step-by-step on the accuracy of their performance, experts did better when their attention was split between two tasks and when they were focused on speed rather than accuracy.

Another example comes from a monkey study that has just been in the news. In this study, rhesus macaques were trained to reach out to a target. To do so, their brains needed to know three things: where their hand is, where the target is, and the path for the hand to travel to reach the target. If there’s a direct path from the hand to the target, the calculation is simple. But in the experiment, an obstacle would often block the direct path to the target. In such cases, the calculation becomes a little bit more complicated.

And now we come to the interesting bit: two monkeys participated. As it turns out, one was hyperactive, the other more controlled. The hyperactive monkey would quickly reach out as soon as the target appeared, without waiting to see if an obstacle blocked the direct path. If an obstacle did indeed appear in the path (which it did on two-thirds of the trials), he had to correct his movement in mid-reach. The more self-controlled monkey, however, waited a little longer, to see where the obstacle appeared, then moved smoothly to the target. The hyperactive monkey had a speed advantage when the way was clear, but the other monkey had the advantage when the target was blocked.

So perhaps we should start thinking of processing speed as a personality, rather than cognitive, variable!

[An aside: it’s worth noting that the discovery that the two monkeys had different strategies, undergirded by different neural activity, only came about because the researcher was baffled by the inconsistencies in the data he was analyzing. As I’ve said before, our focus on group data often conceals many fascinating individual differences.]

The Beilock study indicates that the ‘correct’ speed — for thinking, for decision-making, for solving problems, for creating — will vary as a function of expertise and attentional demands (are you trying to do two things at once? Is something in your environment or your own thoughts distracting you?). In which regard, I want to mention another article I recently read — a blog post on EdWeek, on procedural fluency in math learning. That post referenced an article on timed tests and math anxiety (which I’m afraid is only available if you’re registered on the EdWeek site). This article makes the excellent point that timed tests are a major factor in developing math anxiety in young children. Which is a point I think we can generalize.

Thinking fast, for short periods of time, can produce effective results, and the rewarding mental state of flow. Being forced to try to think fast when you lack the necessary skills is stressful and unproductive. If you want to practice thinking fast, stick with skills or topics that you know well. If you want to think fast in areas in which you lack sufficient expertise, work on slowly and steadily building up that expertise first.

Daydreaming nurtures creativity?

Back in 2010, I read a charming article in the New York Times about a bunch of neuroscientists bravely disentangling themselves from their technology (email, cellphones, laptops, …) and going into the wilderness (rafting down the San Juan River) in order to get a better understanding of how heavy use of digital technology might change the way we think, and whether we can reverse the problem by immersing ourselves in nature.

One of those psychologists has now co-authored a study involving 56 people who participated in four- to six-day wilderness hiking, electronic-device-free, trips organized by Outward Bound schools. The study looked at the effect of this experience on creativity, comparing the performance of 24 participants who took the 10-item creativity test the morning before they began the trip, and 32 who took the test on the morning of the trip's fourth day.

Those few days in the wilderness increased performance on the task by 50% — from an average of 4.14 pre-trip to 6.08.

However, much as I like the idea, I have to say my faith in these results is not particularly great, given that there was a significant age difference between the two groups. The average age of the pre-hike group was 34, and that of the in-hike group 24. Why the researchers didn’t try to control for this I have no idea, but I’m not convinced by their statement that they statistically accounted for age effects — which are significant.

Moreover, this study doesn’t tell us whether the effect was due to the experience of nature, simply the experience of doing something different, or the unplugging from technology. Still, it adds to the growing research exploring Attention Restoration Theory.

[Photo: view from my office window]

I’m a great fan of nature myself, and count myself very fortunate to live surrounded by trees and within five minutes of a stream and bush (what people in other countries might call ‘woods’, though New Zealand bush is rather different). However, whether or not it is a factor in itself, there’s no denying other factors are also important — not least, perhaps, the opportunity to let your mind wander. “Mind wandering”, it has been suggested, evokes a unique mental state that allows otherwise opposing networks to work in cooperation, and stimulates problem-solving.

This is supported, perhaps, in another recent study. Again, I’m not putting too much weight on this, because it was a small study and most particularly because it was presented at a conference and very few details are available. But it’s an interesting idea, so let me give you the bullet points.

In the first study, 40 people were asked to copy numbers out of a telephone directory for 15 minutes before being asked to complete a more creative task (coming up with different uses for a pair of polystyrene cups). Those who had first copied out the telephone numbers (the most boring task the researchers could think of) were more creative than a control group of 40 who had simply been asked to come up with uses for the cups, with no preamble.

In a follow-up experiment, an extra experimental group was added — these people simply read the phone numbers. While, once again, those copying the numbers were more creative than the controls, those simply reading the numbers scored the most highly on the creativity test.

The researchers suggest that boring activities that allow the most scope for daydreaming can lead to the most creativity. (You can read more about this study in the press release and in a Huffington Post article by one of the researchers.)

Remembering other research suggesting that thinking about your experiences when living abroad can make you more creative, I would agree, in part, with this conclusion: I think doing a boring task can help creativity, if you are not simply bogged down in the feeling of boredom, if you use the time granted you to think about something else — but it does matter what you think about!

The wilderness experiment has two parts to it: like the boring task, but to a much greater degree (longer span of time), it provides an opportunity to let your mind run free; like the living-abroad experiment, it puts you in a situation where you are doing something completely different in a different place. I think both these things are very important — but the doing-something-different is more important than putting yourself in a boring situation! Boredom can easily stultify the brain. The significance of the boredom study is not that you should do boring tasks to become more creative, but that, if you are doing something boring (that doesn’t require much of your attention), you should let your thoughts wander into happy and stimulating areas, not just wallow in the tedium!

But of course the most important point of these studies is a reminder that creativity, the ability to think divergently, is not simply something a person 'has', but something that flowers or dwindles in different circumstances. If you want to encourage your ability to think laterally, to solve problems, to be creative, then you need to nurture that ability.

What babies can teach us about effective information-seeking and management

Here’s an interesting study that’s just been reported: 72 seven- and eight-month-old infants watched video animations of familiar fun items being revealed from behind a set of colorful boxes (see the 3-minute YouTube video). What the researchers found is that the babies reliably lost interest when the video became too predictable – and also when the sequence of events became too unpredictable.

In other words, there’s a level of predictability/complexity that is “just right” (the researchers are calling this the ‘Goldilocks effect’) for learning.

Now it’s true that the way babies operate is not necessarily how we operate. But this finding is consistent with other research suggesting that adult learners find it easier to learn and pay attention to material that is at just the right level of complexity/difficulty.

The findings help explain why some experiments have found that infants reliably prefer familiar objects, while other experiments have found instead a preference for novel items. Because here’s the thing about the ‘right amount’ of surprise or complexity — it’s a function of the context.

And this is just as true for us adults as it is for them.

We live in a world that’s flooded with information and change. Clay Shirky says: “There’s no such thing as information overload — only filter failure.” Brian Solis re-works this as: “information overload is a symptom of our inability to focus on what’s truly important or relevant to who we are as individuals, professionals, and as human beings.”

I think this is simplistic. Maybe that’s just because I’m interested in too many things and they all tie together in different ways, and because I believe, deeply, in the need to cross boundaries. We need specialists, sure, because every subject now has too much information even for a specialist to master. But maybe that’s what computers are going to be for. More than anything else, we need people who can see outside their specialty.

Part of the problem as we get older, I think, is that we expect too much of ourselves. We expect too much of our memory, and we expect too much of our information-processing abilities. Babies know it. Children know it. You take what you can; each taking is a step; on the next step you will take some more. And eventually you will understand it all.

Perhaps it is around adolescence that we get the idea that this isn’t good enough. Taking bites is for children; a grown-up person should be able to read a text/hear a conversation/experience an event and absorb it all. Anything less is a failure. Anything less is a sign that you’re not as smart as others.

Young children drive their parents crazy wanting the same stories read over and over again, but while the stories may seem simple to us, that’s because we’ve forgotten how much we’ve learned. Probably they are learning something new each time (and quite possibly we could learn something from the repetitions too, if we weren’t convinced we already knew it all!).

We don’t talk about the information overload our babies and children suffer, and yet, surely, we should. Aren’t they overloaded with information? When you think about all they must learn … doesn’t that put our own situation in perspective?

You could say they are filtering out what they need, but I don’t think that’s accurate. Because they keep coming back to pick out more. What they’re doing is taking bites. They’re absorbing what they need in small, attainable bites. Eventually they will get through the entire meal (leaving to one side, perhaps, any bits that are gristly or unpalatable).

The researchers of the ‘Goldilocks’ study tell parents they don’t need to worry about providing this ‘just right’ environment for their baby. Just provide a reasonably stimulating environment. The baby will pick up what they need at the time, and ignore the rest.

I think we can learn from this approach. First of all, we need to cultivate an awareness of the complexity of an experience (I’m using this as an umbrella word encompassing everything from written texts to personal events), recognizing that any experience must be considered in its context, and that what might appear (on present understanding) to be quite simple might become less so in the light of new knowledge. So the complexity of an event is not a fixed value, but one that reflects your relationship to it at that time. This suggests we need different information-management tools for different levels of complexity (e.g., tagging that enables you to easily pull out items that need repeated experiencing at appropriate occasions).
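
As a purely hypothetical sketch of what such a tool might look like (the structure, field names, and spacing rule below are all my own invention, not a description of any existing system), each item could carry tags, a rough complexity rating, and a record of when it was last revisited, so that the more complex items come back for another ‘bite’ sooner:

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class Item:
        title: str
        tags: set = field(default_factory=set)
        complexity: int = 1          # 1 = feels simple now, 5 = still confusing
        last_revisited: date = field(default_factory=date.today)

    def due_for_revisit(items, today=None):
        """More complex items come back sooner; simple ones can wait longer."""
        today = today or date.today()
        return [item for item in items
                if today - item.last_revisited >= timedelta(days=30 // item.complexity)]

    notes = [
        Item("Event segmentation paper", {"memory", "to-reread"}, complexity=4,
             last_revisited=date.today() - timedelta(days=10)),
        Item("Flow and difficulty", {"learning"}, complexity=2,
             last_revisited=date.today() - timedelta(days=3)),
    ]
    for item in due_for_revisit(notes):
        print("revisit:", item.title, sorted(item.tags))

The details don’t matter; the point is that the filing system, not your memory, carries the burden of knowing which experiences are ready for another pass.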

(Lucky) small children have an advantage (this is not the place to discuss the impact of ‘disadvantaged’ backgrounds) — the environment is set up to provide plenty of opportunities to re-experience the information they are absorbing in bites. We are not so fortunate. On the other hand, we have the huge advantage of having far more control over our environment. Babies may use instinct to control their information foraging; we must develop more deliberate skills.

We need to understand that we have different modes of information foraging. There is the wide-eyed, human-curious give-me-more mode — and I don’t think this is a mode to avoid. This wide, superficial mode is an essential part of what makes us human, and it can give us a breadth of understanding that can inform our deeper knowledge of specialist subjects. We may think of this as a recreational mode.

Other modes might include:

  • Goal mode: I have a specific question I want answered
  • Learning mode: I am looking for information that will help me build expertise in a specific topic
  • Research mode: I have expertise in a topic and am looking for information in a specific part of that domain
  • Synthesis mode: I have expertise in one topic and want information from other domains that would enrich my expertise and give me new perspectives

Perhaps you can think of more; I would love to hear other suggestions.

I think being consciously aware of what mode you are in, having specific information-seeking and information-management tools for each mode, and having the discipline to stay in the chosen mode, are what we need to navigate the information ocean successfully.

These are some first thoughts. I would welcome comments. This is a subject I would like to develop.