Practice counts! So does talent

The thing to remember about Ericsson’s famous expertise research, showing us the vital importance of deliberate practice in making an expert, is that it was challenging the long-dominant view that natural-born talent is all-important. But Gladwell’s popularizing of Ericsson’s “10,000 hours” overstates the case, and of course people are only too keen to believe that any height is achievable if you just work hard enough.

The much more believable story is that, yes, practice is vital — a great deal of the right sort of practice — but we can’t dismiss “natural” abilities entirely.

Last year I reported on an experiment in which 57 pianists with a wide range of deliberate practice (from 260 to more than 31,000 hours) were compared on their ability to sight-read. Number of hours of practice did indeed predict much of the difference in performance (nearly half) — but not all. Working memory capacity also had a statistically significant impact on performance, although this impact was much smaller (accounting for only about 7% of the performance difference). Nevertheless, there’s a clear consequence: given two players who have put in the same amount of effective practice, the one with the higher WMC is likely to do better. Why should WMC affect sight-reading? Perhaps by affecting how far ahead a player can look as she plays — a factor known to affect sight-reading performance.
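For readers who like to see where such “percentage of variance” figures come from: they are typically computed by hierarchical regression, where each predictor’s contribution is the increase in explained variance (R²) when it is added to the model. Here’s a minimal Python sketch with simulated data; the sample size matches the study, but the weights and noise level are my own invented assumptions, chosen only to mimic the reported pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 57  # same sample size as the sight-reading study

# Simulated (invented) predictors, z-scored: practice hours and WMC.
practice = rng.normal(size=n)
wmc = rng.normal(size=n)
# Performance driven mostly by practice, a little by WMC, plus noise.
performance = 0.7 * practice + 0.3 * wmc + rng.normal(scale=0.6, size=n)

def r_squared(X, y):
    """Proportion of variance in y explained by an OLS fit on X."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    return 1 - residuals.var() / y.var()

r2_practice = r_squared(practice[:, None], performance)
r2_both = r_squared(np.column_stack([practice, wmc]), performance)
print(f"practice alone: {r2_practice:.1%} of the variance")
print(f"adding WMC:    +{r2_both - r2_practice:.1%}")
```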

Interestingly, the effect of working memory capacity was quite independent of practice, and hours of practice apparently had no effect on WMC, although it’s possible (the study was too small to tell) that a lot of practice at an early age might affect WMC. After all, music training has been shown to increase IQ in children.

So, while practice is certainly the most important factor in developing expertise, other factors, some of them less amenable to training, have a role to play too.

But do general abilities such as WMC or intelligence matter once you’ve put in the requisite hours of good practice? It may be that ability becomes less important once you achieve expertise in a domain.

The question of whether WMC interacts with domain knowledge in this way has been studied by Hambrick and his colleagues in a number of experiments. One study used a memory task in which participants listened to fictitious radio broadcasts of baseball games and tried to remember major events and information about the players. Baseball knowledge had a very strong effect on performance, and WMC had a much smaller effect, but there was no interaction between the two. Similarly, in two poker tasks (one in which players assessed the likelihood of drawing a winning card, the other in which they remembered hands during a game), both poker knowledge and WMC affected performance, but again there was no interaction between domain knowledge and WMC.

Another study took a different tack. Participants were asked to remember the movements of spaceships flying from planet to planet in the solar system. What they didn’t know was that the spaceships flew in a pattern that matched the way baseball players run around a baseball diamond. They were then given the same task, this time with baseball players running around a diamond. Baseball knowledge only helped performance in the task in which the baseball scenario was explicit — activating baseball knowledge. But activation of domain knowledge had no effect on the influence of WMC.

Although these various studies fail to show an interaction between domain knowledge and WMC, this doesn’t mean that domain knowledge never interacts with basic abilities. The same researchers recently found such an interaction in a geological bedrock mapping task, in which the geological structure of a mountainous area had to be inferred. Visuospatial ability predicted performance only at low levels of geological knowledge; geological experts were not affected by their visuospatial abilities. Unfortunately, that study is not yet published, so I don’t know the details. But I assume they mean visuospatial working memory capacity.

It’s possible that general intelligence or WMC are most important during the first stages of skill acquisition (when attention and working memory capacity are so critical), and become far less important once the skill has been mastered.

Similarly, Ericsson has argued that deliberate practice allows performers to circumvent limits on working memory capacity. This is, indeed, related to the point I often make about how to functionally increase your working memory capacity — if you have a great amount of well-organized and readily accessible knowledge on a particular topic, you can effectively expand how much your working memory can hold by keeping a much larger amount of information ‘on standby’ in what has been termed long-term working memory.

Proponents of deliberate practice don’t deny that ‘natural’ abilities have some role, but they restrict it to motivation and general activity levels (plus physical attributes such as height, where relevant). But surely these would only affect the number of hours. Clearly the ability to keep yourself on task, to motivate and discipline yourself, bears on your ability to keep your practice up. And the general theory makes sense: if you show some interest in something, such as music or chess, when you’re young, your parents or teachers usually encourage you in that direction; this encouragement and these rewards lead you to spend more time and energy in that domain, and if you have enough persistence, enough dedication, then lo and behold, you’ll get better and better. And your parents will say, well, it was obvious from an early age that she was talented that way.

But is it really the case that attributes such as intelligence make no difference? Is it really as simple as “10,000 hours of deliberate practice = expert”? Is it really the case that each hour has the same effect on any one of us?

A survey of 104 chess masters found that, while all the players that became chess masters had practiced at least 3,000 hours, the amount of practice it took to achieve that mastery varied considerably. Although, consistent with the “10,000 hour rule”, average time to achieve mastery was around 11,000 hours, time ranged from 3,016 hours to 23,608 hours. The difference is even more extreme if you only consider individual practice (previous research has pointed to individual practice being of more importance than group practice): a range from 728 hours to 16,120 hours! And some people practiced more than 20,000 hours and still didn't achieve master level.

Moreover, a comparison of titled masters and untitled international players found that the two groups practiced the same number of hours in the first three years of their serious dedication to chess, and yet there were significant differences in their ratings. Is this because of some subtle difference in the practice, making it less effective? Or is it that some people benefit more from practice?

A comparison of various degrees of expertise in terms of starting age is instructive. While the average age of starting to play seriously was around 18 for players without an international rating, it was around 14 for players with an international rating, and around 11 for masters. But the amount of variability within each group varies considerably. For players without an international rating, the age range within one standard deviation of the mean is over 11 years, but for those with an international rating, FIDE masters, and international masters, the range is only 2-3 years, and for grand masters, the range is less than a year. [These numbers are all approximate, from my eyeball estimates of a bar graph.]

It has been suggested that the younger starting age of chess masters and expert musicians is simply a reflection of the greater amount of practice achieved with a young start. But a contrary suggestion is that there might be other advantages to learning a skill at an early age, reflecting what might be termed a ‘sensitive period’. This study found that the association between skill and starting age was still significant after amount of practice had been taken account of.

Does this have to do with the greater plasticity of young brains? Expertise “grows” brains — in the brain regions involved in that specific domain. Given that younger brains are much more able to create new neurons and new connections, it would hardly be a surprise that it’s easier for them to start building up the dense structures that underlie expertise.

This is surely easier if the young brain also has particular characteristics that are useful for that domain. For music, that might relate to perceptual and motor abilities. In chess, it might have more to do with processing speed, visuospatial ability, and capacious memory.

Several studies have found higher cognitive ability in chess-playing children, but the evidence among adults has been less consistent. This may reflect the growing importance of deliberate practice. (Or perhaps it simply reflects the fact that chess is a difficult skill, for which children, lacking the advantages that longer education and training have given adults, need greater cognitive skills.)

Related to all this, there’s a popular idea that once you get past an IQ of around 120, ‘extra’ IQ really makes no difference. But in a study involving over 2,000 gifted young people, those who scored in the 99.9 percentile on the math SAT at age 13 were eighteen times more likely to go on to earn a doctorate in a STEM discipline (science, technology, engineering, math) compared to those who were only(!) in the 99.1 percentile.

Overall, it seems that while practice can take you a very long way, at the very top, ‘natural’ ability is going to sort the sheep from the goats. And ‘natural’ ability may be most important in the early stages of learning. But what do we mean by ‘natural ability’? Is it simply a matter of unalterable genetics?

Well, palpably not! Because if there’s one thing we now know, it’s that nature and nurture are inextricably entwined. It’s not about genes; it’s about the expression of genes. So let me remind you that aspects of the prenatal, infant, and childhood environments affect that ‘natural’ ability. We know that these environments can affect IQ; the interesting question is what we can do, at each and any of these stages, to improve basic processes such as speed of processing, WMC, and inhibitory control. (Although I should say here that I am not a fan of the whole baby-Einstein movement! Nor is there evidence that many of those practices work.)

Bottom line:

  • talent still matters
  • effective practice is still the most important factor in developing expertise
  • individuals vary in how much practice they need
  • individual abilities do put limits on what’s achievable (but those limits are probably higher than most people realize).


References

Campitelli, G., & Gobet, F. (2011). Deliberate practice: Necessity is not sufficient. Current Directions in Psychological Science, 20(5), 280–285.

Campitelli, G., & Gobet, F. (2008). The role of practice in chess: A longitudinal study. Learning and Individual Differences, 18, 446–458.

Gobet, F., & Campitelli, G. (2007). The role of domain-specific practice, handedness and starting age in chess. Developmental Psychology, 43, 159–172.

Hambrick, D. Z., & Meinz, E. J. (2011). Limits on the Predictive Power of Domain-Specific Experience and Knowledge in Skilled Performance. Current Directions in Psychological Science, 20(5), 275 –279. doi:10.1177/0963721411422061

Hambrick, D.Z., & Engle, R.W. (2002). Effects of domain knowledge, working memory capacity and age on cognitive performance: An investigation of the knowledge-is-power hypothesis. Cognitive Psychology, 44, 339–387.

Hambrick, D.Z., Libarkin, J.C., Petcovic, H.L., Baker, K.M., Elkins, J., Callahan, C., et al. (2011). A test of the circumvention-of-limits hypothesis in geological bedrock mapping. Journal of Experimental Psychology: General, Published online Oct 17, 2011.

Hambrick, D.Z., & Oswald, F.L. (2005). Does domain knowledge moderate involvement of working memory capacity in higher level cognition? A test of three models. Journal of Memory and Language, 52, 377–397.

Meinz, E. J., & Hambrick, D. Z. (2010). Deliberate Practice Is Necessary but Not Sufficient to Explain Individual Differences in Piano Sight-Reading Skill. Psychological Science, 21(7), 914–919. doi:10.1177/0956797610373933

 

Attributes of effective practice

One of my perennial themes is the importance of practice, and in the context of developing expertise, I have talked of ‘deliberate practice’ (a concept articulated by the well-known expertise researcher K. Anders Ericsson). A new paper in the journal Psychology of Music reports on an interesting study that shows how the attributes of music practice change as music students develop in expertise. Music is probably the most studied domain in expertise research, but I think we can gain some general insight from this analysis. Here’s a summary of the findings.

[Some details about the U.K. study for those interested: the self-report study involved 3,325 children aged 6-19, ranging from beginner to Grade 8 level, covering a variety of instruments, with violin the most common at 28%, and coming from a variety of musical settings: junior conservatoires, youth orchestras, Saturday music schools, comprehensive schools.]

For a start, and unsurprisingly, amount of practice (both in terms of amount each day, and number of days in the week) steadily increases as expertise develops. Interestingly, there is a point where it plateaus (around grade 5-6 music exams) before increasing more sharply (presumably this reflects a ‘sorting the sheep from the goats’ effect — that is, after grade 6, it’s increasingly only the really serious ones that continue).

It should not be overlooked, however, that there was huge variability between individuals in this regard.

More interesting are the changes in the attributes of their practice.

 

These attributes became less frequent as the players became more expert:

Practicing strategies:

  • Practicing pieces from beginning to end without stopping
  • Going back to the beginning after a mistake

Analytic strategies:

  • Working things out by looking at the music without actually playing it
  • Trying to find out what a piece sounds like before trying to play it
  • Analyzing the structure of a piece before learning it

Organization strategies:

  • Making a list of what to practice
  • Setting targets for each session.

 

These attributes became more frequent as the players became more expert:

Practicing strategies:

  • Practicing small sections
  • Getting recordings of a piece that is being learned
  • Practicing things slowly
  • Knowing when a mistake has been made
  • When making a mistake, practicing a section slowly
  • When something is difficult, playing it over and over again
  • Marking things on the part
  • Practicing with a metronome
  • Recording practice and listening to the tapes

Analytic strategies:

  • Identifying difficult sections
  • Thinking about how to interpret the music

Organization strategies:

  • Doing warm-up exercises
  • Starting practice with studies
  • Starting practice with scales.

 

Somewhat surprisingly, levels of concentration and distractibility didn’t vary significantly as a function of level of expertise. The researchers suggest that this may reflect the reliance on self-reported data rather than reality. But, also somewhat surprisingly, enjoyment of practice didn’t change as a function of expertise either.

Interestingly (but perhaps not so surprisingly once you think about it), the adoption of systematic practicing strategies followed a U-shaped curve rather than a linear trend. Those who had passed Grade 1 scored relatively high on this, but those who had most recently passed Grade 2 scored more poorly, and those with Grade 3 were worst of all. After that, it begins to pick up again, achieving the same level at Grade 6 as at Grade 1.

Organization of practice, on the other hand, while it varied with level of expertise, showed no systematic relationship (if anything, it declined with expertise, though erratically).

The clearest result was the very steady and steep decline in the use of ineffective strategies. These include:

  • Practicing pieces from beginning to end without stopping;
  • Going back to the beginning after a mistake;
  • Immediate correction of errors.

It should be acknowledged that these strategies might well be appropriate at the beginning, but they are not effective with longer and more complex pieces. It’s suggested that the dip at Grade 3 probably reflects the need to change strategies, and the reluctance of some students to do so.

But of course grade level in itself is only part of the story. Analysis on the basis of how well the students did on their most recent exam (in terms of fail, pass, commended, and highly commended) reveals that organization of practice, and making use of recordings and a metronome, were the most important factors (in addition to the length of time they had been learning).

The strongest predictor of expertise, however, was the avoidance of ineffective strategies.

This is a somewhat discouraging conclusion, since it implies that the most important thing to learn (or teach) is what not to do, rather than what to do. But I think a codicil to this is also implicit. Given the time spent practicing (which steadily increases with expertise), the reduction in time wasted on ineffective strategies means that, perforce, time is being spent on effective strategies. The fact that no specific strategies can be unequivocally pointed to suggests that (as I have repeatedly said) effective strategies are specific to the individual.

This doesn’t mean that identifying effective strategies and their parameters is a pointless activity! Far from it. You need to know which strategies work in order to know what to choose from. But you cannot assume that, because something is the best strategy for your best friend, it is going to be equally good for you.

Notwithstanding this, the adoption of systematic practice strategies was significantly associated with expertise, accounting for the largest chunk of the variance between individuals — some 11%.

Similarly, organization of practice (accounting for nearly 8% of variance), making use of recordings and a metronome (nearly 8%), and analytic strategies (over 7%) were important factors in developing expertise in music, and it seems likely that many if not most individuals would benefit from these.

It’s also worth noting that playing straight through the music was the strongest predictor of expertise — as a negative factor.

So what general conclusions can we draw from these findings?

The wide variability in practice amount is worth noting — practice is hugely important, but it’s a mistake to have hard-and-fast rules about the exact number of hours that is appropriate for a given individual.

Learning which strategies are a waste of time is very important (and a lesson that many students never learn — witness the continuing popularity of rote repetition as a method of learning).

Organization — in respect of structuring your learning sessions — is perhaps one of those general principles that doesn’t necessarily apply to every individual, and certainly the nature and extent of organization is likely to vary by individual. Nevertheless, given its association with better performance, it is certainly worth trying to find the level of organization that is best for you (or your student). The most important factors in this category were starting practice with scales (for which appropriate counterparts are easily found for other skills being practiced, including language learning, although perhaps less appropriate for other forms of declarative learning), and making a list of what needs to be practiced.

Having expert models/examples/case studies (as appropriate), and appropriate levels of scaffolding, are very helpful (in the case of music, this is instantiated by the use of recordings, both listening to others and self-feedback, and use of a metronome).

Identifying difficult aspects, and dealing with them by tackling them on their own, using a slow and piecemeal process, is usually the most helpful approach. (Of the practice strategies, the most important were practicing sections slowly when having made a mistake, practicing difficult sections over and over again, slow practice, gradually speeding up when learning fast passages, and recognizing errors.)

Preparing for learning is also a generally helpful strategy. In music this is seen in the most effective analytic strategies: trying to find out what a piece sounds like before trying to play it, and getting an overall idea of a piece before practicing it. In declarative learning (as opposed to skill learning), this can be seen in such strategies as reading the Table of Contents, advance organizers and summaries (in the case of textbooks), or doing any required reading before a lecture, and (in both cases) thinking about what you expect to learn from the book or lecture.


References

Hallam, S., Rinta, T., Varvarigou, M., Creech, A., Papageorgi, I., Gomes, T., & Lanipekun, J. (2012). The development of practising strategies in young people. Psychology of Music, 40(5), 652–680. doi:10.1177/0305735612443868

The value of intensive practice

Let’s talk about the cognitive benefits of learning and using another language.

In a recent news report, I talked about the finding that intensive learning of a very novel language significantly grew several brain regions, of which two were positively associated with language proficiency. These regions were the right hippocampus and the left superior temporal gyrus. Growth of the first of these probably reflects the learning of a great many new words, and the second may reflect heavy use of the phonological loop (a part of working memory).

There are several aspects to this study that are worth discussing in the context of using language learning as a means of protecting against age-related cognitive decline.

First of all, let me start with a general reminder. We now know that, analogous to muscles, we can ‘grow’ specific brain regions by working them. But an adult brain is confined by the skull — growth in one part is generally at the expense of another part. So, unlike body-building, you can’t just grow your whole brain!

This suggests that it pays to think about the areas you want to improve (which goes right back to the first chapter of The Memory Key: it’s no good talking about improving ‘your memory’ — rather, you should pick the memory tasks you want to improve).

One of the big advantages of growing the parts of the brain involved in language is that language is so utterly critical to our intellectual ability. Most of us use language to think and to communicate. There’s a reason why so many studies of older adults’ cognitive performance use verbal fluency as the measure!

But, in the same way that the increase in London cab drivers’ right posterior hippocampus appears to be at the expense of the anterior hippocampus, the growth in the right hippocampus may be at the expense of other functions (perhaps spatial navigation).

Is this a reason for not learning? Certainly not! But it is perhaps a reminder that we should be aiming for two things in preventing cognitive decline. The first is in ‘growing’ brain tissue: making new neurons, and new connections. This is to counteract the shrinkage (brain atrophy) that tends to occur with age.

The second concerns flexibility. Retaining the brain’s plasticity is a vital part of fighting cognitive decline, even more vital, perhaps, than retaining brain tissue. To keep this plasticity, we need to keep the brain changing.

Here’s a question we don’t yet know the answer to: how much age-related cognitive decline is down to people steadily experiencing fewer and fewer novel events, learning less, thinking fewer new thoughts?

But we do know it matters.

So let’s go back to our intensive language learners growing parts of their brain. Does the growth in the right hippocampus (unfortunately we don’t know how much that growth was localized within the right hippocampus) mean that it will now remain that size, at the expense, presumably, of some other area (and function)?

No, it doesn’t. As far as language is concerned, the hippocampus is primarily a short-term processor. As those new words are consolidated, they’ll move into long-term memory, in the language network across the cortex. Once these intensive learners stop acquiring new vocabulary at such a rate, I would expect to see this region reduce. Indeed (and I am speculating here), I would expect this to happen once a solid ‘semantic network’ for the new language was established in long-term memory. At that point, new vocabulary will increasingly be encoded in terms of that network, and reliance on the short-term processes of the hippocampus will lessen (although remain important!).

I think that intensity is important. Intensity by its very nature is rarely maintained. People at the top of their field — champion sportspeople, top-ranking musicians, ‘geniuses’, and so on — have to maintain that intensity as long as they want to stay at the top, and I would expect their brains to show more enduring changes (that is, particular regions that are unusually large, and others that are smaller than average). For the rest of us, any enduring changes are less marked.

But making those changes is important!

In recent years, research has come to suggest that, although regular moderate exercise is highly beneficial for physical and mental health, short bouts of intense activity have their own specific benefits above and beyond that. I think the same might be true for mental activity.

This may be particularly (or differently) true as we get older, when it does tend to get harder to learn — making (relatively) short bouts of intensive study/learning/activity so vital. We need that concentrated practice more than we did when we were young and learning came easier. And concentrated practice may be exactly the way to produce significant change in our brains.

But we don’t need to worry about becoming ‘muscle-bound’ — if we learn thousands of new words in a few months (an excellent step in acquiring a new language), we will then go on to acquire grammar and practice reading and writing whole sentences. The words will consolidate; different language skills will build different parts of the brain; those areas no longer being intensively worked will diminish (a little).

Moreover, it’s not only about growing particular regions, it’s also very much about building new or stronger connections between regions — building new networks. Because language learning involves so many regions, it may be especially good for that aspect too (see, for example, another recent news report, on how language learning grows white matter and reorganizes brain structures).

The important thing is that your brain is changing; the important thing is that your brain keeps changing. I think intensive periods of new learning are the way to achieve this, interspersed with consolidation periods.

As I’ve said before, variety is key. By providing variety in learning and experiences across tasks and domains, you can keep your brain flexible. By providing intense focus for a period, you can better build specific ‘mental muscles’.


How working memory works: What you need to know

A New Yorker cartoon has a man telling his glum wife, “Of course I care about how you imagined I thought you perceived I wanted you to feel.” There are a number of reasons you might find that funny, but the point here is that it is very difficult to follow all the layers. This is a sentence in which mental attributions are made to the 6th level, and this is just about impossible for us to follow without writing it down and/or breaking it down into chunks.

According to one study, while we can comfortably follow a long sequence of events (A causes B, which leads to C, thus producing D, and so on), we can only comfortably follow four levels of intentionality (A believes that B thinks C wants D). At the 5th level (A wants B to believe that C thinks that D wants E), error rates rose sharply to nearly 60% (compared to 5-10% for all levels below that).

Why do we have so much trouble following these nested events, as opposed to a causal chain?

Let’s talk about working memory.

Working memory (WM) has evolved over the years from a straightforward “short-term memory store” to the core of human thought. It’s become the answer to almost everything, invoked for everything related to reasoning, decision-making, and planning. And of course, it’s the first and last port of call for all things memory — to get stored in long-term memory an item first has to pass through WM, where it’s encoded; when we retrieve an item from memory, it again passes through WM, where the code is unpacked.

So, whether or not the idea of working memory has been over-worked, there is no doubt at all that it is utterly crucial for cognition.

Working memory has also been equated with attentional control, and the terms ‘working memory’ and ‘attention’ are often used almost interchangeably. Working memory capacity (WMC) varies among individuals, and those with a higher WMC have an obvious advantage in reasoning, comprehension, and remembering. No surprise, then, that WMC correlates highly with fluid intelligence.

So let’s talk about working memory capacity.

The idea that working memory can hold 7 (+/-2) items has passed into popular culture (the “magic number 7”). More recent research, however, has circled around the number 4 (+/-1). Not only that, but a number of studies suggest that in fact the true number of items we can attend to is only one. What’s the answer? (And where does it leave our high- and low-capacity individuals? There’s not a lot of room to vary there.)

Well, in one sense, 7 is still fine — that’s the practical sense. Seven items (5-9) is about what you can hold if you can rehearse them. So those who are better able to rehearse and chunk will have a higher working memory capacity (WMC). That will be affected by processing speed, among other factors.

But there is a very large body of evidence now pointing to working memory holding only four items, and a number of studies indicating that most likely we can only pay attention to one of these items at a time. So you can envision this either as a focus of attention, which can only hold one item, and a slightly larger “outer store” or area of “direct access” which can hold another three, or as a mental space holding four items of which only one can be the focus at any one time.

A further tier, which may be part of working memory or part of long-term memory, probably holds a number of items “passively”. That is, these are items you’ve put on the back burner; you don’t need them right at the moment, but you don’t want them to go too far either. (See my recent news item for more on all this.)

At present, we don’t have any idea how many items can be in this slightly higher state of activation. However, the “magic number 7” suggests that you can circulate 3 (+/-1) items from the back burner into your mental space. In this regard, it’s interesting to note that, in the case of verbal material, the amount you can hold in working memory with rehearsal has been found to equate more accurately to 2 seconds than to 7 items. That is, you can remember as much as you can verbalize in about 2 seconds (so, yes, fast speakers have a distinct advantage over slower ones). You can see why processing speed affects WMC.
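A quick back-of-the-envelope illustration of that 2-second rule (the articulation rates below are invented for illustration, not measured values):

```python
# If the rehearsal loop holds about 2 seconds of inner speech, verbal
# span depends on how fast you can articulate the items.
REHEARSAL_WINDOW_S = 2.0

def verbal_span(items_per_second):
    return REHEARSAL_WINDOW_S * items_per_second

print(verbal_span(3.5))  # a fast speaker: ~7 items
print(verbal_span(2.5))  # a slower speaker: ~5 items
```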

Whether you think of WM as a focus of one and an outer store of 3, or as a direct access area with 4 boxes and a spotlight shining on one, it’s a mental space or blackboard where you can do your working out. Thinking of it this way makes it easier to conceptualize and talk about, but these items are probably not going into a special area as such. The thought now is that these items stay in long-term memory (in their relevant areas of association cortex), but they are (a) highly activated, and (b) connected to the boxes in the direct access area (which is possibly in the medial temporal lobe). This connection is vitally important, as we shall see.

Now, four may not seem like much, but WM is not quite as limited as it seems, because we have different systems for verbal (including numerical) and visuospatial information. Moreover, we can probably distinguish between the items and the processing of them, which equates to a distinction between declarative and procedural memory. So that gives us three working memory areas: verbal declarative; visuospatial declarative; procedural.

Now all of this may seem more than you needed to know, but breaking down the working memory system helps us discover two things of practical interest. First, which particular parts of the system are the parts that make a task more difficult. Second, where individual differences come from, and whether they are in aspects that are trainable.

For example, this picture of a mental space with a focus of one and a maximum of three eager beavers waiting their turn points to an important aspect of the working memory system: switching the focus. Experiments reveal that there is a large focus-switching cost, incurred whenever you have to switch the item in the spotlight. And the extent of this cost has been surprising — around 240ms in one study, which is about six times the length of time it takes to scan an item in a traditional memory-search paradigm.
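To make this concrete, here’s a toy sketch of the ‘four boxes plus a spotlight’ picture with that switch cost bolted on. It is my own illustrative construction, not a published model, and the 240ms figure is just the rough value from the study above.

```python
SWITCH_COST_MS = 240  # rough focus-switch cost from the study above

class DirectAccessArea:
    """Toy model: four slots holding bindings to long-term memory
    items, with a spotlight (focus) on one slot at a time."""

    def __init__(self, n_slots=4):
        self.slots = [None] * n_slots  # bindings to items in LTM
        self.focus = 0                 # index of the spotlighted slot
        self.elapsed_ms = 0            # running tally of switch costs

    def bind(self, slot, item):
        """Bind an item to a slot (bring it into working memory)."""
        self.slots[slot] = item

    def attend(self, slot):
        """Shift the spotlight; free if already focused on that slot."""
        if slot != self.focus:
            self.elapsed_ms += SWITCH_COST_MS
            self.focus = slot
        return self.slots[self.focus]

wm = DirectAccessArea()
for i, item in enumerate(["A", "B", "C", "D"]):
    wm.bind(i, item)

wm.attend(0); wm.attend(2); wm.attend(2); wm.attend(1)
print(wm.elapsed_ms)  # 480: only two of the four calls switched focus
```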

But focus-switch costs aren’t a constant. They vary considerably depending on the difficulty of the task, and they also tend to increase with each item in the direct-access area. Indeed, just having one item in the space outside the focus causes a significant loss of efficiency in processing the focused item.

This may reflect increased difficulty in discriminating one highly activated item from other highly activated items. This brings us to competition, which, in its related aspects of interference and inhibition, is a factor probably more crucial to WMC than whether you have 3 or 4 or 5 boxes in your direct access area.

But before we discuss that, we need to look at another important aspect of working memory: updating. Updating is closely related to focus-switching, and it’s easy to get confused between them. But it’s been said that working memory updating (WMU) is the only executive function that correlates with fluid intelligence, and updating deficits have been suggested as the reason for poor comprehension (also correlated with low-WMC). So it’s worth spending a little time on.

To get the distinction clear in your mind, imagine the four boxes and the spotlight shining on one. Any time you shift the spotlight, you incur a focus-switching cost. If you don’t have to switch focus, if you simply need to update the contents of the box you’re already focusing on, then there will be an update cost, but no focus-switching cost.

Updating involves three components: retrieval; transformation; substitution. Retrieval simply involves retrieving the contents from the box. Substitution involves replacing the contents with something different. Transformation involves an operation on the contents of the box to get a new value (eg, when you have to add a certain number to an earlier number).
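As a concrete (and deliberately simplified) picture of those three components, consider the classic updating step "add 3 to the number in this box". The decomposition below is my own sketch of the idea:

```python
def update_box(slots, slot, transform):
    value = slots[slot]           # retrieval: read the box's contents
    new_value = transform(value)  # transformation: compute a new value
    slots[slot] = new_value       # substitution: overwrite the contents
    return new_value

slots = [7, 2, 5]                      # three boxes of digits
update_box(slots, 1, lambda x: x + 3)  # "add 3 to the middle box"
print(slots)                           # [7, 5, 5]
```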

Clearly the difficulty in updating working memory will depend on which of these components is involved. So which of these processes is most important?

In terms of performance, the most important component is transformation. While all three components contribute to the accuracy of updating, retrieval apparently doesn’t contribute to speed of updating. For both accuracy and speed, substitution is less important than transformation.

This makes complete sense: obviously having to perform an operation on the content is going to be more difficult and time-consuming than simply replacing it. But it does help us see that the most important factor in determining the difficulty of an updating task will be the complexity of the transformation.

The finding that retrieval doesn’t affect speed of updating sounds odd, until you realize the nature of the task used to measure these components. The number of items was held constant (always three), and the focus switched from one box to another on every occasion, so focus-switching costs were constant too. What the finding says is that once you’ve shifted your focus, retrieval takes no time at all — the spotlight is shining and there the answer is. In other words, there really is no distinction between the box and its contents when the spotlight is on it — you don’t need to open the box.

However, retrieval does affect accuracy, and this implies that something is degrading or interfering in some way with the contents of the boxes. Which takes us back to the problems of competition / interference.

But before we get to that, let’s look at this issue of individual differences, because like WMC, working memory updating correlates with fluid intelligence. Is this just a reflection of WMC?

Differences in transformation accuracy correlated significantly with WMC, as did differences in retrieval accuracy. Substitution accuracy didn’t vary enough to have measurable differences. Neither transformation nor substitution speed differences correlated with WMC. This implies that the reason why people with high WMC also do better at WMU tasks is because of the transformation and retrieval components.

So what about the factors that aren’t correlated with WMC? The variance in transformation speed is argued to primarily reflect general processing speed. But what’s going on in substitution that isn’t going on when WMC is measured?

Substitution involves two processes: removing the old contents of the box, and adding new content. In terms of the model we’ve been using, we can think of unbinding the old contents from the box, and binding new contents to it (remember that the item in the box is still in its usual place in the association cortex; it’s “in” working memory by virtue of the temporary link connecting it to the box). Or we can think of it as deleting and encoding.

Consistent with substitution not correlating with WMC, there is some evidence that high- and low-WMC individuals are equally good at encoding. Where high- and low-WMC individuals differ is in their ability to prevent irrelevant information being encoded with the item. Which brings me to my definition of intelligence (one I arrived at 30 years ago, before these ideas had even been developed, so I came at it from quite a different angle): the ability to (quickly) select what’s important.

So why do low-WMC people tend to be poorer at leaving out irrelevant information?

Well, that’s the $64,000 question, but relatedly, it’s been suggested that those with low working memory capacity are less able to resist capture by distracting stimuli than those with high WMC. A new study, however, provides evidence that low- and high-WMC individuals are equally easily captured by distracters. What distinguishes the two groups is the ability to disengage. High-capacity people are faster at putting aside irrelevant stimuli. They’re faster at deleting. And this, it seems, is unrelated to WMC.

This is supported by another recent finding, that when interrupted, older adults find it difficult to disengage their brain from the new task and restore the original task.

So what’s the problem with deleting / removing / putting aside items in focus? This is about inhibition, which takes us once again to competition / interference.

Now interference occurs at many different levels: during encoding, retrieval, and storage; with items, with tasks, with responses. Competition is ubiquitous in our brain.

In the case of substitution during working memory updating, it’s been argued that the contents of the box are not simply removed and replaced, but instead gradually over-written by the new contents. This fits with a view of items as assemblies of lower-level ‘feature units’. Clearly, items may share some of these units with other items (reflected in their similarity), and clearly the more items compete for these units, the greater the interference between them.
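One way to picture this (a rough sketch with invented feature sets, not a quantitative model) is to treat each item as a set of feature units and use the overlap between sets as a stand-in for how strongly two items compete:

```python
def overlap(item_a, item_b):
    """Jaccard overlap: shared features over all features involved."""
    return len(item_a & item_b) / len(item_a | item_b)

red_cow = {"red", "animal", "cow", "four-legged"}
brown_cow = {"brown", "animal", "cow", "four-legged"}
red_car = {"red", "vehicle", "car", "wheels"}

print(overlap(red_cow, brown_cow))  # 0.6  -> strong competition
print(overlap(red_cow, red_car))    # ~0.14 -> weak competition
```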

You can see why it’s better to keep your codes (items) “lean and mean”, free of any irrelevant information.

Indeed, some theorists completely discard the idea of number of items as a measure of WMC, and talk instead in terms of “noise”, with processing capacity being limited by such factors as item complexity and similarity. While there seems little justification for discarding our “4+/-1”, which is much more easily quantified, this idea does help us get to grips with the concept of an “item”.

What is an item? Is it “red”? “red cow”? “red cow with blue ribbons round her neck”? “red cow with blue ribbons and the name Isabel painted on her side”? You see the problem.

An item is a fuzzy concept. We can’t say, “it’s a collection of 6 feature units” (or 4 or 14 or 42). So we have to go with a less defined description: it’s something so tightly bound that it is treated as a single unit.

Which means it’s not solely about the item. It’s also about you, and what you know, and how well you know it, and what you’re interested in.

To return to our cases of difficulty in disengaging, perhaps the problem lies in the codes being formed. If your codes aren’t tightly bound, then they’re going to start to degrade, losing some of their information, losing some of their distinctiveness. This is going to make them harder to re-instate, and it’s going to make them less distinguishable from other items.

Why should this affect disengagement?

Remember what I said about substitution being a gradual process of over-writing? What happens when your previous focus and new focus have become muddled?

This also takes us to the idea of “binding strength” — how well you can maintain the bindings between the contents and their boxes, and how well you can minimize the interference between them (which relates to how well the items are bound together). Maybe the problem with both disengagement and reinstatement has to do with poorly bound items. Indeed, it’s been suggested that the main limiting factor on WMC is in fact binding strength.

Moreover, if people vary in their ability to craft good codes, if people vary in their ability to discard the irrelevant and select the pertinent, to bind the various features together, then the “size” (the information content) of an item will vary too. And maybe this is what is behind the variation in “4 +/-1”, and experiments which suggest that sometimes the focus can be increased to 2 items. Maybe some people can hold more information in working memory because they get more information into their items.

So where does this leave us?

Let’s go back to our New Yorker cartoon. The difference between a chain of events and the nested attributions is that chaining doesn’t need to be arranged in your mental space because you don’t need to keep all the predecessors in mind to understand it. On the other hand, the nested attributions can’t be understood separately or even in partitioned groups — they must all be arranged in a mental space so we can see the structure.

We can see now that “A believes that B thinks C wants D” is easy to understand because we have four boxes in which to put these items and arrange them. But our longer nesting, “A wants B to believe that C thinks that D wants E”, is difficult because it contains one more item than we have boxes. No surprise there was a dramatic drop-off in understanding.

So given that you have to fill your mental space, what is it that makes some tasks more difficult than others?

  • The complexity and similarity of the items (making it harder to select the relevant information and bind it all together).
  • The complexity of the operations you need to perform on each item (the longer the processing, the more tweaking you have to do to your item, and the more time and opportunity for interference to degrade the signal).
  • Changing the focus (remember our high focus-switching costs).

But in our 5th level nested statement, the error rate was 60%, not 100%, meaning a number of people managed to grasp it. So what’s their secret? What is it that makes some people better than others at these tasks?

They could have 5 boxes (making them high-WMC). They could have sufficient processing speed and binding strength to unitize two items into one chunk. Or they could have the strategic knowledge to enable them to use the other WM system (transforming verbal data into visuospatial). All these are possible answers.


This has been a very long post, but I hope some of you have struggled through it. Working memory is the heart of intelligence, the essence of attention, and the doorway to memory. It is utterly critical, and cognitive science is still trying to come to grips with it. But we’ve come a very long way, and I think we now have sufficient theoretical understanding to develop a model that’s useful for anyone wanting to understand how we think and remember, and how they can improve their skills.

There is, of course, far more that could be said about working memory (I’ve glossed over any number of points in an effort to say something useful in less than 50,000 words!), and I’m planning to write a short book on working memory, its place in so many educational and day-to-day tasks, and what we can do to improve our skills. But I hope some of you have found this enlightening.

References

Clapp, W. C., Rubens, M. T., Sabharwal, J., & Gazzaley, A. (2011). Deficit in switching between functional brain networks underlies the impact of multitasking on working memory in older adults. Proceedings of the National Academy of Sciences. doi:10.1073/pnas.1015297108

Ecker, U. K. H., Lewandowsky, S., Oberauer, K., & Chee, A. E. H. (2010). The components of working memory updating: An experimental decomposition and individual differences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 170–189. doi:10.1037/a0017891

Fukuda, K., & Vogel, E. K. (2011). Individual Differences in Recovery Time From Attentional Capture. Psychological Science, 22(3), 361 -368. doi:10.1177/0956797611398493

Jonides, J., Lewis, R. L., Nee, D. E., Lustig, C., Berman, M. G., & Moore, K. S. (2008). The mind and brain of short-term memory. Annual Review of Psychology, 59, 193–224. doi:10.1146/annurev.psych.59.103006.093615

Kinderman, P., Dunbar, R. I. M., & Bentall, R. P. (1998). Theory-of-mind deficits and causal attributions. British Journal of Psychology, 89, 191–204.

Lange, E. B., & Verhaeghen, P. (in press). No age differences in complex memory search: Older adults search as efficiently as younger adults. Psychology and Aging.

Oberauer, K., Süß, H.-M., Schulze, R., Wilhelm, O., & Wittmann, W. W. (2000). Working memory capacity — facets of a cognitive ability construct. Personality and Individual Differences, 29(6), 1017–1045. doi:10.1016/S0191-8869(99)00251-2

Oberauer, K. (2005). Control of the contents of working memory: A comparison of two paradigms and two age groups. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(4), 714–728. doi:10.1037/0278-7393.31.4.714

Oberauer, K. (2006). Is the focus of attention in working memory expanded through practice? Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(2), 197–214. doi:10.1037/0278-7393.32.2.197

Oberauer, K. (2009). Design for a working memory. Psychology of Learning and Motivation, 51, 45–100.

Verhaeghen, P., Cerella, J., & Basak, C. (2004). A working memory workout: How to expand the focus of serial attention from one to four items in 10 hours or less. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(6), 1322–1337.

Event boundaries and working memory capacity

In a recent news report, I talked about how walking through doorways creates event boundaries, requiring us to update our awareness of current events and making information about the previous location less available. I commented that we should be aware of the consequences of event boundaries for our memory, and how these contextual factors are important elements of our filing system. I want to talk a bit more about that.

One of the hardest, and most important, things to understand about memory is how the various types of memory relate to each other. Of course, the biggest problem here is that we don’t really know! But we do have a much greater understanding than we used to, so let’s see if I can pull out some salient points and draw a useful picture.

Let’s start with episodic memory. Now, episodic memory is sometimes called memory for events, and that is reasonable enough, but it perhaps gives an inaccurate impression because of the common usage of the term ‘event’. The fact is, everything you experience is an event; to put it another way, a lifetime is one long event, broken into many, many episodes.

Similarly, we break continuous events into segments. This was demonstrated in a study ten years ago, which found that when people watched movies of everyday events, such as making the bed or ironing a shirt, brain activity showed that the event was automatically parsed into smaller segments. Moreover, changes in brain activity were larger at large boundaries (that is, the boundaries of large, superordinate units) and smaller at small boundaries (the boundaries of small, subordinate units).

Indeed, following research showing the same phenomenon when people merely read about everyday activities, this is thought to reflect a more general disposition to impose a segmented structure on events and activities (“event structure perception”).

Event Segmentation Theory postulates that perceptual systems segment activity as a side effect of trying to predict what’s going to happen. Changes in the activity make prediction more difficult and cause errors. So these are the points when we update our memory representations to keep them effective.

Such changes cover a wide gamut, from changes in movement to changes in goals.
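Event Segmentation Theory can be caricatured in a few lines of code: run a predictor over the incoming stream, and posit an event boundary wherever prediction error spikes. The sketch below is my own construction, with invented data and threshold, using the simplest possible predictor (expect no change):

```python
def segment(stream, threshold=2.0):
    """Mark a boundary wherever prediction error jumps."""
    boundaries = []
    for t in range(1, len(stream)):
        predicted = stream[t - 1]           # naive predictor: no change
        error = abs(stream[t] - predicted)  # prediction error this step
        if error > threshold:               # error spike = new event
            boundaries.append(t)
    return boundaries

# A stream with two abrupt shifts (think: walking through a doorway).
activity = [1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.9, 9.0, 9.1]
print(segment(activity))  # [4, 7] - the two points of abrupt change
```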

If you’ve been following my blog, the term ‘updating’ will hopefully bring to mind another type of memory — working memory. In my article How working memory works: What you need to know, I talked about the updating component of working memory at some length. I mentioned that updating may be the crucial component behind the strong correlation between working memory capacity and intelligence, and that updating deficits might underlie poor comprehension. I distinguished between three components of updating (retrieval; transformation; substitution), and how transformation was the most important for deciding how accurately and how quickly you can update your contents in working memory. And I discussed how the most important element in determining your working memory ‘capacity’ seems to be your ability to keep irrelevant information out of your memory codes.

So this event segmentation research suggests that working memory updating occurs at event boundaries. This means that information before the boundary becomes less accessible (hence the findings from the walking through doorways studies). But event boundaries relate not only to working memory (keeping yourself updated to what’s going on) but also to long-term storage (we’re back to episodic memory now). This is because those boundaries are encoded particularly strongly — they are anchors.

Event boundaries are beginnings and endings, and we have always known that beginnings and endings are better remembered than middles. In psychology this is known formally as the primacy and recency effects. In a list of ten words (that favorite subject of psychology experiments), the first two or three items and the last two or three items are the best remembered. The idea of event boundaries gives us a new perspective on this well-established phenomenon.

Studies of reading have shown that readers slow down at event boundaries, when they are hypothesized to construct a new mental model. Such boundaries occur when the action moves to a new place, or a new time, or new characters enter the action, or a new causal sequence is begun. The most important of these is changes in characters and their goals, and changes in time.

As I’ve mentioned before, goals are thought to play a major role (probably the major role) in organizing our memories, particularly activities. Goals produce hierarchies — any task can be divided into progressively smaller elements. Research suggests that higher-order events (coarse-grained, to use the terminology of temporal grains) and lower-order events (fine-grained) are sensitive to different features. For example, in movie studies, coarse-grained events were found to focus on objects, using more precise nouns and less precise verbs. Finer-grained events, on the other hand, focused on actions on those objects, using more precise verbs but less precise nouns.

The idea that these are separate tasks is supported by the finding of selective impairments of coarse-grained segmentation in patients with frontal lobe lesions and patients with schizophrenia.

While we automatically organize events hierarchically (even infants seem to be sensitive to hierarchical organization of behavior), that doesn’t mean our organization is always effortlessly optimal. It’s been found that we can learn new procedures more easily if the hierarchical structure is laid out explicitly — contrariwise, we can make it harder to learn a new procedure by describing or constructing the wrong structure.

Using these hierarchical structures helps us because it lets us use knowledge we already have in memory. We can co-opt chunks of other events or activities and plug them in. (You can see how this relates to transfer — the more chunks a new activity shares with a familiar one, the more quickly you can learn it.)

Support for the idea that event boundaries serve as anchors comes from several studies. For example, when people watched feature films with or without commercials, their recall of the film was better when there were no commercials or the commercials occurred at event boundaries. Similarly, when people watched movies of everyday events with or without bits removed, their recall was better if there were no deletions or the deletions occurred well within event segments, preserving the boundaries.

It’s also been found that we remember details better if we’ve segmented finely rather than coarsely, and some evidence supports the idea that people who segment effectively remember the activity better.

Segmentation, the theory suggests, helps us anticipate what’s going to happen. This in turn helps us adaptively create memory codes that best reflect the structure of events; and by simplifying the event stream into a number of chunks, many if not most of which are already encoded in your database, you save on processing resources while also improving your understanding of what’s going on (because those already-coded chunks have already been processed).

All this emphasizes the importance of segmenting well, which means you need to be able to pinpoint the correct units of activity. One way we do that is by error monitoring. If we are monitoring our ongoing understanding of events, we will notice when that understanding begins to falter. We also need to pay attention to the ordering of events and the relationships between sequences of events.

The last type of memory I want to mention is semantic memory. Semantic memory refers to what we commonly think of as ‘knowledge’. This is our memory of facts, of language, of generic information that is untethered from specific events. But all this information first started out as episodic memory — before you ‘knew’ the word for cow, you had to experience it (repeatedly); before you ‘knew’ what happens when you go to the dentist, you had to (repeatedly) go to the dentist; before you ‘knew’ that the earth goes around the sun, there were a number of events in which you heard or read that fact. To get to episodic memory (your memory for specific events), you must pass through working memory (the place where you put incoming information together into some sort of meaningful chunk). To get to semantic memory, the information must pass through episodic memory.

How does information in episodic memory become information in semantic memory? Here we come to the process of reconstruction, of which I have often spoken (see my article on memory consolidation for some background on this). The crucial point here is that memories are rewritten every time they are retrieved.

Remember, too, that neurons are continually being reused — memories are held in patterns of activity, that is, networks of neurons, not individual neurons. Individual neurons may be involved in any number of networks. Here’s a new analogy for the brain: think of a manuscript, one of those old parchments so precious that it was scraped clean and re-used repeatedly (a palimpsest). Modern technology can reveal those imperfectly erased hidden layers. So the neural networks that are memory codes may be thought of as imposed one on top of another, none of them matching, as different patterns re-use the same individual neurons. The strongest patterns are the most accessible; patterns that are most similar (use more of the same neurons) will provide the most competition.

So, say you are told by your teacher that the earth goes around the sun. That’s the first episode, and there’ll be lots of contextual detail that relates to that particular event. Then on another occasion, you read a book showing how the earth goes around the sun. Again, lots of episodic detail, of which some will be shared with the first incident, and some will be different. Another episode, more detail, some shared, some not. And so on, again and again, until the extraneous details, irrelevant to the fact and always different, are lost, while those details that are common to all the episodes grow strong, and form a new, tight chunk of information in semantic memory.
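To make that winnowing concrete, here’s a toy sketch (my own illustration with invented details, not a model from the literature) of how the details shared by every episode survive while one-off context drops away:

```python
from collections import Counter

# Each 'episode' is the set of details encoded on one occasion.
episodes = [
    {"earth goes around the sun", "teacher", "classroom", "rainy day"},
    {"earth goes around the sun", "book", "diagram", "at home"},
    {"earth goes around the sun", "planetarium", "diagram", "school trip"},
]

# Count how often each detail recurs across episodes.
counts = Counter(detail for episode in episodes for detail in episode)

# Keep only the details present in every episode: the always-shared core.
chunk = {detail for detail, n in counts.items() if n == len(episodes)}
print(chunk)  # {'earth goes around the sun'}: the detail that survives as 'fact'
```

Real consolidation is of course gradual and graded rather than a hard threshold, but the principle is the same: repetition strengthens what is shared.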

Event boundaries start off with an advantage — they are beginnings or endings, to which we are predisposed to attend (for obvious reasons). So they start off stronger than other bits of information, and by their nature are more likely to be vital elements that will always co-occur with the event. So, if you have chosen your boundaries well (i.e., they truly are vital elements), they will become stronger with each episode, and will end up as vital parts of the chunk in semantic memory.

Let’s connect that thought back to my comment that the most important difference between those with ‘low’ working memory capacity and those with ‘high’ capacity is the ability to select the ‘right’ information and disregard the irrelevant. Segmenting your events well would seem to be another way of saying that you are good at selecting the changes that are most relevant, that will be common to any such events on other occasions.

And that, although some people are clearly ‘naturally’ better at it, is surely something that people can learn.

References

Culham, J. (2001). The brain as film director. Trends in Cognitive Sciences, 5(9), 376–377.

Kurby, C. A., & Zacks, J. M. (2008). Segmentation in the perception and memory of events. Trends in Cognitive Sciences, 12(2), 72–79. doi:10.1016/j.tics.2007.11.004

Speer, N. K., Zacks, J. M., & Reynolds, J. R. (2007). Human brain activity time-locked to narrative event boundaries. Psychological Science, 18(5), 449–455. doi:10.1111/j.1467-9280.2007.01920.x

Achieving flow

I’ve recently had a couple of thoughts about flow — that mental state when you lose all sense of time and whatever you’re doing (work, sport, art, whatever) seems to flow with almost magical ease. I’ve mentioned flow a couple of times more or less in passing, but today I want to have a deeper look, because learning (and perhaps especially that rewiring I was talking about in my last post) is most easily achieved if we can achieve ‘flow’ (also known as being ‘in the zone’).

Let’s start with some background.

Mihaly Csikszentmihalyi is the man who identified and named this mental state, describing nine components:

  1. The skills you need to perform the task must match the challenges of the task, AND the task must exceed a certain level of difficulty (above everyday level).
  2. Your concentration is such that your behavior becomes automatic and you have little conscious awareness of your self, only of what you’re doing.
  3. You have a very clear sense of your goals.
  4. The task provides unambiguous and immediate feedback concerning your progress toward those goals.
  5. Your focus is entirely on the task and you are completely unaware of any distracting events.
  6. You feel in control, but paradoxically, if you try to consciously hold onto that control, you’ll lose that sense of flow. In other words, you only feel in control as long as you don’t think about it.
  7. You lose all sense of self and become one with the task.
  8. You lose all sense of time.
  9. You experience what Csikszentmihalyi called the ‘autotelic experience’ (from Greek auto (self) and telos (goal)), which is inherently rewarding, providing the motivation to re-experience it.

Clearly many of these components are closely related. More usefully, we can distinguish between elements of the experience, and preconditions for the experience.

The key elements of the experience are your total absorption in the task (which leads to you losing all awareness of self, of time, and any distractions in the environment), and your enjoyment of it.

The key preconditions are:

  • the match between skills and task
  • the amount of challenge in the task
  • the clear and proximal nature of your goals (that is, at least some need to be achievable in that session)
  • the presence of useful feedback.

Additionally, later research suggests:

  • the task needs to be high in autonomy and meaningfulness.

Brain studies have found that this mental state is characterized by less activity in the prefrontal cortex (which provides top-down control — including that evidenced by that critical inner voice), and a small increase in alpha brainwaves (correlated with slower breathing and a lower pulse rate). This inevitably raises the question of whether meditation training can help you more readily achieve flow. Supporting this, a neurofeedback study improved performance in novice marksmen, who learned to shoot expertly in less than half the time after they had been trained to produce alpha waves. There are also indications that some forms of mild electrical stimulation to the brain (tDCS) can induce a flow state.

Some people may be more prone to falling into a flow state than others. Csikszentmihalyi referred to an ‘autotelic personality’, and suggested that such people have high levels of curiosity, persistence, and interest in performing activities for their own sake rather than to achieve some external goal. Readers of my books may be reminded of cognitive styles — those who are intrinsically rather than extrinsically motivated are usually more successful in study.

Recent research has supported the idea of the autotelic personality, and roots it particularly in the achievement motive. Those who have a strong need for achievement, and a self-determined approach, are more likely to experience flow. Such people also have a strong internal locus of control — that is, they believe that achievement rests in their own hands, in their own work and effort. I have, of course, spoken before of the importance of this factor.

There is some indication that autotelic students push themselves harder. A study of Japanese students found that autotelic students tended to put themselves in situations where the perceived challenges were higher than their perceived skills, while the reverse was true for other students.

Interestingly, a 1994 study found that college students perceived work where skills exceeded challenges to be more enjoyable than flow activities where skills matched challenges — which suggests, perhaps, that we are all inclined to underestimate our own skills, and do better when pushed a little.

In regard to occupation, research suggests that five job characteristics are positively related to flow at work. These characteristics (which come from the Job Characteristics Model) are:

  • Skill variety
  • Task identity (the extent to which you complete a whole and identifiable piece of work)
  • Task significance
  • Autonomy
  • Feedback

These clearly echo the flow components.

All of this suggests that to consistently achieve a flow state, you need the right activities and the right attitude.

So, that’s the background. Now for my new thoughts. It occurred to me that flow might have something to do with working memory. I’ve suggested before that flow might have something to do with getting the processing speed just right. My new thought extends this idea.

Remember that working memory is extremely limited, and that it seems to reflect a three-tiered system: you have one item in your immediate focus, with perhaps three more items hovering very closely within an inner store, able to move very quickly into immediate focus, and a further three or so items on the ‘backburner’. All these items have to keep moving around and around these tiers if you want to keep them all ‘alive’, because they can’t stay long in this system without being refreshed (through the focus).

Beyond this system is the huge database of your long-term memory, and that’s where all these items come from. Thus, whenever you’re working on something, you’re effectively circulating items through this whole four-tier system: long-term memory to focus to inner store to backburner and then returning to LTM or to focus. And returning to LTM is the default — if it’s to return to focus, it has to happen within a very brief period of time.
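Here’s a toy sketch of that circulation (purely my own illustration of the idea, with invented capacities and decay numbers, not an established model): seven items take turns in focus, and each survives only if its turn comes around before it decays.

```python
from collections import deque

DECAY_LIMIT = 8   # steps an item survives without a turn in focus (invented)

items = ["A", "B", "C", "D", "E", "F", "G"]  # 1 in focus + ~3 inner + ~3 backburner
ring = deque(items)                          # the circulation order
age = {item: 0 for item in items}            # steps since last refresh

for step in range(30):
    focus = ring[0]
    age[focus] = 0                 # refreshed while in focus
    ring.rotate(-1)                # the next item moves into focus
    for item in items:
        if item != focus:
            age[item] += 1
    lost = [i for i in items if age[i] > DECAY_LIMIT]
    if lost:
        print(f"step {step}: {lost} decayed; circulation too slow")
        break
else:
    print("all items stayed alive: the rotation kept pace with decay")
```

With seven items the rotation refreshes everything in time; add a few more items (or stall the rotation) and things start dropping back to long-term memory. On this picture, flow would be the happy case where the rotation runs at just the right speed, with no stalls and no free slots.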

And so here’s my thesis (I don’t know if it’s original; I just had the idea this morning): flow is our mental experience of a prolonged period of balancing this circulation perfectly. Items belonging to one cohesive structure are flowing through the system at the right speed and in the right order, with no need to stop and search, and no room for any items that aren’t part of this cohesive structure (i.e., there are no slots free in which to experience any emotions or distracting thoughts).

What this requires is for the necessary information to all be sufficiently strongly connected, so that activation/retrieval occurs without delay. And what that requires is for the foundations to be laid. That is, you need to have the required action sequences or information clusters well-learned.

Here we have a mechanism for talent — initial interest and some skill produce a sense of flow; the individual pursues this motivating state by persevering at the same activity/subject; if they are not pushed too hard (which will not elicit flow), or held back (ditto), they will once again achieve the desired state, increasing their motivation to pursue this course. And so on.

All of which raises the question: are autotelic personalities born or made? Because the development of people who find it easier to achieve flow may well have more to do with their good luck in childhood (experiencing the right support) than with their genetic makeup.

Is flow worth pursuing? Flow helps us persist at a task, because it is an intrinsically rewarding mental state. Achieving flow, then, is likely to result in greater improvement if only because we are likely to spend more time on the activity. The interesting question is whether it also, in and of itself, means we gain more from the time we spend. At the moment, we can only speculate.

But research into the value of mental stimulation in slowing cognitive decline in older people indicates that engagement, and its correlate enjoyment, are important if benefits are to accrue. I think the experience of flow is not only intrinsically rewarding, but also intrinsically beneficial in achieving the sort of physical brain changes we need to fight age-related cognitive decline.

So I’ll leave you with the findings from a recent study of flow in older adults, that has some helpful advice for anyone wanting to achieve flow, as well as demonstrating that you're never too old to achieve this state (even if it does seem harder to achieve as you age, because of the growing difficulty in inhibiting distraction).

The study, involving 197 seniors aged 60–94, found that those with higher fluid cognitive abilities (processing speed, working memory, visual spatial processing, divergent thinking, inductive reasoning, and everyday problem-solving) experienced higher levels of flow in cognitively demanding activities, while those with lower fluid abilities experienced higher levels of flow in less demanding, non-cognitive activities.

High cognitive demand activities included: working, art and music, taking classes and teaching, reading, puzzles and games, searching for information. Low cognitive demand activities included: social events, exercise, TV, cooking, going on vacation. Note that the frequency of these activities did not differ between those of higher fluid ability and those of lower.

These findings reinforce the importance of matching skills and activities in order to achieve flow, and also remind us that flow can be achieved in any activity.

Choosing when to think fast & when to think slow

I recently read an interesting article in the Smithsonian about procrastination and why it’s good for you. Frank Partnoy, author of a new book on the subject, pointed out that procrastination only began to be regarded as a bad thing by the Puritans — earlier (among the Greeks and Romans, for example), it was regarded more as a sign of wisdom.

The examples given about the perils of deciding too quickly made me think about the assumed connection between intelligence and processing speed. We equate intelligence with quick thinking, and time to get the correct answer is part of many tests. So, regardless of the excellence of a person’s cognitive product, the time it takes to produce it is vital (in tests, at least).

Similarly, one of the main aspects of cognition impacted by age is processing speed, and one of the principal reasons for people to feel that they are ‘losing it’ is because their thinking is becoming noticeably slower.

But here’s the question: does it matter?

Certainly in a life-or-death, climb-the-tree-fast-or-be-eaten scenario, speed is critical. But in today’s world, the major reason for emphasizing speed is the pace of life. Too much to do and not enough time to do it in. So, naturally, we want to do everything fast.

There is certainly a place for thinking fast. I recently looked through a short book entitled “Speed Thinking” by Ken Hudson. The author’s strategy for speed thinking was basically to give yourself a very brief window — 2 minutes — in which to come up with 9 thoughts (the nature of those thoughts depends on the task before you — I’m just generalizing the strategy here). The essential elements are the tight time limit and the lack of a content limit — to accomplish this feat of 9 relevant thoughts in 2 minutes, you need to lose your inner censor and accept any idea that occurs to you.
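As a concrete (and entirely hypothetical) illustration, here’s a minimal sketch of that drill as a command-line timer; the two-minute window and nine-thought target are simply the strategy’s parameters as I’ve described them, not anything from the book:

```python
import time

DEADLINE = 120   # the two-minute window, in seconds
TARGET = 9       # nine thoughts, inner censor switched off

start = time.monotonic()
thoughts = []
while len(thoughts) < TARGET:
    remaining = DEADLINE - (time.monotonic() - start)
    if remaining <= 0:
        break
    idea = input(f"[{remaining:3.0f}s left] thought #{len(thoughts) + 1}: ")
    if idea.strip():              # accept anything that occurs to you
        thoughts.append(idea.strip())

print(f"\n{len(thoughts)} thoughts in {time.monotonic() - start:.0f} seconds")
for thought in thoughts:
    print(" -", thought)
```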

If you’ve been reading my last couple of posts on flow, it won’t surprise you that this strategy is one likely to produce that state of consciousness (at least, once you’re in the way of it).

So, I certainly think there’s a place for fast thinking. Short bouts like this can re-energize you and direct your focus. But life is a marathon, not a sprint, and of course we can’t maintain such a pace or level of concentration. Nor should we want to, because sometimes it’s better to let things simmer. But how do we decide when it’s best to think fast or best to think slow? (shades of Daniel Kahneman’s wonderful book Thinking, Fast and Slow here!)

In the same way that achieving flow depends on the match between your skill and the task demands, the best speed for processing depends on your level of expertise, the demands of the task, and the demands of the situation.

For example, Sian Beilock (whose work on math anxiety I have reported on) led a study that demonstrated that, while novice golfers putted better when they could concentrate step-by-step on the accuracy of their performance, experts did better when their attention was split between two tasks and when they were focused on speed rather than accuracy.

Another example comes from a monkey study that has just been in the news. In this study, rhesus macaques were trained to reach out to a target. To do so, their brains needed to know three things: where their hand is, where the target is, and the path for the hand to travel to reach the target. If there’s a direct path from the hand to the target, the calculation is simple. But in the experiment, an obstacle would often block the direct path to the target. In such cases, the calculation becomes a little bit more complicated.

And now we come to the interesting bit: two monkeys participated. As it turns out, one was hyperactive, the other more controlled. The hyperactive monkey would quickly reach out as soon as the target appeared, without waiting to see whether an obstacle blocked the direct path. If an obstacle did appear in the path (which it did on two-thirds of trials), he had to correct his movement in mid-reach. The more self-controlled monkey, however, waited a little longer, to see where the obstacle appeared, then moved smoothly to the target. The hyperactive monkey had a speed advantage when the way was clear, but the other monkey had the advantage when the path was blocked.
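The tradeoff is easy to see in a toy simulation; every cost below is invented, purely to illustrate the logic, not taken from the study:

```python
import random

P_OBSTACLE = 2 / 3   # an obstacle appeared on two-thirds of trials
DIRECT = 1.0         # time for an unobstructed reach (arbitrary units)
CORRECTION = 0.6     # penalty for correcting a reach already in flight
WAIT = 0.2           # the self-controlled monkey's pause to look first
DETOUR = 0.3         # extra time for a smooth path around the obstacle

def hyperactive():   # start at once, correct mid-reach if blocked
    return DIRECT + (CORRECTION if random.random() < P_OBSTACLE else 0.0)

def controlled():    # wait, look, then move smoothly
    return WAIT + DIRECT + (DETOUR if random.random() < P_OBSTACLE else 0.0)

n = 100_000
print(f"hyperactive: {sum(hyperactive() for _ in range(n)) / n:.3f}")
print(f"controlled:  {sum(controlled() for _ in range(n)) / n:.3f}")
```

With these made-up numbers the two strategies come out almost identical on average, which may be why both monkeys could get away with their preferred styles; shift the obstacle probability and one or the other pulls ahead.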

So perhaps we should start thinking of processing speed as a personality, rather than cognitive, variable!

[An aside: it’s worth noting that the discovery that the two monkeys had different strategies, undergirded by different neural activity, only came about because the researcher was baffled by the inconsistencies in the data he was analyzing. As I’ve said before, our focus on group data often conceals many fascinating individual differences.]

The Beilock study indicates that the ‘correct’ speed — for thinking, for decision-making, for solving problems, for creating — will vary as a function of expertise and attentional demands (are you trying to do two things at once? Is something in your environment or your own thoughts distracting you?). In which regard, I want to mention another article I recently read — a blog post on EdWeek, on procedural fluency in math learning. That post referenced an article on timed tests and math anxiety (which I’m afraid is only available if you’re registered on the EdWeek site). This article makes the excellent point that timed tests are a major factor in developing math anxiety in young children. Which is a point I think we can generalize.

Thinking fast, for short periods of time, can produce effective results, and the rewarding mental state of flow. Being forced to try and think fast, when you lack the necessary skills, is stressful and non-productive. If you want to practice thinking fast, stick with skills or topics that you know well. If you want to think fast in areas in which you lack sufficient expertise, work on slowly and steadily building up that expertise first.

Taking things too seriously

I was listening to a podcast the other day. Two psychologists (Andrew Wilson and Sabrina Galonka) were being interviewed about embodied cognition, a topic I find particularly interesting. As an example of what they meant by embodied cognition (something rather more specific than the fun and quirky little studies that are so popular nowadays — e.g., making smaller estimations of quantities when leaning to the left; squeezing a soft ball making it more likely that people will see gender neutral faces as female while squeezing a hard ball influences them to see the faces as male; holding a heavier clipboard making people more likely to judge currencies as more valuable and their opinions and leaders as more important), they mentioned the outfielder problem. Without getting into the details (if you’re interested, the psychologists have written a good article on it on their blog), here’s what I took away from the discussion:

We used to think that, in order to catch a ball, our brain was doing all these complex math- and physics-related calculations — try programming a robot to do this, and you’ll see just how complex the calculations need to be! And of course this is that much more complicated when the ball isn’t aimed at you and is traveling some distance (the outfielder problem).

Now we realize it’s not that complicated — our outfielder is moving, and this is the crucial point. Apparently (according to my understanding), if he moves at the right speed to make his perception of the ball’s speed uniform (the ball decelerates as it goes up, and accelerates as it comes down, so the catcher does the inverse: running faster as the ball rises and slower as it falls), then — if he times it just right — the ball will appear to be traveling a straight line, and the mental calculation of where it will be is simple.

(This, by the way, is what these psychologists regard as ‘true’ embodied cognition — cognition that is the product of a system that includes the body and the environment as well as the brain.)
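One standard formalization of that catching strategy in the literature is ‘optical acceleration cancellation’: run so that the tangent of the ball’s elevation angle grows at a constant rate. Here’s a minimal sketch for the simplest case (a drag-free ball hit straight toward the fielder, with invented launch numbers); I’m illustrating the general idea, not necessarily the exact account the interviewees had in mind:

```python
g = 9.8                 # gravity, m/s^2
vx, vy = 15.0, 20.0     # ball's launch velocity components (invented)
T = 2 * vy / g          # time of flight for a drag-free projectile
landing = vx * T        # where the ball will land

k = 0.5                 # chosen constant growth rate for tan(elevation)

for i in range(1, 9):
    t = i * T / 8
    y = vy * t - 0.5 * g * t**2     # ball height
    xb = vx * t                     # ball's horizontal position
    # The rule: stand where tan(alpha) = y / (xf - xb) equals k * t.
    xf = xb + y / (k * t)
    print(f"t={t:4.2f}s  fielder at {xf:6.2f} m  (ball lands at {landing:.2f} m)")
```

Note that the fielder never computes a trajectory: tracking one locally available visual quantity walks him, step by step, to the landing point (61.22 m here) at exactly the moment the ball arrives.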

This idea suggests two important concepts that are relevant to those wishing to improve their memory:

We (like all animals) have been shaped by evolution to follow the doctrine of least effort. Mental processing doesn’t come cheap! If we can offload some of the work to other parts of the system, then it’s sensible to do so.

In other words, there’s no great moral virtue in insisting on doing everything mentally. Back in the day (2,500 odd years ago), it was said that writing things down would cause people to lose their ability to remember (in Plato’s Phaedrus, Socrates has the Egyptian king Thamus say to Theuth (Thoth), the god who invented writing, “this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.”)

This idea has lingered. Many people believe that writing reminders to oneself, or using technology to remember for us, ‘rots our brains’ and makes us incapable of remembering for ourselves.

But here’s the thing: the world is full of information. And it is of varying quality and importance. You might feel that someone should be remembering certain information ‘for themselves’, but this is a value judgment, not (as you might believe) a helpful warning that their brain is in danger of atrophying itself into terminal dysfunction. The fact is, we all choose what to remember and what to forget — we just might not have made a deliberate and conscious choice. Improving your memory begins with this: actually thinking about what you want to remember, and practicing the strategies that will help you do just that.

However, there’s an exception to the doctrine of least effort, and it’s evident among all the animals with sufficient cognitive power — fun. All of us who have enough brain power to spare, engage in play. Play, we are told, has a serious purpose. Young animals play to learn about the world and their own capabilities. It’s a form, you might say, of trial-&-error — but a form with enjoyability built into the system. This enjoyability is vital, because it motivates the organism to persist. And persistence is how we discover what works, and how we get the practice to do it well.

What distinguishes a good outfielder from someone who’s never tried to catch a ball before? Practice. To judge the timing, to get the movement just right — movement which will vary with every ball — you need a lot of practice. You can’t just read about what to do. And that’s true of every physical skill. Less obviously, it’s true of cognitive skills also.

It also ties back to what I was saying about trying to achieve flow. If you’re not enjoying what you’re doing, it’s probably either too easy or too hard for you. If it’s too easy, try and introduce some challenge into it. If it’s too hard, break it down into simpler components and practice them until you have achieved a higher level of competence on them.

Enjoyability is vital for learning well. So don’t knock fun. Don’t think play is morally inferior. Instead, try and incorporate a playful element into your work and study (there’s a balance, obviously!). If you have hobbies you enjoy, think about elements you can carry across to other activities (if you don’t have a hobby you enjoy, perhaps you should start by finding one!).

So the message for today is: the holy grail in memory and learning is NOT to remember everything; the superior approach to work / study / life is NOT total mastery and serious dedication. An effective memory is one that remembers what you want/need it to remember. Learning occurs through failure. Enjoyability greases the path to the best learning and the most effective activity.

Let focused fun be your mantra.

Daydreaming nurtures creativity?

Back in 2010, I read a charming article in the New York Times about a bunch of neuroscientists bravely disentangling themselves from their technology (email, cellphones, laptops, …) and going into the wilderness (rafting down the San Juan River) in order to get a better understanding of how heavy use of digital technology might change the way we think, and whether we can reverse the problem by immersing ourselves in nature.

One of those psychologists has now co-authored a study involving 56 people who participated in four- to six-day, electronic-device-free wilderness hiking trips organized by Outward Bound schools. The study looked at the effect of this experience on creativity, comparing the performance of 24 participants who took the 10-item creativity test the morning before they began the trip with that of 32 who took the test on the morning of the trip's fourth day.

Those few days in the wilderness increased performance on the task by nearly 50% — from an average of 4.14 pre-trip to 6.08.

However, much as I like the idea, I have to say my faith in these results is not particularly great, given that there was a significant age difference between the two groups. The average age of the pre-hike group was 34, and that of the in-hike group 24. Why the researchers didn’t try to control this I have no idea, but I’m not convinced by their statement that they statistically accounted for age effects — which are significant.

Moreover, this study doesn’t tell us whether the effect was due to the experience of nature, simply the experience of doing something different, or the unplugging from technology. Still, it adds to the growing research exploring Attention Restoration Theory.

[Photo: view from my office window]

I’m a great fan of nature myself, and count myself very fortunate to live surrounded by trees and within five minutes of a stream and bush (what people in other countries might call ‘woods’, though New Zealand bush is rather different). However, whether or not it is a factor in itself, there’s no denying other factors are also important — not least, perhaps, the opportunity to let your mind wander. “Mind wandering”, it has been suggested, evokes a unique mental state that allows otherwise opposing networks to work in cooperation, and stimulates problem-solving.

This is supported, perhaps, in another recent study. Again, I’m not putting too much weight on this, because it was a small study and most particularly because it was presented at a conference and very few details are available. But it’s an interesting idea, so let me give you the bullet points.

In the first study, 40 people were asked to copy numbers out of a telephone directory for 15 minutes, before being asked to complete a more creative task (coming up with different uses for a pair of polystyrene cups). Those who had first copied out the telephone numbers (the most boring task the researchers could think of) were more creative than a control group of 40 who had simply been asked to come up with uses for the cups, with no preamble.

In a follow-up experiment, an extra experimental group was added — these people simply read the phone numbers. While, once again, those copying the numbers were more creative than the controls, those simply reading the numbers scored the most highly on the creativity test.

The researchers suggest that boring activities that allow the most scope for daydreaming can lead to the most creativity. (You can read more about this study in the press release and in a Huffington Post article by one of the researchers.)

Remembering other research suggesting that thinking about your experiences when living abroad can make you more creative, I would agree, in part, with this conclusion: I think doing a boring task can help creativity, if you are not simply bogged down in the feeling of boredom, if you use the time granted you to think about something else — but it does matter what you think about!

The wilderness experiment has two parts to it: like the boring task, but to a much greater degree (longer span of time), it provides an opportunity to let your mind run free; like the living-abroad experiment, it puts you in a situation where you are doing something completely different in a different place. I think both these things are very important — but the doing-something-different is more important than putting yourself in a boring situation! Boredom can easily stultify the brain. The significance of the boredom study is not that you should do boring tasks to become more creative, but that, if you are doing something boring (that doesn’t require much of your attention), you should let your thoughts wander into happy and stimulating areas, not just wallow in the tedium!

But of course the most important point of these studies is a reminder that creativity (the ability to think divergently) is not simply something a person 'has', but something that flowers or dwindles in different circumstances. If you want to encourage your ability to think laterally, to solve problems, to be creative, then you need to nurture that ability.

Seeing without words

I was listening on my walk today to an interview with Edward Tufte, the celebrated guru of data visualization. He said something I took particular note of, concerning the benefits of concentrating on what you’re seeing, without any other distractions, external or internal. He spoke of his experience of being out walking one day with a friend, in a natural environment, and what it was like to just sit down for some minutes, not talking, in a very quiet place, just looking at the scene. (Ironically, I was also walking in a natural environment, amidst bush, beside a stream, but I was busily occupied listening to this podcast!)

Tufte talked of how we so often let words get between us and what we see. He spoke of a friend who was diagnosed with Alzheimer’s, and how whenever he saw her after that, he couldn’t help but be watchful for symptoms, couldn’t help interpreting everything she said and did through that perspective.

There are two important lessons here. The first is a reminder of how most of us are always rushing to absorb as much information as we can, as quickly as we can. There is, of course, an ocean of information out there in the world, and if we want to ‘keep up’ (a vain hope, I fear!), we do need to optimize our information processing. But we don’t have to do that all the time, and we need to be aware that there are downsides to that attitude.

There is, perhaps, an echo here of Kahneman’s fast & slow thinking, and another of the idea that quiet moments of reflection during the day can bring cognitive benefits.

In similar vein, then, we’d probably all find a surprising amount of benefit from sometimes taking the time to see something familiar as if it was new — to sit and stare at it, free from preconceptions about what it’s supposed to be or supposed to tell us. A difficult task at times, but if you try and empty your mind of words, and just see, you may achieve it.

The second lesson is more specific, and applies to all of us, but perhaps especially to teachers and caregivers. Sometimes you need to be analytical when observing a person, but if you are interacting with someone who has a label (‘learning-disabled’, ‘autistic’, ‘Alzheimer’s’, etc), you will both benefit if you can sometimes see them without thinking of that label. Perhaps, without the preconception of that label, you will see something unexpected.