Have we really forgotten how to remember?

A new book, Moonwalking with Einstein: The Art and Science of Remembering Everything, has been creating some buzz recently. The book (I haven’t read it) is apparently about a journalist’s year of memory training that culminated in his making the finals of the USA Memory Championship. Clearly this sort of achievement resonates with a lot of people — presumably because of the widespread perception that forgetfulness is a modern-day plague, for which we must find a cure.

Let’s look at some of the points raised in the book and the discussion of it. There’s the issue of disuse. It’s often argued that technology, in the form of mobile phones and computers, means we no longer need to remember phone numbers or addresses. That calculators mean we don’t need to remember multiplication tables. That books mean we don’t need to remember long poems or stories (this one harks back to ancient times — the oft-quoted warning that writing would mean the death of memory).

Some say that we have forgotten how to remember.

The book recounts the well-known mnemonic strategies habitually used by those who participate in memory championships. These strategies, too, date back to ancient times. And you know something? Back then, just like now, only a few people ever bothered with these strategies. Why? Because for the most part, they’re far more trouble than they’re worth.

Now, this is not to say that mnemonic strategies are of no value. They are undoubtedly effective. But to achieve the sort of results that memory champions aspire to requires many, many hours of effort. Moreover, and more importantly, these hours do not improve any other memory skills. That is, if you spend months practicing to remember playing cards, that’s not going to make you better at remembering the name of the person you met yesterday, or remembering that you promised to pick up the bread, or remembering what you heard in conversation last week. It’s not, in fact, going to help you with any everyday memory problem.

It may have helped you learn how to concentrate — but there are far more enjoyable ways to do that! (For example, both Lumosity and Posit Science offer games that are designed to help you improve your ability to concentrate. Both programs are based on cognitive science, and are run by cognitive scientists. Both advertise on my website.)

Does it matter that we can’t remember phone numbers? It’s argued that being unable to remember the phone numbers of even your nearest and dearest, if your phone has a melt-down, is a problem — although I don’t think anyone’s arguing that it’s a big problem. But if you are fretting about not being able to remember the numbers of those most important to you, the answer is simple, and doesn’t require a huge amount of training. Just make sure you make the effort to recall the number each time before you use it. After a while it’ll come automatically, and effortlessly, to mind (assuming that these are numbers you use often). If there’s a number you don’t use often, but don’t want to write down or record digitally, then, yes, a mnemonic is a good way to go. But again, you don’t have to get wildly complicated about it. The sort of complex mnemonics that memory champs use are the sort required for very fast encoding of many numbers, words, or phrases. For the occasional number, a few simple tricks suffice.

Shopping lists are another oft-quoted example. Sure, we’ve all forgotten to buy something from the supermarket, but it’s a long way from that problem to the ‘solution’ of complicated mnemonic images and stories. Personally, I find that if I write down what I want from the shop, then that’s all I need to do. Having the list with you is a reassurance, but it’s the act of writing it down that’s the main benefit. But if someone else in the household adds items, then that requires special effort. Similarly, if the items aren’t ‘regular’ ones, then that requires a bit more effort.

I have an atavistic attachment to multiplication tables, but is it really important for anyone to memorize them anymore? A more important skill is that of estimation — where so many people seem to fall down is in failing to realize, when they’ve performed a calculation inaccurately, that the answer is implausible and that they’ve probably made an error. More time getting a ‘feel’ for number size would be time better spent.

Does it matter if we can’t remember long poems? Well, I do favor such memorization, but not because failing to remember such things demonstrates “we don’t know how to remember anymore”. I think that memorizing poems or speeches that move us ‘furnishes the mind’, and plays a role in identity and belongingness. But you don’t need, and arguably shouldn’t use, complex mnemonic strategies to memorize them. If you want to ‘have’ them — and it has been argued that it is only by memorizing a text that you can make it truly yours — then you are better spending time with it in a meaningful way. You read it, you re-read it, you think about it, you recite the words aloud because you enjoy the sound of the words, you repeat them to friends because you want to share them, you dwell on them. You have an emotional attachment, and you repeat the words often. And so, they become yours, and you have them ‘in your heart’.

Memorizing a poem you hate because the teacher insists is a different matter entirely! And though you can make the case that children have to be forced to memorize such verse until they realize it’s something they like, I don’t think that’s true. Children ‘naturally’ memorize verse and stories that they like; it’s forced memorization that has engendered any dislike they feel.

Anyway, that’s an argument for another day. Let’s return to the main issue: have we forgotten how to remember?

No.

We remember naturally. We forget naturally too. Both of these are processes that happen to us regardless of our education, of our intelligence, of our tendencies to out-source part of our memory. We have the same instinctive understanding of how to remember that we have always had, and the ability to remember long speeches or sagas is, as it has always been, restricted to those few who want the ability (bards, druids, Roman politicians).

It’s undeniably true that we forget more than our forebears did — but we remember more too. The world’s a different place, and one that puts far greater demands on memory than it ever did. But the answer’s not to pine after a ‘photographic memory’, or the ability to recite the order of a deck of playing cards after seeing them once. For almost all of us, that ability is too hard to come by, and won’t help us with any of the problems we have anyway.

The author of this memoir is reported as saying that the experience taught him “to pay attention to the world around” him, to appreciate the benefits of having a mental repository of facts and texts, to appreciate the role of memory in shaping our experience and identity. These are all worthwhile goals, but you can rest assured that there are better, more enjoyable, ways of achieving them. There are also better ways of improving everyday memory. And perhaps most importantly, better ways of achieving knowledge and expertise in a subject. Mnemonics are an effective strategy for memorizing meaningless and arbitrary information, and they have their place in learning, but they are not the best method for learning meaningful information.

Let me add that by no means am I attacking Joshua Foer’s book, memory championships, or those who participate in them. I’m sure the book is an entertaining and enlightening read; memory championships are fully as worthwhile as any sport championship; those who participate in them have a great hobby. I have merely used this event as a springboard for offering some of my thoughts on the subject.

Here are the links that provoked this post. Two reviews of Joshua Foer’s book:
http://www.theguardian.com/science/2011/mar/13/memory-techniques-joshua…
http://www.nytimes.com/2011/03/08/books/08book.html

An account and a video of a high school team’s winning of the US memory championship (high school division)
http://video.nytimes.com/video/2011/03/09/sports/100000000710149/memory…
http://www.nytimes.com/2011/03/10/sports/10memory.html

Addendum:

After writing this, I discovered another article, this time by Foer himself. He makes a couple of points I’ve made before, but they are well worth repeating. Until a few hundred years ago, there were very few copies of any text, and therefore it behooved any scholar, in reading a book, to remember it as well as he could. (In passing, I’d like to note that Foer wins major points with me by quoting Mary Carruthers.) The whole way readers approached books was therefore very different to how it is for us today, when we value range more than depth — understandably, when there are so many texts, on so many topics. To restrict ourselves to a few books that we read over and over again is not something we should wish on ourselves. But the price of this is clear; we can all relate to Foer’s comment: “There are books up there [on my bookshelves] that I can’t even remember whether I’ve read or not.”

I was also impressed to learn that he’d taken advice from that expert on expertise, K. Anders Ericsson. And the article has a very good discussion on how to practice, and Ericsson’s work on what he calls deliberate practice (although Foer doesn’t use that name).

Finally, just to reiterate the main point of my post, Foer himself says at the end of this excellent article: “True, what I hoped for before I started hadn’t come to pass: these techniques didn’t improve my underlying memory … Even once I was able to squirrel away more than 30 digits a minute in memory palaces, I seldom memorized the phone numbers of people I actually wanted to call. It was easier to punch them into my cellphone.”

Note that you can also test your memorization abilities with games from the World Memory Championship at http://www.nytimes.com/interactive/2011/02/20/magazine/memory-games.htm

 

Memory is complicated

Recently a “Framework for Success in Postsecondary Writing” came out in the U.S. This framework talked about the importance of inculcating certain “habits of mind” in students. One of these eight habits was metacognition, which they defined as the ability to reflect on one’s own thinking as well as on the individual and cultural processes used to structure knowledge.

The importance of metamemory was emphasized in two recent news items I posted, both dealing with encoding fluency, and the way in which many of us use it to judge how well we’ve learned something, or how likely we are to remember something. The basic point is that we commonly use a fluency heuristic (“it was easy to read/process, therefore it will be easily remembered”) to guide our learning, and yet that is often completely irrelevant.

BUT, not always irrelevant.

In the study discussed in Fluency heuristic is not everyone’s rule, people who believed intelligence is malleable did not use the fluency heuristic. In one situation this was absolutely the right thing to do; in the other situation, not so much — because in that situation, what made the information easy to process did in fact also make it easier to remember.

The point is not that the fluency heuristic is wrong. Nor that it is right. The point is that heuristics (“rules of thumb”) are general guidelines, useful as quick and dirty ways of dealing with things you lack the expertise to deal with better. Heuristics are useful, but they are most useful when you have the knowledge to know when to apply them. The problem is not the use of this heuristic; it is the inflexible use of this heuristic.

Way back, more than ten years ago, I wrote a book called The Memory Key, and in it I said: “The more you understand about how memory works, the more likely you are to benefit from instruction in particular memory skills.” That’s what my books are all about, and that’s what this website is all about.

Learning a “rule” is one thing; learning to tell when it’s appropriate to apply it is quite another. My approach to teaching memory strategies is far more complex than the usual descriptions, because learning how to perform a strategy is not particularly helpful on its own. But the reason most memory-improvement books/courses don’t try to do what I do is that explaining how it all works — how memory works, how the strategy works, how it all fits together — is a big task.

But the fact is, learning is a complicated matter. Oh, humans are, truly, great learners. We really do have an amazing memory, when you consider all the things we manage to stuff in there, usually without any great effort or particular intention. But that’s the point, isn’t it? It isn’t about how much we remember. It’s about remembering the things we want to remember.

And to do that, we need to know what makes things hard to remember, or easy to remember. We need to know that this is a question about the things themselves, about the context they’re in, about the way you’re experiencing them, and about the way you relate to them. You can see why this is something that can’t simply be written down in a series of bullet points.

But you don’t have to become a cognitive psychologist either! Expertise comes at different levels. My aim, in my books in particular, and on this website, is to explain as much as is helpful, leaving out most of the minutiae of neuroscience and cognitive theory, trying to find the kernel that is useful at a practical level.

It’s past time I put all these bits together, to describe, for example, exactly when a good mood helps cognition, and when it impairs it; when shifting your focus of attention impairs your performance, and when you need to shift focus to revive your performance; when talking helps, and when it doesn’t; when gesturing helps, and when it doesn’t — you see, there are no hard-and-fast rules about anything. Everything is tempered by task, by circumstance, by individual. So, I will be working on that: the manual for advanced users, you might call it. Let me know if this is something you’d be interested in (the more interest, the more time I’ll spend on it!).

Why asking the right questions is so important, and how to do it

Research; study; learning; solving problems; making decisions — all these, to be done effectively and efficiently, depend on asking the right questions. Much of the time, however, people let others frame the questions, not realizing how much this shapes how they think.

This applies particularly to public debate and communication, even to something that may appear as ‘factual’ as an infographic presenting data. The data that are presented, and the way they are presented, govern the conclusions you take away, and they depend on the question the designer thought she was supposed to answer, not on the questions you might be interested in. So much of the time, our thoughts are shaped by the presentation, and we come away having lost sight of our own questions.

In research and study, decision-making and problem-solving, the difficulty can be even more insidious, because we ourselves may think we came up with the questions. But asking the right question is crucial, and it should be no surprise that getting it right on the first attempt is not something to be assumed! Moreover, what might be the right question at the beginning of your task may not still be the right question once you’ve acquired more understanding.

In other words, framing questions is not only a first crucial step — it’s also something you need to revisit, repeatedly.

So how do you know if your questions are the most effective ones for your task? How do you test them?

To assess the effectiveness of your questions, you need to be consciously aware of the hierarchy to which they belong. Every question is, explicitly or implicitly, part of a nested set of questions and assumptions. Your task is to make that nesting explicit.

Here are two examples: an everyday decision-making task, and a learning task.

Because it’s that time of year, let’s look at the common question “Should I go on a diet?” This might be nested in these beliefs (do note I’m simplifying this decision considerably):

  • I’m overweight
  • It’s dangerous to be overweight / Fat is ugly / Other people hate overweight people / I’ll never get that promotion/a job unless I lose weight / I’ll never get a date unless I lose weight …

We’ll ignore the first assumption (“I’m overweight”), because that should be a matter of measurement (although of course it’s not that simple). (I’m also ignoring the issue of whether going on a diet is a good way of losing weight — this is a cognitive exercise, not an advice column!) Let’s instead look at the second set of beliefs. If your question is predicated on the belief that “I’ll never get that promotion/a job unless I lose weight”, then you can see that your question would be better phrased as “Will losing weight improve my chances of getting a job/being promoted?”. This in turn spins off other questions, such as: “How much weight would I need to lose to improve my chances?”; “Is losing weight a better strategy than other strategies that might improve my chances?”; “What other things could I do to improve my chances?”

On the other hand, if your question comes out of a belief that “It’s dangerous to be overweight”, then the question would be better phrased as “Is the amount of excess weight I carry medically dangerous?” — a question that leads to a search of the medical literature, and might end up transforming into: “What are the chances I’ll develop diabetes?”; “What is the most effective thing I can do to reduce my chance of developing diabetes?”

If, however, your question is based on a belief that “Other people hate overweight people”, then you might want to think about why you believe that — is it about societal attitudes that you read about in the media? Is it about the way you think people are looking at you in public? Is it about comments from specific individuals in your life? This can end up quite a deep nesting, leading right down to your beliefs about your self-worth and your relationship with the people in your life.
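
To make that sort of nesting concrete, here is a minimal sketch of a question hierarchy as a data structure, using the diet example above. The Question class and its fields are purely illustrative (they are not from the original discussion); the point is simply to force the assumptions and refinements out into the open.

from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    assumptions: list = field(default_factory=list)   # beliefs the question rests on
    refinements: list = field(default_factory=list)   # better-phrased sub-questions

diet = Question(
    "Should I go on a diet?",
    assumptions=["I'm overweight",
                 "I'll never get that promotion unless I lose weight"],
    refinements=[
        Question("Will losing weight improve my chances of being promoted?",
                 refinements=[
                     Question("How much weight would I need to lose to improve my chances?"),
                     Question("Is losing weight a better strategy than other strategies?"),
                 ])
    ])

def show(question, depth=0):
    """Print the hierarchy, indenting each level, so the nesting is explicit."""
    print("  " * depth + question.text)
    for sub in question.refinements:
        show(sub, depth + 1)

show(diet)

You don't, of course, need code for this; writing the hierarchy down on paper does the same job. The value lies in making each level explicit enough to question.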

Let’s look at a learning task: you’ve been asked to write an essay on the causes of the Second World War. This might appear to be a quite straightforward question — but like most apparently straightforward questions, it is an illusion generated by lack of knowledge. The more you know about a subject, the fewer straightforward questions there are!

Any question about causes should make you think of the distinction between proximate causes and deeper causes. The proximate cause of WW2 from the European point of view might be Hitler’s invasion of Poland; for Americans, it might be the Japanese bombing of Pearl Harbor — but these are obviously not the sole causes of the War. There is obviously a long chain of events leading up to the invasion of Poland, and most will date this chain back to the Versailles Treaty, which imposed such harsh penalties on Germany after it lost the First World War. But that, of course, takes us back even further, to the causes of WW1, and so on. Ultimately, you might want to argue that the way civilization rose and developed in ancient Mesopotamia led to the use of war as the principal means of establishing state dominance and power. You might even want to go back further, to primate evolution.

The distinction between proximate and ultimate causes, while useful, is of course a fuzzy one. These are not dichotomous concepts, but ones on a continuum.

All this is a long way of saying that any discussion of causes is always going to be a selected subset of possible causes. It is your (or your teacher’s) decision what subset you choose.

So, given that massive tomes have been written about the causes of WW2, how do you go about writing your comparatively brief essay?

Clearly it depends on the larger goal (we’re back to our nested hierarchy now). Here we must distinguish between two points of view: the instructor’s, and your own.

For example, the instructor might want you to write the essay to show:

  • your grasp of a few essential points covered in class or selected texts
  • your understanding of the complexity of the question
  • your understanding of the nature of historical argument
  • your ability to research a topic
  • your ability to write an essay in a particular format

The tack you take, therefore (if you want good grades!), will depend on what the instructor’s real goal is. It is likely, of course, that the instructor will have more than one goal, but let’s keep it simple, and assume only one.

But the instructor’s purposes aren’t the whole story. Your own goals are important too. As far as you’re concerned, you might be writing the essay:

  • Because the teacher asked for it (and no more)
  • Because you’re interested in the topic
  • Because you want to do well in the class.

Each of these, and the latter two in particular, is only part of the story. Why are you interested in the topic? Because you’re interested in history in general? Because you’re interested in war? Because a family member was caught up in the events of WW2? Perhaps your interest is in Japan and how it came to that point, or perhaps your interest is in how a society can come to believe that its best interests are served by invading another country.

And these are only some of the possible ways you might be interested. Obviously, there are many, many aspects of this very broad question (“What are the causes of WW2?”) that could be discussed. So you need to consider both the instructor’s goals and your own when you re-frame the question in your own words.

Let’s assume that your instructor is interested in your understanding of the complexity of the topic, and you yourself are keen to get good grades although you have no personal interest to shape your approach. How would you frame your initial question?

The simplest question, for the simplest situation, is: What were the causes of WW2 covered in the text?

But if your instructor wants you to reveal your understanding of the complexity of the topic, you’ll probably want to come up with a number of specific questions that can each form the basis for a different paragraph in your essay.

For example:

  • What were the proximate causes of Britain declaring war on Germany?
  • What was the immediate chain of events leading to Germany’s invasion of Poland?
  • What role did the Versailles Treaty play in providing the conditions leading to Germany’s invasion of Poland?
  • What was the immediate chain of events leading to Japan’s invasion of Manchuria?
  • What did the League of Nations do when Japan invaded Manchuria, and how did this affect Germany’s re-occupation of the Rhineland and later invasion of Poland?

Depending on your knowledge of the topic at the beginning, many of those questions may only be revealed once you have answered an earlier question.

If you do, on the other hand, have an interest in a specific aspect of the multiple causes of WW2, you can still satisfy both your teacher’s goals and your own by briefly describing the ‘big picture’ — covering these same questions, but very briefly — and then pulling out one set of questions to answer in more detail, as a demonstration of the complexity of the issue.

Okay, these are bare-bones examples (and have still gone on long enough - demonstrating how long it takes when you try to spell out any process!), but hopefully they're enough to show how understanding the questions and assumptions behind the ostensible question helps you frame the right question (and note that questions and assumptions are often just the same thing, framed differently). You can read more about asking questions as a study strategy in my older articles: Asking better questions and Metacognitive questioning and the use of worked examples. I also have a much longer example in my book Effective notetaking, which goes into considerable detail on this subject.

This post has gone on long enough, but let me end by making two last points, to emphasize the importance of asking the right questions. First, the question that starts you off not only shapes your search (for the answer to the problem, or for the right information, or the right decision), it also primes you. Priming is a psychological term that refers to the increased accessibility of related information when a particular item has been retrieved. For example, if you read ‘bread’, you are primed for ‘butter’; if you’ve just remarked on a pastel pink car, you’re more likely to notice other pastel-colored cars.

Second, questions are also an example of another important concept in memory research — the retrieval cue. As I discuss at some length in Perfect Memory Training, your ability to retrieve a memory (‘remember’) depends a lot on the retrieval cue. Retrieval cues (whatever prompts your memory search) are effective to the extent that they set you on the right path to the target memory. For example, the crossword clue “Highest university degree (9 letters)” immediately brought to my mind the answer “doctorate”; I didn’t need any letter clues. On the other hand, the clue “Large marine predator (9 letters)” left me stumped until I generated the right initial letter.

As I say in Perfect Memory Training, when you’re searching for specific information, it’s a good idea to actively generate recall cues (generation strategy), rather than simply rely on a passive association strategy (this makes me think of that, that makes me think of that). Asking questions, and repeatedly revising those questions, is clearly a type of generation strategy, and in some situations it might be helpful to think of it as such.

As in every aspect of improving memory and learning skills, it helps to know exactly what you're doing and why it works! This is a large topic, but I hope this has helped you understand a little more about the value of asking questions, and how to do it in a way that is most effective.

Why it’s important to work out the specific skills you want to improve

I have spoken before, here on the website and in my books, about the importance of setting specific goals and articulating your specific needs. Improving your memory is not a single task, because memory is not a single thing. And as I have discussed when talking about the benefits of ‘brain games’ and ‘brain training’, which are so popular now, there is only a little evidence that we can achieve general across-the-board improvement in our cognitive abilities. But we can improve specific skills, and we may be able to improve a range of skills by working on a more fundamental skill they all share.

The modularity of the brain is emphasized in a recent study that found the two factors now thought to be operating in working memory capacity (WMC) are completely independent of each other. WMC has long been known to be strongly correlated with intelligence, but the recent discovery that people vary not only in the number of items they can hold in short-term memory but also in how clear and precise those items are has changed our conception of working memory capacity.

Both are measures of information; the clarity (resolution) of the items in working memory essentially reflects how much information about each item the individual can hold. So should our measures of WMC somehow encapsulate both factors? Are they related? It would seem plausible that those who can hold more items might hold less information about each of them; that those who can only hold two or three items might hold far more information on each item.

But this new study finds no evidence for that. Apparently the two factors are completely independent. Moreover, the connection between WMC and intelligence seems only to apply to the number of items, not to their clarity.

Working memory is fundamental to our cognitive abilities — to memory, to comprehension, to learning, to reasoning. And yet even this very basic process (basic in the sense of ‘at the base of everything’, not in the sense of primitive!) is now seen to break down further, into two quite separate abilities. And while clarity may have nothing to do with intelligence, it assuredly has something to do with abilities such as visual imagery, search, discrimination.

It may be that clarity is more important to you than number of items. It depends on what skills are important to you. And the skills that are important to you change as your life circumstances change. When you’re young, you want as broad a base of skills as possible, but as you age, you do better to become more selective.

Many people die with brains that show all the characteristics of Alzheimer’s, and yet they showed no signs of it in life. The reason is that they had sufficient ‘cognitive reserve’ — a brain sufficiently well and strongly connected — that they could afford (for long enough) the losses the disease created in their brain. This doesn’t mean they wouldn’t have eventually succumbed to the inevitable, of course, if they had lived longer. But a long enough delay can essentially mean the disease has been prevented.

One of the best ways to fight cognitive decline and dementia is to build your brain up in the skills and domains that are, and will be, important to you. And while this can, and should, involve practicing and learning better strategies for specific skills, it is also a good idea to work on more fundamental skills. Knowing which fundamental skills underlie the specific skills you’re interested in would enable you to direct your attention appropriately.

Thus it may be that while increasing the number of items you can hold in short-term memory might help you solve mathematical problems, remember phone numbers, or understand complex prose, trying to improve your ability to visualize objects clearly might help you remember people’s faces, or where you left your car, or use mnemonic strategies.

Variety is the key to learning

On a number of occasions I have reported on studies showing that people with expertise in a specific area show larger gray matter volume in relevant areas of the brain. Thus London taxi drivers (who are required to master “The Knowledge” — all the ways and byways of London) have been found to have an increased volume of gray matter in the anterior hippocampus (involved in spatial navigation). Musicians have greater gray matter volume in Broca’s area.

Other research has found that gray matter increases in specific areas can develop surprisingly quickly. For example, when 19 adults learned to match made-up names against four similar shades of green and blue in five 20-minute sessions over three days, the areas of the brain involved in color vision and perception increased significantly.

This is unusually fast, mind you. Previous research has pointed to the need for training to extend over several weeks. The speed with which these changes were achieved may be because of the type of learning — that of new categories — or because of the training method used. In the first two sessions, participants heard each new word as they regarded the relevant color; had to give the name on seeing the color; had to respond appropriately when a color and name were presented together. In the next three sessions, they continued with the naming and matching tasks. In both cases, immediate feedback was always given.

But how quickly brain regions may re-organize themselves to optimize learning of a specific skill is not the point I want to make here. Some new research suggests our ideas of cortical plasticity need to be tweaked.

In my book on note-taking, I commented on how emphasizing some details (for example, by highlighting) improves memory for those details but reduces memory for other details. In the same way, the growth of one small region of the brain is at the expense of others. If we have to grow an area for each new skill, how do we keep up our old skills, whose areas might be shrinking to make up for it?

A rat study suggests the answer. While substantial expertise (such as our London cab-drivers and our professional musicians) is apparently underpinned by permanent regional increase, the mere learning of a new skill does not, it seems, require the increase to endure. When rats were trained on an auditory discrimination task, relevant sub-areas of the auditory cortex grew in response to the new discrimination. However, after 35 days the changes had disappeared — but the rats retained their new perceptual abilities.

What’s particularly interesting about this is what the finding tells us about the process of learning. It appears that the expansion of bits of the cortex is not the point of the process; rather it is a means of generating a large and varied set of neurons that are responsive to newly relevant stimuli, from which the most effective circuit can be selected.

It’s a culling process.

This is the same as what happens with children. When they’re young, neurons grow with dizzying profligacy. As they get older, these are pruned. Gone are the neurons that would allow them to speak French with a perfect accent (assuming French isn’t a language in their environment); gone are the neurons that would allow them to finely discriminate the faces of races other than those around them. They’ve had their chance. The environment has been tested; the needs have been winnowed; the paths have been chosen.

In other words, the answer’s not: “more” (neurons/connections); the answer is “best” (neurons/connections). What’s most relevant; what’s needed; what’s the most efficient use of resources.

This process of throwing out lots of trials and seeing what wins echoes other findings related to successful learning. We learn a skill best by varying our practice in many small ways. We learn best from our failures, not our successes — after all, a success is a stopper. If you succeed without sufficient failure, how will you properly understand why you succeeded? How will you know there aren’t better ways of succeeding? How will you cope with changes in the situation and task?

Mathematics is an area in which this process is perhaps particularly evident. As a student or teacher, you have almost certainly come across a problem that you or the student couldn’t understand when it was expressed in one way, and perhaps not when it was expressed in several different ways either. Until, at some point, for no clear reason, understanding ‘clicks’. And it’s not necessarily that this last way of expressing / representing it is the ‘right’ one — if it had been presented first, it might not have had that effect. The effect is cumulative — the result of trying several different paths and picking something useful from each of them.

In a recent news item I reported on a finding that people who learned new sequences more quickly in later sessions were those whose brains had displayed more 'flexibility' in the earlier sessions — that is, different areas of the brain linked with different regions at different times. And most recently, I reported on a finding that training on a task that challenged working memory increased fluid intelligence in those who improved at the working memory task. But not everyone did. Those who improved were those who found the task challenging but not overwhelming.

Is it too much of a leap to surmise that this response goes hand in hand with flexible processing, with strategizing? Is this what the ‘sweet spot’ in learning really reflects — a level of challenge and enjoyability that stimulates many slightly different attempts? We say ‘Variety is the spice of life’. Perhaps we should add: ‘Variety is the key to learning’.


References

Kwok, V., Niu, Z., Kay, P., Zhou, K., Mo, L., Jin, Z., et al. (2011). Learning new color names produces rapid increase in gray matter in the intact adult human cortex. Proceedings of the National Academy of Sciences.

The most effective learning balances same and different context

I recently reported on a finding that memories are stronger when the pattern of brain activity is more closely matched on each repetition, a finding that might appear to challenge the long-standing belief that it’s better to learn in different contexts. Because these two ideas (matched repetition and varied contexts) are both very important for effective learning and remembering, I want to talk more about this question of encoding variability, and how both can be true.

First of all, let’s quickly recap the relevant basic principles of learning and memory (I discuss these in much more detail in my books The Memory Key, now out-of-print but available from my store as a digital download, and its revised version Perfect Memory Training, available from Amazon and elsewhere):

  • network principle: memory consists of links between associated codes
  • domino principle: the activation of one code triggers connected codes
  • recency effect: a recently retrieved code will be more easily found
  • priming effect: a code will be more easily found if linked codes have just been retrieved
  • frequency (or repetition) effect: the more often a code has been retrieved, the easier it becomes to find
  • spacing effect: repetition is more effective if repetitions are separated from each other by other pieces of information, with increasing advantage at greater intervals
  • matching effect: a code will be more easily found the more the retrieval cue matches the code
  • context effect: a code will be more easily found if the encoding and retrieval contexts match

Memory is about two processes: encoding (the way you shape the memory when you put it in your database, which includes the connections you make with other memory codes already there) and retrieving (how easy it is to find in your database). So making a ‘good’ memory (one that is easily retrieved) is about forming a code that has easily activated connections.
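
To see how the network, domino and priming principles fit together, here is a minimal sketch of spreading activation over a toy network of codes. The graph, the link weights and the decay value are illustrative assumptions, not a model from the research; it simply shows activation fanning out from a retrieved code and fading with distance.

links = {                       # network principle: codes joined by weighted associations
    "bread":  {"butter": 0.8, "bakery": 0.5},
    "butter": {"knife": 0.4},
    "bakery": {"cake": 0.6},
    "knife":  {},
    "cake":   {},
}

def spread(start, decay=0.5):
    """Domino principle: activating one code activates its neighbours,
    with the boost fading at each step."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        code = frontier.pop()
        for neighbour, weight in links[code].items():
            boost = activation[code] * weight * decay
            if boost > activation.get(neighbour, 0.0):
                activation[neighbour] = boost
                frontier.append(neighbour)
    return activation

# Retrieving 'bread' leaves 'butter' primed, i.e. more easily found next time.
print(spread("bread"))

On this picture, remembering is just a question of whether enough activation reaches the code you want.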

The recency and priming effects remind us that it’s much easier to follow a memory trace (by which I mean the path to it as well as the code itself) that has been activated recently, but that’s not a durable strength. Making a memory trace enduringly stronger requires repetition (the frequency effect). This is about neurobiology: every time neurons fire in a particular sequence, it makes it a little easier for them to fire in that way again.

Now the spacing effect (which is well-attested in the research) seems at odds with this most recent finding, but clearly the finding is experimental evidence of the matching and context effects. Context at the time of encoding affects the memory trace in two ways, one direct and one indirect. It may be encoded with the information, thus providing additional retrieval cues, and it may influence the meaning placed on the information, thus affecting the code itself.

It is therefore not at all surprising that the closer the contexts, the closer the match between what was encoded and what you’re looking for, the more likely you are to remember. The thing to remember is that the spacing effect does not say that spacing makes the memory trace stronger. In fact, most of the benefit of spacing occurs with as little as two intervening items between repetitions — probably because you’re not going to benefit from repeating a pattern of activation if you don’t give the neurons time to reset themselves.

But repeating the information at increasing intervals does produce better learning, measured by your ability to easily retrieve the information after a long period of time (see my article on …), and it does this (it is thought) not because the memory trace is stronger, but because the variations in context have given you more paths to the code.
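
As a minimal sketch of what 'increasing intervals' means in practice, here is an expanding-interval schedule. The starting interval and the doubling factor are illustrative assumptions; real spaced-retrieval software (such as the program I mention below) adjusts them per item, based on how well you recall it.

from datetime import date, timedelta

def review_dates(start, first_interval_days=1, factor=2, repetitions=5):
    """Each review roughly doubles the gap before the next one."""
    schedule = []
    when = start
    interval = first_interval_days
    for _ in range(repetitions):
        when = when + timedelta(days=interval)
        schedule.append(when)
        interval *= factor
    return schedule

# Reviews fall on Jan 2, Jan 4, Jan 8, Jan 16, Feb 1 - each widening gap
# gives the context a chance to change, adding another path to the code.
for d in review_dates(date(2012, 1, 1)):
    print(d)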

This is the important thing about retrieving: it’s not simply about having a strong path to the memory. It’s about getting to that memory any way you can.

Let’s put it this way. You’re at the edge of a jungle. From where you stand, you can see several paths into the dense undergrowth. Some of the paths are well-beaten down; others are not. Some paths are closer to you; others are not. So which path do you choose? The most heavily trodden? Or the closest?

If the closest is the most heavily trodden, then the choice is easy. But if it’s not, you have to weigh up the quality of the paths against their distance from you. You may or may not choose correctly.

I hope the analogy is clear. The strength of the memory trace is the width and smoothness of the path. The distance from you reflects the degree to which the retrieval context (where you are now) matches the encoding context (where you were when you first input the information). If they match exactly, the path will be right there at your feet, and you won’t even bother looking around at the other options. But the more time has passed since you encoded the information, the less chance there is that the contexts will match. However, if you have many different paths that lead to the same information, your chances of being close to one of them obviously increase.

In other words, yes, the closer the match between encoding and retrieval context, the easier it will be to remember (retrieve) the information. And the more different contexts you have encoded with the information, the more likely it is that one of those contexts will match your current retrieval context.
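
You can put a rough number on that intuition. Suppose, purely for illustration, that each encoded context independently has some small probability p of matching your current retrieval context (the value of p here is an assumption, not a measured quantity). Then the chance that at least one path is close grows quickly with the number of contexts:

def chance_of_a_close_path(p, n_contexts):
    """P(at least one of n independently encoded contexts matches) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n_contexts

for n in (1, 3, 5, 10):
    print(n, round(chance_of_a_close_path(0.2, n), 2))
# 1: 0.2, 3: 0.49, 5: 0.67, 10: 0.89 - more contexts, more chances of a match.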

A concrete example might help. I’ve been using a spaced retrieval program to learn the basic 2200-odd Chinese characters. It’s an excellent program, and groups similar-looking characters together to help you learn to distinguish them. I am very aware that every time a character is presented, it appears after another character, which may or may not be the same one it appeared after on an earlier occasion. The character that appeared before provides part of the context for the new character. How well I remember it depends in part on how often I have seen it in that same context.

I would ‘learn’ them more easily if they always appeared in the same order, in that the memory trace would be stronger, and I would more easily and reliably recall them on each occasion. However, in the long term, the experience would be disadvantageous, because as soon as I saw a character in a different context I would be much less likely to recall it. I can observe this process as I master these characters — with each different retrieval context, my perception of the character deepens as I focus attention on different aspects of it.

What babies can teach us about effective information-seeking and management

Here’s an interesting study that’s just been reported: 72 seven- and eight-month-old infants watched video animations of familiar fun items being revealed from behind a set of colorful boxes (see the 3-minute YouTube video). What the researchers found is that the babies reliably lost interest when the video became too predictable – and also when the sequence of events became too unpredictable.

In other words, there’s a level of predictability/complexity that is “just right” (the researchers are calling this the ‘Goldilocks effect’) for learning.

Now it’s true that the way babies operate is not necessarily how we operate. But this finding is consistent with other research suggesting that adult learners find it easier to learn and pay attention to material that is at just the right level of complexity/difficulty.

The findings help explain why some experiments have found that infants reliably prefer familiar objects, while other experiments have found instead a preference for novel items. Because here’s the thing about the ‘right amount’ of surprise or complexity — it’s a function of the context.
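
For the technically minded, the 'right amount' can be framed in information-theoretic terms: each event carries an amount of surprise (surprisal) that depends on how probable it was, and attention is best held somewhere between too little and too much. Here is a minimal sketch of that framing; the thresholds are illustrative assumptions, not values from the study.

import math

def surprisal(p):
    """Information content, in bits, of an event that had probability p."""
    return -math.log2(p)

def goldilocks(p_event, low=0.5, high=3.0):
    s = surprisal(p_event)
    if s < low:
        return "too predictable - attention drifts"
    if s > high:
        return "too surprising - attention drifts"
    return "just right - keep watching"

for p in (0.9, 0.3, 0.05):
    print(p, goldilocks(p))   # a very likely, a middling, and a very unlikely event

And because the probabilities shift as you learn, what counts as 'just right' shifts too, which is the point about context above.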

And this is just as true for us adults as it is for them.

We live in a world that’s flooded with information and change. Clay Shirky says: “There’s no such thing as information overload — only filter failure.” Brian Solis re-works this as: “information overload is a symptom of our inability to focus on what’s truly important or relevant to who we are as individuals, professionals, and as human beings.”

I think this is simplistic. Maybe that’s just because I’m interested in too many things and they all tie together in different ways, and because I believe, deeply, in the need to cross boundaries. We need specialists, sure, because every subject now has too much information even for a specialist to master. But maybe that’s what computers are going to be for. More than anything else, we need people who can see outside their specialty.

Part of the problem as we get older, I think, is that we expect too much of ourselves. We expect too much of our memory, and we expect too much of our information-processing abilities. Babies know it. Children know it. You take what you can; each taking is a step; on the next step you will take some more. And eventually you will understand it all.

Perhaps it is around adolescence that we get the idea that this isn’t good enough. Taking bites is for children; a grown-up person should be able to read a text/hear a conversation/experience an event and absorb it all. Anything less is a failure. Anything less is a sign that you’re not as smart as others.

Young children drive their parents crazy wanting the same stories read over and over again, but while the stories may seem simple to us, that’s because we’ve forgotten how much we’ve learned. Probably they are learning something new each time (and quite possibly we could learn something from the repetitions too, if we weren’t convinced we already knew it all!).

We don’t talk about the information overload our babies and children suffer, and yet, surely, we should. Aren’t they overloaded with information? When you think about all they must learn … doesn’t that put our own situation in perspective?

You could say they are filtering out what they need, but I don’t think that’s accurate. Because they keep coming back to pick out more. What they’re doing is taking bites. They’re absorbing what they need in small, attainable bites. Eventually they will get through the entire meal (leaving to one side, perhaps, any bits that are gristly or unpalatable).

The researchers of the ‘Goldilocks’ study tell parents they don’t need to worry about providing this ‘just right’ environment for their baby. Just provide a reasonably stimulating environment. The baby will pick up what they need at the time, and ignore the rest.

I think we can learn from this approach. First of all, we need to cultivate an awareness of the complexity of an experience (I’m using this as an umbrella word encompassing everything from written texts to personal events), being aware that any experience must be considered in its context, and that what might appear (on present understanding) to be quite simple might become less so in the light of new knowledge. So the complexity of an event is not a fixed value, but one that reflects your relationship to it at that time. This suggests we need different information-management tools for different levels of complexity (e.g., tagging that enables you to easily pull out items that need repeated experiencing on appropriate occasions).

(Lucky) small children have an advantage (this is not the place to discuss the impact of ‘disadvantaged’ backgrounds) — the environment is set up to provide plenty of opportunities to re-experience the information they are absorbing in bites. We are not so fortunate. On the other hand, we have the huge advantage of having far more control over our environment. Babies may use instinct to control their information foraging; we must develop more deliberate skills.

We need to understand that we have different modes of information foraging. There is the wide-eyed, human-curious give-me-more mode — and I don’t think this is a mode to avoid. This wide, superficial mode is an essential part of what makes us human, and it can give us a breadth of understanding that can inform our deeper knowledge of specialist subjects. We may think of this as a recreational mode.

Other modes might include:

  • Goal mode: I have a specific question I want answered
  • Learning mode: I am looking for information that will help me build expertise in a specific topic
  • Research mode: I have expertise in a topic and am looking for information in a specific part of that domain
  • Synthesis mode: I have expertise in one topic and want information from other domains that would enrich my expertise and give me new perspectives

Perhaps you can think of more; I would love to hear other suggestions.

I think being consciously aware of what mode you are in, having specific information-seeking and information-management tools for each mode, and having the discipline to stay in the chosen mode, are what we need to navigate the information ocean successfully.
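
As one small, concrete illustration of 'tools for each mode', here is a sketch of mode-aware tagging. The item store, the field names and the example items are all hypothetical; it's just one way to file what you collect so that each foraging mode has its own queue.

items = [
    {"title": "Working memory capacity paper", "mode": "research",     "revisit": True},
    {"title": "History of tea",                "mode": "recreational", "revisit": False},
    {"title": "Chinese character radicals",    "mode": "learning",     "revisit": True},
]

def queue_for(mode):
    """Pull out the items filed under one mode that need re-experiencing."""
    return [item["title"] for item in items if item["mode"] == mode and item["revisit"]]

print(queue_for("learning"))   # ['Chinese character radicals']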

These are some first thoughts. I would welcome comments. This is a subject I would like to develop.

Retraining the brain

A fascinating article recently appeared in the Guardian, about a woman who found a way to overcome a very particular type of learning disability and has apparently helped a great many children since.

As a child, Barbara Arrowsmith-Young had a brilliant, almost photographic, memory for information she read or heard, but she had no understanding. She managed to progress through school and university through a great deal of very hard work, but she always knew (although it wasn’t recognized) that there was something very wrong with her brain. It wasn’t until she read a book (The Man with a Shattered World: The History of a Brain Wound) by the famous psychologist Luria that she realized what the problem was. Luria’s case study concerned a soldier who developed mental disabilities after being shot in the head. His disabilities were the same as hers: “he couldn't tell the time from a clock, he couldn't understand bigger and smaller without drawing pictures, he couldn't tell the difference between the sentences ‘The boy chases the dog’ and ‘The dog chases the boy’.”

On the basis of enriched-environment research, she started an intensive program to retrain her brain — 8-10 hours a day. She found it incredibly exhausting, but after 3-4 months, she suddenly ‘got it’. Something had shifted in her brain, and now she could understand verbal information in a way she hadn’t before.

The ‘Arrowsmith Program’ is now available in 35 schools in Canada and the US, and the children who attend them have often, she claims, been misdiagnosed with ADD or ADHD, dyslexia or dysgraphia. She has just published a book about her experience (The Woman Who Changed Her Brain: And Other Inspiring Stories of Pioneering Brain Transformation).

I can’t, I’m afraid, speak to the effectiveness of her program, because I can’t find any independent research in peer-reviewed journals (this is not to say it doesn’t exist), although there are reports on her own website. But I have no doubt that intensive training in specific skills can produce improvement in specific skills in those with learning disabilities.

There are two specific things that I found interesting. The first is the particular disability that Barbara Arrowsmith-Young suffered from — essentially, it seems, a dysfunction in integrating information.

This disjunct between ‘photographic memory’ and understanding is one I have spoken of before, but it bears repeating, because so many people think that a photographic memory is a desirable ambition, and that any failure to remember exactly is a memory failure. But it’s not a failure; the system is operating exactly as it is meant to. Remembering every detail is counter-productive.

I was reminded of this recently when I read about something quite different: an “inexact” computer chip that’s 15 times more efficient, “challenging the industry’s 50-year pursuit of accuracy”. The design improves efficiency by allowing for occasional errors. One way it achieved this was by pruning some of the rarely used portions of digital circuits. Pruning is of course exactly what our brain does as it develops (infancy and childhood is a time of making huge numbers of connections; then as the brain matures, it starts viciously pruning), and to a lesser extent what it does every night as we sleep (only some of the day’s events and new information are consolidated; many more are discarded).
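
Purely as a toy illustration of use-based pruning (this is neither the chip design nor a brain model; the connection names and the threshold are my own invention), the idea is simply to keep the connections that carry the most traffic and discard the rest:

def prune(usage_counts, keep_fraction=0.5):
    """Keep the most-used connections; discard the rarely used remainder."""
    ranked = sorted(usage_counts, key=usage_counts.get, reverse=True)
    keep = ranked[: max(1, int(len(ranked) * keep_fraction))]
    return {name: usage_counts[name] for name in keep}

connections = {"a-b": 120, "a-c": 3, "b-d": 45, "c-d": 1}
print(prune(connections))   # {'a-b': 120, 'b-d': 45}: 'best', not 'more'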

The moral is: forgetting isn’t bad in itself. Memory failure comes rather when we forget what we want or need to remember. Our brain has a number of rules and guidelines to help it work out what to forget and what to remember. But here’s the thing: we can’t expect an automatic system to get it right all the time. We need to provide some direct (conscious) management.

The second thing I was taken with was this list of ‘learning dysfunctions’. I believe this is a much more useful approach than category labels. Of course we like labels, but it has become increasingly obvious that many disorders are umbrella concepts. Those with dyslexia, for example, don’t all have the same dysfunctions, and accordingly, the appropriate treatment shouldn’t be the same. The same is true for ADHD and Alzheimer’s disease, to take two very different examples.

Many of those with dyslexia and ADHD have shown improvement as a result of specific skills training, but at the moment we’re still muddling around, not sure of the training needed (a side-note for those who are interested — Scientific American has a nice article on how ADHD behavioral therapy may be more effective than drugs in the long run). So, because there are several different problems all being lumped into a single disorder, research finds it hard to predict who will benefit from what training.

But the day will come, I have no doubt, when we will be able to specify precisely what isn’t working properly in a brain, and match it with an appropriate program that will retrain the brain to compensate for whatever is damaged.

Or — to return to my point about choosing what to forget or remember — the individual (or parent) may choose not to attempt retraining. Not all differences are dysfunctional; some differences have value. When we can specify exactly what is happening in the brain, perhaps we will get a better handle on that too.

In the meantime, there is one important message, and it is, when it comes down to it, my core message, underlying all my books and articles: if you (or a loved one, or someone in your care) has any sort of learning or memory problem, whatever the cause, think very hard about the precise difficulties experienced. Then reflect on how important each one is. Then try to discover the specific skills needed to deal with those difficulties that matter. That will require not only finding suggested exercises to practice, but also some experimentation to find what works for you (because we haven’t yet got to the point where we can work this out, except by trial and error). And then, of course, you need to practice them. A lot.

I’m not saying that this is the answer to everyone’s problems. Sometimes the damage is too extensive, or in just the wrong place (there are hubs in the brain, and obviously damage to a hub is going to be more difficult to work around than damage elsewhere). But even if you can’t fully compensate for damage, there are few instances where specific skills training won’t improve performance.

Sharing what works is one way to help us develop the database needed. So if you have any memory or learning problems, and if you have experienced any improvement for whatever reason, tell us about it!

Finding the right strategy through perception and physical movement

I talk a lot about how working memory constrains what we can process and remember, but there’s another side to this — long-term memory acts on working memory. That is, indeed, the best way of ‘improving’ your working memory — by organizing and strengthening your long-term memory codes in such a way that large networks of relevant material are readily accessible.

Oddly enough, one of the best ways of watching the effect of long-term memory on working memory is through perception.

Perception is where cognition begins. It’s where memory begins. But here’s the thing: it is only in the very beginning, as a newborn baby, that this perception is pure, uncontaminated by experience.

‘Uncontaminated’ makes it sound bad, but of course the shaping of perception by experience is vital. Otherwise we’d all be looking around wide-eyed, wondering what was going on. So we need to shape our perception.

For example, if we’re searching for a particular object, we have a mental picture of what we’re looking for, and that helps us find it quicker. Such predictive templates have recently been shown to exist for smell as well.

‘Predictive templates’ are the perceptual version of cognitive schemas. I have mentioned schemas before, in the context of expertise and reading scientific text. But schemas aren’t restricted to such intellectual pursuits; we use schemas constantly, every day of our lives. Schemas (also called mental models or scripts) are mental representations, formed through your experiences, that tell you what to expect from a given situation. This means we don’t have to think too hard when we come up against a familiar situation; we know what to expect.

That also means that we often don’t notice things that don’t fit in with our expectations.

I could talk about that for some time, but what I want to emphasize today is this point that thought begins with perception — and perception begins with the body.

For example, it probably won’t surprise anyone that an educational program, “Moved by Reading”, has been found to help young elementary school children understand texts and math word problems by getting them to manipulate images on a computer screen in accordance with the story. Such virtual ‘acting out’ helped the children understand what was going on in the story and, in the case of the math problems, significantly reduced their attention to irrelevant information in the text. (You can read the journal article (pdf) on this; those who are registered at Edweek can also read the article that brought this to my notice.)

More surprisingly, at the Dance Psychology Lab at the University of Hertfordshire, they’ve apparently discovered that different sorts of dancing help people with different sorts of problem-solving. Improvised dance apparently helps with divergent thinking, where there are multiple answers to a problem. Very structured kinds of dance help with convergent thinking, where you’re looking for the single answer to a problem. The researchers also claim that improvised dance can help those with Parkinson's disease improve their divergent thinking skills. (I’m using the words ‘apparently’ and ‘claim’ because I haven’t seen any research papers on this — but I wanted to mention it because it’s a nice idea, and you can read an article about it and listen to the head of the Dance Lab talk about it in a 20-minute video).

We can readily see how acting out a text can reveal details we might gloss over in reading, and it’s only one step from this to accepting that gesturing might help us solve problems and remember them (as I’ve reported repeatedly). But the idea that dancing in different ways might affect how we think? Not so easily believed. Yet in a recent news report, I talked about two experimental studies demonstrating that moving your hands makes you less inclined to think of abstract solutions to problems (or, conversely, that moving your hands helps you solve problems physically), and that holding your hands close to the object of your perception helps you see details, but hinders you from abstracting commonalities.

This idea that the way you hold or move your body can affect what we might term your level of perception — specific detail vs global — is perhaps echoed (am I drawing too long a bow here?) in a recent observation I made regarding face-blindness (prosopagnosia): that it may be, along with perfect pitch and eidetic memory, an example of what happens when your brain can’t abstract the core concept.

Our own personal experience, supported by a recent study of scene perception, indicates that we can’t do both at once. At any one time you must make the choice: to focus on details, or to focus on the big picture. So the choice is contextual, but it’s also individual — some people will be more inclined to a detail strategy, others to a global strategy. Interestingly, this may change with age, and also with experience.

One aspect of cognitive flexibility is being able to control your use of detail and global perception. This applies across the board, in many different circumstances. You need to think about which type of perception is best in the context.

In the realm of notetaking, for example (as I discuss in my book Effective Notetaking), your goal makes a huge difference to how effective your notes are. The more specific the goal, the fewer notes you need to take, and the more targeted they are. Generally speaking, also, the more specific your goal, the faster you can read/select.

But of course there’s a downside to being fast and targeted (there’s always a downside to any strategy!) — you are likely to miss information that isn’t what you’re after, but is something you need to know in a different or wider context.

There’s something else interesting about speed of processing: we associate faster processing speeds with higher intelligence, and we associate concentration with faster processing speeds. That is, when we’re concentrating, we can read/work faster. Contrariwise, I believe (though I don’t think there’s any research on this — do tell me if you know of any), if we can force ourselves into a faster mode of operation, our concentration will be better.

So fast is good, but risks missing relevant information — implying that sometimes slow is better. Which leads me to a thought: is Csikszentmihalyi’s famous “flow” perhaps what you achieve when you get the speed just right? And can you therefore help yourself achieve that flow state through physical means? (Inevitably leading me to think of t’ai chi.)

Some thoughts for the day!

When are two (or more) heads better than one?

We must believe that groups produce better results than individuals — why else would we have so many “teams” in the workplace, and so many meetings? But many of us also, of course, hold the opposite belief: that most meetings are a waste of time; that teams might be better for some tasks (and for other people!), but not for all tasks. So what do we know about the circumstances that make groups better value?

A recent study involving some 700 people, working on a wide variety of tasks in small groups (two to five), found that much of the difference in performance between groups (specifically, around 40% of the variation in performance) could be explained by a measure called “collective intelligence”.

It was called that (I assume) on the basis that it was such an important factor in predicting performance on such a wide range of tasks (from visual puzzles to negotiations, brainstorming, games and complex rule-based design assignments). But the intriguing thing about this collective intelligence is that it didn’t seem to reflect the individual intelligence of the groups’ members. Instead, the most important factor in a group’s collective intelligence appeared to be how well its members worked together.
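
To make that “40% of the variation” figure concrete: the researchers’ approach was essentially a factor analysis — score each group on a battery of tasks, then extract the dominant factor from the correlations between tasks. Here’s a rough sketch in Python with made-up numbers (my own construction, not the study’s actual data or code):

    import numpy as np

    rng = np.random.default_rng(0)
    n_groups, n_tasks = 40, 5

    # Synthetic scores: each group has a latent 'c' factor, plus task noise
    # (noise level chosen so the result lands near the study's ~40% figure).
    c = rng.normal(size=(n_groups, 1))
    scores = c @ np.ones((1, n_tasks)) + rng.normal(scale=1.7, size=(n_groups, n_tasks))

    # The share of total variance carried by the largest eigenvalue of the
    # task-by-task correlation matrix is the 'variance explained'.
    corr = np.corrcoef(scores, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    print(f"First factor explains {eigvals[0] / eigvals.sum():.0%} of the variance")

The point of the toy model is simply that a single common factor can account for a large share of performance across very different tasks — which is what justified giving that factor a name.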

There were two (or three) factors that seemed particularly important for this. The main one was the “social sensitivity” of the members — meaning how well the individuals perceive each other’s emotions. The number of women in the group also enhanced collective intelligence, but this may not be a separate factor — it may simply reflect the tendency for women to be more socially sensitive.

The other factor was the extent to which everyone contributed — groups where one person dominated were less collectively intelligent. This fits in with a review of workplace teams, which found that teams that spent time sharing information performed better overall in their tasks — even though a lot of the information was already known by everyone in the group. (Although it must be added that bringing in new information was even better!) It also fits in with the same review’s finding that teams whose members had more similar backgrounds tended to share more information than those with greater diversity.

That’s a depressing finding, but the problem isn’t insoluble.

The review (which looked at studies totaling some 4,800 groups, involving over 17,000 people) also found that teams communicate better when they engage in tasks where they are instructed to come up with a correct/best answer rather than a consensual solution.

Previous research (see reports here and here) has suggested that brainstorming actually produces fewer ideas than the same individuals would produce working alone, and that groups working together to remember something recall more poorly than the same individuals would on their own. One big reason for these findings, it is thought, is that hearing other people’s ideas disrupts your own retrieval strategy. However, this is less likely to occur in a structured situation, where turns are taken.

So groups can have inhibitory effects (which are apparently worse when the information being recalled is more complex), and it seems likely that this is one of the problems social sensitivity helps fight against. And indeed, previous research has indicated that, when the meeting is unstructured, with everyone chipping in as they feel like it, the specificity of the suggestions is important — with this being affected by how well the group members know each other. (If turns are taken, on the other hand, it is waiting time that’s important.)

Another recent study (which I reported on a few weeks ago, and which is what triggered this post) found that although two people working together can make better decisions than either one could make alone, this is only true when the participants are able to accurately judge their level of confidence in their information. If one of them is working from inaccurate information and doesn’t realize it, then (unsurprisingly!) the one with accurate information is better off without them.

Again, we can surmise that a group where members know each other well is one where they have a good understanding of the confidence they can put in each other’s judgments and claims.

Perhaps relatedly, another study indicates that sharing information can actually exacerbate problems when the people involved have different viewpoints. People mentally organize information in different ways, and cues that help one person recall may inhibit another’s.

So where does all this leave us?

How effective a group is depends a lot on how attuned its members are to each other’s emotions and capabilities. Information sharing is a positive process that enhances group productivity even when the information is already familiar to members, and therefore strategies to encourage information sharing are useful.

There are three classes of strategies that could be used:

  • Providing structure to the discussions (e.g. taking turns, setting time limits, having a moderator who encourages suggestions to be specific and novel)
  • Providing instruction in how to become more socially sensitive (e.g. learning about physical cues to emotion)
  • Encouraging informal conversation and “team-building” exercises that help team members become more familiar with each other (bearing in mind that the point of such exercises is to help members become more aware of each other’s emotions and capabilities, and designing them accordingly).

The first of these strategies is most important for groups that have come together for a specific occasion, or that meet only rarely. The second is useful for individuals who are (as almost everyone is!) going to work collaboratively at least sometimes — that is, it is not dependent on a particular group. The third class of strategies is useful for long-term groups.

In all these cases, such strategies are most needed when a group contains more diverse members who are not well known to each other.

There is also a fourth class of strategy, which relates to assessing the effectiveness of group collaboration for particular tasks. For example, the difficulty or complexity of the task is an important factor — more complex tasks are more efficiently learned or processed by groups, while low-complexity tasks are better left to individuals. But greater complexity also requires a group that works well together.

Type of task is another likely factor. For example, organizational or memory-retrieval tasks may be best left to individuals, or to small, similarly inclined groups, in the early stages, because our ways of approaching these tasks are quite idiosyncratic and can be hampered by contrary approaches. Of course, diversity is needed at a later stage to ensure thoroughness and/or wide applicability.

References

Bahrami, B., Olsen, K., Latham, P.E., Roepstorff, A., Rees, G. & Frith, C.D. (2010). Optimally interacting minds. Science, 329(5995), 1081-1085.

Basden, B.H., Basden, D.R., Bryner, S. & Thomas, R.L. III (1997). A comparison of group and individual remembering: Does collaboration disrupt retrieval strategies? Journal of Experimental Psychology: Learning, Memory and Cognition, 23, 1176-1189.

Kirschner, F., Paas, F. & Kirschner, P.A. (2010). Task complexity as a driver for collaborative learning efficiency: The collective working-memory effect. Applied Cognitive Psychology, advance online publication.

Mesmer-Magnus, J.R. & DeChurch, L.A. (2009). Information sharing and team performance: A meta-analysis. Journal of Applied Psychology, 94(2), 535-546.

Ormerod, T. (2005). The way we were: Situational shifts in collaborative remembering. Research project funded by the Economic and Social Research Council (ESRC). https://www.eurekalert.org/news-releases/702378

Weldon, M.S. & Bellinger, K.D. (1997). Collective and individual processes in remembering. Journal of Experimental Psychology: Learning, Memory and Cognition, 23, 1160-1175.

Woolley, A.W., Chabris, C.F., Pentland, A., Hashmi, N. & Malone, T.W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686-688.