

Some Surprising Findings About Learning in the Classroom

  • The quality of the teacher doesn't affect how much students learn (that doesn't mean it doesn't affect other factors — e.g., interest and motivation).
  • Low ability students learn just as much as high ability students when exposed to the same experiences.
  • More able students learn more because they seek out other learning opportunities.
  • Tests, more than measuring a student’s learning, reflect the student’s motivation.

I want to talk to you this month about an educational project that’s been running for some years here in New Zealand. The Project on Learning spent three years (1998-2000) studying, in excruciating detail, the classroom experiences of 9- to 11-year-olds. The study used miniature video cameras and individually worn microphones, as well as trained observers, to record every detail of individual students’ experiences during the course of particular science, maths, or social studies units. In each of the 16 classrooms involved, four students were randomly selected: two girls and two boys, two of above-average ability and two of below-average ability.

On the basis of this data, the researchers came to a number of startling conclusions. Here are some of them (as reported by Emeritus Professor Graham Nuthall on national radio):

* that students learn no more from experienced teachers than they learn from beginning teachers

* that students learn no more from award-winning teachers than teachers considered average

* that students already know 40-50% of what teachers are trying to teach them

* that there are enormous individual differences in what students learned from the same classroom experiences — indeed, hardly any two students learned the same things

* that low ability students learn just as much as high ability students when exposed to the same experiences

This is amazing stuff!

We do have to be careful what lessons we draw from this. For example, I don’t think we should conclude that it doesn’t matter whether a teacher is any good or not. For a start, the study didn’t use bad teachers. (Personally, I had one university lecturer who actually put my knowledge of the subject into deficit: I started out knowing something about calculus, and by the time I’d spent several months listening to him, I was hopelessly confused.) Secondly, there are many more aspects to the classroom experience than simply what the student learns from a particular study unit.

Nevertheless, the idea that a student learns as much from an okay teacher as from a great one is startling. Here’s a quote from Professor Nuthall: “Teachers like the rest of us are concerned for student learning and assume that learning will flow naturally from interesting and engaging classroom activities. But it does not.”

It’s not so surprising that different students learn different things from the same experiences — we all knew that — but we perhaps didn’t fully appreciate the degree to which it is true. The most surprising finding, of course, is that low-ability students learn just as much as high-ability students when exposed to the same experiences. That is, no doubt, the finding most people will find hardest to believe. Clearly the more able students are learning more than the less able overall, so how does that work?

According to the researchers, “a significant proportion of the critical learning experiences for the more able students were those that they created for themselves, with their peers, or on their own. The least able students relied much more on the teacher for creating effective learning opportunities.”

This does in fact fit with my own experience: marveling at my son’s knowledge of various subjects, I have on a number of occasions questioned him about the origins of that knowledge. Invariably, it turns out that it came from books he had read at home, rather than anything he was taught at school. (And please believe I am not knocking my son’s schools or his teachers; I have been reasonably happy with them, most of the time.)

In this interview, Professor Nuthall mentioned another finding that has come out of the research — that tests, more than measuring a student’s learning, reflect the student’s motivation. “When a student is highly motivated to do the best they can on a test, then that test will measure what they know or can do. When that motivation is not there (as it is not for most students most of the time) then the test only measures what they can be bothered to do.”

Thought-provoking!

Professor Nuthall’s research studies were cited in the 3rd edition of the Handbook of Research on Teaching (the “bible” for teaching research) as one of the five or six most significant research projects in the world. The research team of Professor Nuthall and Dr Adrienne Alton-Lee (who invented the techniques used in the Project on Learning) was cited in the most recent edition as one of the leading research teams in the history of research on teaching.

[see below for some of the academic publications that report the findings of the Project on Learning (plus an early article on the techniques used in the Project)]

The wider picture

An OECD report on learning notes that, for more than a century, about one in six pupils have reported that they hate (or hated) school, and a similar proportion have failed to achieve literacy and numeracy skills sufficient to be securely employable. The report asks: “Maybe traditional education as we know it inevitably offends one in six pupils?”

In a recent special report on education put out by CNN, it is claimed that, in the U.S., charter schools (publicly financed schools that operate largely independent of government regulation) now count nearly 700,000 students. And, most tellingly, recent figures put the number of children taught at home at more than a million, a 29% jump from 1999. (To put this in context, there are apparently some 54 million students in the U.S.).

One could argue that the rise in people seeking alternatives to a traditional education is a direct response to the (many) failings of public education, but this is assuredly a simplistic answer. Public education has always had major problems. At different times and places, these problems have been different, but a mass education system will never be suitable for every child. Nor can it ever, by its nature (basically a factory system, designed to instil required skills in as many children as possible), be the best for anyone.

Indeed, we are closer to a system that endeavors to approach students as individuals than we have ever been (we still have a long way to go, of course).

I believe the increased popularity of alternatives to public education reflects many factors, but most particularly, the simple awareness that there ARE alternatives, and the increased lack of faith in professionals and experts.

Impaired reading skills are found in some 20% of children. No educational system in the world has mastered the problem of literacy; every existing system produces an unacceptably high level of failure. So we cannot point to a particular program of instruction and say: this is the answer. Indeed, I am certain that such an aim would be doomed to failure. Given the differences between individuals, how can anyone believe that there is some magic bullet that will work on everyone?

Having said that, we have a far greater idea now of the requirements of an effective literacy program. [see Reading and Research from the National Reading Panel]

These articles originally appeared in the August and September 2004 newsletters.

References

Project on Learning references

  • Nuthall, G. A. & Alton-Lee, A. G. 1993. Predicting learning from student experience of teaching: A theory of student knowledge acquisition in classrooms. American Educational Research Journal, 30 (4), 799-840.
  • Nuthall, G. A. 1999. Learning how to learn: the evolution of students’ minds through the social processes and culture of the classroom. International Journal of Educational Research, 31 (3), 139-256.
  • Nuthall, G. A. 1999. The way students learn: Acquiring knowledge from an integrated science and social studies unit. Elementary School Journal, 99, 303-341.
  • Nuthall, G. A. 2000. How children remember what they learn in school. Wellington: New Zealand Council for Educational Research.
  • Nuthall, G. A. 2001. Understanding how classroom experiences shape students’ minds. Unterrichtswissenschaft: Zeitschrift für Lernforschung, 29 (3), 224-267.

Homework revisited

At the same time as a group of French parents and teachers have called for a two-week boycott of homework (despite the fact that homework is officially banned in French primary schools), and just after the British government scrapped homework guidelines, a large long-running British study came out in support of homework.

The study has followed some 3000 children from preschool through (so far) to age 14 (a subset of around 300 children didn’t attend preschool but were picked up when they started school). The latest report from the Effective Pre-school, Primary and Secondary Education Project (EPPSE), which has a much more complete database to call on than previous studies, has concluded that, for those aged 11-14, time spent on homework was a strong predictor of academic achievement (in three core subjects).

While any time spent on homework was helpful, the strongest effects were seen in those doing homework for 2-3 hours daily. This remained true even after prior self-regulation was taken into account.

Of course, even with such a database as this, it is difficult to disentangle other positive factors that are likely to correlate with homework time — factors such as school policies, teacher expectations, parental expectations. Still, this study gives us a lot of data we can mull over and speculate about.

For example, somewhat depressingly, only 28% of students said they were sometimes given individualized work. Many weren’t impressed by the time it took some teachers to mark and return their homework: only 68% of girls and 75% of boys agreed that ‘Most teachers mark and return my homework promptly’. Nor were they impressed with the standard of the work required: 49% of those whose family had no educational qualifications, 34% of those whose family had school or vocational qualifications, and 30% of those whose family had higher qualifications agreed that ‘teachers are easily satisfied’, which suggests, among other things, that teachers of less privileged students markedly underestimate their students’ abilities. Also depressingly, over a third (36%) agreed that ‘pupils who work hard are given a hard time by others’. Again, this breaks down quite differently depending on the student’s background: 46% of those in the lowest ‘Home Learning Environment’ (HLE) agreed with the statement, a proportion that decreases steadily to reach 27% (still too high!) among those in the highest HLE.

One much-touted benefit of homework, especially among those in the ‘homework for the sake of homework’ camp, is that it teaches self-regulation (although it can be, and has been, equally argued that by setting useless homework, teachers weaken self-regulation). The present study did find social-behavioral benefits associated with homework, which would seem to support the former view. However, these benefits were only seen in behavior at age 14, not in any changes between 11 and 14; in other words, homework wasn’t driving change over time. This would seem to argue against the idea that doing homework teaches children how to manage their own learning.

Another interesting (of the many) key findings of the report concerns children who ‘succeed against the odds’ — that is, they do better than expected considering their socioeconomic or personal circumstances. Parents of these children tend to engage in ‘active cultivation’ — reading and talking to them when young, providing them with many and wide-ranging learning experiences throughout their childhood, supporting and encouraging their learning. Such support tended to be lacking for those children who did not transcend their circumstances, whose parents often felt helpless about parenting and about education.

In view of my last blog post, I would also like to particularly note that ‘good’ students tended to have a strong internal locus of control, while ‘poor’ students tended to feel helplessness, and had the belief that the ability to learn was an inborn talent (that they didn’t possess).

But education providers shouldn’t simply blame the parents! Teachers, too, are important, and those students who succeeded against the odds also attributed part of their success to supportive and empowering teachers, while those disadvantaged students who didn’t succeed mentioned the high number of supply teachers and disorganized lessons.

There is also a role for peers, and for extracurricular activities — families with academically successful children tended to value extracurricular activities, while those with less successful students viewed them, dismissively, as ‘fun’, rather than of any educational value.

You can download the full report at https://www.education.gov.uk/publications/standard/publicationDetail/Page1/DFE-RR202  or see the summary at http://www.ioe.ac.uk/newsEvents/62517.html

There’s a lot of controversy about the value of homework, for understandable reasons. The inconsistent findings of homework research point to the fact that we can’t say, simplistically, that all children of [whatever age] should do [so many] hours of homework — because the value of homework rests on its quality and context, and on its interaction with the individual. Homework may be an effective strategy, but it is one that is all too often carried out ineffectively.

Homework for the sake of homework is always a bad idea, and if the teacher can’t articulate what the purpose of the homework is (or that purpose isn’t a good one!), then they shouldn’t set it.

So what are good purposes for homework?

The most obvious is to perform tasks that can’t, for reasons of time or resources, be accomplished in the classroom. But this, of course, is less straightforward than it appears. Practice, for example, would seem to be a clear contender, but optimally distributed retrieval practice (i.e., testing) is usually best done in the classroom. Projects generally require time and resources beyond the classroom, but parts of a project may well require school resources, group activity, or teacher feedback.

Maybe we should turn this question around: what are classrooms good for?

Contrary to popular practice, the simple regurgitation of information, from teacher to student, is not what classrooms are best used for. Such information is more efficiently absorbed from texts or videos or podcasts — which students can read/watch/listen to as often as they need to. No, there are five main activities for which classrooms are best suited:

  • Group activities (including class discussion)
  • Activities involving school resources (such as science experiments — I am using ‘classroom’ broadly)
  • Praxis (as seen in the apprenticeship model — a skill or activity is modeled by a skilled practitioner for students to imitate; the practitioner provides feedback)
  • Motivation (the teacher engages and enthuses the students; teacher and peer feedback provides on-going help to stay on-task)
  • Testing (not to put students under pressure to perform on tests that will decide their future, but because retrieval practice is the best strategy for learning there is — that is, testing needs to be done in a completely different way, and with students and teachers understanding that these tests are for the purposes of learning, not as a judgment on ability)

All of this is why the flipped classroom model is becoming so popular. I’m a great fan of it, although of course it needs to be done well. Here are some links for those who want to learn more:

An article on flipped classrooms, what they are and some teachers’ and students’ experiences. http://www.azcentral.com/news/articles/2012/03/31/20120331arizona-school-online-flipping.html

A case study of ‘flipped classroom’ use at Byron High School, where math mastery has jumped from 30% in 2006 to 74% in 2011 according to the Minnesota Comprehensive Assessments. http://thejournal.com/articles/2012/04/11/the-flipped-classroom.aspx

A brief interview with high school chemistry teacher Jonathan Bergmann, who now helps other teachers ‘flip’ their classrooms, and is co-author of a forthcoming book on the subject. http://www.washingtonpost.com/local/education/the-flip-classwork-at-home-homework-in-class/2012/04/15/gIQA1AajJT_story.html

But there's one reason for all the argument on the homework issue that doesn't get a lot of airtime: there is no clear consensus on what school is for and what students should be getting out of it. And maybe part of the reason for that is that some people (some teachers, some education providers and officials) don’t want to articulate what they believe school is all about, because they know many people would be outraged by their opinions. But if you think some people are going to be appalled, maybe you should rethink your position!

Now of course different individuals are going to want different things from education, but until all parties can front up and lay out clearly exactly what they think school is for, then we’re not going to be able to construct a system and a curriculum that teaches effectively and reliably across the board.

Which is not to say I think we'd all agree. But if people openly and honestly put their agenda on the table, then we could openly state what particular schools are for, and different guidelines and assessment tools could be used appropriately.

But first and most important: everyone (students, teachers, and parents) needs to realize that, notwithstanding the role of genes, intelligence and learning ‘talents’ are far from fixed. (I’ve talked about this on a number of occasions, but if you want to read more about this, and the importance of self-regulation, from another source, check out this blog post at Scientific American.) If a child is not learning, that is a failure of a number of aspects of their situation; it is not (absent severe brain damage) because the child is too stupid or lazy. (On which subject, you might like to read a great article in the Guardian about 'poor economics'.)

What I think about homework is that we should get away completely from the homework/classwork divide. What we need to do is decide what work the student needs to do (to fulfil the articulated purpose), and then divide that into work that is most effectively done in the classroom (given the student's circumstances) and work that is best done in the student's own time and at their own pace.

So what do you think?

Desirable difficulty for effective learning

When we are presented with new information, we try to connect it to information we already hold. This is automatic. Sometimes the new information fits in easily; other times the fit is more difficult, perhaps because some of our old information is wrong, or perhaps because we lack some of the knowledge needed to fit the two together.

When we're confronted by contradictory information, our first reaction is usually surprise. But if the surprise continues, with the contradictions perhaps increasing, or at any rate becoming no closer to being resolved, then our emotional reaction turns to confusion.

Confusion is very common in the learning process, despite most educators thinking that effective teaching is all about minimizing, if not eliminating, confusion.

But recent research has suggested that confusion is not necessarily a bad thing. Indeed, in some circumstances, it may be desirable.

I see this as an example of the broader notion of ‘desirable difficulty’, which is the subject of my current post. But let’s look first at this recent study on confusion for learning.

In the study, students engaged in ‘trialogues’ involving themselves and two animated agents. The trialogues discussed possible flaws in a scientific study, and the animated agents took the roles of a tutor and a student peer. To get the student thinking about what makes a good scientific study, the agents disagreed with each other on certain points, and the student had to decide who was right. On some occasions, the agents made incorrect or contradictory statements about the study.

In the first experiment, involving 64 students, there were four opportunities for contradictions during the discussion of each research study. Because the overall levels of student confusion were quite low, a second experiment, involving 76 students, used a delayed manipulation, where the animated agents initially agreed with each other but eventually started to express divergent views. In this condition, students were sometimes then given a text to read to help them resolve their confusion. It was thought that, given their confusion, students would read the text with particular attention, and so improve their learning.

In both experiments, students who were genuinely confused by the contradiction between the two agents did significantly better on the test at the end.

A side-note: self-reports of confusion were not very sensitive, and students’ responses to forced-choice questions following the contradictions were more sensitive at inferring confusion. This is a reminder that students are not necessarily good judges of their own confusion!

The idea behind all this is that, when there’s a mismatch between new information and prior knowledge, we have to explore the contradictions more deeply — make an effort to explain the contradictions. Such deeper processing should result in more durable and accessible memory codes.

Such a mismatch can occur in many, quite diverse contexts — not simply in the study situation. For example, unexpected feedback, anomalous events, obstacles to goals, or interruptions of familiar action sequences, all create some sort of mismatch between incoming information and prior knowledge.

However, all instances of confusion aren’t necessarily useful for learning and memory. They need to be relevant to the activity, and of course the individual needs to have the means to resolve the confusion.

As I said, I see a relationship between this idea of the right level and type of confusion enhancing learning, and the idea of desirable difficulty. I’ve talked before about the ‘desirable difficulty’ effect (see, for example, Using 'hard to read' fonts may help you remember more). Both of these ideas, of course, connect to a much older and more fundamental idea: that of levels of processing. The idea that we can process information at varying levels, and that deeper levels of processing improve memory and learning, dates back to a paper written in 1972 by Craik and Lockhart (although it has been developed and modified over the years), and underpins (usually implicitly) much educational thinking.

But it’s not so much the fundamental notion — that deeper processing helps memory and learning, and that certain desirable difficulties encourage deeper processing — that interests me, as the idea of getting the level right.

Too much confusion is usually counter-productive; too much difficulty the same.

Getting the difficulty level right is something I have talked about in connection with flow. On the face of it, confusion would seem to be counterproductive for achieving flow, and yet ... it rather depends on the level of confusion, don't you think? If the student has clear paths to follow to resolve the confusion, the information flow doesn't need to stop.

This idea also, perhaps, has connections to effective practice principles — specifically, what I call the ‘Just-in-time rule’. This is the principle that the optimal spacing for your retrieval practice depends on you retrieving the information just before you would have forgotten it. (That’s not as occult as it sounds! But I’m not here to discuss that today.)

It seems to me that another way of thinking about this is that you want to find that moment when retrieval of that information is at the ‘right’ level of difficulty — neither too easy, nor too hard.
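Purely as an illustration (the numbers and names here are my own, not anything from the research discussed in this post), the just-in-time idea can be sketched as a simple expanding-interval scheduler: each successful retrieval pushes the next review further out, while a failure signals that the gap grew too fast and pulls it back in.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One fact or skill being practiced (illustrative values only)."""
    interval_days: float = 1.0   # current gap before the next review
    ease: float = 2.0            # how fast the gap grows after a success

def schedule_next(item: Item, recalled: bool) -> float:
    """Return the next review gap in days.

    If retrieval succeeded, the item was reviewed 'just in time' (or
    earlier), so the gap can expand. If it failed, we waited too long
    and overshot forgetting, so we fall back to daily review.
    """
    if recalled:
        item.interval_days *= item.ease   # expand the spacing
    else:
        item.interval_days = 1.0          # reset after a failed retrieval
    return item.interval_days

item = Item()
gaps = [schedule_next(item, recalled=True) for _ in range(4)]
# gaps == [2.0, 4.0, 8.0, 16.0]: each successful retrieval doubles the spacing
```

Spaced-repetition software uses more elaborate versions of this same expanding-interval idea, with the growth rate adjusted per item according to how the learner actually performs.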

Successful teaching is about shaping the information flow so that the student experiences it — moment by moment — at the right level of difficulty. This is, of course, impossible in a factory-model classroom, but the mechanics of tailoring the information flow to the individual are now made possible by technology.

But technology isn't the answer on its own. To achieve optimal results, it helps if the individual student is aware that the success of their learning depends on their managing the information flow (or will at least be more effective for it; some will be successful regardless of the inadequacy of the instruction). This means they need to provide honest feedback, they need to be able to monitor their learning and recognize when they have ‘got’ something and when they haven’t, and they need to understand that if one approach to a subject isn’t working for them, they need to try a different one.

Perhaps this provides a different perspective for some of you. I'd love to hear of any thoughts or experiences teachers and students have had that bear on these issues.

References

D’Mello, S., Lehman B., Pekrun R., & Graesser A. (Submitted). Confusion can be beneficial for learning. Learning and Instruction.

Social factors impact academic achievement

A brief round-up of a few of the latest findings reinforcing the fact that academic achievement is not all about academic ability or skills. Most of these relate to the importance of social factors.

Improving social belonging improves GPA, well-being & health, in African-American students

From Stanford, we have a reminder of the effects of stereotype threat, and an interesting intervention that ameliorated it. The study involved 92 freshmen, of whom 49 were African-American, and the rest white. Half the participants (none of whom were told the true purpose of the exercise) read surveys and essays written by upperclassmen of different ethnicities describing the difficulties they had fitting in during their first year at school. The other subjects read about experiences unrelated to a sense of belonging. The treatment subjects were then asked to write essays about why they thought the older college students' experiences changed, with illustrations from their own lives, and then to rewrite their essays into speeches that would be videotaped and could be shown to future students.

The idea of this intervention was to get the students to realize that everyone, regardless of race, has difficulty adjusting to college, and has times when they feel alienated or rejected.

While this exercise had no apparent effect on the white students, it had a significant impact on the grades and health of the black students. Grade point averages went up by almost a third of a grade between their sophomore and senior years, and 22% of them landed in the top 25% of their graduating class, compared to about 5% of black students who didn't participate in the exercise.

Moreover, the black students in the treatment group reported a greater sense of belonging compared to their peers in the control group; they were happier, less likely to spontaneously think about negative racial stereotypes, and apparently healthier (3 years after the intervention, 28% had visited a doctor recently, vs 60% in the control group).

Source: http://news.stanford.edu/news/2011/march/improve-minority-grades-031711…

Protecting against gender stereotype threat

Stereotype threat is a potential factor for gender as well as ethnicity.

I’ve reported on a number of studies showing that reminding women or girls of gender stereotypes about math results in poorer performance on subsequent math tests. A new study suggests that women could be “inoculated” against such effects if their math or science class is taught by a woman. Although women’s academic performance didn’t suffer in these experiments, their engagement with and commitment to their STEM major was significantly affected.

In the first study, 72 women majoring in STEM subjects were given several tests measuring their implicit and explicit attitudes towards math vs English, plus a short but difficult math test. Half the students were (individually) tested by a female peer expert, supposedly double-majoring in math and psychology, and half by a male peer. Those tested by a male showed negative implicit attitudes towards math, while those tested by a female showed equal liking for math and English on the implicit attitudes test. Similarly, women implicitly identified more with math in the presence of the female expert. On the math test, women who met the female expert attempted more problems (an average of 7.73 out of 10, compared to 6.39). There was no effect on performance, but because of the difficulty of the test, there was a floor effect.

In the second study, 101 women majoring in engineering were given short biographies of five engineers, who were either male or female, or descriptions of engineering innovations (control condition). Again, women presented with female engineers showed equal preference for math and English on the subsequent implicit attitudes test, while those presented with male engineers or innovations showed a significant implicit negative attitude to math. Implicit identification with math, however, wasn’t any stronger after reading about female engineers. Those who read about female engineers did report greater intentions to pursue an engineering career, and this was mediated by greater self-efficacy in engineering. Again, there was no effect on explicit attitudes toward math.

In the third study, the performance of 42 female and 49 male students in introductory calculus course sections taught by male instructors (8 sections) and female instructors (7 sections) was compared. Professors were yoked to same-sex teaching assistants.

As with the earlier studies, female students implicitly liked math and English equally when the teacher was a woman, but had a decidedly more negative attitude toward math when their instructor was a man. Male students were unaffected by teacher gender. Similarly, female students showed greater implicit identification with math when their teacher was a woman; male students were unaffected. Female students also expected better grades when their teacher was a woman; male students didn’t differ as a function of teacher gender. (It should be noted that this wasn’t because they thought the women would be more generous markers; marking was pooled across all the instructors, and the students knew this.) There was no effect of teacher gender on final grade (but there was a main effect of student gender: women outperformed men).

In other words, the findings of the third study confirmed the effects on implicit attitudes towards STEM subjects, and demonstrated that male students were unaffected by the teacher-gender effects that affected female students.

Now we come to engagement. At the beginning of the semester, female students were much less likely than male students (9% vs. 23%) to respond to questions put to the class, but later on, female students in sections led by women were much more likely to respond to such questions than were women in courses taught by men (46% vs 7%). Interestingly, more male students also responded to questions posed by female instructors (42% vs 26%). That would seem to suggest that male instructors are much more likely to engage in strategies that discourage many students from engaging in the class. But undeniably, women are more affected by this.

Additionally, at the beginning of the courses, around the same number of female students approached their instructors, regardless of instructor gender (12-13%). But later, while the percentage of female students approaching female instructors stayed constant, none of them approached male instructors. This could be taken to mean that male instructors consistently discouraged such behavior. Male students’ behavior, by contrast, did not change (an average of 7% at both Time 1 and Time 2).

The number of students who asked questions in class did not vary over time, or by student gender. However, it did vary by teacher gender: 22% of both male and female students asked questions in class when taught by women, while only 15% did so in courses taught by men.

Some of these effects then seem to indicate that male college instructors are more inclined to discourage student engagement. What the effects of that are, remains to be seen.

Source: http://www.insidehighered.com/news/2011/03/03/study_suggests_role_of_ro…

Social and emotional learning programs found to boost student improvement

A review of 213 school programs designed to enhance students' social and emotional development has found that such programs not only significantly improved social and emotional skills, caring attitudes, and positive social behaviors, but also resulted in significant improvement on achievement tests (although only a small subset of these programs actually looked at this aspect, the numbers of students involved were very large).

The average improvement in grades and standardized-test scores was 11 percentile points — an improvement that falls within the range of effectiveness of academic interventions.
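For the statistically curious, a percentile-point figure like this is conventionally derived from a standardized effect size (Cohen's d), assuming normally distributed scores. A minimal sketch of the usual conversion; the d value below is illustrative, chosen to reproduce an 11-point gain, rather than quoted from the paper:

```python
from math import erf, sqrt

def percentile_gain(d):
    """Percentile-point gain for the average treated student, assuming
    normally distributed scores: 100 * Phi(d) - 50 (the 'U3' conversion)."""
    phi = 0.5 * (1 + erf(d / sqrt(2)))  # standard normal CDF at d
    return 100 * phi - 50

# An effect size of about d = 0.27 corresponds to an 11-point gain.
print(round(percentile_gain(0.27)))  # 11
```

In other words, the average student in these programs moved from the 50th to roughly the 61st percentile of the untreated distribution.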

Source: http://www.physorg.com/news/2011-02-social-emotional-boost-students-ski…

http://www.edweek.org/ew/articles/2011/02/04/20sel.h30.html

Boys need close friendships

Related to this perhaps (I looked but couldn’t find any gender numbers for the SEL programs), from the Celebration of Teaching and Learning Conference in New York, developmental psychologist Niobe Way argues that one reason boys are struggling in school is that they are experiencing a "crisis of connection." Stereotypical notions of masculinity that emphasize separation and independence conflict with their need for close friendships. She's found that many boys do have close friendships, but that these are being discouraged by anxiety about being seen as gay or effeminate.

Way says that having close friendships is linked to better physical and mental health, lower rates of drug use and gang membership, and higher levels of academic achievement and engagement. Asked what teachers could do, she suggested allowing boys to sit next to their best friends in class.

Source: http://blogs.edweek.org/teachers/teaching_now/2011/03/psychologist_boys…

High rate of college students with unrecognized hearing loss

On a completely different note, a study involving 56 college students has found that fully a quarter of them showed 15 decibels or more of hearing loss at one or more test frequencies — an amount that is not severe enough to require a hearing aid, but could disrupt learning. The highest levels of high frequency hearing loss were in male students who reported using personal music players.

Source: http://www.physorg.com/news/2011-03-college-students.html

References

Walton, G. M., & Cohen, G. L. (2011). A Brief Social-Belonging Intervention Improves Academic and Health Outcomes of Minority Students. Science, 331(6023), 1447-1451.

Stout, J. G., Dasgupta, N., Hunsinger, M., & McManus, M. A. (2011). STEMing the tide: using ingroup experts to inoculate women's self-concept in science, technology, engineering, and mathematics (STEM). Journal of Personality and Social Psychology, 100(2), 255-270.

Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The Impact of Enhancing Students’ Social and Emotional Learning: A Meta-Analysis of School-Based Universal Interventions. Child Development, 82(1), 405-432.

Le Prell, C. G., Hensley, B. N., Campbell, K. C. M., Hall, J. W., & Guire, K. (2011). Evidence of hearing loss in a ‘normally-hearing’ college-student population. International Journal of Audiology, 50(S1), S21-S31.

Maybe it has nothing to do with self-control

A Scientific American article talks about a finding that refines a widely-reported association between self-regulation and academic achievement. This association relates to the famous ‘marshmallow test’, in which young children were left alone with a marshmallow, having been told that if they could hold off eating it until the researcher returned, they would get two marshmallows. The ability of the young pre-school children to wait has been linked to subsequent achievement at school, and indeed has been said to be as important as IQ.

The finding relates to other factors that might be involved in a child’s decision not to wait — specifically, children who live in an environment where anything they have could be taken away at any time are making a completely rational choice by not waiting.

Another recent study makes a wider point: the children in the classical paradigm don’t know how long they will have to wait. This, the researchers say, changes everything.

In a survey, adults were asked to imagine themselves in a variety of scenarios in which they were told how long they had been at an activity, such as watching a movie, practicing the piano, or trying to lose weight, and were then asked how long they thought it would be until they reached their goal or the end. There were marked differences in responses depending on whether the scenario had a relatively well-defined length or was more ambiguous.

Now, this in itself is no surprise. What is a surprise is that, rather than the usual feeling that the longer you’ve waited the closer you are to the end, when you don’t know anything about when the outcome will occur, the reverse holds: the longer you wait, the farther away the outcome seems.

The researchers suggest that this changes the interpretation of the marshmallow test — not in terms of predicting ability to delay gratification, but in terms of the mechanism behind it. Rather than reflecting two opposing systems fighting it out (your passionate id at war with your calculating super-ego), waiting for a while then giving in may be perfectly rational behavior. It may not be about ‘running out’ of will-power at all.

According to this model, which fits the observed behavior, and which I have to say makes perfect sense to me, there are three factors that influence persistence:

  • beliefs about time — which in this context has to do with how the predicted delay changes over time, i.e., do you believe that the remaining length of time is likely to be the same, shorter, or longer;
  • perceived reward magnitude — how much more valuable the delayed reward is to you than the immediate reward;
  • temporal discount rate — how steeply you devalue a reward as its delay increases (that is, how strongly you prefer sooner to later).

A crucial point about temporal beliefs is that they can change as time passes. So, if you’re waiting for a bus, then the reasonable thing to believe is that the longer you wait, the less time you have left to wait. But what if you’re waiting at a stop very late at night? In that case, the longer you wait, the more certain you might become that a bus will not in fact be coming for many hours. How about when you text someone? You probably start off expecting a reply right away, but the longer you wait, the longer you expect to wait (if they’re not answering right away, it might be hours; they might not even see your text at all).
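This intuition about waiting can be checked with a small simulation (a toy sketch of my own, not from the study; the two wait-time distributions are arbitrary stand-ins for the daytime and late-night buses):

```python
import random
import statistics

def expected_remaining(draw, elapsed, n=100_000):
    # Monte Carlo estimate of E[T - elapsed | T > elapsed]:
    # the average wait still ahead, given we've already waited `elapsed` minutes.
    survivors = [t - elapsed for t in (draw() for _ in range(n)) if t > elapsed]
    return statistics.mean(survivors)

random.seed(1)
timetabled = lambda: random.uniform(0, 20)             # daytime bus: at most 20 min
heavy_tail = lambda: random.lognormvariate(1.0, 1.5)   # late-night bus: heavy-tailed

# Bounded waits: the longer you've waited, the less wait remains...
assert expected_remaining(timetabled, 15) < expected_remaining(timetabled, 1)
# ...heavy-tailed waits: the longer you've waited, the more wait to expect.
assert expected_remaining(heavy_tail, 15) > expected_remaining(heavy_tail, 1)
```

Under the bounded distribution, persistence pays; under the heavy-tailed one, giving up after a while is the rational policy.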

Another important aspect of these factors is that they are subjective (especially the last two), and will vary between individuals. This attributes ‘failures’ to differences in an individual’s temporal discount rate and perceived reward magnitude, rather than to poor self-control.

But what about the evidence that performance on this test correlates with later academic achievement? Well, temporal discount rate also appears to show ‘trait-like stability over time’, and has been found to correlate with cognitive ability. Temporal discount rate, it seems to me, has a clear connection to motivation, and I have talked before about the way motivation can make a significant impact on someone’s IQ score or exam performance.

So maybe we should move away from worries about ‘self-control’, and start thinking about why some people put a higher value on short waiting times than others — how much of this is due to early experiences? what can we do about it?

We also need to think very hard about the common belief that persistence is always a virtue. If you’re waiting for a bus that hasn’t come after an hour, and it’s now one in the morning, your best choice is probably to give up and find some other means home.

Although persistence is often regarded as a virtue, misguided persistence can waste time and resources and can therefore defeat one's chances of success at superordinate goals . . . Rather than assuming that persistence is generally adaptive, the issue should be conceptualized as making judgments about when persistence will be effective and when it will be useless or even self-defeating. (Baumeister & Scher, 1988, pp. 12–13)

All of which is to say that, as with all human behavior, persistence (sometimes equated to ‘will-power’; sometimes to 'self-regulation') is a product of both the individual and the environment. If some children are doing well and others are not, perhaps you shouldn’t be attributing this to stable traits of the children, but to the way different children perceive the situation.

Nor is it only in the academic environment that these things matter. Our ability to delay gratification and our motivation are attributes that underlie our behavior and our success across our lives. If we turn these ‘attributes’ around and, instead of seeing them as personal traits, rather see them as dynamic attributes that reflect situational factors that interact with personal attributes, then we have a better chance of getting the results we want. If we can pinpoint perceived reward and temporal discount rate as critical factors in this individual-environment interaction, we know exactly what variables to consider and manipulate.

We are built to like simple solutions — a number, a label that we can pin on ourselves or another — but surely we have become sufficiently sophisticated that we can now handle more complex information? We need to move from considering people, whether ourselves or others, as independent agents acting in a vacuum, to considering them as part of an indissoluble organism-environment interacting unit. Let’s get away from a fixation on IQ scores, or SAT scores, or even complex multi-factorial scores, and realize those, even the most predictive ones, are only ever one part of the story. No one is the same person at every moment, and it’s time we took that point more seriously.

References

McGuire, J. T., & Kable, J. W. (2013). Rational Temporal Predictions Can Underlie Apparent Failures to Delay Gratification. Psychological Review, 120(2), 395–410. doi:10.1037/a0031910

Baumeister, R. F., & Scher, S. J. (1988). Self-defeating behavior patterns among normal individuals: Review and analysis of common self-destructive tendencies. Psychological Bulletin, 104, 3–22. doi:10.1037/0033-2909.104.1.3

Should learning facts by rote be central to education?

Michael Gove is reported as saying that ‘Learning facts by rote should be a central part of the school experience’, a philosophy which apparently underpins his shakeup of school exams. Arguing that "memorisation is a necessary precondition of understanding", he believes that exams that require students to memorize quantities of material ‘promote motivation, solidify knowledge, and guarantee standards’.

Let’s start with one sturdy argument: "Only when facts and concepts are committed securely to the working memory, so that it is no effort to recall them and no effort is required to work things out from first principles, do we really have a secure hold on knowledge.”

This is a great point, and I think all those in the ‘it’s all about learning how to learn’ camp should take due notice. On the other hand, the idea that memorizing quantities of material by rote is motivating is a very shaky argument indeed. Perhaps Gove himself enjoyed doing this at school, but I’d suggest it’s only motivating for those who can do it easily, and find that it puts them ‘above’ many other students.

But let’s not get into critiquing Gove’s stance on education. My purpose here is to discuss two aspects of it. The first is the idea that rote memorization is central to education. The second is more implicit: the idea that knowledge is central to education.

This is the nub of the issue: to what extent should students be acquiring ‘knowledge’ vs expertise in acquiring, managing, and connecting knowledge?

This is the central issue of today’s shifting world. As Ronald Bailey recently discussed in Reason magazine, "Half of the Facts You Know Are Probably Wrong".

So, if knowledge itself is constantly shifting, is there any point in acquiring it?

If there were simple answers to this question, we wouldn’t keep on debating the issue, but I think part of the answer lies in the nature of concepts.

Now, concepts / categories are the building blocks of knowledge. But they are themselves surprisingly difficult to pin down. Once upon a time, we had the simple view that there were ‘rules’ that defined them. A dog has four legs; is a mammal; barks; wags its tail … When we tried to work out the rules that defined categories, we realized that, with the exception of a few mathematical concepts, it couldn’t be done.

There are two approaches to understanding categories that have been more successful than this ‘definitional’ approach, and both of them are probably involved in the development of concepts. These approaches are known as the ‘prototypical’ and the ‘exemplar’ models. The key ideas are that concepts are ‘fuzzy’, hovering around a central (‘most typical’) prototype, and are built up from examples.

A child builds up a concept of ‘dog’ from the different dogs she sees. We build up our concept of ‘far-right politician’ from the various politicians presented in the media.

Some concepts are going to be ‘fuzzier’ (broader, more diverse) than others. ‘Dog’, if you think about St Bernards and Chihuahuas and greyhounds and corgis, has an astonishingly diverse membership; ‘banana’ is, for most of us, based on a very limited sample of banana types.

Would you recognize this bright pink fruit as a banana? Or this wild one? What about this dog? Or this?

I’m guessing the bananas surprised you, and without being told they were bananas, you would have guessed they were some tropical fruit you didn’t know. On the other hand, I’m sure you had no trouble at all recognizing those rather strange animals as dogs (adored the puli, I have to say!).

To the extent that you’ve experienced diversity in your category members, the concept you’ve built will be a strong one, capable of allowing you to categorize members quickly and accurately.
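For readers who like things concrete, the exemplar idea can be sketched in a few lines of code. This is a toy illustration loosely in the spirit of exemplar models (a new item is classified by its summed similarity to stored examples), not any particular published model; the categories, features, and numbers are all invented:

```python
import math

def similarity(item, exemplar, c=1.0):
    # Similarity falls off exponentially with distance in feature space.
    return math.exp(-c * math.dist(item, exemplar))

def classify(item, categories):
    # Choose the category whose stored exemplars are collectively most similar.
    scores = {label: sum(similarity(item, e) for e in exemplars)
              for label, exemplars in categories.items()}
    return max(scores, key=scores.get)

# One hypothetical feature (say, body size, on an arbitrary scale).
categories = {
    "dog": [(0.2,), (0.5,), (1.0,), (1.6,)],  # chihuahua through St Bernard
    "cat": [(0.25,), (0.3,), (0.35,)],        # a much narrower range
}

# A large, atypical animal is still captured by the diverse 'dog' category.
print(classify((1.3,), categories))  # dog
```

The point of the toy is simply that the diverse category, having exemplars spread across its range, accommodates an atypical newcomer that the narrow category would reject.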

In my article on expertise, I list four important differences between experts and novices:

  • experts have categories

  • experts have richer categories

  • experts’ categories are based on deeper principles

  • novices’ categories emphasize surface similarities.

How did experts develop these deeper, richer categories? Saying “10,000 hours of practice” may be a practical answer, but it doesn’t tell us why the number of hours is important.

One vital reason practice is important is that it provides the opportunity to acquire a greater diversity of examples.

Diverse examples, diverse contexts: this is what is really important.

What does all this have to do with knowledge and education?

Expertise (a word I use to cover the spectrum of expertise, not necessarily denoting an ‘expert’) is rooted in good categories. Good categories are rooted in their exemplars. Exemplars may change — you may realize you’ve misclassified an exemplar; scientists may decree that an exemplar really belongs in a different category (a ‘fact’ is wrong) — but the categories themselves are more durable than their individual members.

I say it again: expertise is rooted in the breadth and usefulness of your categories. Individual exemplars may turn out to be wrong, but a good category can cope with that — bringing exemplars in and out is how a category develops. So it doesn’t matter if some exemplars need to be discarded; what matters is developing the category.

You can’t build a good category without experiencing lots of exemplars.

Although, admittedly, some of them are more important than others.

Indeed, every category may be thought of as having ‘anchors’ — exemplars that, through their typicality or atypicality, define the category in crucial ways. This is not to say that they are necessarily ‘set’ exemplars, required of the category. No, your anchors may well be different from mine. But the important thing is that your categories have such members, and that these members are well-rooted, making them quickly and reliably accessible.

Let’s take language learning as an example (although language learning is to some extent a special case, and I don’t want to take the analogy too far). There are words you need to know, basic words such as prepositions and conjunctions, high-frequency words such as common nouns and verbs. But despite lists of “Top 1000 words” and the like, these are fewer than you might think. Because language is very much a creature of context. If you want to read scientific texts, you’ll want a different set of words than if your interest lies in reading celebrity magazines, to take an extreme comparison.

What you need to learn is the words you need, and that is specific to your interests. Moreover, the best way of learning them is also an individual matter — and by ‘way’, I’m not (for a change) talking about strategies, which is a different issue. I’m talking about the contexts in which you experience the words you are learning.

For example, say you are studying genetics. There are crucial concepts you will need to learn — concepts such as ‘DNA’, ‘chromosomes’, ‘RNA’, ‘epigenetics’, and so on — but there is no such requirement concerning the precise examples (exemplars) you use to acquire those concepts. More importantly, it is much better to cover a number of different examples that illuminate a concept, rather than focus on a single one (Mendel’s peas, I’m looking at you!).

Genetics is changing all the time, as we learn more and more. But that’s an argument for learning how to replace outdated information (an area of study skills sadly neglected!), not an argument for not learning anything in case it turns out to be wrong.

To understand a subject, you need to grasp its basic concepts. This is the knowledge part. To deal with the mutability of specific knowledge, you need to understand how to discard outdated knowledge. To deal with the sheer amount of knowledge relevant to your studies and interests, you need skills in judging which information is important and relevant, and in managing it so that it is accessible when needed.

Accessibility is key. Whether you store the information in your own head or in an external storage device, you need to be able to lay hands on it when you need it. And here’s the nub of the problem: you need to know when you need it.

This problem is the primary reason why internal storage (in your own memory) is favored by many. It’s only too easy to file something away in external storage (physical files; computer documents; whatever) and forget that it’s there.

But what all this means is that what we really need in our memory is an index. We don’t need to remember what a deoxyribose sugar is if we can instantly look it up whenever we come across it.

Or do we?

This is the point, isn’t it? If you want to study a subject, you can’t be having to look up every second word in the text, you need to understand the concepts, the language. So you do need to have those core concepts well understood, and the technical vocabulary mastered.

So is this an argument for rote memorization?

No, because rote memorization is a poor strategy, suitable only for situations where there can be no understanding, no connection.

We learn by repetition, but rote repetition is the worst kind of repetition there is.

To acquire the base knowledge you need to build expertise, you need repetition through diverse examples. This is the art and craft of good instruction: providing the right examples, in the right order.

The changing nature of literacy. Part 2: Lecturing

This post is the second part in a four-part series on how education delivery is changing, and the set of literacies required in today’s world. Part 1 looked at the changing world of textbooks. This post looks at the oral equivalent of textbooks: direct instruction or lecturing.

There’s been some recent agitation in education circles about an article by Paul E. Peterson claiming that direct instruction is more effective than the ‘hands-on’ instruction that's so popular nowadays. His claim is based on a recent study that found that increased time on lecture-style teaching versus problem-solving activities improved student test scores (in math and science, for 8th-grade students). Above-average students appeared to benefit more than below-average students, although the difference was not statistically significant.

On the other hand, a college study found that a large first-year physics class taught in a traditional lecture style by an experienced and highly rated professor performed more poorly on several measures than another section taught entirely through small-group problem-solving tasks. Attendance improved by 20% in the experimental class, and engagement (measured by observers and "clicker" responses) nearly doubled. Though the experimental class didn’t cover as much material as the traditional class, dramatically more students showed up for the unit test, and they scored significantly better (average score of 74% vs 41%).

It must be noted, however, that this experiment only ran for a week (3 hours instruction).

But the researchers of the middle-grade study did not conclude that lecturing was superior, or that their results applied to college students. Their very reasonable conclusion was that “Newer teaching methods might be beneficial for student achievement if implemented in the proper way, but our findings imply that simply inducing teachers to shift time in class from lecture-style presentations to problem solving without ensuring effective implementation is unlikely to raise overall student achievement in math and science. On the contrary, our results indicate that there might even be an adverse impact on student learning.”

The whole issue reminds me of the phonics debate. I don’t know what it is about education that gets people so polarized, when it seems so obvious that there are no simple answers. What makes an effective strategy is not simply the strategy itself, but how it is carried out, who is using it, and when they are using it.

In this case, the quality and timing of these ‘problem-solving activities’ is perhaps central. The rule of thumb that twice as much time should be allocated to problem-solving activities as to direct instruction is perhaps being applied with too little understanding about the role and usefulness of specific activities.

But it’s obvious that there are going to be strong developmental differences. The ‘best’ means of teaching 18-year-olds is not going to suit 5-year-olds, and vice versa. So we can’t conclude anything about middle school by looking at college studies, or college by looking at middle school studies.

So, bearing in mind that a discussion of college lecturing has little to do with direct instruction in schools, let’s look a little further into college lectures, given that this is the predominant method of instruction at this level.

First of all, we must ask what students are doing during lectures. Given many teachers’ distress at their students’ activity on phones and laptops during class, it’s worth noting the findings of two recent studies that spied on college students in class rather than relying on self-reporting.

The first study involved 45 students who allowed monitoring software to be installed. Distinguishing “productive” applications (Microsoft Office and course-related websites) from “distractive” ones (e-mail, instant messaging, and non-course-related websites), the researchers found that non-course-related software was active about 42% of the time. However, only one type of distractive application was significantly correlated with poorer academic performance: instant messaging. This despite the fact that IM windows had the shortest average duration. (It’s also worth noting that students massively under-estimated their instant-messaging use: by 40%, versus, for example, 7% for e-mail use.)

It seems likely that this has to do with switching costs. Those who read my recent blog post on working memory might recall that switching focus from one item to another has high costs. Moreover, it seems that the more frequently (and thus more briefly) you switch focus, the higher the cost.

The other study used human observers rather than spyware, with obvious drawbacks. But the finding I found interesting was the dramatic jump between first-year and second- and third-year law students: more than half of the latter who came to class with laptops used them for non-class purposes more than half the time, compared to 4% of first-year students. While the teacher took this as a signal to ban laptops in his upper-year courses, perhaps he should have rather taken it as evidence that his students had become more discerning about what was relevant. We need to know how this laptop use mapped against performance before drawing conclusions.

But not all teachers are reflexively against distractive technology. The banning of cellphones from classrooms, and general distress about social media, is starting to be offset by teachers setting up “backchannels” in their classes. These digital channels are said to encourage shy and overwhelmed students to ask questions and make comments during class.

Of course, most teachers are still anti, and a lot of that may be driven by a fear of losing control of the class, or being unable to keep up with the extra stream of information (particularly in the face of the students’ facility in multitasking).

And maybe some teachers are so antagonistic toward distractive technology because they feel it’s insulting. It implies they’re boring.

Well, unfortunately, many students do find a lot of their lectures boring. A 2009 study of student boredom suggested that almost 60% of students find at least half their lectures boring, and half of those find most or all of their lectures boring.

But I don’t think the answer to this is to remove their toys. Do you think they’ll listen if they don’t have anything else to do? The study found bored students daydream (75% of students), doodle (66%), chat to friends (50%), send texts (45%), and pass notes to friends (38%).

It’s not that teachers have to entertain them! Granted it’s easier to hold students’ attention if you’re doing explosive chemistry experiments, but students really aren’t so shallow that you have to provide spectacles. They are there (at college level at least) because they want to learn. But you do have to present the information in a way that facilitates learning.

One of the main contributors to student boredom is apparently the (bad) use of PowerPoint.

But even practical sessions, supposedly more engaging than lectures, appear to bore students. Lab work and computer sessions achieved the highest boredom ratings in the study.

Because boredom is not as simple a concept as it might appear. Humans are designed for learning. This is our strength. Other animals may be fast, may be strong, may have sharp claws or teeth, or venom. Humans are smart, and curious, and we know that knowledge is power. Humans like to learn. So what goes wrong during the education process?

Well, one of the problems is that there’s a cognitive “sweet spot”. If you make something too difficult, most people will be put off. If you make something too easy, they won’t bother. The sweet spot of learning is that point where the amount of cognitive effort is not too little and not too great — of course you have to find that point, and a complicating factor is that this varies with individuals.

One area where creators have had a lot of success in finding that sweet spot (because they try very hard) is video games.

How can we harness the power that video games seem to have? A book called "Reality Is Broken: Why Games Make Us Better and How They Can Change the World" points out that creating Wikipedia has so far taken about 100 million hours of work, while people spend twice that many hours playing World of Warcraft in a single week.

Some of the features of good games that researchers believe are important are: instant feedback, small rewards for small progress, occasional unexpected rewards, continual encouragement from the computer and other players, and a final sense of triumph. Most of this is no news to educationalists, but there’s a quote I really love: “One of the most profound transformations we can learn from games is how to turn the sense that someone has ‘failed’ into the sense that they ‘haven’t succeeded yet.” (Tom Chatfield, British journalist and author)

That quote is a guide to how to find that sweet spot.

Providing motivation, of course, as we all know, is crucial. Where’s the relevance? Traditionally, it may have been enough to simply tell students that they needed to know something, and they’d believe you. But it’s not just that students have become cynical and less respectful (!) — the fact is, they have good reason to question whether traditional content and traditional strategies have any relevance to what they need to know.

Here’s a lovely example of the importance of motivation and relevance. In India, Bollywood musicals are madly popular. For nine years, these movies have had karaoke-style subtitles. The first state to broadcast the subtitles was Gujarat. Because viewers were so keen to sing along, they paid attention to these captions, often copying them out to learn. As a consequence, literacy has improved. Newspaper reading in one Gujarat village has gone up by more than 50% in the last decade; women, who can now read bus schedules themselves, have become more mobile, and more children are opting to stay in school. Viewers in India have shown reading improvement after watching just eight hours of subtitled programming over six months.

This has apparently worked in more literate nations as well. Finland (and we all know how well it scores in education rankings) attributes much of its educational success to captions. For several decades now, Finland has chosen to subtitle its foreign language television programs (in Finnish) instead of dubbing over them. And Finnish high school students read better than students from European countries that dub their TV programs, and are more proficient at English.

But songs, it seems, are better for this than dialog.

Of course this strategy is only useful at a certain stage — when learners have basic skills, but are having trouble moving beyond.

This is the point, isn’t it? Different situations (a term encompassing the learners, their prior knowledge, and their goals, as well as the content and its context) require different strategies. For example, I recently read a discussion on Education Week prompted by a teacher being forced by his/her institution to use PowerPoint in a class for ESL students to improve their English speaking skills.

PowerPoint slides can be very effective, but far too many aren’t. Similarly, lab sessions can be true learning experiences, or simply “paint-by-numbers” exercises for which the result is already known. Lectures, likewise, can be a complete waste of time, or occasions of real learning.

Consider marathon oral readings of famous texts. A recent article on Inside Higher Ed said that such “events help convey messages, engage students, and foster community on their campuses in ways that reading alone cannot do”. And there was a nice quote from a student: "Until you hear another student read it in his or her own voice, you don't really understand the vast possibilities for interpretation."

What’s the difference between this and a lecture? Well, in one sense none. Both depend on delivery and presentation. I’ve been to some very engaging and inspirational lectures, and some readings can be flat and uninspiring. But the critical difference is that one is literature (a story) and the other is expositional. To make instructional text engaging, you have to work a lot harder. And this is true regardless of the mode of delivery — lectures and textbooks are the oral and written variants of linear exposition.

Is it fair to dismiss a strategy just because some people perform it badly? Is it smart to require a strategy because in some circumstances it is better than another?

We need a better understanding of the situations in which different strategies are effective, and the different variables that govern when they are effective. And we need more flexibility in delivery.

Which brings us to computer learning, which I’ll discuss in the next post.

[Update: Please note that some links have been removed as the articles on other sites are no longer available]

Have benefits of a growth mindset been overstated?

  • A review of growth mind-set research has found the correlation between growth mind-set and academic achievement was very weak, and may be restricted to some groups of students.

In the education world, fixed mind-set is usually contrasted with growth mind-set. In this context, fixed mind-set refers to students holding the idea that their cognitive abilities, including their intelligence, are set at birth, and they just have to accept their limitations. With a growth mind-set, however, the student recognizes that, although it might be difficult, they can grow their abilities.

A growth mind-set has been associated with a much better approach to learning and improved academic achievement, but new research suggests that this difference has been over-stated.

A recent meta-analysis of growth mind-set research found that

  • over half the effect sizes weren't significantly different from zero (157 of 273 effect sizes),
  • a small number (16) actually found a negative association between growth mind-set and academic achievement, and
  • a little over a third (100) were significant and positive.

Overall, the study found the correlation between growth mind-set and academic achievement was very weak.

Perhaps unsurprisingly, one important factor was age — children and teenagers showed significant effects, while adults did not. Interestingly, neither academic risk status nor socioeconomic status was a significant factor, although various studies have suggested that growth mind-set is much more important for at-risk students.

A second, smaller meta-analysis was carried out to investigate whether growth mind-set interventions made a significant impact on academic achievement. Such interventions are designed to increase students' belief that intelligence (or some other attribute) can be improved with effort.

The study found that

  • 37 of the 43 effect sizes (86%) were not significantly different from zero,
  • one effect size was negative, and
  • five were positive.

Age was not a factor, nor was at-risk status. However, socioeconomic status was important, in that students from low-SES households were significantly impacted by a growth mind-set intervention, while those from higher-SES households were not.
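The counts reported for the two meta-analyses can be sanity-checked with a quick calculation (a sketch of mine, using only the figures quoted above):

```python
# First meta-analysis: correlations between growth mind-set and achievement.
null_1, negative_1, positive_1 = 157, 16, 100
total_1 = null_1 + negative_1 + positive_1
assert total_1 == 273
print(f"null: {null_1 / total_1:.0%}")        # just over half
print(f"positive: {positive_1 / total_1:.0%}")  # a little over a third

# Second meta-analysis: effects of mind-set interventions.
null_2, negative_2, positive_2 = 37, 1, 5
total_2 = null_2 + negative_2 + positive_2
assert total_2 == 43
print(f"null: {null_2 / total_2:.0%}")        # the 86% quoted above
```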

The type of intervention was important: just reading about growth mind-set didn't help; doing something more interactive, such as writing a reflection, did. The number of sessions didn't have an effect. Oddly, the way the intervention was presented made a difference, with materials presented by computer or by a person not being effective, while print materials were. Interventions administered during regular classroom activities were not effective, but interventions that occurred outside regular activities did have a significant effect.

Taken overall, the depressing conclusion is that mind-set interventions are not the revolution some have touted them as. The researchers point out that previous research (Hattie et al 1996) found that the meta-analytic average effect size for a typical educational intervention on academic performance is 0.57, and all the meta-analytic effects of mind-set interventions in this study were smaller than 0.35 (and most were null).

All this is to say, not that mind-set theory is rubbish, but that it is not as straightforward and miraculous as it first appeared. Mind-set itself is more nuanced than has been presented. For example, do we really have a definite fixed mind-set or growth mind-set? Or is it that we have different mind-sets for different spheres? Perhaps we believe that our math ability is fixed, but our musical ability is something that can be developed. That we can develop our problem-solving ability, but our intelligence is set in stone. That our 'natural talents' can be grown, but our 'innate weaknesses' cannot.

Why would low-SES and high-risk students benefit from a growth mind-set intervention, while higher-SES students did not? An obvious answer lies in the beliefs held by such students. For example, it may be that many higher-SES students are challenged by the idea of a growth mind-set, because they're invested in the idea of their own natural abilities. It is their confidence in their own abilities that enables them to do well, just as other students are undermined by their lack of confidence. Given this different starting point, it would not be in any way surprising if such students responded differently to mind-set interventions.

References

Sisk, V. F., Burgoyne, A. P., Sun, J., Butler, J. L., & Macnamara, B. N. (2018). To What Extent and Under Which Circumstances Are Growth Mind-Sets Important to Academic Achievement? Two Meta-Analyses. Psychological Science, 29(4), 549–571. http://doi.org/10.1177/0956797617739704

Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66, 99–136.

 

Memorizing the Geological Time Scale

In the following case study, I explore in depth the issue of learning the geological time scale — names, dates, and defining events. The emphasis is on developing mnemonics, of course, but an important part of the discussion concerns when and when not to use mnemonics, and how to decide.


The Geological Time Scale

Phanerozoic Eon 542 mya—present

  Cenozoic Era 65 mya—present

    Neogene Period 23 mya—present

      Holocene Epoch 8000 ya—present

      Pleistocene Epoch 1.8 mya—8000 ya

      Pliocene Epoch 5.3 mya—1.8 mya

      Miocene Epoch 23 mya—5.3 mya

    Paleogene Period 65 mya—23 mya

      Oligocene Epoch 34 mya—23 mya

      Eocene Epoch 56 mya—34 mya

      Paleocene Epoch 65 mya—56 mya

  Mesozoic Era 250 mya—65 mya

    Cretaceous Period 145 mya—65 mya

    Jurassic Period 200 mya—145 mya

    Triassic Period 250 mya—200 mya

  Paleozoic Era 542 mya—250 mya

    Permian Period 300 mya—250 mya

    Carboniferous Period 360 mya—300 mya

    Devonian Period 416 mya—360 mya

    Silurian Period 444 mya—416 mya

    Ordovician Period 488 mya—444 mya

    Cambrian Period 542 mya—488 mya

Precambrian 4560 mya—542 mya

  Proterozoic Eon 2500 mya—542 mya

  Archean Eon 3800 mya—2500 mya

  Hadean Eon 4560 mya—3800 mya


How do we set about learning all this? Let’s look at our possible strategies.

Memorizing new words, lists and dates

Acronyms

A common trick to help remember the geological time scale is to use a first-letter acronym, such as the classic:

Camels Often Sit Down Carefully; Perhaps Their Joints Creak? Persistent Early Oiling Might Prevent Permanent Rheumatism.

(This begins with the Cambrian Period and moves forward in time; note that in this traditional mnemonic the Holocene Epoch is referred to by its older name, the “Recent Epoch”.)

What’s the problem with this, as a way of remembering the geological scale?

It assumes we already know the names.

The principal (and often, only) purpose of an acronym is to remind you of the order of items that you already know.

A common problem with acronyms (which by definition use only first letters) is that initials often repeat, causing confusion. A more useful strategy (though far more difficult) might be to use the first two, or preferably three, letters of the words. This not only distinguishes more clearly between items, but also provides a much better cue for items that are not hugely familiar. For example, here’s one I came up with for the geological time-scale:

Hollow Pleadings Plight Miosis;

Olive Eons Pall Creation; (or Olive Eons Palm Credulous, for a slight rhyme)

Juries Trick Perplexed Carousers;

Devils Silence Ordered Campers.

Because it is extremely difficult to make a meaningful sentence under these constraints (largely because of rare combinations such as Eo- and Mio-, and to a lesser extent Pli-, Oli-, and Jur-), I have used rhythm to group it into a verse. There’s a slight rhyme, but it’s amazing how much power rhythm has to facilitate memory on its own.
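The mechanical part of this — seeing why one-letter cues fail for this list while three-letter prefixes succeed — can be sketched in a few lines of code (my illustration, not part of the original discussion):

```python
from collections import Counter

# The epoch and period names from the time-scale above, newest first.
NAMES = ["Holocene", "Pleistocene", "Pliocene", "Miocene",
         "Oligocene", "Eocene", "Paleocene", "Cretaceous",
         "Jurassic", "Triassic", "Permian", "Carboniferous",
         "Devonian", "Silurian", "Ordovician", "Cambrian"]

def cues(names, length):
    """Return the first `length` letters of each name."""
    return [name[:length] for name in names]

# One letter: P occurs four times (Pleistocene, Pliocene, Paleocene,
# Permian) and C three times -- a first-letter acronym can't tell them apart.
print(Counter(cues(NAMES, 1)))

# Three letters: all sixteen cues are distinct.
assert len(set(cues(NAMES, 3))) == len(NAMES)
print(cues(NAMES, 3))
```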

It is easier, of course, to construct a sentence with these items if you are allowed to include a few “insignificant” words (i.e., not nouns or verbs) to hold them all together. Here’s a possible sentence, this time starting from the oldest and moving forward to the most recent:

Campers Order Silver Devils to Carry Persons Tricking Jurisprudent Cretins in Palmy Eons of Olive Milk and Pliant Pleadings for Holidays

The problem with both this and the “verse” is that they are too long, given their difficulty, to be readily memorable. The answer to this is organization, and later we’ll discuss how to use organization to reduce the mnemonic burden. But first, let’s deal with another problem.

Although the use of three-letter acronyms lessens the need for such deep familiarity with the items to be learned, you do still need to know the items. With names as strange as the ones used in the geological time-scale, the best strategy is probably the keyword mnemonic (or at least a simplified version).

Looking for meaning

But let’s start by considering the origin of the names. If they’re meaningful, if there is a logic to the naming that we can follow, our task will be made incomparably easier.

Unfortunately, in this case there’s not a lot of logic to the naming. Some of the periods are named after geographical areas where rocks from this period are common, or where they were first found — these are probably the easiest to learn. The epochs in particular, however, are problematic, as they are very similar, being based on ancient Greek (in which few students are now trained), and, most importantly of all, being essentially meaningless.

Let’s look at them in detail. The common -cene ending comes from the Greek kainos, meaning new (Latinized as ceno).

  • Holocene is from holos meaning entire
  • Pleistocene is from pleistos meaning most
  • Pliocene is from pleion meaning more
  • Miocene is from meion meaning less
  • Oligocene is from oligos meaning little, few
  • Eocene is from eos meaning dawn
  • Paleocene is from palaios meaning old

So we have

  • Holocene: entire new
  • Pleistocene: most new
  • Pliocene: more new
  • Miocene: less new
  • Oligocene: little new
  • Eocene: dawn new
  • Paleocene: old new

You could find this helpful (remember that we’re moving backward in time, so that the Holocene is indeed the newest of these, and the Paleocene is the oldest), but the naming is really too arbitrary and meaningless to be of great help.

Better to come up with associations that have more meaning, even if that meaning is imposed by you. Here’s some words you could use:

  • Holocene: holy; hollow; hologram; holly
  • Pleistocene: plasticine; plastic
  • Pliocene: pliable; pliant; pliers
  • Miocene: my; milo; myopic
  • Oligocene: oligarchy; olive; oliphaunt (! Notice that the words don’t have to be familiar to the whole world, even the dictionary-makers; the important thing is that they have significance to you)
  • Eocene: eon; enzyme; obscene (note that it is not necessary for the word to begin with the same letter(s) — a particularly difficult task in this instance; what’s important is whether the word will serve as a good link for you)
  • Paleocene: palace; palatial; paleolithic

To tie your chosen word to the word to be learned, you must form an association (that’s why it’s so important to choose a word that’s good for you — associations are very personal). For example, you could say:

  • Holograms are very recent (the Holocene is the most recent epoch)
  • Glaciers are plastic or My glaciers are made of plasticine (the Pleistocene was the time of the “Great Ice Age”)
  • The pliant Americas joined together or Pliable hominids arose (Hominidae began in the Pliocene, and North and South America joined up)
  • Mild weather saw Africa collide with Asia (the Miocene was warmer than the preceding epoch; during this time Africa finally connected to Eurasia)
  • Elephants become oligarchs! (during the Oligocene mammals became the dominant vertebrates)
  • Continents obscenely separate (Laurasia, the northern supercontinent, began to break up at the beginning of the Eocene; Gondwanaland, the southern supercontinent, continued its breakup)
  • Pale from the disaster, we pull ourselves together (the Paleocene marks the beginning of a new era, after the K-T boundary event (thought by many to be an asteroid impact) in which the dinosaurs and so much other life died)

Now this is not, of course, in strict accordance with the keyword method. According to this method, we should choose a word as phonetically similar to the word-to-be-learned as possible, and as concrete as possible, and then form a visual image connecting the two. While this is fine with learning a different language (the most common use for the keyword method, and the one for which it was originally designed), it is clearly very difficult to create an image for something as abstract and difficult to visualize as a period of time.

It’s also often difficult to find keywords that are both phonetically similar and concrete. We must improvise as best we may. What you need to bear in mind is that you are searching for an association that will stick in your mind, and link the unfamiliar (the information you are learning) to the familiar (information already well established in your mind).

With this in mind, look again at the suggested associations. This time, think in terms of whether you can make a picture in your mind.

Instead of “Holograms are very recent”, you might want to form an image of someone falling into a hole (tying the Holocene to the “Age of Humans”).

Glaciers made of plasticine might stand.

If you can visualize very limber (perhaps in distorted postures) ape-like humans, Pliable hominids might be satisfactory, or you may need to fall back on the pliers — perhaps an image of pliers bringing North and South America together.

Mild weather isn’t terribly imageable; you might like to imagine milk pouring from the joint where Africa and Eurasia have collided.

Oligarchs is likewise difficult, but you could visualize elephants under olive trees, eating the olives.

And now of course, we come to the most difficult — the Eocene. Here’s a thought, for those brought up with Winnie the Pooh. If you have a clear picture of Eeyore, you could use him in this image. Perhaps Eeyore is standing on one part of the separating Laurasia (looking appropriately disconsolate).

The Paleocene might best be associated with a palace, if we’re looking for something imageable — perhaps dinosaurs sheltering in a palace as the asteroid comes down and destroys it.
You see from this that the demands of visual associations are often quite different from those of verbal associations. Both are effective. Whether you use verbal or visual associations should depend not only on your personal preference (some people find one easier, and some the other), but also on what the material best affords — that is, what is easiest, what comes more readily to mind, and also, which association will be less easily forgotten.

But mnemonics only take you so far. While very useful for learning new words, and for learning lists, they are not a good basis for developing an understanding of a subject — and unlike the situation of learning a language, a scientific topic definitely requires a more holistic approach. Mnemonics here are very much an adjunct strategy, not a complete solution. So before using mnemonics to fix specific hard-to-remember details in my brain, I would begin by organizing the information to be learned, with the goal of cutting it into meaningful chunks.

 

Excerpted from Mnemonics for Study


Reading for Study

Reading is a deceptive skill, for it is not a single process, but a number of processes. Thus, while you might be a fluent reader, in that you can swiftly and easily decode the letter-markings, and quickly access the meaning of the words, that doesn't mean you're a skilled reader of informational texts.

Reading effectively for information or instruction, unlike reading a story, needs to be a very active process, for comprehension is far more difficult than it is in the familiar format of a story. That's why so-called 'speed reading' can be so problematic.

Reading "actively" involves:

  • thinking about what you’re reading
  • asking yourself questions about it
  • trying to relate it to information you already know.

How well you do this depends in part on your understanding of the topic. Thus, you may be a skilled reader of philosophy texts, but be completely at a loss when confronted by a physics text.

Nor is it only a matter of content knowledge. How you go about your active reading also depends in part on the subject you're reading in. Reading scientific texts, for example, is very different from reading a history text; both require a different approach — different skills — compared to reading an economics text. And reading in a foreign language is, of course, different again.

Reading for study is difficult to separate from note-taking, for the active processing you need to do is helped considerably by note-taking strategies. The two go hand in hand, and more so the more difficult the text is.

Improving your reading skills, then, involves not simply improving reading skills themselves, but also:

  • recognizing the different processes involved in reading, so that you can accurately pinpoint the source of your comprehension difficulties (for example, it may be simply a jargon issue: unfamiliarity with the specialist vocabulary used)
  • increasing your knowledge and understanding of the topic
  • improving your note-taking skills, so that you know the best way to approach different types of text, to organize the information for better understanding.