Thursday, July 28, 2011

"To study for ten years at a cold window..."

The title of this entry is a line from a 900-year-old poem by the Chinese poet Liu Qi, often quoted by Chinese parents to children whose academic performance has been less than ideal.  I heard it at a meeting with officials from East China Normal University, who were explaining Chinese attitudes towards learning and education.  The point of the quote is simply that success doesn't come quickly or easily:  only sustained effort, quite possibly unpleasant, produces results.

How different from the U.S. perception of math education!  A Google search on "no good at math" produces over 724,000 results, and while some are actually echoes of the Chinese perspective (such as this thoughtful piece by my high school friend Jean Marie Linhart), the majority (at least on the first pages) seem to be people saying that they themselves are "no good at math."  Well, of course not!  The last thing most people who believe themselves "no good at math" seem to do is more math, which is pretty much the only way I know to get better at it.

I had a student several years ago who is still legendary for how much of our courses he figured out, on his own, before we could teach it to him.  His math contest results were phenomenal.  But what most people who didn't work closely with Alex didn't realize was how many hours of math he did every week:  at least 10 or 15 beyond what class and math team required, which put him roughly 12-17 hours a week ahead of most "ordinary" math students at my school.  At a dozen extra hours a week, fifty weeks a year, that comes to at least 2400 more hours over four years spent thinking about and working on math than the typical math student at my school.  Of course he was good at it!  He'd have had to have--quite literally--some kind of learning defect for that much work not to have paid off.

I'm not saying that there's no such thing as talent or giftedness in mathematics.  But I don't think focusing on talent or giftedness is useful, for two reasons:
  1. We're not very good at discerning talent unless it comes packaged in very recognizable shapes, sizes, and containers.  Quick at arithmetic, good at spotting patterns, doesn't need to show work--these are all recognizable, so we spot them relatively easily, especially when the person is someone whom, because of our own biases, we are already inclined to think of as good at math (e.g. white or Asian, male, etc.).  But other mathematical talents--the ability to generalize and synthesize different results, to construct arguments containing multiple ideas, to sift through different approaches for the most fruitful one--don't show up as early and are harder to see.  In fact, by the time a student is old enough to have encountered a mathematical situation in which these talents are valuable, he or she may already have been "tracked out" of the highest-level mathematics.  And again, we're even less likely to spot a student with these talents who's African-American or Hispanic, or not a boy.
  2. Math is too hard for sheer talent to get most people very far.  Alex worked on his ideas and chased them down until they made sense.  Discovering and writing a good proof can take hours even when the problem is just a college-level exercise--much less a Putnam problem or (gasp!) an actual theorem.  The people who make it to the ends of such journeys aren't the ones who start out the fastest or the furthest ahead; they're the ones who don't give up, out of passion for the subject and determination to keep going.  That's where the ten years comes in--whether it's in a study in Princeton or by a cold window in Jiangsu province.

Marzano points out that there are four ways to explain success:  the task was easy, I was lucky, I was talented, or I worked hard.  Only the fourth of these leads to behaviors that increase the chances of future success:  if my past successes were all based on ease, or luck, or talent, then when I encounter a genuine challenge, I've got nothing.  The NurtureShock authors go even further, citing research that telling kids they're smart actually decreases future success.

Chinese parents don't tell their kids that they're smart; they tell them that they're smart enough to be successful if they work hard.  And that's a message I think all of our kids should hear.

== pjk

Thursday, July 21, 2011

Something I found interesting

I would like to suggest that teachers will find the current issue of The Atlantic (July-August) particularly interesting, appropriate, and thought-provoking.

The article on the cult of self-esteem makes a lot of sense to me; it ought to be brought to the attention of anyone who is involved in the life of children.

"The World's Schoolmaster" is also noteworthy.

Perhaps the most interesting article for me is "The Brain on Trial." Although the article focuses on criminal behavior, I couldn't help but think about P.J.'s earlier blog post about the students he just doesn't reach. I also thought about many of the students I just couldn't understand.

As well, there is a short piece on the public employee as the new enemy that many of us will find interesting.

There are also lots of other fascinating articles that don't relate directly to mathematics education; I hope you find the issue as thought-provoking as I did.

Saturday, July 16, 2011

The impact of data

There is conclusive evidence that smoking causes cancer, yet people still smoke. The evidence from probability is overwhelming that it is foolish to purchase a lottery ticket, yet millions of tickets are sold. There are hundreds of other situations similar to these, so I don't see why P.J. is surprised that teachers do not pay much attention to the data--teachers are people too. (If you did not take the time to read the article linked in the comment to P.J.'s last post, I suggest you read it as well:

http://youarenotsosmart.com/2011/06/10/the-backfire-effect/

It is a very interesting take on why and how we resist data that tells us things we do not want to believe, and on what happens when we encounter such data anyway.)
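
To make the lottery claim concrete, here is a minimal expected-value sketch in Python. The ticket price, jackpot, and odds below are invented, roughly Powerball-sized numbers chosen for illustration, not figures from any actual lottery:

```python
# Expected value of one lottery ticket, using assumed illustrative numbers.
ticket_price = 2.00              # dollars (assumed)
jackpot = 100_000_000            # dollars (assumed)
p_win = 1 / 292_000_000          # win probability (assumed, Powerball-like)

expected_winnings = jackpot * p_win           # about $0.34
expected_value = expected_winnings - ticket_price

print(f"Expected winnings per ticket: ${expected_winnings:.2f}")
print(f"Expected value of buying one: ${expected_value:.2f}")  # about -$1.66
```

On these assumed numbers, every ticket loses about $1.66 in expectation; that lopsidedness, which survives any realistic choice of prizes and odds, is the sense in which the evidence is overwhelming.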

I have two initial thoughts about this data-resistance.

First: let us get to the heart of what we do as teachers. We wish to provide our students with tools so that they can make intelligent decisions as they live their lives. We want them to learn to think clearly about little things and big things, so they can contribute to society at large and make good personal decisions regarding their well-being and safety. I hope everyone agrees with that. As a result, we teach students how to collect, analyze, and draw inferences from data. The assumption is that if they understand what is true, then they will make informed decisions. It appears that we are wrong. There is considerable evidence that people will ignore data and continue acting according to old habits. Even intelligent, well-informed, mature people will ignore the research if it provides an "inconvenient truth." Why is this, what can be done about it, and what are the consequences for education?

It seems true that people trust their own experiences much more than they trust research, most likely because experience is the way we learned to learn. Children try to walk; they fail; they try again until they get it. In the process, the child not only learns to walk, talk, and generally function in the world, but also internalizes a reliable way to learn: try, and then trust personal experience.

Second: most classroom teaching is still approached in one way: there is stuff to be learned; it is the stuff the curriculum wants students to learn; we have an obligation to have our students learn it. The students usually do not experience the curriculum, which means they might not believe it, because it is not something they personally experienced. If we want students to believe data and live by it, then students need to experience it. That means we need lessons, beginning at a very young age, in which students state their beliefs, collect data, find that the data contradict those beliefs, and then have a way to test their beliefs against the data. The best example I can think of has to do with physics and falling objects. I think of it because it confronted my beliefs when I was a child, and I still remember being proved wrong. There was an exhibit at the museum where one ball was dropped and another was projected horizontally from the same height at the same time, and both balls hit the ground at the same time. I had to see the demonstration several times before I could accept the new information about how gravity worked.
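
A quick computation shows why the demonstration works. Here is a minimal sketch in Python; the drop height and launch speed are assumed values for illustration:

```python
import math

g = 9.81    # gravitational acceleration, m/s^2
h = 1.5     # drop height in meters (assumed)
v_x = 3.0   # horizontal launch speed, m/s (assumed)

# Both balls have the same vertical motion: y(t) = h - (1/2) * g * t^2.
# Setting y(t) = 0 gives the landing time, which does not involve v_x.
t_land = math.sqrt(2 * h / g)

print(f"Dropped ball lands after   {t_land:.3f} s")
print(f"Projected ball lands after {t_land:.3f} s, "
      f"having traveled {v_x * t_land:.2f} m horizontally")
```

Horizontal and vertical motion are independent, so giving one ball sideways velocity changes where it lands but not when it lands, which is exactly the fact the exhibit makes visible.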

The problem is further complicated by the way our culture lionizes people who just have a "gut feeling" that they should try something and, when it works, celebrates them as heroes. A particular instance of this is found in the book Moneyball, where the baseball establishment ridiculed and ignored evidence that ran counter to long-standing beliefs about the game, even though considerable amounts of money were at stake.

Even the stories of great scientists seem to celebrate hard work and good fortune rather than a careful analysis of data. Perhaps we need some more stories, plays, films and TV shows about the importance of careful analysis of data.

There may be no better reason to reform mathematics education than to restructure what we do so that our students learn to look at data and make intelligent decisions accordingly.

Thursday, July 7, 2011

Resistant to Data

Many bytes have been spilled (can we say that?) about how Americans in general, and even teachers, have trouble interpreting data and reasoning quantitatively.  But there's another syndrome that I think infects education: data-resistance.  Data-resistance is when people--educators, parents, students, whatever--are so gripped by their preconceptions that they overlook or ignore data suggesting a different general picture from the one they are used to.

Here's an example.  Several years ago, Chicago Public Schools created a new, free, voluntary summer program for entering ninth graders.  The program lasted four weeks, and -- as implemented at my school -- was well thought-out, addressing academic, social, and emotional needs, with support in particular content areas (math and English) as well as general discussion and practice of academic strategies.  I helped work on it, and I was really excited about the program.  Everyone agreed it was a terrific success.

The following academic year, I followed up by gathering data.  For each student, a counselor and I collected first-semester GPA and number of courses failed, and we looked for differences between the 100-or-so students who attended the program and the 100-or-so students who didn't.  We couldn't find any.  We went back to the database and sorted students by free/reduced lunch status and by score on the qualifying entrance exam, and in no subgroup did students who attended the summer program do better -- either higher GPA, or fewer failures -- than students who skipped it.  Statistically, there was no difference between students who attended the program and students who didn't.  In fact, we couldn't find a single academic measure on which there was a difference.
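
For readers curious what "statistically no difference" looks like in practice, here is a minimal sketch in Python of the kind of comparison involved; the GPA lists below are invented placeholders, not our actual data:

```python
from scipy import stats

# Hypothetical first-semester GPAs (placeholders, not the real numbers).
attended = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.7, 3.2]
skipped = [2.9, 3.0, 2.6, 3.3, 2.8, 3.1, 2.6, 3.2]

# Welch's two-sample t-test: is there a difference in mean GPA?
t_stat, p_value = stats.ttest_ind(attended, skipped, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value means the data give no reason to reject "no difference,"
# which is what we found on every academic measure we checked.
```

The same comparison can be repeated within each subgroup (lunch status, entrance-exam band) by filtering the lists before testing.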

As we started discussing whether and how to implement a similar program the following summer, I took my data back to the planning team and said, "Look, we all thought this program was terrific, but in fact it doesn't seem to have had any impact."  The response was uniform:  it's a great program; we should do it again; we shouldn't make major changes.

That's what I call resistance to data.

Now it's possible that there are other ways in which the program was helpful.  Maybe students who attended the program found the first weeks of school easier, or more enjoyable; maybe they got more involved in extracurriculars.  But nobody suggested that there were data supporting any of these claims, and -- one could argue -- if these outcomes don't show any ultimate academic impact, there might be easier and less expensive ways to attain them.  Here's another hypothesis: maybe having half the class attend inculcated a culture that "infected" the whole class and boosted everyone's performance, sort of like how "herd immunity" can protect those who don't get vaccinated.  But nobody suggested that, either; in fact, the consensus at the meeting was that it was important to get more kids to attend.  And we have.

To be fair, kids love the program, and for one group of our students, we've made it extremely helpful: kids who haven't finished a full year of Algebra I can do so and start in Geometry--an option we created before Freshman Connection, but incorporated into the program when it began.  Nobody would say that Freshman Connection hurts anyone, and everyone who attends is glad they did.  But I'm still skeptical that it actually improves academic or even socio-emotional outcomes for our students.

I'd say that too often in education we make decisions the way we made this one:  without gathering data, or in the face of data that contradict our intuitions and preconceptions.  The worst is when we teach based on what worked for us as individuals.  Teachers lecture for 45 minutes with a little guided practice, then assign 1-47 odd for homework, despite mountains of research about what kinds of tasks, in what quantities, make for effective independent work.  In many cases, teachers have heard of this research or been told about it, but at some level they don't believe it:  lecture + practice + 1-47 odd worked for them when they were kids.  Doing "what worked for me" is particularly harmful in math, where we tend to forget that those of us who are comfortable with math now are really "math survivors."  Imagine if we taught kids to swim by dropping 100 kids at a time into a shark-infested pool; the two or three who made it out would later replicate that method, saying "it worked for me."

Even when the practices are not demonstrably harmful, using "what worked for me" or "what makes sense" as a metric ignores the fact that today's kids are growing up in environments that are visibly and palpably different from the ones we grew up in.  Recently, I posted an article (on Facebook) about how schools in Indiana will be giving up cursive instruction (yay!) but teaching keyboarding instead; I asked why bother teaching keyboarding either.  An astonishing number of my friends commented that typing class had been incredibly useful for them, and how would kids learn to type well otherwise?  My response:  yes, but that was when typing was a specialized activity that you only did when preparing final drafts of papers; kids today type all the time and get immediate feedback about the quality of their typing; my students have learned to text blind, quickly and accurately, without any formal instruction.  And ... my friends repeated their arguments:  it's important to learn to type quickly and accurately, and they only learned to do so in typing class.

I'm not saying that we should totally jettison personal experience and common sense when we teach.  I'm just saying that we forget that both of those are anecdotal, and what memory and common sense tell us about our own experiences is not necessarily relevant to the population-at-large 30 years later.  It took me a little over a year of teaching to realize that, although math homework was mostly irrelevant to my own learning of high school mathematics, it could really help my students--so I shouldn't make it optional.  And it's that kind of skepticism, and openness to data, that we all could adopt more often.

Sunday, July 3, 2011

What do we want students to do?

As I finished up last week's post, I was wondering about different kinds of tasks we could ask students to do besides solve traditional or not-so-traditional problems (numerical, proofs, whatever).  Of course, few math tests consist primarily of "problem solving" in the narrower, Polya-type sense of "attacking novel mathematical situations or questions"; most of the time, tests and quizzes ask students to "solve problems" in a somewhat more general sense that includes working out answers to exercises similar to ones done in class or on homework.  But in general, prompts look something like these:
  • Find all values of w such that ...
  • Draw the graph of a function with the following properties ...
  • At what time will ...
  • If P, Q, R, and S are points on quadrilateral ABCD such that ..., prove that ...
  • Find a polynomial with the following properties, or prove that no such polynomial exists.
  • Compute the probability that ...
As I was thinking about assessing mathematical knowledge, I began to wonder why virtually all test and quiz questions are of these types.  It's not that hard to dream up different types of prompts.  For example:
  • Write an explanation of how to determine the degree of a polynomial based on its graph, including any uncertainty in the final answer.
  • Place isosceles trapezoids in the quadrilateral hierarchy drawn below, and explain your choice.
  • Explain the following statement: Given enough trials, any event with probability greater than 0, no matter how small, eventually occurs.
  • An ironing board has two legs of fixed length, one of which is free to slide along the length of the board, and both legs are attached at their midpoints as shown in the diagram below.  Explain why this setup guarantees that the board is parallel to the floor.
  • What is the derivative of a function?  What is its importance and how is it computed?  What information about a function can you get by examining values of its derivative?  Explain using symbolic, numeric, and graphical examples, and including one example of motion.
All of these prompts demand a fair amount of mathematical knowledge, and the ability to construct coherent arguments.  And--unlike most traditional math prompts--they get at skills and concepts in a way that might actually be useful to someone not involved in mathematics at a professional level.  But -- okay, I'm contradicting myself slightly here -- coming up with these prompts is not trivial: it takes creativity, insight into what students will be able to do, time on the exam for students to work on them (so, not the 29th question on a 30-question hour exam), and time for thoughtful grading and assessment.  Yet they are worthwhile:  the last question is part of the best test I ever gave, the final exam for a two-week calculus course I taught for elementary school teachers (as part of the SESAME program at the University of Chicago), and what made it great was that it simultaneously allowed every participant to demonstrate at least some knowledge, while differentiating between teachers who knew a lot and teachers who knew only a little.
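
As a side note, the probability prompt above rests on a one-line computation, assuming independent trials with the same probability p each time (an assumption a strong student answer should make explicit):

\[
P(\text{at least one occurrence in } n \text{ trials}) = 1 - (1-p)^n \longrightarrow 1 \quad \text{as } n \to \infty,
\]

since 0 ≤ 1 - p < 1 for any p > 0, so (1 - p)^n → 0.  That single line is the skeleton of the answer; what the prompt really assesses is whether a student can wrap it in correct reasoning about independence and limits.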

So here's the question.  Why do we almost entirely confine ourselves to the first kind of "problem"?  Is it because
  • We think that the most important thing for students to do is to work out these kinds of questions, or
  • We want students to do other kinds of reasoning, but we don't know how to assess it, or
  • [my sinking gut feeling] We haven't really thought carefully about what we might want students to be able to do besides "solve problems," or whether this is the only (or even most important) set of skills and outcomes for students?
Thoughts, please, in the comments.  And guys, I know it's summer, but Blogger tells me we have substantially more followers than I can count on one hand.  So please do comment.

== pjk