On Rhetoric

“We must speak by the card, or equivocation will undo us.” – Hamlet

Some years ago – 46 years ago to be precise – Jeanne Chall noted that the “great debate” about teaching reading was increasingly becoming the hunting ground of people who were not themselves practitioners, and that this had complicated the debate without necessarily bringing progress. Just a few days ago, Frank Furedi illustrated that Chall’s observation is still relevant.

Amid the interest (sometimes even excitement) generated by Research Ed 2013, my impression from blogs and Twitter was that the prospect of teachers taking ownership of their profession through closer integration of research with practice was seen as a good thing. As the days have passed, however, the political impetus to keep teachers in the thrall of rhetoric rather than knowledge has begun to assert itself. There have been four main strategies.

First, generalisations about the utility of research have been made without distinguishing between different types of research or the standards which can be applied to ensure reliable and useful findings. For example, Professor Robert Coe said in his address at the conference: “the quality of educational research is very, very variable, but a lot of it really isn’t very good.” While Coe did not say all research is useless, to any teacher not familiar with evaluating educational research this could be alarming and disappointing (or, if he or she was at all threatened by the prospect of pedagogical change, perhaps reassuring: it turns out it’s all been a lot of fuss about nothing, and we can get back to doing what we have always been doing in the classroom). Such sweeping statements have far more power than one might expect, precisely because they provide what many people, including politicians, want to hear. Professor Coe is not a politician, of course; he is himself an educational researcher, so he ought to know what he is talking about. I am sure he does, which is why it is odd that he did not discuss which types of studies are more beneficial for advancing knowledge in the profession and which are not. Such standards exist, are well established, and can be applied if there is the will at both an academic and a political level. Coe also worried that research is “misused or misinterpreted”. Surely that is an argument for ensuring that teachers are equipped with the tools to distinguish between good and poor quality research, and between valid and invalid interpretations?
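
A concrete example of the kind of invalid interpretation those tools guard against is regression to the mean. The short Python sketch below is my own illustration, with invented pupil numbers and scores (nothing of the sort appears in Coe’s talk): it selects the weakest pupils on one test and simply retests them, with no intervention of any kind, and their average “improves” anyway.

    # Regression to the mean: an apparent "gain" with no intervention at all.
    # Purely illustrative; pupil numbers and score distributions are invented.
    import random
    import statistics

    random.seed(0)

    def noisy_score(ability):
        """A test score is true ability plus luck on the day."""
        return ability + random.gauss(0, 8)

    abilities = [random.gauss(60, 10) for _ in range(500)]

    test1 = [noisy_score(a) for a in abilities]
    test2 = [noisy_score(a) for a in abilities]   # retest; nothing has changed

    # Select the 50 weakest pupils on the first test, as a screening exercise might.
    weakest = sorted(range(500), key=lambda i: test1[i])[:50]

    before = statistics.mean(test1[i] for i in weakest)
    after = statistics.mean(test2[i] for i in weakest)

    print(f"Selected pupils, test 1: {before:.1f}")
    print(f"Selected pupils, test 2: {after:.1f}  (no change in teaching at all)")

Without a comparison group, a rise of that sort is routinely claimed as evidence that a programme “worked”.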

Secondly, and with some grounds, warnings have been sounded about the ways in which policy tends to shape the findings of research, rather than the other way around. Furedi uses the term “scientism” to refer to the way in which policy is lent authority by compliant science, which may or may not be of good quality. To the extent that politicians, as funders, and their policies must be held in tension with what the profession knows (or is learning) to be good practice, scepticism is warranted. Such problems have always existed. When we hear the words “evidence-based” we would do well to examine the evidence, and if necessary challenge the policy. It does not strike me, though, as a particularly good reason for not collecting evidence. What else do people propose to use instead? The issue is the quality of the evidence, and that is a debate about standards of research and the extent of evidence required before it is allowed to influence, or support, policy.

Thirdly, doubts have been cast on the appropriateness of the classroom as a place for research. The main arguments for this are that children are not guinea pigs, that there are ethical issues, and that experimentation is not appropriate when we are dealing with children’s futures. Arguments on these grounds are evidence of naïveté. An examination of the ethical aspects of a research project is a standard part of the process for approval. Questions of identification, confidentiality, consent, and potential harm must all be examined. In other words, saying we should not do educational research because it might be unethical or harmful is like saying we should not conduct medical research because it might be unethical or harmful: in both fields, ethical review is the mechanism by which those risks are managed, not a reason to abandon research altogether. Which brings us to the fourth point.

Ben Goldacre and others have argued that some practices from medicine, such as the use of randomised controlled trials, could help to identify genuinely productive approaches, and to distinguish them from those which are not helpful or cost-effective. Frank Furedi states that it is inappropriate to import the practices of one discipline to another. I say “states” rather than “argues” because he does not provide arguments to support his position. Furedi adds: “In an ideal world, methodologies for exploring the usefulness or otherwise of certain approaches should emerge from within the specific disciplinary field itself, rather than being imported from other disciplines.” He does not give a reason why this should be so, nor any evidence of why it might be harmful; nor does the reference to “an ideal world” clarify anything. In fact, inter-disciplinary “cross-pollination” may not be appropriate in some circumstances, but it is certainly appropriate in others. Statistical procedures in mathematics have proven useful in a variety of fields, including Professor Furedi’s own. Discoveries from genetics and biology have become important in medicine. More prosaically, the memory foam in our mattresses derived from NASA research.
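
For readers who have met randomised trials only in headlines, the core logic is simple enough to sketch. The Python example below is a minimal illustration with invented class sizes and scores; it is not drawn from Goldacre’s proposals or from anyone’s actual data. Pupils are assigned to one of two teaching approaches at random, and the difference in outcomes is summarised as an effect size.

    # A minimal randomised comparison, purely for illustration.
    # Class sizes, score ranges and the effect-size summary are invented assumptions.
    import random
    import statistics

    random.seed(1)

    pupils = list(range(60))                    # a hypothetical cohort of 60 pupils
    random.shuffle(pupils)                      # random assignment is the key step
    existing, new_approach = pupils[:30], pupils[30:]

    # Hypothetical post-test scores; in a real trial these would come from a
    # pre-specified outcome measure, collected for both groups alike.
    scores_existing = [random.gauss(62, 10) for _ in existing]
    scores_new = [random.gauss(66, 10) for _ in new_approach]

    mean_existing = statistics.mean(scores_existing)
    mean_new = statistics.mean(scores_new)
    pooled_sd = ((statistics.stdev(scores_existing) ** 2 +
                  statistics.stdev(scores_new) ** 2) / 2) ** 0.5
    effect_size = (mean_new - mean_existing) / pooled_sd   # roughly Cohen's d

    print(f"Existing practice: {mean_existing:.1f}")
    print(f"New approach:      {mean_new:.1f}")
    print(f"Effect size:       {effect_size:.2f}")

Because the assignment is random, a systematic difference between the groups can reasonably be attributed to the approach rather than to which pupils happened to receive it; that is the whole of the “medical” machinery being objected to.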

The only “argument” that I have seen raised against RCTs is not in fact an argument but rhetoric. Furedi warns against a “medicalised model” which develops “interventions” or “treatments”, as this creates a system in which children are regarded as “patients” with a deficit which the “treatment” will correct. This assertion relies on two elements. First, a selective interpretation of the definition of “intervention”. Furedi defines “intervention” as: “any measure whose purpose is to improve health or alter the course of disease”. However, he only talks about the “disease” aspect, not the “improve health” aspect. Secondly, the rhetorical power lies in the connotations of the words in general usage, not their meaning as scientific research terms – terms which have been in use for decades, very often by researchers whose findings indicate that it is not the children who are deficient, but the teaching they endure. Educational research has shown, for example, that many “learning problems” labelled as disabilities can be “cured” through improving students’ fluency (i.e. they just weren’t being given enough good practice). Is this “medicalising” the problem? On the contrary, it is challenging a culture of medicalising and labelling which is already deeply embedded in the psychological substrata of the profession. The deficit model is not a “re-imagining” of education, as Furedi calls it; it already exists, in billion-pound quantities, and the best hope of challenging it is with good quality research that shows we can have higher expectations of what children can achieve.

Far from creating a warped and patronising view of children, good quality research builds on useful research questions, follows rigorous procedures and has wide applicability to education, developing what Dr John Church has called “a science of learning and a technology of teaching.” I use the present tense because so much good quality research already exists, and before the profession engages in a distracting and costly exercise of new classroom-based research projects, we would do well to become familiar with what is already there. That is why good studies begin with a review of relevant literature in the field as well as a clear explanation of the research question and why it is important. Being able to distinguish strong from weak research is the first step towards a more mature profession, a profession able to see and respond to the mischievous rhetoric, and to challenge selective arguments.

Furedi makes a potentially useful point, though he uses it destructively: “The key question asked by educators should be, ‘What do children need to know?’” Unfortunately, instead of exploring this, he uses it as a distraction, first arguing that asking “what works” distracts us from the curriculum – and then, in the very next sentence, that the same question leads to continual overhaul of the curriculum. Such self-contradiction is not surprising. It is almost inevitable when we eschew logic for rhetoric, and reason for sentiment.

It is time – long past time – for our profession to see through the rhetoricians, and to get on with working out how we can serve our children better. We should be familiar with the research that already exists: not so much to inform policy, as to inform ourselves, so that we will not waste students’ time re-inventing solutions which have already been found. At the end of the day, regardless of the politicians, what happens when that classroom door closes is my responsibility. I study research because it makes me a more useful teacher to my students. I encourage my colleagues to do the same.

Sources referred to in this post:

Chall, J. (1967). Learning to Read: The Great Debate.

Furedi, F. Keep the scourge of scientism out of schools. (Retrieved 15.9.2013)

Coe, R. Practice & Research in Education: How Can We Make Both Better, and Better Aligned? (Retrieved 14.9.2013)


3 Responses to On Rhetoric

  1. Neil Brown says:

    A very interesting post, thanks. To contest one point, you say “…it is odd that [Coe] did not discuss which types of studies which are more beneficial for advancing knowledge in the profession and those which are not. Such standards exist, are well established, and can be applied if there is the will at both an academic and a political level.”

    I think this makes the process sound more straightforward than it is. In general, it is not simply that some types of studies are good or bad — instead the fine details make the difference between good and bad. Obvious issues include sample size or lack of a control group, but there are also more subtle issues such as using the wrong statistical tests, using measures that do not accurately reflect what you are looking for, fishing post-hoc in the data, not ruling out effects like regression to the mean and so on. These issues often require careful picking through of the study, and history suggests that peer review does not perform this function adequately to produce only high-quality research.

    While technically we should be able to circumvent this problem, to do it properly would likely require a massive change in the culture: better training for all, and more time (and thus money) for someone (academics? something like Cochrane?) to assess and review studies more thoroughly. I think this might be what Coe’s EEF is looking to do, but I haven’t yet had time to read up on it properly.
