Elements of Risk: Natural and synthetic pesticides and fertilisers

Vanilla cultivation

Essentially, the decision to buy organic foods over those grown with synthetic pesticides and fertilisers comes down to whether you prefer to take risks with unknown products or stick to the products you know. Personally, I would rather consume the products that are known about and regulated than the products that are unknown and unregulated (as ever, I reserve the right to change my mind). Either way, I’m washing most of these products off when I rinse my fruit and veggies before I prepare and consume them. But it does come down to a personal choice.

Producers and advocates of organic foods rest their arguments on the Appeal to Nature fallacy: that because the fertilisers and pesticides used to grow organic foods are natural, they are therefore healthy, and healthier than synthetic fertilisers and pesticides.

It doesn’t annoy me (much) that the term ‘organic’, as it’s used to describe the way food is grown, is so different from its meanings in chemistry or biology; language evolves and meanings change, and English makes one word do the work of many in other cases as well as this one. What annoys me is that in this context, the term really means “unregulated”. We know the effects of synthetic chemicals that are mixed, diluted, and used in regulated quantities as fertilisers for your fruit and vegetables, or as pesticides to keep the bugs away; these are often used in conjunction with other methods of promoting plant growth and reducing the capacity for bugs to demolish the crop. We can monitor their use, build on them or replace them with better and more effective products as they are developed, and we can predict their effects on the vast majority of the population and on local ecosystems. In fact, synthetic products of these kinds were developed from what is known about the “natural” products, in much the same way that aspirin was developed from observations of the effects of tea made from willow bark. But on organic farms, the use of impure “natural” pesticides and fertilisers is not regulated, and as a result, not all of their effects can be predicted.

There’s evidence that organic produce is no healthier, no less covered in pesticides, no tastier, and no better for the environment than non-organic produce. Christie Wilcox over at Scientific American has written a great blog post describing this evidence, with links to the original work.

Of course, we could reduce the use of both with more GM technologies, and therefore decrease the effects both on us and the environment, but that’s a debate for another day!

Another issue I take with these “natural” products is that they are actually impure. Let’s take vanilla, for example. At the shops you can buy synthetic vanilla essence or natural vanilla extract. In both products, the vanilla molecule looks like this:

Vanilla molecule (Vanillin)

It doesn’t matter whether it has come from a vanilla bean or synthesised from the petrochemical guaiacol, it is made of the same constituent atoms in the same structure.
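To make the point concrete, here’s a quick back-of-the-envelope check (a sketch in Python; the atomic masses are standard values): vanillin has the formula C8H8O3, and therefore the same molar mass, whatever its source.

```python
# Vanillin (4-hydroxy-3-methoxybenzaldehyde) is C8H8O3 whether it comes
# from a vanilla bean or a lab: identical formula and structure mean
# identical measurable properties, such as molar mass.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # standard values, g/mol
VANILLIN = {"C": 8, "H": 8, "O": 3}

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in VANILLIN.items())
print(f"Vanillin molar mass: {molar_mass:.2f} g/mol")  # ~152.15 from either source
```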

So why does natural vanilla extract taste so different to the synthetic essence? It’s because of the 200+ other molecular compounds present in the vanilla extract. Do you know what they are? Do you know what they do? No, you probably don’t, unless you’re a food chemist working directly with the product (and if you are, I’d love to hear from you!). The synthetic vanilla, on the other hand, is much more pure, with significantly fewer additional compounds present.

Is the fact that we have consumed vanilla extract for hundreds of years good enough evidence that the product is safe for you?

What other evidence would you require?

How much do you want to know about your food?

Why would we apply different standards when deciding whether trace amounts of pesticides and fertilisers are safe for consumption than when deciding whether vanilla extract or essence is safe for consumption?

What would you prefer to consume: something “natural” with unknown consequences, or something synthetic with predictable results? Why?

I’m not a chemist. I seek information from all sources, primary and secondary (try reading a chemistry paper sometime; it’s a tough slog!). For accessible chemistry information, I love the Compound Interest infographics and blog. I enjoyed this article and video about natural and synthetic flavours (via Ketan Joshi). I thank John Holman for his lecture at the STEM in Education Conference back in 2010, which prompted me to explore how vanilla extracts and synthetic essences are derived.

#dressgate and the fallibility of personal observation

“So where does this leave us? With a lot of evidence that erroneous beliefs aren’t easily overturned, and when they’re tinged with emotion, forget about it. Explaining the science and helping people understand it are only the first steps. If you want someone to accept information that contradicts what they already know, you have to find a story they can buy into. That requires bridging the narrative they’ve already constructed to a new one that is both true and allows them to remain the kind of person they believe themselves to be.” – Christie Aschwanden, FiveThirtyEight

There are plenty of optical illusions out there for you to check out (just Google it), but last Friday, this one broke the internet. For those of you who don’t have Tumblr, Twitter, and somehow missed it on Facebook¹, tell me this: is it white and gold, or blue and black?²

I see white and gold; for the life of me I can't see blue and black.
White and gold, or blue and black?

A Buzzfeed poll found that a majority of people saw the dress as white and gold, although this could be due to expectation and framing effects. The original photographer has revealed that it was blue and black. A Reddit user uploaded these survey results³ suggesting that the colour scheme options are not strictly dichotomous. Buzzfeed explained why we saw the dress in different ways with cognitive scientist Cedar Riener and cognitive neuroscientist John Borghi sharing their expertise. Wired, The New York Times and a number of other agencies also had a go at sharing an explanation for the differences between us in viewing the dress. Randall at XKCD expressed the debate in comic form:

This white-balance illusion hit so hard because it felt like someone had been playing through the Monty Hall scenario and opened their chosen door, only to find there was unexpectedly disagreement over whether the thing they'd revealed was a goat or a car.
Dress Color by XKCD

And SciShow had a go at a video explanation:

Mind Hacks complains, and rightly so, that none of these explanations are satisfactory, with a reasonable argument against the colour constancy explanation!

I think the thing that many people are missing is that “the dress” is an example of how fallible our senses are (all eight or nine of them, depending on how you define a sense). In short, as Hank from SciShow says, different people perceive things differently.
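For what it’s worth, here’s a toy sketch of what the colour constancy explanation amounts to (the pixel and illuminant values below are made up for illustration, and as Mind Hacks argues, this may not be the whole story). Perception effectively “divides out” whatever light source the brain assumes, so the same measured pixel can be seen two ways:

```python
# A toy von Kries-style chromatic adaptation: perception discounts the
# assumed illuminant, so one measured pixel yields different percepts.
def adapt(pixel, illuminant):
    """Scale each RGB channel by the assumed light source (von Kries model)."""
    return tuple(min(255, round(255 * p / i)) for p, i in zip(pixel, illuminant))

measured = (130, 120, 160)  # a hypothetical bluish-grey pixel from the photo

# Assume cool, bluish shadow: the blue is discounted, and the fabric
# comes out looking white/gold.
print(adapt(measured, (150, 160, 230)))   # (221, 191, 177)

# Assume warm, yellowish artificial light: the yellow is discounted,
# and the fabric comes out looking blue/black.
print(adapt(measured, (230, 200, 150)))   # (144, 153, 255)
```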

So if our senses are so fallible, how do we trust any of our experiences to inform us about the natural world and how it works? Well, to me, that’s the very reason that we do scientific research.

Science is a process by which we can test our observations, repeatedly, in different ways, or sometimes in exactly the same way. Robust and rigorous scientific methodologies generate valid and reliable evidence with which we can further develop our explanations for events. Interpretation of evidence is informed by past research, so we can become more and more confident in our theories about the world. Complete resolution isn’t the outcome; an improved understanding is. Through this continual practice we can overcome the biases, shortcuts, and errors that our brains make when interpreting personal observational data. The results and interpretations will never be perfect, or be stated as “100% certain”, but decisions made using good science are more reliable than those made on the basis of our own experiences alone.

Unfortunately, just telling you this is unlikely to help, nor are the explanations for why we see the dress one way or another likely to help us change our minds about the dress. However, there is emerging research in ways that we can educate students to think critically about claims put to them. Which leads me to present you with this article, by Christie Aschwanden at FiveThirtyEight, that nicely discusses why your brain is primed to reach false conclusions, and with a hint about what can be done about them.

  1. Or living under a rock.
  2. Some people are reporting that they see other combinations. That’s cool, the world isn’t either/or. ;)
  3. From where? What survey? Who was included? Is the sample representative? Without these details (and others), how can we evaluate the results for ourselves?!

researchED Sydney 2015: A commentary, part 2

This is part 2 of a post about my experience at researchED Sydney. Have you read researchED Sydney 2015: A commentary, part 1?

Session 4: Why the 21st Century teacher should be research engaged

This session, presented by Tom Bennett, was essentially an argument for teachers to be research literate and engaged. To me, he was arguing for research scepticism, but without using that word…

My summary of his story is this: Tom went to university to become a teacher. At university, Tom learned to use particular strategies and techniques that he was told were research-based, including Brain Gym and learning styles.

Once out and teaching, he found for himself that these strategies were a waste of his and his students’ time, and he became angry at this “research” and at his teacher preparation program. He felt he had been lied to. Familiar story? Tom sounds like many of us… So Tom began exploring the research himself, wrote some books and articles, and started researchED.

The goal of researchED is to bring academics, researchers, teachers (Tom calls teachers ‘practitioners’), and policy-makers together for a discussion about what works and how we know. From my own experience, I know this isn’t always easily achieved. I’d like to write another post about the intersection between teaching and research, and importantly, the ways in which we can work together to ensure that teachers are choosing best practices and researchers are sharing their work with teachers, either by collaborating with them in the research itself, or simply by communicating findings following investigations. Teachers already do great work, and have fantastic networks that they have built from the ground up: teacher associations, PLNs, teachmeets, and podcasts like the TER Podcast, just to name a few. Many researchers aren’t even aware of these networks, or of events like researchED, but we need to be, or we risk irrelevance.

One thing that annoyed me initially was that Tom referred to programs such as Brain Gym and learning styles, as if there actually is robust, valid, reliable, peer-reviewed research into them, when there isn’t. The “research” that supports these programs is none of these things. Not all research is created equal, and a sceptical consumer will recognise this and qualify their statements about such research appropriately. It sounds like his teacher preparation program let him down quite seriously (and thereby his students too). To represent the “research” into these programs as equal to the research into, say, argumentation in science, misconceptions in science, or even mindsets (a theory that is growing from and contributing to a large body of evidence, as good science does), is somewhat disingenuous.

Sweeping statements that educational research is like using leeches to cure medical ills in the fourteenth century are also unhelpful. Thankfully, Tom clarified:

And in this, Tom is absolutely correct. It’s a major part of being sceptical: asking for research or evidence when someone makes a claim. As teachers, we need to be doing this often, and not just in the cases where we disagree with the claim, but in all cases. Then we need the time, skills and knowledge to evaluate that evidence for ourselves. Many teachers do not have the time, the skills or the knowledge to do this. I think that researchED can be a part of the solution to this problem, and I would like to help if I can.

One roadblock is easily identified:

There are of course, many classroom situations that can’t be easily informed by educational research (but perhaps some psychological theory can help): what do you do when a student swears at you? What if he did this in part because his grandad died yesterday? and he didn’t have any breakfast this morning? and he has to work three shifts this week on top of his homework and looking after his little brother?

Tom briefly discussed the roles that teachers can play in research, including carrying out research for themselves. Events like researchED can showcase and spread such research. Twitter helps too!

He had some excellent arguments for becoming research literate:

Then Cameron Paterson had an interesting question:

And others on Twitter had some responses.

Tom had some great ideas for bringing research culture into schools. Some are expensive:

And if anyone wants to hire me for this position, please get in touch! Or maybe, if you don’t have the money:

And there are others who can help:

Or even this twist on the idea:

But perhaps teachers could start with a journal club, where teachers read and discuss a relevant article each fortnight/month, either in school cohort or on Twitter (much like @sciteachjc, although that has fallen over a little due to a lack of time on the part of the organisers).

Or schools could carry out their own research, just as Corinne described in Session 3:

Tom’s seminar was delivered with passion and enthusiasm, presenting some good arguments for teachers to increase their research literacy, and prompting productive discussion about when, why and how schools and teachers could engage with research and researchers.

Session 7: Reggio-Inspired: Exploring Group Learning and Documentation

I missed Sessions 5 and 6 due to a commitment to another task that I shall write about another time. The last session I attended was presented by Cameron Paterson.

I went into the session with some assumptions, expecting to hear about the Reggio-Emilia approach (philosophy) of teaching, and the research that had come out of it. Cameron didn’t talk much about this at all, but launched straight into an engaging group activity. This meant I had to put down the phone and couldn’t tweet as much!

In groups, we were assigned the role of “documenter” or “learner”. The learners’ task was to achieve the goal of the lesson (design, test and make an airplane that can fly a set distance carrying a set weight), while the documenters were to use any means or form to record the learning activities of the learners. In our group, two others and I became learners, while Corinne and another participant kept a written record of our language, activities, interactions and any other interesting things they observed. Cognisant of the short but necessary time limit of the activity, I got to work right away.

At the end of the activity, the documenters shared with us their observations, and the learners responded. I’d been identified as a leader (hah!) early on. The documentation process was fascinating; a good reflection and opportunity to receive feedback about the learning process. It is something I would like to do with my pre-service teachers, to make visible their various learning processes. But the next question was trickier: what had we actually learned about aerodynamics from the activity?

Well, for me… nothing. In fact, despite my science content knowledge and conceptual understandings around the ideas of flight, I’d made several errors in the airplane design, and under the time pressure, and perhaps as a result of group dynamics, we’d not had time to test our designs. There’d been no time to learn from others (like searching on the internet for design ideas, or for physical principles to consider), or discuss these things with other groups. Perhaps I’d rushed into the activity, or missed something. Other groups announced that they’d learned about thrust, or wing span; ours had no such new ideas to contribute. Perhaps I missed it when Cameron articulated the aims of the learning?

Cameron had some great suggestions for incorporating the process into the classroom, and some anecdotes about how this practice had enhanced student learning in his classroom. I look forward to trying it with my own classes, occasionally.

It's really tricky to photograph.
‘Research’ by Tom Bass, Circular Quay, Sydney

What did I miss?

As always, there were sessions I missed out on that I would have loved to attend.

Professor Stephen Dinham argued for the need for a strong evidence base for teaching, school leadership and educational change, exploring the “fads, fashions and misconceptions concerning teaching”. According to the précis, Dinham proposed to explore the evidence base for these approaches. This sounds like it would have made a great keynote: an introduction to, and modelling of, how to approach research for teachers unused to the practice. It sounds analogous to the approach I often use when teaching students about decision-making in science, and I’m keen to see how others do this.

Dr Kerry Hempenstall presented a session on Direct Instruction literacy programs as evidence-based practice. While I’m not sure that DI is the answer to every problem in education (in fact, I’m almost certain it’s not an answer to most problems in education), I would like to learn more about it; maybe he could change my mind!

Professor Kevin Wheldall and Dr Robyn Wheldall presented a session on positive psychology in the classroom. Again, this is something I am still sceptical of, mostly due to not knowing too much about it (also, and mini-rant here: sometimes I’m annoyed by people reinventing the wheel and giving it a fancy name. Good behaviour management is good behaviour management, and why can’t we just call it that? It’s like the need to name every “diet” when actually, there’s such a thing as a healthy diet that just involves being balanced, paying attention to your body, and making evidence-based decisions. Why does it have to have a fancy name? If it turns out I’ve actually been using ‘Positive Teaching’ all my teaching life, does that mean I have to call it that?). [Edit 3/3/15: Kevin Wheldall has been in touch and promises a blog post about this, which I will link to here when it’s posted.]

Simon Townley, whose organisation, Gorilla Learning, runs professional development workshops about educational neuroscience for schools and teachers, gave a workshop on (you guessed it) educational neuroscience. The term “educational neuroscience” waves red flags at me like a matador waves a cape at a bull in Madrid. It’s not because there’s nothing neuroscience can offer teachers – quite the opposite. It’s because neuroscience is not a magic bullet for teaching and learning, when the contexts in which teaching and learning take place are myriad and complex. It’s also the kind of term that the makers of Brain Gym and learning styles would have loved to have thought of and used in their promotional materials. So I would be interested in hearing what was discussed in this workshop.

Other thoughts and questions

Teachers are researchers. Every day a teacher enters his or her classroom with a new lesson to try, a new strategy to test, a new thought about how to manage young Harry’s distractibility or Neville’s anxieties or help Ginny understand a difficult Herbology concept or develop Hermione’s broomstick flying skills. Teachers with better research skills, who are critically reflexive, and who look outside their own experience too, find and evaluate possible solutions to teaching and classroom issues more quickly and efficiently, and thus are more effective. Looking outside to what others have done is a central part of this. The constant trial and error that teachers undertake to improve their classroom teaching is barely spoken about or shared. Usually, it’s undertaken independently, and the results a quiet accomplishment. Sometimes, it’s done collaboratively, and the results are shared with the community of students, or of families. Occasionally, research is undertaken more formally, purposefully, with a broad goal of improving school wide policies or processes.

I’d expected the conference to be full of the best examples of this. Where was the actual evidence? The scepticism? The research process? The questioning of assumptions? Other than Corinne’s presentation, I didn’t see much of this in application. I’d hoped for workshops on how to find research, how to read and review research, and how to engage in research and develop research projects in schools. Perhaps it was just the workshops I chose?

Some questions… [Edit 3/3/15: Tom Bennett has helpfully answered these questions in the comments below.]

Why were there so few people at the conference?

  • Were people really that turned off by Kevin Donnelly?
  • Was the promotion on Twitter alone? That severely limits the potential audience, if so.

Why was the conference so cheap? Where did funding come from, to fly Tom to Australia, fly speakers to Sydney, etc? It was only about $40 to attend… and why did I pay in pounds??

  • The conference appears to have been sponsored by the Shore School, TES, The Education Partners, and CfBT Education Trust, according to the program.
    Shore School… well it’s lovely to have lots of money. They have beautiful facilities, and it’s great that they were able to offer them for the conference. Apparently they also undertook some of the printing etc. I’m sure there’ll be plenty of benefit in the promotion of the school, but more importantly, in the opportunities some of the Shore staff would have had for professional development.
    TES are British and Tom Bennett blogs for them. They do have an Australian site, but this wasn’t mentioned in the material. TES provides/sells resources, advertises teaching jobs, etc.
    The Education Partners are difficult to find out much about. Their website is undergoing rebuilding. Without knowing who they are – and such a vague name means I cannot find anything else about them online – it’s hard to know what they gain from involvement in the event.
    CfBT are a “non-profit provider of educational services” with a website that looks amazingly similar to researchED. Apparently they are a “world authority on school inspections”…
  • I think I paid in pounds because the event was ticketed through Eventbrite UK?

Ultimately, the conference was thought-provoking, in a different way to the CONASTAs or STAQ conferences I’d attended (and occasionally organised or presented at), even though these are also events at which both academics and teachers volunteer their time and present their work. There was more of an emphasis on research rather than lesson ideas or content knowledge, which tends to be the focus of primary teaching conferences. The opportunity to meet passionate educators and researchers was excellent. What’s more, it caused me to reflect on the myriad and complex reasons why I loved teaching, but then moved into research; further, I considered the multiple roles that I can hold and the ways that I can help when this Damned PhD(TM) is finally finished. I wish I’d been able to stay to the end. Thanks Tom, Shore School staff including Dr Tim Wright and Cameron Paterson, and all those involved in bringing the conference to Australia. I look forward to the next one! Brisbane, you say…?

Many thanks to Angelique Howell and Alom Shaha for sharing their thoughts and feedback on this and the previous post before publication.

researchED news, reviews and reflections by others

Tom Bennett – Letter from Australia, part 2: researchED Sydney and beyond

Pamela Snow – researchED Sydney – Some puzzling thoughts

Chris Munro – A story of practice, theory & sense-making: researchEd Sydney 2015

Greg Ashman – Being part of it: researchED Sydney

Dr Gary Jones – researchED Sydney – Some initial reflections

John Kennedy – researchED – Researchers meet teachers

Stephen Exley – ‘Seductive’ teaching trends damage pupils, warns leading academic

Debs (@debsnet) – Research and education: A match made in the conference room? #rEDSyd

Jonathan Pugh – researchED Sydney 2015: A day of reflecting on research in education

researchED Sydney 2015: A commentary, part 1

Sydney from Taronga Zoo

On Saturday I joined 100 or so teachers, researchers and other education stakeholders at Shore School in Sydney for Australia’s first researchED conference. I had heard a lot about researchED before; I follow several UK teachers who have presented at or attended one of the three researchED conferences that have already been held in the UK. Many researchED Sydney participants have written positive and thoughtful reviews and reflections on the conference (listed below), and it’s wonderful to see so many people feeling inspired and motivated by the experience. I really did enjoy the day, but was left feeling that there was more enthusiasm than substance. Was there something I missed?

This post is long, so following the advice of a respected reviewer, I am splitting it into two parts.

Why researchED?

I attended researchED for several reasons:

  • I subscribe to the expressed aims and values of researchED. To me, researchED presented an opportunity for teachers and researchers to meet, discuss, and value the experiences of both teachers and educational researchers (sometimes one person is both!). As a teacher, I was often frustrated at the disconnection between research and “chalkface” teaching. As an academic (nearly?), I am keen to prevent others feeling the same way.
  • I wanted to learn about the research that teachers were carrying out in their classrooms and schools. Teachers are researchers; I expected that the conference would be full of good examples of formal, documented research that teachers are carrying out. In future, when I am qualified (with my doctorate) and in a position to do so, I would love to support and mentor teachers interested in undertaking action research, so I hoped to learn what sort of support might be most useful for me to offer.
  • I wanted to meet some interesting people I’d only ever spoken to on Twitter, including researchED organiser Tom Bennett (@tombennett71), primary educator and prolific blogger and Teachers’ Education Review podcaster Corinne Campbell (@corisel), who also has a hand in the @Edutweetoz account, and a whole host of twits including @cpaterso (Cameron Paterson) and @capitan_typo (Cameron Malcher, who with Corinne produces the TER Podcast). I was also very lucky to meet more educators, both with and without Twitter accounts, who are passionate about education and about finding best practices and using research to do so when possible.

Session 1: Welcome to Country and Opening of the Conference

The day began with a very well-written Welcome to Country, delivered beautifully and proudly by one of Shore School’s newest students, Levi Nichaloff.

Unfortunately, this was the last reference to our land’s original inhabitants I would hear all day. There are some wonderful Indigenous educators and researchers who would have been an excellent inclusion in the program. If the organisers were after a controversial figure for their panel (to replace Dr Donnelly; see my write-up of the panel, below), they could invite Noel Pearson, who has been instrumental in the return of direct instruction techniques to Indigenous communities in far north Queensland (pretty much on the basis of “it worked for me back in the 60s”). Personally, I would prefer to hear from someone with actual teaching experience and work in research; Dr Chris Sarra would have filled the role admirably, and I hope he is included in future events. Other Indigenous researchers and educators are around, and their voices are required in any broad discussion of education in Australia that aims to progress the practice of teaching.

We also heard briefly from Dr Tim Wright, Principal at Shore School, about the work that the school does to bring a research culture into the teaching staff. It’s lovely what lots of money can allow you to do.

Finally, Tom Bennett himself welcomed us to the conference. He offered some wonderful advice that I’ve heard before from Peter Ellerton: go and listen to people you disagree with. This wasn’t hard to achieve; the first opportunity to do this came in Session 2.

Session 2: Panel

There was one reason not to go to the conference, which was enough to deter a number of teachers and researchers I know: the inclusion of Dr Kevin Donnelly in the Session 2 panel, discussing “What is the role of educational research in policy making?”

On the surface, Donnelly’s role in the review of the Australian Curriculum certainly justifies his inclusion in the panel. However, Donnelly is a known ideologue, apparently uninterested in engaging with true research or research evidence, chasing platforms to promote his view (or at the very least, to promote controversy). I doubted his attendance would add anything to the panel, or that engagement with him would change his mind about any of his views (e.g. all children should learn Catholic values to the exclusion of other religious or non-religious beliefs and values; corporal punishment should be allowed in schools; homosexuality is a choice; multiculturalism is unnecessary as white Australian culture is superior to all others). This close-mindedness, to my understanding, is the very antithesis of the purposes of researchEd. I would have preferred the platform be given to someone interested in engaging with alternative views, whether or not I agreed with their own views.

The organisers’ arguments that the conference would present a forum to challenge his opinions are not enough. The original concern was that it was a waste of time to listen to the speaker; yet more time would be wasted arguing with someone who is unwilling or uninterested in changing his or her mind.

Donnelly did suggest a couple of things that I agreed with: that primary teachers in particular are drowning in administrative tasks, paperwork, and testing, and that governments are trapped in short-term cycles and political aims. Hear hear!

However, two outliers do not a dataset make. Despite a posture and position on stage that screamed indifference (seat set apart from the other panellists; constantly scanning the ceiling and the empty stage to his left; refusal to look at the audience), he did manage to express his belief that a conspiracy of “seven or eight” people who are totally removed from the classroom runs educational policy. Donnelly did not include himself in this group; perhaps he still identifies as a teacher rather than a researcher. This would explain his disdain for educational research, which he dismissed completely, stating that teaching is a craft that educational research cannot help. A great start to a conference about educational research!

The second panellist was Professor Stephen Dinham, from The University of Melbourne. Dinham is actually involved in research about educational policy, and had a few helpful and relevant things to say, well argued and with some evidence to boot. Dinham suggested that research has a big role to play in school policies, as policies at this level are the most likely to make a difference for students and teachers. As such, he argued, it is imperative that teachers and school administrators become critical consumers of research (YES!). Dinham also had a few things to say about the myth of learning styles, which I definitely agreed with, but I think he was largely preaching to the converted. Professor Dinham also warned us against going down the path of school autonomy, for-profit schools, and expecting schools to enter the free market; good advice when considering the negative impacts of these policies on equity in educational systems, and the role of inequity in a large range of social and economic issues in the long term.

The final panellist was Emeritus Professor Kevin Wheldall, from Macquarie University, who has been instrumental in the Macquarie MUSEC and MULTILIT programs. I’ve found the MUSEC briefings to be very useful in the past, but I know very little about MULTILIT. At one stage, Wheldall suggested that quantitative research is useful but that qualitative research has no value: because its findings are so contextualised, they cannot be generalised to the broader population of students. While I might concede that some qualitative research methods are perhaps less robust than other methods of research, I wouldn’t throw the baby out with the bathwater and say it’s useless! I would also argue that limiting researchers to examining only constructs that can be measured quantitatively is unhelpful; not all outcomes of education are quantifiable. I think this is a false dichotomy; a bad argument. Essentially, this argument endorses a test culture in which only that which can be measured is worth teaching. This encourages the sorts of practices around NAPLAN that have decimated time spent on other valuable activities in schools, particularly in primary classrooms.

Audience questions were thoughtful and probing, and elicited brief discussions on Twitter about the role of context in Pedagogical Content Knowledge (note to self: I must write a post on this).

From the audience, a participant asked about the multiple unfortunate references to learning styles made in the AITSL Professional Standards. Professor Dinham suggested that this is one situation in which evidence has not informed policy extensively, and that it is slow work removing these references from the Standards, as much as it is removing them from teaching culture. In response to a question about whether learning should be teacher-focused or student-centred, he argued that good teaching is both, and that they are not dichotomous.

Another participant asked Dr Donnelly what he thought we should rely on if not educational research. Donnelly backtracked and suggested that educational research could be useful, but that researchers need to engage more with teachers, and perhaps support teachers to carry out their own research. Wait, wasn’t that a major aim of the conference? Also (and I can’t remember who suggested this, I’m sorry), it was argued that teachers need to be trained in how to read and interpret research critically. I think this is something we do to an extent with our pre-service teachers. However, if the recently released review of teacher education programs is accurate, there are some universities out there that are not doing this. I’m also not aware of anything out there for in-service teachers (MOOC time? I’m up for it!). It is a skill that is developed with practice and lost when not practised, so establishing a research culture in education is important. Anyway, with more questions but no time remaining, we were hurried out the door and into the next session.

Was the question “What is the role of educational research in policy making?” resolved? Not really, but I’m not sure what can be resolved in 45 minutes. There were some interesting statements about the various levels at which policy sits and where teachers and researchers can contribute to policy. The discussion was valuable in that people’s beliefs were heard, and to an extent, evaluated, but the discussion was not particularly deep. There were few ideas for moving forward or progressing the agenda of including teachers or researchers in policy-making.

What do you notice about the panelists? As I would have asked my fifth graders: whose voices are we listening to? Whose voices are we missing? When women vastly outnumber men in both the teaching and researching worlds, surely there are some women of calibre/merit/whatever-neoliberal-speak-is-currently-used amongst us who could be invited to speak. Also, more Indigenous people, and practising teachers engaged with research, please, researchED organisers. With a question like “What is the role of educational research in policy making?” there is plenty of scope to open up the panel to more variety.

Session 3: Rethinking Primary School Homework

Session 3 was everything I had hoped for from researchED. Corinne Campbell, primary teacher and Assistant Principal at a state school in Sydney, thoughtfully described a research project her school had undertaken over the past few years.

Corinne had read a number of headlines about the “uselessness” of homework, and not wanting to further waste the time of her students, their parents, or teachers who set and mark the homework, she undertook some research. She found a lot of research that demonstrated that homework during the primary years had little to no impact on student achievement.

There were some serious concerns about homework tasks and outcomes. When children became confused about how to carry out a task, inexpert guidance could make the issue worse rather than better. Further, mistakes made at the beginning of the week – for example, in spelling routines (say it with me everyone: look, say, cover, write, check, groan) – would be practised throughout the week and learned as correct.

Another finding, unsurprising to me as a primary school teacher, is that reading logs were often faked! Parents, worried about the school’s judgment of their child’s reading habits, or simply trying to make quota, would flesh out the logs with extra books. Conversely, many parents/children simply forgot to fill them in, but were reading quite extensively. Ultimately though, teachers were worried that without including the log in the homework, there would be no record at all of what or how much children were reading, so the school decided to keep it.

But while there is no evidence that homework increases school performance, it can be enriching in other ways, and following stakeholder interviews, Corinne and her team determined that banning it was not the answer after all. In just a week, 190 of Corinne’s 300 school parents responded to a homework survey…

Some parents didn’t want their children to do homework at all. “Just let my children play!” argued some, while other parents complained that after music lessons, swimming club, and dinner, homework just kept their children up too late.

Corinne’s team also identified some benefits to homework.

They recognised that all children need to know their times tables, for example, so these were kept as a part of the homework.

Even so, changes to other aspects of the homework caused parents some consternation!

And differentiation was still required, along with appropriate scaffolding, to meet the needs for all learners.

Corinne asked “how do we accommodate expectations and requirements of all stakeholders in homework policy, while still considering evidence?” This question is relevant to all schools, even though the answer might differ between them. The outcomes of this project work for Corinne’s school community, but context is everything, and other schools might need to make different decisions. And the homework policy can continue to evolve in response to feedback.

Corinne found that the outcomes of the research went beyond improvements to the homework itself.

Well done Corinne – that was brilliant. This session included everything I was expecting from researchED.

 

Part 2 of the commentary can be found here.

Defining Criteria for Assessing Student Work in Technologies

According to Google, a criterion (noun) is “a principle or standard by which something may be judged or decided.”

We use criteria every day when we make decisions about what to wear, what to have for breakfast, and so on. Often these criteria are implicit: not expressed, or even conscious.

When deciding what to wear, for example, criteria might include (but are not limited to, and for some people may be quite different, or even the complete opposite):

  • clothes that:
    • I already own
    • match in colour and style (according to subjective personal preferences)
    • are suitable to the weather (temperature, humidity, sun exposure, precipitation)
    • are gender conventional
    • are appropriate for public wear
    • demonstrate values that I agree with (through slogans or images)
    • etc.

(Recently a scientist was criticised for wearing a shirt that was not considered appropriate for his appearance on international television. Perhaps he had not carefully considered the criteria for choosing his clothes that day.)

As teachers, we define and use criteria to make judgements of student work. Because our students cannot read our minds, but their academic success depends on such judgements, we need to be explicit about the criteria we are using. The criteria need to be well communicated: not just proposed, but also understood by the students themselves.

In the first phase of the Technology Process, Investigation, we should collaborate with students to define the criteria for judging their products. We will also have criteria in mind for what good investigation, design, and production looks like, and should discuss these too.

The criteria for making a stop-motion animation, for example, might be:

  • meets purpose
  • can be used in intended context
  • meets length specifications (e.g. 2 minutes long)
  • meets other specifications defined in investigation phase
  • works within constraints (e.g. budget, time, materials)

Product:

  • smooth transitioning of active objects between images
  • smooth frame rate
  • appropriate framing, lighting and position of active and focus objects
  • appropriate focus of images
  • engaging scripting
  • effective use of display elements (onscreen text, borders, etc)
  • engaging music, sound mixing or special effects

Content:

  • engaging content
  • effective sequencing of content (plot events or presentation of messages or ideas)
  • effective transmission of content (messages or storyline consistent between audience members)
  • accurate portrayal of events, ideas or people, demonstrating understanding of intended knowledge learning outcomes

Process:

  • comprehensive investigation
  • well-communicated plans (plans may change)
  • independent problem-solving
  • effective communication and collaboration with others
  • demonstrated skills for using film editing program
  • safe and appropriate use of equipment

As we work with students to develop criteria, they become more aware of the requirements for success in school and in other contexts. They can think consciously about criteria when making decisions.

So if these criteria are met, does that mean a student deserves a high grade? No; in addition to criteria, we can use standards.

I appreciate the SOLO Taxonomy for assessing student work, because it addresses quality and complexity rather than surface-level checklists of requirements.

A pre-structural performance by a student would be evidenced by a product or design solution that does not address the technological challenge presented.

  • In the case of the development of a stop-motion, this might be a basic stop-motion production that does not meet the criteria above, and therefore does not address the technological challenge.

A uni-structural performance by a student would be evidenced by a product or design solution that addresses a single problem, without consideration of the broader problem, context, or alternative solutions.

  • In the case of the development of a stop-motion, this might be a basic stop-motion production that meets the criteria above, satisficing the requirements without consideration of other elements, ideas or opportunities.

A multi-structural performance by a student would be evidenced by a range of proposed solutions, and the development of a single product that meets the technological challenge. There is broad understanding of the challenge. There is consistency between the rationale (purpose, context, specifications and constraints), criteria, design ideas and product.

  • In the case of the development of a stop-motion, this might present as a stop-motion production that addresses the requirements, with consistency between the established rationale and the generated product.

A relational performance by a student would be evidenced by a range of proposed solutions, each of which is extensively investigated. Final decision(s) regarding the best option(s) are justified. Design ideas are linked with the rationale and product.

  • In the case of the development of a stop-motion, a relational performance is demonstrated by the presentation of multiple design options, with justification for decisions made.

An extended abstract performance by a student would be evidenced by knowledgeable and justified problem-solving, using creative ideas and mature design development concepts. Constructive criticism of the product is presented, and alternatives and improvements suggested if not pursued. The technology process is understood to be iterative.

  • In the case of the development of a stop-motion, this might be evidenced by a creative and justified product. Justification for the final development is presented, and constructive criticism and suggestions for improvement are made. The student may have developed multiple videos, or experimented extensively before finalising the product.

Criteria and standards, expressed here using the SOLO Taxonomy (Biggs & Collis, 1982), both take careful and considered design by the teacher, and must be communicated effectively with students.
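For teachers who keep digital marking records, here is a minimal sketch (all names hypothetical) of one way to record criteria and SOLO-based standards together, reflecting the distinction above: the checklist alone doesn’t determine the grade; the standard does.

```python
# A minimal sketch (hypothetical names) of a marking record that keeps
# criteria (a checklist) separate from standards (a SOLO level).
from dataclasses import dataclass

SOLO_LEVELS = ("pre-structural", "uni-structural", "multi-structural",
               "relational", "extended abstract")

@dataclass
class Judgement:
    criteria_met: dict   # e.g. {"meets purpose": True, "smooth frame rate": False}
    solo_level: str      # the quality/complexity judgement, one of SOLO_LEVELS
    notes: str = ""

    def __post_init__(self):
        if self.solo_level not in SOLO_LEVELS:
            raise ValueError(f"Unknown SOLO level: {self.solo_level}")

# Meeting every criterion doesn't by itself warrant a high grade; the
# SOLO level records the quality and complexity of the performance.
work = Judgement(
    criteria_met={"meets purpose": True, "smooth frame rate": True},
    solo_level="multi-structural",
    notes="Consistent rationale and product; no alternative designs explored.",
)
print(work.solo_level, "-", work.notes)
```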