Indulging in some meta-reflection: evaluating how we evaluate

A mentor once offered me this nugget during one of many iterations of my quarter-life crisis: ‘find where head and heart intersect…and stay there’. A rather lofty guiding principle. But it’s possible that through conducting evaluation research for Go-Givers, I have.

I’ve spent a good part of the last few months poring over children’s mindmaps, or, in the technical language of our recently completed evaluation report, ‘pupils’ pre- and post-topic assessments’. Using these humble representations of their minds, I have been trying to read the pathways and cognitive activity of 4- to 11-year-olds, with little expertise in what constitutes trajectories of progression in social, emotional and moral literacy. My intuitive self thinks that the 10-year-old grappling with issues of scapegoating, for example, who states that,

‘scape goating is where someone blam[e]s a person or a animal because someone who looked like them or is in the same religion or is the same animal did something wrong, so they blam[e] it on the other person/animal that didn’t do anything wrong’,

probably has a sound understanding of the stereotyping that underlies scapegoating, its more violent counterpart. OK, I admit he shows some confusion with the charming allegory in the Go-Givers lesson that features a character who happens to be, well, a goat. But his literal interpretation indicates to me that this pupil has internalised the concept, rather than regurgitating a soundbite from his teacher. I can tell, for example, that he has progressed further than his classmate who states:

‘I have learned that the goats got blamed for what the sheep did’,

who doesn’t appear to have moved from the specific (fictitious) examples to realise the bigger societal picture. But perhaps this response from another child does signal that shift:

‘scapegoat mean that if 1 child is bad everyone think all children are bad’

Certainly, this pupil has extrapolated from the story about the goat who is a victim of scapegoating to another vulnerable group in society. Perhaps she has herself felt the unfairness of this particular strain of stereotyping. None of these pupils were answering a specific question (as on a test-based assessment), but via free associations, all of them volunteered something about scapegoating that suggests varied levels of conceptual understanding.

This is the kind of intuitive ‘levelling’ that has guided my qualitative analysis of the ‘data’ in this evaluation, that is, the mindmaps that were completed before and after pupils engaged with Go-Givers resources around themes ranging from ‘anti-bullying’ to ‘rights and responsibilities’ to ‘sustainability’, among others. (See a sample of pupil mindmaps [pdf])

Teachers probably make these kinds of intuitive judgements every day to determine whether their pupils are progressing. But here in the office, far removed from the ‘field’ and its constant observable clues, our cerebral selves are supposed to take over. We are bound by sector imperatives to ‘demonstrate impact’, ‘measure social value’, ‘assess outcomes’, and so on. I don’t need to be convinced that if we are in the business of creating change, it is important to know if we are doing it, and doing it right. It’s equally important to know whether that change would have happened anyway (‘deadweight’ in eval speak) and whether it is indeed due to our intervention (‘attribution’). In short: do we need to exist?

But does a fixation with neatly standardised data, gathered to provide an evidence-based justification for the work we do, compromise our affective sense of what constitutes change? Conducting an in-house evaluation is certainly rife with questions of objectivity. But at the same time, with our institutional knowledge of our users, our acute awareness of the roadblocks to participation (in the programme and in its evaluation), and perhaps the somewhat childlike sensibilities that allow us to get ‘inside’ the heads of children, we (a team of ex-teachers, excepting myself) may be best equipped to take the wealth of evaluation best practice and make it age-appropriate, and of educational value, for our cognitively specific stakeholders: primary school children.

Our evaluative process has been riddled with questions I feel we have not satisfactorily answered: How do we know change has occurred? How do we know pupils’ intention will translate into action? How do we know the impact will be long-lasting? How can we be rigorous and flexible in methodological design? How can we make evaluation meaningful for the participants? But we have made some (qualitative) progress towards negotiating these tensions. (For a fuller discussion of the methodology, see p.5 of the full report)

Even though we may not be able to quantify the social value we generate, or scale and score the non-standardised data with any confidence that this would be meaningful, we may at the very least have come up with a way to visualise internal changes in our beneficiaries. The best we can do is observe, interpret and make judgements that justify the content and opportunities we are providing through our intervention, as teachers so often do.

Perhaps evaluating outcomes in children is precisely about being affective and analytical at the same time, about trusting our intuitive sense of what qualifies as progress. Perhaps, as a reflective activity for us as practitioners, it is ultimately where head and heart intersect.

Views expressed on this blog are not necessarily those of the Citizenship Foundation.