What use are Shakespeare’s plays? Back in the day, when my wife and I were dirt-poor arty types and lived in a hovel that declined the profligacy of doors, a two-volume hard-back edition of his collected works proved a handy anchor for the bedroom curtain.
Later, when I became a graduate student, I found the single-volume paperback version a convenient support for my laptop. Also, I once landed a job because my boss had an enthusiasm for Shakespeare’s history plays, and when asked at my job interview what I was reading I could say “the history plays”. So: a doorstop, a computer rest, a facilitator of gainful employment.
But such applications of Shakespeare are probably not what most consider his obvious use. His dramas are among the most profound and astonishing artworks we possess, and it is these qualities that cause us to value him. If he supplies handy visuals for holiday postcards or quotes for crossword puzzles, all well and good. But the main use of Shakespeare’s plays inheres in our experience of them as plays.
How to measure the use of those plays? How to measure the use of any creative arts? This issue, incidental for most people most of the time, is a matter of intense interest in certain quarters some of the time.
Measuring the creative arts
The Australian Council of Deans and Directors of Creative Arts (DDCA) recently held its inaugural conference. The DDCA is “the new national organisation, representing learning, teaching and research in the creative arts in Australia, with a membership of more than 22 universities and other higher education institutions”.
A day spent discussing the issues facing creative arts research in Australia saw persistent themes emerge. As every academic knows, the 2015 national research assessment exercise (ERA) is now underway. The introduction of new assessment categories in 2010 allowed “non-traditional outputs” to be put forward for the first time.
ERA has not yet replaced the Higher Education Research Data Collection (HERDC), an index which captures only some traditionally-published outputs. But ERA now clearly overshadows it, being more comprehensive and, it is hoped, a better measure of the totality of the research undertaken by tertiary institutions all over the country.
There were some impressive speakers at the conference and the tone was optimistic without being blithe or over-emphatic.
Paul Gough, a new Pro Vice-Chancellor at RMIT University, and a veteran of research assessment in the UK, gave a thoughtful, astute keynote laying out the features of contemporary indices and their likely future development. Tim Cahill, Director of Research Evaluation at the Australian Research Council (ARC), gave a thoughtful, astute response to Professor Gough.
Later, Professor Margaret Sheil, a chemist, made a celebrity-like appearance. Creative arts researchers have good reason to be grateful to the author of Reflections on the Development of Orthogonal Acceleration Time-of-Flight Mass Spectrometry: it was Sheil, as CEO of the Australian Research Council when ERA was established, who ushered in the new assessment categories.
Sheil is the artist’s ideal of a scientist: no-nonsense, impatient with “the reduction of everything down to short-term utility”, committed to intrinsic discovery, “quality projects done in a high quality way”. Hard not to cheer her on. I should have bought my football rattle.
There is no doubt the creative arts have done well out of ERA, that they now have a seat at the research table. What is more impressive is that a largely instrumental exercise – one about ranking and funding – has nevertheless promoted a greater degree of cross-disciplinary awareness.
Not only do the creative arts now “count” – their contribution is better understood throughout universities as a result of being counted. At least potentially. It all depends on how an assessment exercise is undertaken, the context of its application, and the values it serves.
Here it is important to proceed with some caution. Anyone who heard Cambridge University Professor Stefan Collini on ABC Radio recently should feel justifiable suspicion about the role and cost of research assessment indices, particularly those around the “impact” agenda.
Collini’s analysis of the UK “impact” experience is comprehensive, devastating and deep. The nub of his argument is that the consumer model of higher education has led to a switch from “specifying aims to measuring outcomes” and the recasting of research purposes into quantifiable measures of end-user satisfaction.
When applied to research, “impact” gathers “evidence of incidental byproducts or side effects”, data that is expensive and time-consuming to obtain, a poor proxy for quality and only tangentially related to the substance of a research project. Overall, impact is “a textbook example of the way a misconceived system of accountability can end up determining the character of the activity it is only designed to monitor and measure”.
Impact for Arts Deans
How then should the DDCA proceed? And what contribution can creative arts researchers make to the research assessment debate? These questions are both charged and complicated. But two things can be said up front.
The first is that those engaged in leading research assessments should consider their purpose and not simply their parameters. If the results of such exercises are to be used in puerile ranking games or by governments blindly intent on making across-the-board budget cuts, measures of research output are in bad faith, voiding the democratic principles on which they are founded.
The political analysis must be as sophisticated as the methodological one. Just because something can be counted, doesn’t mean it should be counted.
Following Collini, it is important to ensure that a research index honours the values it claims as its base motivation. Individual researchers have a duty to enforce a degree of bottom-up scrutiny. So far as the impact agenda in Australia is concerned, the next step is clear: we need more information about the UK experience before bringing its measures here.
The second thing to be said relates specifically to the creative arts. For a declining number of “real” academics, creative arts research remains an oxymoron. ERA’s great boon has been to reveal this for the prejudice it is. Creative arts activity can be “real” research, just as some traditional publications are so poor as to be unworthy of the name.
Now – this is something I hear quite a bit – the creative arts no longer have to engage in “special pleading”. Now they are “just like everything else”. But they aren’t.
Quite obviously the “uses” of the creative arts are particular, if “use” is the right word, which it probably isn’t. Collini talks about the fetishisation of quantification that feeds public debate in our contemporary, over-connected, hyper-competitive, market-mad society. What gets ditched in the scramble for attention and funds is anything long-form or unfashionable.
Ethical arguments, say, or reasons that can’t be put into a sound-bite or twitter-feed. To assess a Shakespeare play you need not a snap measure of use-value, but knowledge of blank verse, characterisation and Elizabethan staging. You need to know it to judge it, not the other way round.
Which doesn’t mean you can’t measure some indicators of, say, a production of Hamlet (“change in social behaviour in relation to the reduction of regicide”?). It does mean that such measures need to be placed in a context that broadly makes sense.
Here the creative arts can do a favour to other disciplines as well as their own. Because they so clearly need specific consideration (quite different from “special pleading”) they force governing authorities to come clean about their over-arching aims.
It is not only universities that hide behind methodology. Data is the new bunkum. Not only for the sake of qualitative inquiry, but for the sake of continuing faith in quantitative analysis, it is vital that the research assessment debate continue its dialogue with areas like the creative arts, which have their uses but will always elude precise numerical representation.
“Though this be madness, yet there is method in ‘t,” noted the wily Polonius. Sometimes, though, it can be the other way round.
Measure for measure: the creative arts and the ‘impact agenda’
By Julian Meyrick – Flinders University
Julian Meyrick does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.