Saturday, February 04, 2012

The Myth of Peer Review--And Helping Make Academic Gatekeeping Work in Digital Environments

Peer review has long been something of an unexamined black box. Something is peer reviewed? We accept that it has been checked and re-checked, examined and tested. Just look at the way it is used in the popular media--"peer review" is accepted as reflecting a process of rigorous vetting. In situations of promotion, re-appointment, and tenure within academia itself, it is often seen as a necessary benchmark indicating importance. We imagine, without ever really examining it, that peer review is a carefully crafted and considered system. We imagine that it is a process of submission, consideration, reflection, and review--and it can be, in fact. The system of Kairos explicitly fosters just this, through a three-step process open to author, editors, and reviewers. [During the question-and-answer part of our recent MLA panel, Kairos editor Cheryl Ball did make it clear that "open," in this context, does not mean "public," but open to all of the participants--in contrast to "blind" peer review, where the names of author and reviewers are hidden from each other.] Most peer review, however, is neither carefully structured nor tailored to the improvement of submitted work. The peer review system was never a careful creation; it appeared only in response to need--a need that has now changed completely. Its value--as a whole, at least--is the result of myth, not fact.

The range of what peer review can be is extensive. From Kairos, with its deliberate process meant to help an author build something substantive, to pro forma rubber-stamping of what an editor wants, peer review can also serve a variety of purposes. For the most part, it's a rather simple system: an editor of an academic journal receives a submission. After an initial vetting (often of the researcher's credentials, the formal aspects of the paper, and the surface logic of the presentation), the editor (not likely to be a specialist in the particular area of the paper, though probably someone within the same or a related field) chooses from a panel of reviewers two or three people whose interests and expertise relate more closely to the paper under consideration. The reviewers, generally unpaid (as the editor often is), have been selected by the editor or editorial board for their willingness to review in a timely fashion and on the basis of their own publications.

Much of the time, the review is conducted "blind." That is, the reviewers have no access to the name of the writer, and the writer (when receiving reviewer comments) has no idea who the reviewers were. This process was established to guard against favoritism and against negative consequences for the author. Unfortunately, as with almost any closed system, it has led to just the opposite, and to an unwillingness to take risks on the part of those submitting to peer-reviewed journals--when you don't know who is going to be judging your work, and you need positive judgment for your own career, you are unlikely to stray too far from the standard line of thought in your field.

Though peer review has not faced the challenges that gatekeeping in journalism has (not until recently, that is--see my posts here and here), it probably worked as well as any system could within a milieu of scarcity--during a time when it simply was not possible to publish everything any scholar (or anyone at all) wrote, when the expense of publication required selection. Yes, it can be unfair, but it can also work quite well, when the editors, reviewers, and authors are able to cooperate with each other to improve the scholarly product. Personally, I don't think it should ever have been "blind," but I do understand the motivation behind even that.

When I said during my MLA talk that blind peer review is dead but just doesn't know it yet, that line was picked up for an Inside Higher Ed article. Since then, I've become something of a focal point of the debate over peer review, something I never expected nor really wanted. I've even been accused of being a technological determinist, though that was not my intent at all (I do not argue that technology is determining the death of blind peer review, but that it allows it to be taken off life support--the distinction is significant: the former makes the technology the driving force, the latter keeps the human in that position).

What worries most scholars isn't my claim. Most of us recognize the truth of it. What worries people is what will replace blind peer review. Are we in academia to face the tsunami of unfiltered information that seems to be today's internet without any way of determining what we should look at and what we can safely ignore? In asking this, we are following in the footsteps of journalism and of culture as a whole. What we are about to experience isn't new, but is something we are going to have to deal with--which is why I made my statement. The reality we have to face has already been dealt with (to some extent, at least) by journalism, where alternate means of gatekeeping are now in place in response to the explosion of 'citizen journalism' over the last decade. Blogging, for example, is no longer seen as a threat but as a tool that can be enfolded into any journalistic endeavor.

The old system of peer review, like the older versions of gatekeeping in journalism, is not effective within a digital environment where publication itself has ceased to be a nearly insurmountable barrier. We need to accept that ('peer review is dead'), but we haven't yet, not really ('but it just doesn't know it yet'). We shouldn't be arguing about that, nor is it worth our time to see the statement as one of 'technological determinism' or of anything else. It is simply a statement of observable fact, and a challenge for us in academia to decide what we want to bring to life in its place. I'm not advocating anything by pointing this out, merely illuminating what should be obvious.

There's quite a bit going on today, activity that will lead to a replacement of blind peer review. Some of it has rather surprising outcomes, but it is all a question of experimentation, of learning. In some places, journals like Kairos are trying to take the older model and rebuild it for a digital age, making it more responsive to changing needs and more effective in promoting scholarship. Others, such as academia.edu and researchgate.net, are trying to use social-media formulae as a means of sorting and presenting academic work. Science Works Magazine (soon to be Science Works Journal) is trying to use a blog format and a new business model to provide both gatekeeping and sustainability. There is much more going on, of course. All it takes to find it is interest and a little bit of time on the web.

In addition, in many academic departments, peer review is retreating as the standard for re-appointment, promotion, and tenure. In its place, scholars are asked to provide a broader grounding for proving the importance of their work. How many subsequent works cite the article, chapter, or book? What other reactions have there been to it? What is the distribution of the journal (just calling something "peer reviewed" never should have been sufficient--and is less and less so every year)? How many libraries hold the book? Who else contributed to the series, the journal, the volume--and what is their status? Rather than relying on a poorly understood outside process, departments are asking individual candidates to provide specific defenses of each piece of work presented.

The old system, no longer sufficient, is being replaced. There's nothing we can do about that. If any of us wants to have an impact on the new system, we can't be spending our time defending what is already doomed. We can't even spend our time addressing the more obvious problems of peer review. That's like trying to patch the most rotten part of a roof when it is time for the whole thing to be replaced.

What we have here is the old situation where, if you are not part of the solution, you are part of the problem. The sooner we accept that the old system of peer review is not only unsustainable but is, in fact, dead, the sooner we can all help the various experiments in replacement along to the point where academic gatekeeping is productive and supportive, furthering scholarship and promoting it, never narrowing it.
