Who should teach? And who should decide who teaches? What should the learning environment look like? And who should decide how it looks--and should there even be just one “look”? These old questions came to mind today when I read Marty Nemko's October 29 article in Time, "Why Ph.D.s Shouldn't Teach College Students."
I went to college at the tail end of a period of experimentation in higher education, graduating from Beloit College in Wisconsin (after attending two others) in 1973. The Beloit Plan, an attempt to step aside from the rigid structures that had defined higher education for decades, had drawn me to the school. The plan divided each year into three full (15-week) semesters, with students expected to be on campus continuously for their first three and their last three, leaving a flexible five-semester period in between (what would have been the third semester of senior year wasn’t included, keeping the time frame from entry to graduation in line with other colleges). A student could choose to take four semesters of classes on campus in a row, then take a year away to do something else, and then come back for a final four--or any variation. The only thing each student had to do was complete a “field term,” working at an approved internship or job for at least one semester and writing a substantial report on the experience (I worked as a “copyboy” in the newsroom of The New York Times). There were other changes, including attempts to move away from the “Carnegie hour” and to give students greater control over their programs (I was able to design and teach a course on Science Fiction during one of my last semesters--an experience that would prove quite valuable later, when this Philosophy major finally got around to going to graduate school in English and completing a dissertation on a Science Fiction writer).
Financial problems (among other things) led to the abandonment of the Beloit Plan later in the seventies. But the plan was the reason I went to Beloit, choosing it over Antioch (which had an even more robust "work" program) because I knew I was not disciplined enough to succeed in the even more student-driven environment of Yellow Springs. There were other alternatives to the traditional college model, including the “great books” school, St. John’s College in Maryland; Goddard College in Vermont; and Friends World College, with campuses on Long Island and in Japan, Kenya, Costa Rica, and elsewhere. Some of these are still around and still working to provide alternative paths to the baccalaureate, but their numbers are sadly diminished.
There were also experiments going on in what today we would call “sandboxes” within many colleges and systems, though few of these remain, either. Individual teachers, in addition, were experimenting with new models of learning, including Fred Keller at Columbia, whose 1969 article “‘Good-Bye, Teacher...’” still influences how I envision the perfect educational situation at the college level.
It seemed, at the time I entered college in 1969, that higher education was on the cusp of real and universal change: the development of a panoply of possibilities able to meet the divergent needs of multiple sorts of learners. Instead, thanks in part to the economic turmoil of the 1970s and the subsequent cultural turnaround of the 1980s, we have a reduced vision of education at all levels today, one driven by “outcomes” and “accountability” instead of by learning.
Nemko provides, accidentally, a nutshell explanation of why these events have led to such limited breadth in education today, and to so little real contemporary experimentation. Nemko is a “career coach, writer, speaker and public radio host specializing in career/workplace issues and education reform” with a Ph.D. in “education evaluation” from UC Berkeley. Though he has some teaching experience, his main focus seems to be on “results,” not on learning. In other words, he is a reflection of the growing mania for statistical verification and standardization that has driven education in the United States at all levels since A Nation At Risk in 1983. He has created what he calls a College Report Card that he thinks should be front-and-center on all college websites:
- Results of the most recent student-satisfaction survey.
- The most recent report by a visiting accreditation team (for a college to retain accreditation, a team of experts periodically visits for a few days and writes a report listing the identified strengths, weaknesses and recommendations).
- The four-year graduation rate.
- The average four-year student’s growth in writing, analytic reasoning and mathematical reasoning (many institutions use a standardized exam like the Collegiate Learning Assessment).
- The percentage of students who graduate with their intended major who are professionally employed or in graduate school within six months of graduation.
Frankly, I can’t think of many things that seem so innocuous yet would be more detrimental to actual learning on campus, or that more directly reflect the wrong turn American thinking on education has taken.
Student satisfaction, for one thing, should not be a goal of education. Believing it so is part of the fallout of a mistaken belief that business models can be useful in education. They cannot be (except in aspects of running the institutions). I want my students to be engaged, curious, excited, focused… but I do not care if they are satisfied, as long as they are learning. That is quite different from my attitude toward my customers back when I was running a store. Sometimes, in fact, the process of learning requires dissatisfaction and discomfort. That is never true of customers in business.
Accreditation bodies are becoming more and more suspect as evaluators to those of us who care more about learning than outcomes. Over the past few decades, they have become drivers of pedagogical focus through an over-emphasis on assessment through quantification. They are also overly interested in things like graduation rates, which are not always the most useful tools for evaluation (City College of San Francisco, for example, which is having trouble with its accreditation body, is criticized for low graduation rates yet its mission is not always to move students to degrees but to provide educational possibilities for those people who may be more interested in courses than programs). Today, it can seem like the accreditors want to reduce education to entries on spreadsheets. Their evaluations can seem to have little to do with assessing the amount of actual learning going on.
Which is also the weakness of looking at the four-year graduation rate as a one-size-fits-all gauge. One of the nice things about the Beloit Plan was that it could allow students to take from three full years to five (or more) to graduate. With a one-semester field term and eight semesters of classes, a student could graduate almost exactly three years from matriculation. On the other hand, a student who found their field-term project particularly engaging could extend it to a year or much more without endangering their ability to return and complete their degree. Today, for many CUNY students (I teach at a CUNY campus), graduation in four years is an impossibility. A great number of them work and, even with financial aid, cannot afford the time to take 15 credits a semester, the number they would need for a four-year path. Others have family commitments that keep them from that pace. The idea that the institution should be evaluated negatively for the socio-economic situations of the students is, well, silly and counter-productive.
Let's move on to standardized assessment. In my field of English, it is impossible to standardize assessment, and one could even argue that trying to do so is counterproductive. Even the rubrics that supposedly can reduce writing to numerical scales are based on subjective judgments, making the resulting numbers, which can seem so “scientific,” meaningless. One, for example, asks that writing be scored by counting qualities like “minimal,” “some,” “adequate,” and “thorough,” and, on another dimension, “numerous,” “several,” “few,” or “very few.” How can one differentiate between these fuzzy terms on a numerical basis? It just doesn’t translate. Also, the standards that we do supposedly have were not established through any nationwide (or higher-education-wide) discussion but have evolved through the accrediting bodies or experts without review from outside (one of the same problems the Common Core State Standards face in regard to K-12). There’s an aura of arbitrariness to contemporary “standards” that not only makes them suspect but makes one wonder if single standards are a good idea in the first place. Though the idea of "standards" may be attractive, there needs to be care and consideration behind their development--and a broad discussion. None of this has happened.
The final of Nemko’s “grades” would be on what students do afterward. OK, but education, to my Dewey-saturated mind, needs to be much more than job preparation. In fact, that should be the least of our concerns in terms of evaluating the efficacy of education. When schools are evaluated on how their students do in the short run, as Nemko wants, they will concentrate only on that, producing graduates who can step into particular jobs but not the educated citizens who will benefit American society over the long haul. Education should be aimed at the future, not at filling immediate needs. When students are well enough educated generally, they can learn the specifics of a job quickly--and that sort of training should be on the shoulders of employers, not colleges. (Of course, I am not writing of certificate programs here, but of the baccalaureate degree that attempts something greater than simple preparation for employment.)
A report card like Nemko’s will do nothing to improve education or make potential students better able to make intelligent decisions on where to attend. It will only limit administrative focus to those particular areas he lists, making them, not education, the center of the college experience.
Nemko’s title, as should be clear by now, is perhaps more than a little misleading, and in respects other than institutional evaluation as well. Little college teaching, these days, happens at the hands of Ph.D.s; complaining about them is somewhat like beating a dead horse. Adjuncts, most of whom are not research scholars, shoulder much of the burden of undergraduate education. And many of us who are Ph.D.s follow, to some degree, the ideas developed through the likes of Paulo Freire and Keller’s Personalized System of Instruction, or the Mastery concept that grew from it over the seventies. At CUNY (and at City Tech in particular) we are constantly working to improve peer-led learning. Nationwide, digital possibilities are being explored as means of enhancing current educational processes or as alternatives. Rare is the “sage on the stage” that Nemko complains of, so rare that the phrase itself long ago turned stale. What Nemko claims as a “hidebound” situation is not that at all, but a constantly evolving process.
Now that, of course, may seem to contradict my initial point that experimentation in higher education has died down. What has changed, really, is the nature of experimentation, a change brought about by the type of mindset that Nemko himself represents, the mindset with a focus on outcomes and not on learning. Assessment and “standardization” (standards, again, imposed without appropriate discussion of the reasons for them or the reasons why any particular ones are chosen) have straitjacketed attempts at systemic experimentation; the experimentation happening today takes place on an individual level or within constraints and regimentation imposed by accreditation agencies (among other external forces) unlike anything imposed in the past.
This is why there are so few experimental colleges today, or attempts to break out of the box of traditional educational assumptions: We have forced education into boxes like those on Nemko's "report card" or on grading rubrics--and we won't let it out. We are more interested in ranking than in learning. What Nemko offers will only increase this.
I know: I haven't answered the questions I posed at the start. Who should teach? Everyone, but subject specialists can organize learning in ways nobody else can, so they need to have at least supervisory responsibilities. Who should decide who teaches? The collective faculty needs to have responsibility for who is deemed a subject specialist, but teaching itself is best based in the collective that includes students and faculty--and everyone in between. Who should decide how education looks? The institutions, working as wholes, need to do that--not outside experts but the people who will actually use the spaces. This, of course, implies that there can be no standardization.
Certainly not, if real learning is the goal.
(Originally posted on the Academe blog.)