Monday, December 31, 2007

Academic Commons [Updated]

Via Blogging Brande, I discovered Nick Montfort’s post on Grand Text Auto concerning open access and academic journals.

Let me start with a sober and analytic reaction: “Yippee!”

That out of the way, the battle between those of us who believe that the “commons” benefits everyone (in part, through what I call “the Grateful Dead effect,” from the impact of the band’s decision to encourage taping of shows and the trading of tapes—Steve Gimbel expands on this a bit on his blog) and those who believe the same about “ownership” extends well beyond academia. It has a particularly pressing and important connection to education, however, for decisions about ownership have an immediate and fundamental impact on what we can do in the classroom. And on the place of our research in the wider society (Montfort's immediate concern).

Montfort writes:

I think there must be a few things that those of us who are part of the scholarly publishing process can do to foster an open-access future. The easiest thing that I’m able to think of is simply not volunteering our labor to lock academic writing away from the public.

It’s ironic: We who most need unfettered access to knowledge are sometimes the most protective of it. Or, as Montfort describes, we allow ourselves to be used by a process that results in what he calls “anti-publication”:

It may serve some credentialing purposes and help universities assess tenure and promotion cases, but it ends up restricting access to scholarly work rather than helping to publish that work, that is, helping to make it available to the public.

Yet many of us, even proponents of what Brad DeLong calls “The Invisible College,” continue to support this process through our writing and reviewing. With that in mind, Montfort sent this email to one journal that had asked him to review:

With regard to your request, I cannot agree to review for your journal right now. If [it] becomes an open access journal, I will be very glad to review articles for the journal.

“Bowerr” of Blogging Brande extends Montfort’s call a little bit:

I've grown more and more irritated by big journals, and I will be making subscription changes this year.

Montfort ends with this:

I was also thinking that those of us who are academics dealing with digital media have the chance now to determine whether we’re going to become one of those public-irrelevant fields where anti-publication is the norm and we speak only to ourselves, or whether we want to speak to and learn from those creating and encountering poems, games, art, drama, writing, and other sorts of digital work outside the university.

For us to speak effectively outside the narrow confines of educational institutions, we in academia need to start moving publication of our work online and into venues we create ourselves (without institutional backing), developing a web of reference and insisting that evaluation for promotion no longer rest on the “quality” of the publication venue but on the work itself. We can even help establish the value of particular works through our links and comments. This is already going on, of course (even this post is part of it--in a very small way), but we've yet to reach the 'tipping point' that will bring us to academic respectability.

Idealistic? Yes. Difficult? Yes. But it is happening--on blogs and in new types of open-access online journals. As Montfort says, without it we will never establish the dynamism and relevance in the wider world that so many of us want. Even as scholars, after all, we desire to be of the world, not simply about it.

[Update: Sherman Dorn makes the point that what Montfort is advocating is simply divisive, setting up a dichotomy where a spectrum would be preferable. While I agree, I also see that Montfort's position is one in need of loud expression, if it is going to have any impact at all.]

Sunday, December 30, 2007

"Stop!" He Says, But Stop Them He Can't

The creators of The Jazz Singer (Alan Crosland, 1927) extended the concept of the sacred far beyond the religious. They applied it to all cherished beliefs, including the one their film would destroy. There were many in Hollywood who believed the art of film would die if conversation via sound were added. Seeing this worship of the silent, Crosland (who had directed Don Juan the year before, with music and sound effects, but no talking) and Warner Brothers made a deliberate choice of vehicle for the first “talkie” (that is, a feature-length sound film with conversation). They decided to pick a story allowing them to use the new technology to accent both the point of the film and the point they were making about movies through the film. Recently, this came home forcefully as I watched the new, restored The Jazz Singer on DVD.

Though The Jazz Singer is “about” the pressures of a new and changing secular culture on a religious family and its traditions, it is also “about” the pressures technological change was placing on the film industry of the late 1920s and on its traditions.

In the movie, a cantor has raised his son to carry on as the sixth generation following the father's path; the son, however, has felt the allure of jazz, turning from religion to a secular belief equally strong. But that’s not the whole of the film: Just as the son feels pressure to respect his father's wishes even as he turns irrevocably to jazz, the makers of the movie show that they also appreciate and understand the elders they are out to replace, those who saw the advent of sound as debasement of the art of their sacred silent movies—even if they disagree. In both cases, the pressure for change elicits response and the movement towards the secular appears irrevocable but—and this is the key point for my project—the tools of the secular are eventually turned towards new sacred ends. The religion of the cantor has not died within his son. [One of the ironies of the movie is that the new secularists in Hollywood could not do the same. They destroyed much of the old art rather than finding ways of incorporating it in the new.]

Given the momentous nature of the change The Jazz Singer has come to represent, it is not surprising that questions concerning its content are often given short shrift, the importance of form dwarfing consideration of content. Add to this the movie’s unfortunate utilization of race, and avoidance of its content becomes understandable. It is true, as Corin Willis writes, that:

The Jazz Singer is not a film which generally features in discussions of cinema art. In the 2002 Sight and Sound poll of the 'ten greatest films' it failed to attract a single vote from the hundreds of directors and critics consulted. The meaning that the film has, its place in popular memory, arises from the material conditions of its production, from its status as the first talking picture. In more recent times the sense of The Jazz Singer as a film defined by its context has been compounded by the attention which has been paid to Al Jolson's use of blackface. [“Meaning and Value in The Jazz Singer” in Gibbs, John and Douglas Pye, ed. Style and Meaning: Studies in the Detailed Analysis of Film (Manchester: Manchester University Press, 2005), 127]

The movie's unique place in the history of film, compounded by a racist aspect anathema to modern scholars, has left it an unexamined marker rather than a contributor to any discussion of content, let alone to examination of questions of the impact of technology on the sacred and the secular. Yet, aware of the significance of what they were doing, the filmmakers certainly did incorporate the context of the change they were exhibiting into the content as well. The coupling they produced deserves attention it has never had, especially as an exemplar of future couplings of secular technology and the sacred.

Specific to discussion of the sacred and the secular within the context of a changing entertainment and communications media is the placement of speech in The Jazz Singer. Speech “appears” only in two of the movie's scenes. In the first, speech starts with the famous “Wait a minute, wait a minute: you ain't seen nothin' yet” and is followed by instructions to the pianist... about the “jazz” Jolson's character wants played. The scene explicitly links the secular with technological innovation, leaving the earlier silent, caption-carded films as part of a sacred tradition that, while never disparaged in the movie, will clearly have to change to survive. The second scene encompasses a conversation between Jolson's character and his mother (though she hardly gets a word in). Here, he talks of the future, imagining vistas of success and joy—just as did the promoters of new sound movies. The conversation ends with Jolson playing a jazzy tune (connecting back to the first instance), only to be interrupted by the entrance of his character's father, who shouts his one audible word of the movie, “Stop!”

The way the word is utilized shows that even the cantor (as a representative of religious tradition) is going to have to move into this new world whether he knows it or not, whether he likes it or not. His only way of resisting the change bedeviling him is, ultimately (says the movie), to use what change offers. He will fail if he says nothing; jazz, having invaded his home, will stay.

By allowing his word to be heard instead of shown on a caption card, director Crosland undercuts the very command, making it a part of the world the cantor is trying to resist. This, in terms of the dual themes of technological change and change in the family from the sacred to the secular, is perhaps the central statement of the movie. The cantor must say “Stop!” But his word emasculates his desire. This genie cannot be forced back into the bottle, no matter the belief or the passion.

The tensions between the sacred and the secular go much further in The Jazz Singer than that shown by the analogy to the change to sound film. Jolson's character’s use of blackface becomes a metaphor for the movement from a sacred Jewish life to one where his character is “passing” in a secular culture he wants to be a part of, but that is not, in at least one essential aspect, his. Not yet, at least. The stage is set for this earlier in the movie, when the character’s mother worries that a woman mentioned in a letter is a “shiksa.” She is advised not to judge on the name, that many Jews in theater have changed their names, anglicizing them, “passing,” as Jolson's blackface, applied while he speaks of the Jewish “race,” makes clear that he will do, too. As he already has, changing his name from Jakie (probably “Jacob”) Rabinowitz to Jack Robin.

The point of the example of The Jazz Singer here is that, way too often, consideration of the impact of technological change is conducted within too narrow a framework. It wasn't simply the technology that needed changing in the movies of the late 1920s, but attitudes towards technology, and even towards art. Because there is a constant, long-term tension between the sacred and the secular, its use as a metaphor for tensions in other areas assures a lasting impression, understandable to almost any generation. At the same time, technology, generally first embraced by the secular, consistently moves towards utilization for sacred purposes. By refusing to look at just the technology, or at just the secular and the sacred, we can shed new light on even eighty-year-old movies and on technological movements that continue today.

Technology, Culture, and Development

The debate over Nicholas Negroponte's One Laptop Per Child (OLPC) reinforces my belief that few of us willingly recognize that technology alone can never be an answer for anything. Technology works within contexts of culture and need, not by itself. That is, it has to address a perceived need and to operate within a milieu of cultural acceptance. Only then can it be of use.

There are people I know who would be insulted, were I to go out and buy for them a new flat-screen TV—not because they wouldn't appreciate the idea of a gift, but because they believe that a friend should recognize that such a TV is not part of what they want as their lifestyle. Most of us have learned to respect such individual differences, even if we can't imagine not getting our daily dose of The News Hour. Yet, for some reason, we are not willing to allow difference and choice outside of our own cultural context.

Yes, few of us are willing to extend the same courtesy we give to individual friends to other cultures... especially if those cultures are poorer or less developed than our own. We hold an assumption that our wealth and technical skill provides us with knowledge about what other people should want or should do. In addition, we would never say to the friend appalled at the TV set we've brought over, “Just keep it; it can't hurt,” yet I have heard many say exactly the same thing about the OLPC computers.

Anytime we wish to work with someone to improve their life, we need to start with a number of basics. First, we need to make sure we are dealing with each situation individually, not claiming that one size can fit all. Second, we have to make sure that the person wants our help. Not everyone, even in the direst of need, does. Third, we have to work with the person we are helping in making decisions about the aid. Fourth, we need to follow their lead when the actual activity starts.

The same process needs to hold true for development. Yes, it can be frustrating to “know” what may be good for someone else or some other culture, but (just as we have learned in our personal lives) forcing something on someone never really works.

A friend of mine, when we were in Peace Corps, wanted to give a gift to a family she had stayed with. Their home seemed barren, so she bought them something to brighten it up. The parents, when she gave it to them, reacted almost angrily. They knew how much the item cost, and what they could have done with that much money. They saw the gift, though well-meant, as a waste... for indoor decorations had never been part of their culture, where the rooms of a house are for sleeping and the real life takes place in the courtyard. My friend had not yet learned that. She didn't yet know enough either about how the people lived or about their gift-giving tradition (gifts are usually simply of money, of food, or of service) to “give” appropriately.

Before Christmas, my girlfriend picked out a letter to Santa from the Post Office in Manhattan. It requested clothing for four children. We bought things in the requested sizes and delivered them, staying for half an hour, drinking juice and chatting. There was no awkwardness, though the mother spoke very little English; the items had been requested (and the request had been the family's idea) and we were merely providing what was asked. Still, we were not simply unquestioning deliverers: All of the articles of clothing were things we and the family would all agree were needed by the children.

OLPC, on the other hand, may really be a gift not asked for and not wanted.

My problem with the OLPC project has nothing to do with the technology itself, though I do question the advisability of making a version of anything specifically designed to be “given” to poor people. Sure, there should be cheap computers, but make them cheap and useful for everyone, with no distinction between the one I might buy and the one you might give away. That aside, my real problem with OLPC is that the “need” it addresses is not one expressed by the people it is meant to “help.” It is a “need” discovered in the think tanks and universities of the developed world.

Furthermore, OLPC is constructed as a “mass” product, one design being “effective” in all cultural situations on all continents. But cultures are as different as individuals, and products “for” them need to take into account the particulars of each situation. Cell phones and other products already incorporate cultural differences. Why not OLPC?

The supporters of OLPC assume that anyone would want the product. Why not? They want it themselves and are delighted as they play with the commercial version that has recently become available. But, were they to look beyond their own desires, they might find it is not quite so wanted everywhere. Unlike a cell phone, this laptop, though very small, is not easily hidden. Though its $200 price tag may seem small to us, it is not small everywhere. Ownership of such an item brings with it responsibilities—ones that a particular individual may not want to take on.

There are other reasons why people in certain areas might not want the laptops, seeing them not as “help” but as items tying them into a monetary economy. Perhaps political reasons will keep them from wanting the laptops. Whatever. The decision should be theirs to make, not ours to assume.

Even if a product can be made that fits specific needs and is wanted by the people of that culture, one really does have to spend time with the people of the culture on crafting the product so that it will actually be used—so that it won't become just another in a long series of doorstops (assuming it's in a culture with doors). In development situations, what the aid worker brings is knowledge of the technology and of the developed world. What she or he does not have going in is an understanding of how the technology might meld with the particular culture. That takes time, and takes being on the ground.

To succeed, such a product cannot be introduced massively. It needs to expand as its use by people in the particular culture expands. The OLPC is meant as a classroom product. Yet it won't work well if simply distributed to schools. Giving one (after determining that people want it, after customizing it to the particular culture) to each classroom rather than to each student, then assessing—with the teachers, the parents, and the school's administrators—its impact before introducing more... constantly watching what is done with the laptop, constantly listening to all the people involved... that's how to make something like this work.

If, again, it is wanted.

In most places of extreme poverty, there's no infrastructure for making something like OLPC work, and it really would not be wanted. It would be received with the attitude of the family my Peace Corps friend visited.

A student of mine recently visited the “campo” in the Dominican Republic where her father had grown up. She describes the local school:

the windows and doors were broken and the paint was chipping off. When I went inside it looked even worse. The classrooms were crowded. There weren't enough desks and the blackboards were so small. I even heard some of the kids saying that they were going to get to school early the next day just to get to one of the few desks that are in good condition. Some of the kids had no shoes on and their uniforms looked faded and some of them even had outgrown their uniforms.

No computers will solve the problems of that school.

In fact, they could hurt. How would the computers be protected? Few schools in impoverished areas have any security at all. The collateral damage that comes with theft could destroy what little there is of the schools in the first place. Would they, instead, be distributed to the students to take home? “Home,” in areas of poverty, is no place of security. And what would the student's responsibility be if the laptop were stolen? Just give her another, and you may simply be starting a conduit onto the black market. Make him pay, and you might bankrupt the family.

More important than the OLPC project itself is the larger attitude towards development that its proponents express. Having spoken (until recently) only to those with development experience about development issues, I was shocked to find how naïve the majority of even well-educated Americans are about it. Most of us assume that the arrival of goods and food and tools—along with the people to “teach” how to use them—is all that is needed for successful development. We tend to forget that poor people are not stupid or unable, but are generally in traps they have no way out of. They may have moved to a city, having heard of plentiful jobs—only to find nothing for them and no one wanting them. They may have farmed the same land for generations, but now desertification is overtaking their fields, forcing them to buy fertilizer (among other things) that once they didn't need, pushing them over the line from sustainability to poverty. For the worker, what is needed is an economy that provides both work and a sense of a future with work. For the farmer, what is needed is a means of improving efficiency without increasing debt. In both cases, needs extend well beyond what a laptop can do.

Development works, if its projects are designed on the ground and within a stable political situation (though that's another topic for another time). It does not help, however, to have people from far away, no matter how well-meaning, deciding what is needed.

It also doesn't help to institute projects that require extensive outside funding. The problems of corruption, for one thing, can hinder this project just as they have so many others—and these problems are as real as they are pernicious, sometimes making the project more damaging than helpful. But that, too, is another topic for another time.

Wednesday, December 26, 2007

One Laptop Left Behind

What follows is an edited excerpt from my new book, Blogging America: The New Public Sphere. I’m presenting this passage here because of continuing talk about how Nicholas Negroponte’s One Laptop Per Child (OLPC) is going to save education in the developing world (when it really is nothing more than a new and fascinating toy):

Technology alone has no impact. It needs understanding, acceptance, and a place in a plan towards a goal. It almost seemed, though, that the United States came to believe after World War II that technology alone could solve any problem. Many, of course, even in the fifties, recognized the weakness of this view and understood that industrial might alone (something else many Americans had come to believe in the wake of World War II) would not prove sufficient to improve the world. Among them was Philip K. Dick, whose 1963 novel The Man in the High Castle contains pieces of a science-fiction novel written by one of its characters. One of those passages goes like this:

Only Yankee know-how and the mass-production system—Detroit, Chicago, Cleveland, the magic names!—could have done the trick, sent that ceaseless and almost witlessly noble flood of cheap one dollar (the China Dollar, the trade dollar) television kits to every village and backwater of the Orient. And when the kit had been assembled by some gaunt, feverish-minded youth in the village, starved for a chance, of that which the generous Americans held out to him, that tinny little instrument with its built-in power supply no larger than a marble began to receive. And what did it receive? Crouching before the screen, the youths of the village—and often the elders as well—saw words. Instructions. How to read, first. Then the rest. How to dig a deeper well. Plow a deeper furrow. How to purify their water, heal their sick. Overhead, the American artificial moon wheeled, distributing the signal, carrying it everywhere… to all the waiting, avid masses of the East.

Today, there are still people who have such idealistic visions… such as Nicholas Negroponte, with his One Laptop Per Child project. They forget that it is not technology alone that drives cultural change or creates new worlds, but the interaction between the old and the new—between, to use the image created by Henry Adams, the dynamo and the virgin. As Adams wrote, “whatever the mechanicians might think, both energies acted as interchangeable force on man, and by action on man all known force may be measured.” Like the old and the new, the machine and belief (culture) are inextricably linked. It is foolhardy, therefore, to imagine that Negroponte’s $100 (now $200) laptop will be clasped to the bosom of Africa (say) with the passion with which the continent has embraced the cell phone.

The machine is being presented as the discrete answer—something it never has been and never will be. The African writer and editor Binyavanga Wainaina wrote an article for Bidoun in which he explains quite clearly the problems with the Negroponte vision. His take on the subject dovetails perfectly with my own, one built during my own four years in West Africa, and makes me understand an incident that happened while I was teaching at the University of Ouagadougou in the mid-1980s.

A Dutch physics professor at the university developed a seminar for secondary school science teachers to show them how to make use of available items for experiments in their physics classrooms. Though the experiments he had designed were ingenious and could have been quite effective, the teachers rejected them unanimously. “What, we don’t deserve equipment of the quality found in your schools in the Netherlands?” they cried. “Aren’t you, this way, condemning us to a perpetual second class?” The Dutchman was devastated—but the African teachers had a point. The things made for the poor by the rich carry with them a slightly repellant odor.

Wainaina writes about the wind-up radios, rarely seen today in Africa, that were once all the rage in the developed world for the underdeveloped:

But Baylis's Freeplay Radios still exist. You will find them among new age fisherfolk in Oregon; neo-blue collar sculptors working out of lofts in postindustrial cities; back-to-earthers in Alberta; Social Forum activists and neo-Grizzly Adams types everywhere. Angst-ridden victims, all. But the enthusiasts of the wind-up radio suffer not from poverty or lack of information but from wealth, vague guilt, and too much information. They are the only people who can find nobility in a product that communicates to its intended owner: you are fucked.

Later in his article, Wainaina explains:

A wind-up radio. A magic laptop. These pure products are meant to solve everything. They almost always fail, but they satisfy the giver. To the recipients, the things have no context, no relationship to their ideas of themselves or their possibilities. A great salesman can spark a dialogue with you; in a matter of minutes, you come to make your own sense of his product, fitting it into your imagination, your life. You lead, the salesman follows. Whereas a pure product presents itself as a complete solution; a product built to serve the needs of the needy assumes the needy have measured themselves exactly as the product has measured them. … There are few useful "development models" for genuinely self-starting people. I am sure the One Laptop per Child initiative will bring glory to its architects. The IMF will smile. Mr. Negroponte will win a prize or two or ten. There will be key successes in Rwanda; in a village in Cambodia; in a small, groundbreaking initiative in Palestine, where Israeli children and Palestinian children will come together to play Minesweeper. There will be many laptops in small, perfect, NGO-funded schools for AIDS orphans in Nairobi, and many earnest expatriates working in Sudan will swear by them. And there will be many laptops in the homes of homeschooling, goat-tending parents in North Dakota who wear hemp (another wonder-product for the developing world). They will fall in love with the idea of this frugal, noble laptop, available for a mere $100. Me, I would love to buy one. I would carry it with me on trips to remote Kenyan places, where I seek to find myself and live a simpler, earthier life, for two weeks a year.

Wainaina’s point deserves reiteration: much of the technology developed in the metropole for the people on the “fringes” (in the view from the metropole) fails simply because it was developed more for the image of the “simple fringe life” contained in the metropole and not for the life as it actually exists on those fringes. Only in following the desires of the people living that life can one develop products for them—not by deciding what they should want.

Wednesday, December 19, 2007

An Old Plaint, Renewed

There's a real lack of professionalism in our academic departments, an attitude that cares more for the niceties of bureaucracy and its rewards than for our academic goals.


What's so startling about that?


But that doesn't mean it shouldn't be fought, or that it doesn't deserve pointing out.

Two of the problem areas, release time and department service, should remind us of how we professors ought to be viewing what we do, as opposed to the way we are actually looking at things. Or, rather, the way some among us are actually seeing our activities (the majority of professors are conscientious and caring--but I'm not talking about them).

There's a grant program at CUNY that provides release time for research. A good idea... a necessary signal of support for scholarship, aside from the real help it gives. Yet faculty members are encouraged to apply not because it can really aid their efforts, but because it gives release time from teaching. That is, the goal becomes the lessening of the teaching load, not the assist to research. People are actually advised to create project proposals so they can get the release time. “If you do this,” people are told about many, many campus activities, “you can get release time.”

That's backwards. The release time should be granted to support activities that real professional academics would be doing anyway.

Another way that CUNY supports faculty scholarship is through generous release time over the first years of employment. I used six hours last spring to finish Blogging America and will use three this coming spring so that I can work on the conference Ann Seaton and I are organizing, “Race and New Media.” Thing is, I am taking the release time to help me do better what I would do, no matter what, not simply because I can.

The emphasis on release time also makes teaching seem a burden, not one of the dual centers of our professional activity. 'One should do anything to get out of teaching' seems to be the underlying assumption. Following from this, release time comes to be some sort of perk, frequently doled out by department chairs as a reward, the tasks sparking the release proving far less than onerous.

In some instances, people turn release time into private fiefdoms, making a sketchy yearly show of activity for what is really a way to reduce their workload. As a result, things that should get done don't get done, the release time actually becoming a roadblock where it was supposed to provide smoother pavement.

One of the side-effects of release time is that quite a number of more senior faculty are able to withdraw from intense involvement with their departments. Well, the actual number of these people may be small, but the impact can be large—for these people sit in judgment on promotion of their juniors. On people some of them hardly know at all.

This is one of the factors that has led to the ridiculously bloated reports on activities needed for promotion—some of them running to 600 pages or more. If the “peers” don't see one every day, don't know what one has been doing, they have to be informed somehow.

The reports, unfortunately, have become the driving force behind activity in many departments. “Do this, and you can put it down as department service.” “Save every piece of paper; you'll need them for promotion.” Here again, the cart is put before the horse. The report is supposed to be documentation of professional activities that should be a natural part of academic life. Instead, too often, the report becomes the end, the goal—and not simply a description.

This, coupled with the fact that some senior faculty members are so removed from departmental activities that they know little about their junior colleagues, can have tragic consequences.

I know one young professor, a leader of the junior faculty in his (very large) department, an informal mentor to all of the people who have followed him into the department, a man who has made a significant contribution to the way lower-level courses in the department are taught through his (often unseen) work on pedagogy. He was recently turned down for promotion, putatively because of a lack of department service.

His service, you see, could not be so easily documented. It was personal, and the personal doesn't fit into the bureaucratic, for it cannot be reduced to a form.

The tragedy is that his department may well lose one of its most dynamic younger members, making him a victim of bureaucracy where professionalism should have reigned.

Sunday, December 16, 2007

Gaming the Classroom

This past Friday, I had the chance to listen to a remarkable man for about three hours at the CUNY Graduate Center. His name is James Gee and he occupies the Fulton Presidential Chair for Literacy Studies, Curriculum and Instruction at Arizona State University, having followed the lure of the sun belt from the University of Wisconsin, where he was a central part of the Games, Learning, and Society group located in Madison.

Before his talk, Gee spent about two hours discussing his views with the CUNY-wide Writing Across the Curriculum group. His topic, in both venues, was “Literacies, Learning, and Video Games.” The auditorium was packed for the talk, with people sitting in the aisles.

I had to leave a little early (though I was able to return for the reception), for I had a phone interview for a position that could conceivably take me away from CUNY (I don't know how I feel about the idea of leaving, for I don't think I would ever again find as interesting a bunch of students as those at City Tech—we'll see; most likely, I'll stay put). My head was so filled with things Gee had said, however, that I don't know that I gave the best responses to the patient and perhaps too perceptive questioners.

Gee's book What Video Games Have to Teach Us About Learning and Literacy is not one I am familiar with, but I have now ordered the revised edition that is due out next week. I wish I had known Gee's work earlier... and I probably should have. However, in terms of scholarship, I tend to dive into pools I know nothing about, learning the depth and width of the water even as I learn to swim. In other words, I don't write about what I already know, but about what I want to learn.

Much to my delight, Gee said he operates in much the same way. It carries danger with it (missing someone as significant as Gee, for example), but it generates a level of enthusiasm and attention that “expansion” over and from the same old ground never will. There's a place for both types of scholarship, but I'm always delighted to find that I am not alone.

It has only been six years, Gee said, since he turned his interest to video games (I think he should use the term “computer games” as well, for problem-solving and strategy games don't fit well with the public conception of the term “video games”—but, hey, I'm not the one who wrote the book).

Though I wasn't able to bring it up with him, Gee is moving towards a new conception of literacy (which I call “neteracy”) that encompasses a great deal more than pre-Web (and video game) literacy. He also understands that a lack of physical activity is not a sure sign that no learning is going on: watching can also involve learning.

What fascinated me most was that he is seeing possibilities in video games in ways similar to what I am seeing in hand-held communications devices. In both cases, the initial reaction is to keep such things out of the classroom. Yet they may prove the most valuable aids to education we've seen in a long, long time.

Saturday, December 15, 2007

Results: Cell Phone, 2.0/Laptop, 0.0

When I visited Senegal last April, seeing West Africa for the first time in more than a decade, I was struck by two things. First, even in an area of the world where gum wrappers had once been saved and turned to other uses, the plastic bag was now both ubiquitous and worthless. There’s no value in stockpiling them for future use: they cover the roadsides. This sad fact made me despair for the future of humanity (and I’m not kidding). Second, cell phones were everywhere. Once again (as they had with everything from bicycles to cassette players), Africans had seen a technology that they wanted—and weren’t letting poverty stand in the way of attaining it.

In an article recently republished by ePluribus Media, Bronwyn Hughes and I argue that the only way development assisted from the outside ever works is when the outsiders follow the lead of the locals. We wrote:

For a long time many people saw the developing world as an empty pit, pouring in skills, goods, and money to fill it. More and more, this pit appears bottomless, basic human conditions within it only worsening. Project benefits are not lasting. Little seems sustainable.

We shouldn’t have put that in the past tense, though those involved in on-the-ground development today do understand this (for the most part). The problem is that too many people sitting far away still believe they can solve the problems of the developing world through their superior thought, technology, and money.

In my recent book, Blogging America: The New Public Sphere, I write about Nicholas Negroponte’s No Laptop Left Behind project (well… let’s be fair: he calls it One Laptop Per Child) and about why it is no solution for the developing world. I quote Binyavanga Wainaina, writing in Bidoun, who says that the message of artifacts like Negroponte’s laptop to Africans is “you are fucked.” Negroponte fell into the classic trap of thinking he could find a solution for the problems of someone else—without consulting that someone else.

The embrace of the cell phone by Africans is a leapfrogging of the laptop, anyway. They have seen the future much more clearly than Negroponte, and have made the right choice. Writing in today’s The New York Times, Tim O’Reilly points out what he (and I, and millions of Africans, among others) has known for some time:

In the future, the cellphone and similar wireless devices, not the personal computer, will be the primary interface to the cloud of information services that we now call the Internet.

Not only that, but it will be the primary technological device for the classroom (see my diary and presentation on the subject) quite soon, replacing the PC even there. The “smart” classroom, along with personal devices, makes for flexible possibility far beyond anything a room filled with PCs can offer. And this is true for Africa as much as for Atlanta.

Like the phone companies that O’Reilly writes about in his piece today, schools have to open their minds to a more expansive view of the cell phone and its possibilities. Instead of banning cell phones completely, as the New York City public schools do, we need to find ways of bringing them into the classroom in a non-disruptive manner. We college professors need to do the same thing.

Thursday, I walked into my last class of the semester for one of my Composition I sections—my new iPod Touch in hand (I’m too cheap to pay the monthly fees for the full iPhone). One of my students stared at it and marveled:

“I never saw someone so elderly with something like that.”

I told him he’d better shut up, if he expected to pass the course.

Thursday, December 13, 2007

Evolving to Neteracy: Comportment on the Web

One development in the evolution towards neteracy is a broader appreciation of what it means to write without editing. That is, most bloggers and readers of blogs have become quite forgiving of the type of mechanical and grammatical errors most of us miss when going over our own writings. They are reading for content and not to pick apart the prose; they recognize that the mistakes are commonplace and (dare I say it?) insignificant. In the past, in a literary culture dominated by the immutable page, this attitude was somewhat rarer. Today, most neterate people recognize that, while the printed page is a finished product, the blog is always a work in progress, reducing the importance of much “error.”

Of course, the Web has reduced the space between the author and the publication to something close to zero. Over the past centuries, we had developed an editorial apparatus that most writers had to utilize. F. Scott Fitzgerald couldn’t spell? So what? There was a structure of editing, copy-editing, and proofreading that made sure his “errors” did not become part of the printed version of his work. Today, even in organizations that retain their editorial structures, the willingness to alter what a writer has produced has dwindled.

There are, however, lessons of the literate culture that we are having to relearn on the Web, having jettisoned a little too much of the old in our embrace of the new. David Newquist, writing for Keloland TV’s Northern Valley Beacon, in looking back over the changes in journalism over the past two centuries (I found his piece because he mentions The Rise of the Blogosphere), points out that:

Our nation reached a point when the literate population understood that anger, malice, falsehoods, scurrility, and defamation are destructive and dangerous. They are counter-productive to democracy. It has reached the point again in this electronic age when those qualities form an unabashed part of what passes for discourse and often predominate. A significant sector of the public likes to see people humiliate each other, debase themselves, and otherwise participate in the degradation of humankind. We have developed a media to serve that audience.

This is an important topic for discussion, something that Tim O’Reilly has been trying to address with his Blogger’s Code of Conduct.

Over the past year, I have seen (or imagine I have seen) an increase in civility on the Web, which I attribute to two factors. First, more and more people are finding they want to present themselves online in their “real world” persona, stepping out from behind their pseudonyms. Second, people are recognizing the dangers in the type of climate Newquist describes, and are discouraging the behavior that once did seem to be becoming the standard for the Internet.

We’ll see. Perhaps I’m imagining things when I sense a change. On the other hand, maybe we are maturing into a fully neterate world just as we did into a literate one, centuries ago.

Sunday, December 09, 2007

On Learning the Value of Protocol

As one returning to academic life after several decades away, I've had a lot of fast learning to do. One recent lesson has been on the importance of protocol.

From a different perspective, that is.

When you're the boss, when the company is one you've created and run yourself, you do put procedures in place—and hope that your employees will follow them. But you are taking a holistic approach—and are developing means to achieve specific and clear goals, or to solve immediate problems. There's no “institutional memory” involved... hell, you are the institution.

In a huge university system like CUNY (with 19 campuses) and a history going back some 160 years, the situation is... how can I say it... slightly different. Things that seem arcane and simply there to confuse can actually turn out to have a certain amount of merit. So, I am finding, it is useful to go along with the system (though, as a good leftist, I can't believe I'm saying that) until you have seen its effect and can judge its utility through full knowledge of the situation.

A year ago, during my first tenure-track semester at New York City College of Technology, I met with the colleague who had observed my class for the obligatory post-observation conference. He sat me down and handed me his report. I immediately pulled out a pen and made to sign it.

He stopped me. “Read it first.”

“Oh, I'll read your suggestions for improvement, but I know I'm fine with whatever you say.”

“Read the whole thing. That's how it's done. Just do it to humor me. After all, I had to write it. And if I can follow procedure, so can you.”

And so I did, and we talked for a bit about the class... and then I signed. The whole thing was relatively painless and, I have to admit, I got something from it.

A year later, my fictional friend Sam Stamper had a rather peculiar observation conference that he worried might have a negative impact on his career. He didn't keep quiet about it, though, and asked a number of people what to do, including his union representative.

The union person told him that any observation report by Professor Fayles concerning that particular class and conference would be grounds for a grievance if it ended up in his personnel file.

Why? Because Fayles had not done what the man who observed me had: she had not written up her report before the conference, violating the protocol set forth in our union agreement. The Collective Bargaining Agreement that governs many of our professional actions is quite explicit:

Each observer shall submit, through the department chair, a written observation report... within one week of the observation....

The department chairperson shall schedule the post-observation conference for the employee within two weeks after receipt of the written observation report. The post-observation conference shall include the employee and the observer. (Article 18, 2.a and 2.b)

Stamper, of course, immediately arranged for a second observation by another observer—one that followed protocol exactly.

When I was so cavalier about protocol last year, I was brushing aside as merely bureaucratic nicety what I now realize is important protection for both parties in an observation. The man who insisted we follow the rules, a veteran CUNY professor, knew exactly what he was doing and why.

By forcing the observer to write a report (and a report whose parts are clearly spelled out on the required form) before the post-observation conference, and by inserting another person (the chair), a brake is placed on the process, if needed. When we write, we edit ourselves, trying words out and seeing, sometimes, that what we had intended to say is inappropriate. Fayles would have benefited by writing first, most certainly. During the conference, she would have had something to refer to and probably would have stuck to it, never getting angry or accusatory.

Even if her writing had carried in it her frustration with what Stamper represented to her (a change in the department that threatens to leave her behind), the chair would have been able to intercept her report, keeping her from presenting it to Stamper. Fayles and the chair could have talked privately, giving her a chance to air her grievances and the chair an opportunity to try to bring her in line with what he is trying to do to create a broader and more dynamic department.

But Fayles, like I would have before this incident, decided to ignore protocol and do things her own way. The result was a meeting that did no one any good and an observation report that, when finally written, alit in no file but that classic circular one.

Thursday, December 06, 2007

Singin' John Wayne

One of the most frustrating things about the traditional book industry is the finality of publication. Once something is out there, it can’t be changed (unless, of course, your book does well enough to warrant a second edition). Three years after The DVD Revolution: Movies, Culture, and Technology appeared, I still get the occasional email pointing out that I gave a certain well-known director a new first name. There’s an even more embarrassing stinker in The Rise of the Blogosphere—and I just went through a copy of my latest, Blogging America: The New Public Sphere, combing it for the errors that slip through, no matter what you do.

One of the pleasures of reading books rather than just blogs or other flexible online entities is that books allow me to see that I’m not the only one whose fumbles bounce over the goal line to score for the bad guys in a final product. Better is when the writer is superior to me. Best is when that writer is also a scholar whose skill makes me envious.

Such is the case of the new book by Jeanine Basinger, chair of film studies at Wesleyan University and the author of numerous brilliant works of scholarship—all accessible to the average reader (quite an achievement, for an academic). The Star Machine (New York: Knopf, 2007) is a delightful and eye-opening look at the workings of the studio system in Hollywood, primarily during the thirties and forties. I’m only about a quarter of the way through it, for I keep stopping to look into something or other because of a point Basinger has made. I’m having such fun that I have no desire to hurry through.

But that doesn't mean it's a perfect work. On page 82 of the book, Basinger writes:

John Wayne spent a decade as a singing cowboy.

That made me whoop, “I’m not alone!”

My newest project entails watching quite a number of the 1930s “poverty row” Westerns where Wayne first excelled. I haven’t seen all of the movies he made at that time, but I’ve seen a lot of them. Oh, and I also have an interest in the “singing cowboy” phenomenon.

But, before I would let myself get too excited, I decided to check and see if there really was any way Wayne could be counted as a singing cowboy. After all, I did remember The Man from Utah (1934), where Wayne does seem to be handling a guitar during the opening sequence and something that could be deemed singing is going on—but the movie seems to have had second thoughts, substituting stock rodeo footage to fill things out rather than more “songs.” Could there be more, enough to characterize his career before Stagecoach (1939)—when he became an A-list star—as that of a “singing cowboy”? Let’s see… his first starring role in a Western was in The Big Trail (1930). Nope, no singing there! What about Two-Fisted Law (1932), where he plays “Duke” for the first time? Nope. Well, then… Ride Him, Cowboy (1932), where he rides Duke? I don’t think so.

What about other movies he made for Robert Bradbury, director of The Man from Utah?


Well, wait a minute. Bradbury certainly does have Wayne playing “Singin’ Sandy Saunders” in Riders of Destiny (1933) and dubbed a singer for him again in Lawless Range (1935). And there may even be a couple of others (Bradbury directed Wayne frequently). But, of Wayne’s more than fifty movies in the 1930s leading up to Stagecoach, there can’t be more than a couple I have missed where Wayne “sings”—not in any fashion at all.

But you knew that. Just as I did. Just as Basinger does. She would never argue that Wayne really was a “singing cowboy”—and certainly not for a decade.

Still, she will have to live with that claim, one that she certainly never intended to make, just as I will have to live with the knee-slappers in my books—though I, too, never intended to say what I did.

Whatever else may happen, I hope ‘the book’ never dies. Mistakes like Basinger’s (and mine, if I am lucky) can actually serve a positive purpose. I might never have looked into the attempt to portray Wayne within the “singing cowboy” craze of the mid-thirties had it not been for her—and I must admit to having learned something as I did (but that’s for my own next book).

Monday, December 03, 2007

Classroom Rights and Responsibilities

[Cross-posted from Free Exchange on Campus]

Way back in the 1970s, I heard an ad on the radio that got me so annoyed that I still remember it. The tag line was, “Your children have a right to have whales in their world.” It wasn’t the aim of the ad that bothered me—I was soon to be raising money that would be given to Greenpeace and other “save the whale” organizations myself—but the word “right.”

We hear similar things today, of course, ads telling us we “deserve” this or that. They can make the naïve believe that, simply because of who they are, certain things ought to happen to them or come to them. And without taking on responsibility.

You ask, what’s this got to do with the “free exchange of ideas”? Simply this: in all the discussion of student rights generated by David Horowitz and his Students for Academic Freedom, we tend to forget to ask just why students have these rights—or why they deserve them—or what they need to bring to the classroom in the way of responsibilities entwined with their rights.

Only a few rights can really be posited as inalienable—that is, existing beyond a compact. We Americans find these readily in the Declaration of Independence:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

The question, then, for the Declaration, lies in securing rights, not in establishing them. In most other instances, rights are based on agreement—on negotiation. Land ownership, for example, exists only because the community has decided that individual custodianship of land is in its best interest—and it has instituted an “agreement” with the individual that continued ownership is dependent on the meeting of certain responsibilities—payment of taxes, for one. The community, generally, also reserves for itself the right of eminent domain, meaning it can override individual rights for what has been decided is a greater common good.

When I was a college student (almost 40 years ago), student rights had not been established—not well, at least. Colleges believed that they must act “in loco parentis,” and they certainly treated students like children, setting curfew hours, banning items and visitors from dormitories, and much more. In classrooms, the professors could act with impunity; the power was all on their side. That changed, particularly in the wake of the Kent State killings, when students began to force themselves onto the governance apparatuses of their universities. Regulations tumbled, grievance procedures appeared, and the traditional concept of a ‘teacher-centered’ classroom was severely dented. Professors, now bound (to a certain extent) by student evaluations, had to find new ways of insuring order in their classrooms, ways that could not be construed as insulting or limiting students.

Some small amount of chaos ensued (or not so small, depending on your perspective), eventually leading to a certain reassertion of control by the institutions. The return to a 21-year-old drinking age, though legislative and not institutional, was one of the results of the desire to reestablish control over campuses. Though professors still cloak their clout in “community of learners” rhetoric, they have used the increasing importance of grades as a means of reasserting their dominance in the classroom.

One right that students did not gain—and do not have now—is that of academic freedom. Though students are affected by academic freedom, it is not their right, for that right arises through a compact between universities and their faculties.

But what rights do students have? Or should have?

One expectation that students have—and one that might be expressed as a right—is professional competence on the part of their instructors. This does bear on the compact of academic freedom, for essential to academic freedom is the idea that faculties be self-governing, that decisions about hiring and retention, about classroom and research duties, and about other professional responsibilities come from themselves and not from some outside force. The student expects that the faculty will take this seriously and not turn itself into something of an old-fashioned guild dedicated to protection of the insider against any and all external questioning, as, to some minds, was happening in the defense of Ward Churchill, formerly of the University of Colorado at Boulder. Whatever the value of defenses of Churchill on free-speech grounds, the fact that he lacked the competencies for college classroom responsibilities certainly became clear as his scholarly activities were investigated. The faculty that hired and promoted him had failed to do its job—and Churchill’s students were poorly served (that they may have liked him is irrelevant in this context), their expectation that the faculty insures competence unmet.

The self-governance that failed in the Churchill case has serious implications for the rights and expectations of students and is expressed, in the 1915 AAUP “Declaration on Academic Freedom and Academic Tenure,” as a:

freedom to perform honestly and according to their own consciences the distinctive and important function which the nature of the profession lays upon them.

That function is to deal at first hand, after prolonged and specialized technical training, with the sources of knowledge; and to impart the results of their own and of their fellow-specialists' investigation and reflection, both to students and to the general public, without fear or favor. The proper discharge of this function requires (among other things) that the university teacher shall be exempt from any pecuniary motive or inducement to hold, or to express, any conclusion which is not the genuine and uncolored product of his own study or that of fellow-specialists. Indeed, the proper fulfillment of the work of the professoriate requires that our universities shall be so free that no fair-minded person shall find any excuse for even a suspicion that the utterances of university teachers are shaped or restricted by the judgment, not of professional scholars, but of inexpert and possibly not wholly disinterested persons outside of their ranks.

Perhaps we amongst the faculty should be grateful that “fair-minded” modifies “person” in that passage, and that it specifically warns against outside influence. Be that as it may, the implication here is that students should be able to expect a certain level of competence from their teachers.

If this is a student right, it carries with it a certain responsibility. And that is to ‘suspend disbelief’ at least for the duration of the semester in order that all in the class can learn from the professor without the distraction of continual contentious debate. After all, the faculty has determined (supposedly) the competence of the instructor—and it is not the place of the student to second guess the faculty about a professor—not, at least, within the context of a particular on-going class. That is, it is the responsibility of the student to assume that the professor just may know what she or he is doing in the classroom, and to reserve judgment until after the end of the semester—or drop the class. Later, of course, or in a forum outside of the classroom, the student certainly should voice her or his concerns.

For exercising restraint in the classroom doesn’t mean that the student needs to give up the right to disagree—far from it. It simply means that the student must not act to impede the plan of the class. There are only fifteen weeks in a semester, and certain tasks do need fulfilling in that time. Out of respect for fellow students and for the faculty, no individual student should try to impede the necessary progress because she or he disagrees with the professor's contentions. This isn’t necessarily fair, but it is fairly necessary.

What we have here is, in a way, an equivalence to eminent domain. That is, there is a right to one’s own opinion or belief, just as there is a right to ownership, but if it gets in the way of the greater needs of class or community, that right can be set aside. The responsible thing to do is to accept that or to fight it through established routes (grievance procedures; the courts) rather than through disruption.

Though our media work hard to convince us otherwise, there are very few rights that do not come with responsibility. This is just as true for faculty, who need to make sure that loyalty to individual faculty members doesn’t supersede responsibility to the profession and to academic freedom, as it is for the student, who must take responsibility for classroom behavior in exchange for the right to both learn and express dissent.

Sunday, December 02, 2007

The Professional and the Amateur

Bill Keller, Executive Editor of The New York Times, recently spoke about the state of journalism. While his attitude is refreshing and his thoughts are generally on target, I do have a few nits to pick:

When Keller speaks of the founders’ view of “the press,” he elides the fact that the conception of “the press” at the time of the writing of the Constitution and (more significantly) the Bill of Rights was quite different from what it is now. There was no profession associated with “the press,” for one thing—“the press,” in the sense meant by the founders, was an entity of politics, not of news gathering and dispassionate analysis.

In writing that the press should be seen as “supplying citizens with the information to judge whether they are being well served by their government,” Keller ignores the absolutely partisan nature of the press in the early years of the Republic. He says he spends his time explaining “why the founding fathers entrusted someone like me with the right to defy the president.” Thing is, they didn’t. Cloaking himself in the mantle of the founding fathers is a disservice to history and, I believe, to the press of today.

Keller does understand the real problem with Bush’s “distaste for debate and dissent,” however, and has a gut understanding of what the founders were doing—even if he misstates their purposes for the aggrandizement of a profession that did not even then exist—and that is that the success (and sometimes failure, as in the Civil War) of our system is based on the tensions arising from dissent and disagreement. We need these, and “freedom of the press” was meant as protection for the necessary opposition, part of insuring that it always has a chance to oust the government peacefully. That the press, in some respects, has moved away from direct involvement in the political process makes its importance in this no less significant.

Keller implies that polarization is something new. I’d ask him to go back and read Jacksonian newspapers. We’ve always been polarized in America (my grandfather hated Roosevelt with all the passion of my own for Bush), but our news media “papered” this over for quite a long time, developing a mythology of consensus that never existed.

“The supply of what we produce is sadly diminishing. And the demand has never been greater.” But what do you produce, Bill Keller, that is diminishing? Aside from newsprint itself, what exactly of what you do is there less of now than in the past?

“Trustworthy information”? Really? When has newspaper information, for the most part, been trustworthy? Historically, only in a few cases has news-media information ever been consistently trustworthy.

“In other words, something is happening out there, and if we don't understand it, it's not just the newspaper business that is in peril.” But you don’t understand it, and the peril is real. Bob Dylan, way back in the 1960s, could have been writing to you:

You have many contacts
Among the lumberjacks
To get you facts
When someone attacks your imagination
But nobody has any respect
Anyway they already expect you
To just give a check
To tax-deductible charity organizations

You've been with the professors
And they've all liked your looks
With great lawyers you have
Discussed lepers and crooks
You've been through all of
F. Scott Fitzgerald's books
You're very well read
It's well known

Because something is happening here
But you don't know what it is
Do you, Mister Jones?

There’s something rather elitist in the attitudes you exhibit, Mr. Keller, just as there was for Mister Jones. The reason that you don’t understand what is happening is that you continue to insist on that artificial barrier between the professional and the amateur in journalism. That keeps you from getting out enough into the changing world. That keeps you from understanding just what is going on. It colors your perceptions, building a second barrier, one within you, that makes it all the harder for you to grasp what is going on.

“And at this time of desperate need for reliable news reporting, the supply is dwindling.” Really? Please show me a time when RELIABLE information has been in greater supply.

Keller claims that the professional news media deploys “worldwide a corps of trained, skilled reporters to witness events and help our readers understand them.” What he may be unwilling to face is that the aggregate of interested people around the world with access to the Web can provide the same thing—and maybe even better. He says, “The civic labour performed by journalists on the ground cannot be replicated by legions of bloggers sitting hunched over their computer screens.” Bloggers hunch over their computer screens no more than reporters do. To each group, the computer is a tool. Bloggers, the myth of the person in the basement never leaving notwithstanding, get out in the world every bit as much as reporters do—and many of them know their localities to a depth few reporters will ever attain. “But most of the blog world does not even attempt to report.” Well, Mr. Keller, this is also true of most of the news-media world. Newspapers rely on the AP (for example) every bit as much as bloggers do.

Keller also tries to set journalism apart by “a rigorous set of standards. We have a code of accuracy and fairness we pledge to uphold, a high standard of independence we defend at all costs, and a structure of editorial supervision to enforce our standards.” I would argue that the standards have rarely been upheld. Only a few newspapers and other news-media entities, only the very best, have ever seriously held to standards. His point about an editorial structure, however, is significant. There are entities in the world of “citizen journalism” (ePluribus Media being one of these) that are developing new types of procedures for fact checking and editing—and adherence to standards of journalism is not something that is only found within the profession. Like The New York Times, there are many “citizen journalists” who, for example, “put a higher premium on accuracy than on speed or sensation.” And the link is at the heart of what bloggers do, the way “we show our work.”

Keller sees an inherent difference between the hobbyist and the professional that is, in my view, unwarranted. He wants to imagine that the training of the journalist somehow can be equated to professions where the needs of craft demand strict training and constant updating—as in medicine. Yet his is a field where the talented amateur can often equal the trained professional—something not possible in medicine.

Keller says, “The truth is, people crave more than raw information. What they crave, and need, is independent judgment, someone they can trust to vouch for the information, dig behind it, and make sense of it. The more discerning readers want depth, they want scepticism, they want context, they want the material laid out in a way that honours their intelligence, they might even welcome a little wit and grace and style.” Yes, but that “someone” needn’t be an established news-media entity.

For more on Keller’s talk, see this diary on ePluribus Media.

Saturday, December 01, 2007

Academic Self-Evaluation

In my last post, I created a fiction to make a number of points about teaching and attitudes within academia. To me, one point stands out: the pressure to conform remains strong on our college campuses and within our faculties. As in most other areas, we professors like to encourage others to be “like us” rather than strike out on their own by trying something new.

Though we love to talk about diversity, almost to the point of worshiping it, we are hypocritical acolytes. In some respects, we are as bad as the anti-gay evangelical preacher who gets caught soliciting sex in a men’s room.

Look at the ways we encourage conformity and not diversity. We demand that our students conform to style sheets in their papers, keeping presentation the same; we expect conformity to the dominant dialect of English; we generally judge our colleagues’ work by the “quality” of the imprint. And these are just the tip of that iceberg. Sure, there are arguments in favor of each practice, but the fact remains: we insist that people toe the line as rigorously in academia as anywhere.

Yet we talk “diversity” all the time.

Sometimes it seems that on-campus diversity has been confined to a ghetto of superficial qualities associated with race, gender, and orientation. We dismiss attempts at inserting real intellectual diversity into our equations by establishing “norms” for inquiry that have as many unquestioned underlying assumptions as does, say, Intelligent Design. While we say we disdain situations where belief precludes and defines inquiry, we quickly turn away from examination of our own cherished convictions.

Let me take just two examples, choosing them because, I believe, they underlie so much of our cultural dynamic: the questions of race and class.

Most of us in academia feel that we, personally, have risen above considerations of race. There may be racists in America, but we aren’t among them. Yet, when you look at our lives dispassionately, it is impossible not to conclude that race colors them. We marry and socialize within our races, for the most part, and generally live segregated from other races. Our relations with colleagues from other races tend to be superficial, petering out beyond campus boundaries. Yet, when called on this, we tend to deflect it by saying it’s a question of class, not race. But that, though we refuse to admit it, begs the question.

Let me be clear: I’m not saying that we academics are racist, not in the sense of believing in the inferiority of another race or of wanting enforced segregation. I’m merely saying that race colors our actions in ways much greater than we are willing to admit, let alone examine. If we were the honest seekers of truth we pretend to be, we’d be looking at racial issues in ways far different from the simple polarizing posturing we use today.

Class colors our actions and attitudes in much the same way, though it should not be confused with race (which happens more and more frequently). Most academics are perfectly at ease talking about “hillbilly,” “redneck,” and “white trash” Americans in a disparaging manner they would never use, were race part of the equation. But, when it is simply class, it seems OK to reject the “other.”

Yet it is as small-minded, as biased, to make jokes about Appalachians as it is to do so about Mexicans, say. Why are we willing to do one and not the other? What makes it OK to make fun of a mountain accent but not one from the inner city of Philadelphia? Isn’t the attitude ultimately the same, even if the butt of the joke is different?

It’s time we academics began to wake up from our self-satisfied dogmatic slumber and start examining these (and so many other) questions.

I’ll be writing more on this topic in coming weeks. As we fight to retain our “academic freedom,” we must show that we are living up to the responsibilities implied by that freedom. The only way we can do that is by rigorous self-examination of a kind I find sorely lacking on our college campuses.

Wednesday, November 28, 2007

“Discussion” in a Faculty Office

Fiction? Yes. But….

Full Professor Irma Fayles has been teaching at the inner-city institution since its days as a community college a quarter of a century ago. Never having published a book, let alone an article, she became a full professor at a time when the college had not yet re-envisioned itself as a four-year school with scholarship as an important focus. Assistant Professor Sam Stamper is new to the college, but arrives with one book out already and another about to go to press. He doesn’t yet know the “traditions” of the school and has no preconceptions about its student population.

Fayles was recently assigned to observe Stamper and has decided that it is her job to put this tyro in his place, to teach him how things really are. He had asked her to attend his sophomore literature class where the students, over the semester, were reading four novels, four plays, and a number of works of poetry. The observation took place the day Doris Lessing was awarded the Nobel Prize. This conversation followed a week later:

Fayles: First of all, I must admit that you have a strong presence in the classroom. And you’re clearly a good and dynamic actor. But I must caution you: beginning with a mention of Lessing is going to do nothing for these students. You’ve got to consider who they are and what their backgrounds have been. They haven’t heard of Lessing, and aren’t likely to. This is just a bit of advice: work with things they know or will need to know; leave out irrelevancies.

Stamper keeps his mouth shut, though he cannot bring himself to nod any agreement. His chances of promotion and even retention, he knows, could be affected by Fayles’ evaluation of his class. Behind his straight lips, however, he bristles: the students, as he is well aware after even his short time at the school, know a lot more than many of his older colleagues believe, and have experienced a good deal in their short lives. Their world is no more limited than that of their professors. Doris Lessing might or might not ever again appear as a name before them, but now they could make some connection if she did. Furthermore, he thinks, there is something essentially classist, if not racist, about what Fayles is saying. The implication is that, at a “better” school, one with fewer blacks and immigrants, speaking of Lessing might be OK. But not here. He silently rejects Fayles’ implied thesis that, because the students come from what seems to her to be a limited background, their teachers cannot expect them to move beyond it, and should not even encourage them to try.

Fayles: I saw a lot of teaching in your class, a lot of pyrotechnics, but little learning was going on. Too much performance by you and too little activity on the part of the students. As a result, much of your class was wasted. You need to have different tasks, each an activity for the students, each lasting fifteen minutes or so. Otherwise you will lose them. Maybe have them read aloud, a paragraph each, and then write for a few minutes.

Here again, Stamper keeps his mouth shut, and once more can’t bring himself to nod. He simply sits and waits, looking at Fayles. Did Fayles see no difference between the needs of a remedial (or even first-year composition) classroom and a more advanced literature one? Or does she really believe that the only sort of learning possible for these students lies in mastery of a series of small tasks? In the context of this course, he is not interested in developing skills, but in encouraging students to think and to develop enthusiasm for reading and ideas—and he does not feel that reading a paragraph aloud or writing short paragraphs would contribute to that. He wants to bring his students into a more sophisticated dialog rather than falling into the simplistic thinking fostered by the sort of program Fayles had described. His class is themed around questions of generation of knowledge and his students are beginning to grasp and argue about the distinction between the believed and the demonstrable. Neither five nor fifteen minutes of writing—or of small-group discussion—is going to further that. In fact, any success that he has achieved would be lost.

And little learning? No, he had seen a great deal exhibited in the papers he had just returned to the students—book proposals, following the standard professional model, for works of fiction exhibiting some aspect of the problem of belief. Some of these students, whom Fayles believed couldn’t manage a task exceeding a quarter of an hour, had turned in creative and sophisticated ten-page proposals, some of which would actually make intriguing novels.

Fayles, he thinks to himself, mistakes activity for learning, one of the side effects of the “student centered” pedagogies of the 1970s. Though there is much to be said for Paulo Freire and his Pedagogy of the Oppressed, it is essentially a political statement, and one that moves classroom pedagogy in one particular direction for reasons that have as much to do with desire for cultural change as with the real needs of teaching and the learning implied by the act of teaching. It has led to the confusion of learning and doing, placing (for example) an undue emphasis on small groups, short writings, and other in-class exercises. These have their place, of course, but they need not dominate every classroom. In fact, they should not. What Fayles is promoting, furthermore, is actually a perversion of Freire, for she is turning his methodology into a means for furthering oppression rather than stopping it.

Fayles: Really, I don’t see why you should do the reading aloud, though you are very good at it. Have the students do it. It’s good for them.

How patronizing, thinks Stamper, still silent. Anyway, I am not teaching reading, but am trying to show a group of students who have never seen it the beauty prose can rise to. The passage I read was short, no more than a page or two, and my purpose was for the students to hear the skill of the author and the beauty of the phrasing without my telling them. There are times when it is appropriate and useful for students to read aloud (I use play cuttings read by students, for example, when teaching drama), but this was not one of them. Fayles, why have you put such a blanket rule in place, stating categorically that, if text is to be read, students should read it? I prefer a much more expansive and flexible view of the classroom. There are, one might say, more arrows than one in my quiver—and I choose the one best for the situation.

Now that I think on it, I wonder if she has actually read Freire, or has simply heard tell of his describing and debunking what he called the banking model of education, where passive students just give back interest on what they have been given—or worse, simply regurgitate what they have taken in. This was part of a simplistic concept of audience present from the 1950s through the 1980s, and not only in regard to the classroom. Watchers of television and movies were also considered passive receptacles. However, readers of books, for some reason, were not. It was rarely recognized that watching could be just as active and intellectually stimulating as reading. We are beyond that now, most of us, and realize that lack of physical motion is not lack of intellectual activity. Fayles wants me to go backwards; it’s not going to happen.

The movement towards “student-centered” classrooms was a response, in part, to what was seen as a patronizing, paternal system of education that, in many eyes, amounted to indoctrination, not education. The irony is that, today, it is people like Fayles, insisting on the Freire-influenced classroom, who are being accused (by critics on the political right) of indoctrinating rather than educating. The accusers, though I hate to admit it, are right to this extent: any attempt to enforce a cookie-cutter model does lead to conformity and not to thought. And the older models of education were not nearly as indoctrinating as many, in the heat of a political moment, came to believe.

Teaching by example of knowledge and enthusiasm, as the best lecturers have always done, amounts to something quite different from indoctrination. And it is a necessary element in a good education—though never the only one. Not every course should be a lecture/discussion of the sort Fayles observed, just as her model, while admittedly useful in certain contexts, should not be universal. Many of us, when we think back to the teachers who influenced us most, find that they were the ones who lectured and discussed—with fervor and finesse. We weren’t indoctrinated by these teachers, but were led by their passion to explore on our own. When we decide that such leaps into our own learning are not possible by our own students, we demean those students and block access to an important element of education. It was good enough for us, we are showing, but is beyond what they can handle. That is unacceptable.

Since I began teaching, Stamper thinks, trying to be patient, observations have been my bane. The checklist of small groups, exercises, and constant shifts in activity that has become the observation staple (to the point where students joke about their professors adding these things to the class only when being observed) has become quite stale. I will no longer lower myself—as I well could—to playing this particular game, certainly not for an observer who does not recognize that both times and students have changed.

Small groups were new and unusual in the 1970s, and students saw them as a refreshing shift from the teacher-centric classroom they had been familiar with. To many of today's students, however, the small group is something they “suffered” all the way through high school. In fact, Stamper knows, all of the parts of that checklist are things now more common to high school than to college. Today, if students are to move beyond their high-school behavior, they have to be treated as something other than high-school students, utilizing methodology other than what they earlier experienced, methodology more demanding upon them. Methodology moving them forward in their education, not simply providing the same thing over and over again, class after class.

There's that other factor, of course: the computer. In a year or two, more than a quarter of students nationally will be taking classes that are at least partially online. Such classes are necessarily task oriented, and many of them have to follow the Fayles model. Simply to survive, on-campus classes are going to have to offer things online classes cannot. We need, Stamper believes, to accent the instructor in the classroom today, not further reduce her or him to the "facilitator" that some online programs actively promote. The leadership, the broad knowledge, and the enthusiasm that a professor can show in the classroom do not come through so well (at least, not in the same way) in online situations, so they should be emphasized if the "real" classroom is to survive.

Yet we must be careful, Stamper warns himself, in what we “say” to students in our classrooms in other ways. Tasks of the sort Fayles wants utilized in the classroom are seen, more often than not, as onerous by the students. These do not engender a love of the art being studied, but can even lead in the opposite direction. In a course where a skill (such as writing) is the central focus, task-oriented classrooms are essential. In a course where the goal is much broader, tasks of the sort Fayles insists on can deaden student enthusiasm for, and appreciation of, the art. Certainly, they do little to enhance it.

Fayles: One of the basic rules of teaching is that students must be engaged at all times. I saw students drifting in and out of the discussion. You need to draw them all in.

What, she wants me to work down to the lowest rather than challenging the highest, doing so in a way allowing the lowest to rise as well? Either way we do it, we risk losing some of our students. I’d rather, in this class, that risk be at the bottom than at the top. Yes, I like teaching remedial classes, too, bringing the struggling students to the point where they can attempt college work… but not every class should be like that, focusing on the lowest common denominator.

Fayles: You need to be careful with the things you say, or you will lose the students. You should have explained the “butterfly effect” when that came up. Some of the students probably don’t know what it is.

More dumbing down, she wants? No thanks. What she is asking, again, is that I stoop to what she believes is the level of my students, not demanding more from them than they are used to giving—any of them can find out what the “butterfly effect” is quite easily by asking others in the class or looking it up online. After all, the mention wasn't mine, nor was understanding of it essential to the point being made. The students don't need to be spoon-fed such things, anyway. My feeling, again, is that more can be gained by demanding the students rise than by lowering myself. Sure, a few students will be lost—but as many (if not more) will disappear if I dumb things down—and all of them will be poorly served.

As she talks and Stamper does not respond, Fayles becomes angry, more and more so with each stony lack of response to each comment. When her officemate comes in and starts puttering around, she stops, waiting for the other to leave. Stamper finally speaks, telling her it is OK if the other overhears. Fayles, trying to smile, says it is not OK with her—and asks her colleague to withdraw. Once they are again alone, she continues, her frustration with Stamper clearer than before.

Fayles: During the class, you brought up World War II a number of times. That was a mistake. Our students have little knowledge of history; some confuse the Civil War and World War II. It’s best to avoid history unless you are going to teach it.

Understanding history, Stamper believes, is necessary for understanding literature. He has been laying out the basics necessary for the texts being covered since the beginning of the semester. If he were to follow Fayles’ advice, he would have to teach different texts, probably much simpler ones. And that would not suit his purpose. We serve our students poorly, he believes, when we don’t open up the unknown country.

This, he tells himself, is getting ridiculous. But, boy, is she steamed!

Fayles: And bringing in 9/11? That was gratuitous, facile, and unnecessary. There is no reason to talk about something like that in a literature class.

What are you talking about, thinks Stamper, forcing himself not to respond. 9/11 was the most significant common event of the lives of today’s students. I wonder if she would have said the same in 1969 about the assassination of JFK, same number of years earlier. 9/11 needs to be a part of our teaching for quite a number of reasons, including the simple fact that it can be used to open all sorts of doors. Our students, quite naturally, are interested in it; they perk up and listen, making 9/11 an effective entry into any number of topics.

This isn’t ridiculous… it’s stupid. He stifles a sigh.

Fayles’ anger is now clear in just about everything she says, her words becoming more and more accusatory in the face of Stamper’s determined lack of response. This young man just isn't listening, she realizes, isn't respecting the experience that she brings to interaction with these students. But she goes on anyway.

Fayles: Another problem was that you didn’t ask the students enough questions, and did not call on specific ones. You need to drag them into the conversation, sometimes! And you should never answer the questions yourself. You did that at least twice.

My goodness, more high school? It becomes like high school if I force students to squirm under my eye as I put them on the spot. And I don't believe that is effective pedagogy. All it does is embarrass the student. The last thing I want is for the classroom experience to be actively painful. I try to build a comfort zone into the classroom… which is one of the reasons my students show up. Maybe you didn’t notice, Fayles, but all 30 of them were there.

What time is it? Five minutes to the hour. Ah, good! I’ve an excuse for getting out of here and I had better use it—or I’ll end up saying what I think, and that won’t get us anywhere. Though it has to be said, this is not the place.

Stamper: I’ve got to go teach.

He stands and leaves without another word.

Tuesday, November 20, 2007

Look What They've Done to My Song, Ma

In one of those bits of serendipity that, when you examine them, really have more to do with a greater cohesion than with chance, David Brooks' column today deals with American music—just the topic of discussion in my composition classes, yesterday and today. Brooks, however, bemoans the splintering of the music. I see things differently.

Brooks has been speaking with 'Little Steven' Van Zandt, guitarist with the E Street Band:

He argues that if the Rolling Stones came along now, they wouldn’t be able to get mass airtime because there is no broadcast vehicle for all-purpose rock. And he says that most young musicians don’t know the roots and traditions of their music. They don’t have broad musical vocabularies to draw on when they are writing songs.

As a result, much of their music (and here I’m bowdlerizing his language) stinks.

He describes a musical culture that has lost touch with its common roots.

Later, Brooks says Van Zandt has “drawn up a high school music curriculum that tells American history through music. It would introduce students to Muddy Waters, the Mississippi Sheiks, Bob Dylan and the Allman Brothers. He’s trying to use music to motivate and engage students, but most of all, he is trying to establish a canon, a common tradition that reminds students that they are inheritors of a long conversation.” I did something similar in 1993, the last time I taught high school. In fact, I actually taught the course, doing more than simply concocting a curriculum. And I use music in my writing classroom frequently, doing things like introducing students to the backgrounds of rap music in poetry and song going back (believe it or not) to Beowulf, where the 'sprung rhythms' (to use the phrase that Gerard Manley Hopkins created for his own nineteenth-century poetry) and alliterations show an accent on the beat that was later masked by the stricter cadences of the poetic forms introduced with the French of the Norman invasion. I recite for them Woody Guthrie's “Talkin' Dust Bowl,” Bob Dylan's “Subterranean Homesick Blues,” among other things, and tell them about the story-telling of country music. And I talk about Amiri Baraka's Blues People, written when he was still known as LeRoi Jones. And much more.

In other words, I know a little bit about the history of American music—and love to share it. The so-called 'American Songbook' is familiar to me, as is the jazz of the likes of Carla Bley and Paul Motian, as is the boogie piano of Roosevelt Sykes. Not only that, but my mother is a classical musician and I grew up surrounded by her music (it was first hearing Leadbelly's 12-string guitar that shocked me into another listening direction). I have heard the mermaids singing, in other words, each to each—and have paid attention.

One thing I have learned from my obsession with music is that its history is one of constant flux, of melding and separation. It always has been thus. There has never really been the kind of cohesion or national musical 'language' that Van Zandt laments in the quote from Brooks above. When I was a kid in the 1960s, there were few I could talk to who would know both Professor Longhair and Phil Ochs... or either one, for that matter. Rare was a songwriter like Bob Dylan, who soaked up influences like a sponge. Most people—and even most musicians—knew little beyond the particular genre they worked within.

How many American musicians of the 1960s were familiar with Ska? Or John Cage? Or Johnny Gilmore of Sun Ra's Arkestra?

Very few.

Every generation laments the passing of a golden age that was somehow more nuanced than what we see “today.” In 30 years, David Brooks' replacement will be interviewing Kanye West about how musical knowledge has dwindled, about how much broader the influences were, back in the day, that first decade of the century.

Brooks calls what we have today “the segmented society.” As an amateur historian of American music, I have to laugh. He conveniently ignores the fact of “race” records, an enforced segmenting with a power greater than anything around today. He forgets (or never knew) that many classical musicians resolutely refused to listen to jazz (Leonard Bernstein had much to do with changing that—in the 1940s). If anything, though the average person knows no more today about the history of music than she or he did fifty years ago, the possibility of exposure to a wide range of music is greater today than it has ever been. Rappers sample Buffalo Springfield as well as Bootsy Collins—and some of them can talk in detail about Monk or even Mozart.

Yes, it is true that there are few today who can speak broadly of “Bessie, bop, or Bach”--but, Mr. Brooks and Mr. Van Zandt, it was ever so.

Monday, November 19, 2007

Race: Still the Base in American Politics

The recent endorsement of Rudy Giuliani by Pat Robertson is called “unlikely” by some—but, and make no mistake about this, it is not. For all the talk of “family values,” “protecting America,” abortion, and whatever else, the real dividing line in American politics falls along questions of race. It has been so throughout my lifetime (I was born in 1951), the only change being in alliances. Giuliani became mayor of New York City through race (David Dinkins, the black mayor he replaced, probably did more than Giuliani to redirect NYC, but he was black—and a nice guy, but that's another story). He is so popular in much of America not simply because of 9/11 (what exactly did he do to stop terrorism, anyway?) but because he “tamed” New York. He put the blacks and other non-whites in their places, that's what he did.

Look at Robertson's career: though rarely making African-Americans the focus of his attacks, his ministry parallels the rise of the “southern strategy” used by Republicans in the wake of Democrat-led successes in civil rights in the mid-1960s. Before that time, southern Democrats were reliably racist and it was a coalition of liberal Republicans and northern Democrats that made the civil-rights legislation possible. The result of the “southern strategy” was a shift that saw the disappearance of liberal Republicans and a move of southern Democrats to the Republican party—and the rise of what is seen as a newly vigorous conservative movement. It should be no surprise that Robertson feels comfortable with Giuliani, for their differences are ultimately peripheral. Robertson, using the coded terms of the “southern strategy,” was a part of the movement to use race to bring the Republican power to dominance—just as Giuliani will prove to be, if his campaign succeeds.

As was Ronald Reagan. In fact, he was probably its ablest practitioner.

There has been a bit of a donnybrook recently on the op-ed page of The New York Times. It started with another of those attempts to pretend that the whole “southern strategy” is simply a “slur” on the Republicans and, in the instance under consideration, on Ronald Reagan. The columnist, David Brooks, writes:

The distortion concerns a speech Ronald Reagan gave during the 1980 campaign in Philadelphia, Miss., which is where three civil rights workers had been murdered 16 years earlier. An increasing number of left-wing commentators assert that Reagan kicked off his 1980 presidential campaign with a states' rights speech in Philadelphia to send a signal to white racists that he was on their side. The speech is taken as proof that the Republican majority was built on racism.

Like many contemporary apologists for Reagan, Brooks attempts to shift the blame to his “strategists” but then tries to exonerate them as well. The wink and nod Reagan gave by appearing at the Neshoba County Fair and using the term “states’ rights” were, in the vision Brooks tries to build, purely accidental. He fails to mention (though he surely knows this) that such coded signals of which side of the race-issue divide one falls on were a major part of a strategy that, by 1980, was already a decade and a half old.

Four days after Brooks' November 9 attempt at revisionist history, fellow columnist Bob Herbert took him to task, though without naming him. He wrote:

Reagan was the first presidential candidate ever to appear at the fair, and he knew exactly what he was doing when he told that crowd, “I believe in states’ rights.”
Reagan apologists have every right to be ashamed of that appearance by their hero, but they have no right to change the meaning of it, which was unmistakable. Commentators have been trying of late to put this appearance by Reagan into a racially benign context.
That won’t wash.

Unfortunately, it does wash, in the eyes of the millions of racist Americans who have subsumed their hatred under a patina of coded red herrings and internal misdirection. Whether Herbert (or I) like it or not, the justifications and explanations that those like Brooks provide allow people to hide their racism while continuing to support essentially racist policies, which, I suspect, is what led Paul Krugman to jump into the fray today.

Krugman took Herbert's column, which dealt with the murders that made Philadelphia, Mississippi iconic to the American racial debate, and brought the issue smack into the current political climate:

More than 40 years have passed since the Voting Rights Act, which Reagan described in 1980 as “humiliating to the South.” Yet Southern white voting behavior remains distinctive. Democrats decisively won the popular vote in last year’s House elections, but Southern whites voted Republican by almost two to one.
The G.O.P.’s own leaders admit that the great Southern white shift was the result of a deliberate political strategy. “Some Republicans gave up on winning the African-American vote, looking the other way or trying to benefit politically from racial polarization.” So declared Ken Mehlman, the former chairman of the Republican National Committee, speaking in 2005.
And Ronald Reagan was among the “some” who tried to benefit from racial polarization.

Krugman, who must have been feeling rather optimistic when he wrote this column, sees the racist base underlying the rise of conservative political power as containing the seeds of its own destruction: the power of racism in America is receding, he believes, though slowly (principally through demographic changes that will soon reduce the white majority to a plurality).

Make no mistake about it: the rise of conservative power in American politics is tied inextricably to race, as any honest examination of Reagan's appearance at that 1980 county fair (one among thousands of similar incidents) makes clear. The continuing potency of appeals to the racial divide will be obvious as the 2008 presidential election continues to produce what seem to be strange bedfellows, such as Pat Robertson and Rudy Giuliani.

For it's really not so strange. Robertson and Giuliani are both appealing to a racist base that will come out to vote in response to their coded appeals—just as it did for Richard Nixon, Ronald Reagan, and George W. Bush. Whether or not this strategy can still succeed remains to be seen.