Wednesday, January 30, 2008

"Mother of Mercy, Is This the End of Rudy?"

In Mervyn LeRoy's 1931 film Little Caesar, Edward G. Robinson plays a crook named Caesar, nicknamed Rico. Belligerent and ambitious, Rico claws his way to the top, but his best friend betrays him and the media turn on him. He dies, shot down when he has nothing left, uttering one of the most famous lines of early sound cinema.


Rico Giuliani has yet to utter any Famous Last Words, but he will. Maybe later today. Still, he always was a “little” Caesar, a mean-spirited, vindictive man loyal only to his inner circle—much like Robinson's character in the movie.


There will be no mea culpa from “America's mayor,” for he's one of those to whom things happen, who never does wrong—not in his own mind, at least. But the reasons for Giuliani's failure aren't hard to deduce. He even pointed to one in his Florida concession speech last night:


Elections are about a lot more than just candidates. Elections are about fighting for a cause larger than ourselves. They're about identifying the great challenges of our time and proposing new solutions.

Exactly. And that is exactly what Rudy did not do. His campaign was all about him, and the more people saw of him, the less they liked him. Because of the national reputation he gathered by walking towards Ground Zero rather than away on 9/11 (not an act of bravery, but an attempt to get to his poorly placed emergency command center in the WTC), Giuliani had a level of name recognition among the American population, and an association with defense of the nation, that his competitors couldn't match:


He led the Republican field in polls conducted by The New York Times and CBS News throughout the summer, as his support peaked in August at 38 percent nationally in a four-way fight with Mr. McCain, Mr. Romney and Fred D. Thompson. That put him 20 points ahead of his next closest competitor, Mr. Thompson.

But there was no vision, nothing beyond Rudy himself, his ambition, and 9/11. As Joe Biden so aptly put it, "There's only three things he mentions in a sentence: a noun, a verb, and 9/11." And there's no “cause” in 9/11, just reaction to a tragic and unwarranted (and ineffective, though successful) attack. And the only “solution” Giuliani ever really proposed to anything was punishment.


Over the last six months, the people of the United States have come to see the Rudy that those of us who lived in New York during his mayoralty have long reviled. A petty, small man, Giuliani viewed the city as “his,” and acted on that, seeing any opposition as a personal affront. He is a petty autocrat, a would-be Caesar who claimed the accomplishments of others as his own. A man with much more of an affinity to fascism than to democracy, he even went so far as to suggest that the mayoral election after 9/11 be canceled so that he could remain in power.


Rico was “betrayed” by his dancer pal who put his own potential career and the law ahead of friendship. Rudy, too, was betrayed by a pal, Bernard Kerik, who put his own greed ahead of loyalty to the man who had “made” him. Rico's ambition led him beyond his ability, condemning him to never rising above “little” Caesar status. Rudy, too, was “crawlin' way past... [his] speed,” as the Leiber and Stoller lyrics to the old Clovers song put it.


It will be good to see him gone.

Monday, January 28, 2008

A Treat from Edward Weston...

...followed by a little something of my own.

My aunt Anne Barlow Nygren, a former dancer, and her husband Jim got this photograph sometime in the 1950s, I expect. The photograph itself is unsigned, but it is glued to a sheet of paper on which Edward Weston signed his name. On the back are the words "Point Lobos." Aunt Anne gave me the photo when she left New York for San Antonio nearly twenty years ago. It remains one of my prized possessions:



Below is an image of the mosque at Djenne in Mali, the largest mud mosque in the world. I took it in 1986, printing and hand-coloring it in 1987 or 1988. In the middle on the right is a Koranic school, probably not that different from what have become the madrassas of today:


Sunday, January 27, 2008

How Many Times Do We Have to Tell You?

Wasn't it Tolstoy who believed that “leaders” simply follow from in front?


For the second time, we have a victory in a Democratic primary where a candidate won by a margin well beyond what the polls predicted.


Why?


Today, on the television gobble-fests, we'll be given reason after reason why. In every case—mark my words—the focus will be on the leaders, on what they have done or haven't done. On strategy, on manipulating the voters one way or another.


Yet the real story of South Carolina isn't the candidates at all.


It's the voters.


Just as it was in New Hampshire.


On the issues, on the substance of their campaigns, there isn't (in the eye of the voter—not of the pundit) much difference between the three Democratic front-runners. They have all come around to reflect the demand for an end to the occupation of Iraq; for a foreign policy that reflects the positive and cooperative way Americans view themselves, one that rejects the constant belligerence of the past seven years; for health-care reform; and for economic policies that reflect the needs and desires of the vast middle class and not simply the extremely rich.


With their candidates now reflecting voter demands, the voters are moving on, asking even more of them. When they get it, they reward the candidates. When they don't, they punish.


Hillary Clinton evinced a human and responsive side in those last days in New Hampshire, and drew people into the polling booth who otherwise might have sat the primary out. That was the real story of New Hampshire: Barack Obama's support didn't go down; Clinton's went up. Why? Not because she was convincing anyone of anything, but because she was acting as a person and not as a leader.


The voters of the Democratic party are showing that they are sick of being “led.” Of being manipulated. They want a candidate who will respond to them, and not tell them—which is why they don't mind the vagueness of some of, say, Obama's positions. They'll tell him later exactly what they want. Right now, it's OK, as long as he makes it clear he will respond.


The irony of the South Carolina vote is that Clinton, after claiming to have found her voice in New Hampshire, didn't really get it. It wasn't her voice that anyone cared about, but the people's. If she had understood that, she wouldn't have immediately begun speaking to the voters again, trying to manipulate them. She would have continued speaking about herself and her own feelings, about her “willingness” to listen and consider.


She didn't do that, but once more treated potential voters as though they are stupid and malleable, as though they can be swayed by petty bickering. She did that in the face of all evidence to the contrary, in the face of primary voters who had already told her they want the Clinton who cares, not the one who schemes.


Patrick Healy, writing in today's New York Times, points out that:

Last week, Clinton advisers believed Mr. Clinton was rattling Mr. Obama and drawing his focus away from his message of moving beyond the politics of the 1990’s and the Bush presidency. The results on Saturday indicated, instead, that voters were impressed with Mr. Obama’s mettle and agreed with him that the Clintons ran an excessively negative campaign here.

The Clinton team was again making the mistake of concentrating on a rival and not on the voters—and they paid for it. Just as they benefited in New Hampshire, when Clinton managed, for a moment, to forget about the “enemy.”


Obama seems to have learned this lesson. If she does not, Clinton will lose the nomination to him. For, more than anything else, the Democratic voters want a president who listens to them and responds, not someone with a grand plan or belief—they've seen enough of that with the neo-cons—but someone who will listen to the people and do what they want.


They have now said this forcefully twice. They shouldn't need to say it a third time.

Saturday, January 26, 2008

Pictures of West Africa

As I prepare for classes and have little time for writing at present, I thought I would share a few snapshots of West Africa. The first was taken in April, 2007 in St. Louis, Senegal. The second in Boumbouka, Togo, probably in 1990. The third in Djibo, Burkina Faso in November, 1985. The fourth in Nassiet, Togo in 1990. The fifth on the Niger River between Gao and Tombouctou, Mali, in September or October of 1989:






Wednesday, January 23, 2008

Why I Teach

[Crossposted from Free Exchange on Campus]

When I was in Peace Corps, I taught farmers the rudiments of using oxen for plowing. I did this at an instruction center in the north of the West African nation of Togo. It was a complicated task: Aside from the actual plowing, the farmers had to learn 1) to use the tools needed for maintaining their plows and other equipment, 2) to house and care for large animals, and 3) to grow and store appropriate feed. They also needed to learn rudimentary bookkeeping. They were entering into a monetary economy they had rarely experienced—they had to borrow from the government to buy the oxen, the equipment, fertilizer (no matter what you do, cotton is hard on the soil), and even pesticide. They would be embarking on cash-crop farming (as opposed to the primarily subsistence pattern of their past) to pay back the loan.


At first, I saw nothing wrong with this. After all, I was teaching them how to complete specific tasks, each one of which could be of value, especially as it became harder and harder to support a family through traditional farming methods (population growth, deforestation, and desertification were ensuring that). The skills gained, I rationalized, would always stand the farmers in good stead.


By the time my second year started, though, I had withdrawn from all but minimal duties at the center, focusing instead on my secondary project, developing a tree nursery for hedgerows (a reforestation, soil preservation project).


Why? What was wrong with what I had been teaching the farmers?


Well, I had begun to feel I was teaching them into poverty, not out of it. And I did not like that.


The costs of the new ways of farming were so high that the farmers were having to take land away from the crops that fed their families, meaning that they had to start buying food. Which meant that they had to plant even more cotton than they had before. It also meant that they were putting more stress on the land, unable to rotate crops appropriately or let their fields lie fallow every seventh year. And it placed them at the mercy of market forces well beyond their control. They may have had new skills, but they were facing a poverty that could prove more severe than anything they had previously experienced—a poverty that could (through their debt), eventually, remove them from the land completely. Remember that old Frank Norris story, “A Deal in Wheat”? That's where these farmers were heading.


What I was seeing brought home to me, more than anything else ever had, the fact that education needs to be more than simply skills acquisition if it is going to be truly effective as a means for moving people out of poverty. There has to be a cultural goal for the education, something that can benefit the entire village, city, or country.


If it is going to be a means for anything at all.


Also, skills that provide no local benefit to the individual lead the skilled to leave. Just look at the migratory patterns today. The very people whose skills seem most able to help their home communities are leaving. They are going to places where the value of their skills has already been established and evidenced through adequate compensation.


The only way I learned about the problems of Togolese farmers was through working with them, through teaching and watching, through talking and listening. I also learned that the only way to be of real help in the developing world is through following the lead of the people I am trying to aid, and then through helping build a platform of local support for new skills. That hedgerow project? I started it because one man in the village was already planting hedgerows. He and I talked over how to expand, and he suggested getting his grandsons involved, that I teach them how to handle seedlings.


The project cost very little and, when I returned five years later, the little nursery was still there, though the grandsons were using it for vegetables instead of trees. They had planted the trees, though, and were still doing hedgerows, too—though not so ambitiously. They had, however, gained skill in working with plants that they were continuing to use—and to make money from locally.


By listening rather than simply telling, rather than coming in with my own agenda, my own ideas, I had (at least) done no damage, and may even have accomplished just the littlest bit, allowing skills to develop within a milieu that could support those particular skills. A small step, yes. But it may allow those grandsons to stay in the area and continue to learn and develop rather than, like many of their contemporaries, wandering to the city from lack of opportunity at home.


Soon, not surprisingly, I found I was viewing teaching in a new way: Learn about your students and their environment, then use what you know about them to teach them—not just the skills you have brought in, but the place those skills can have in the community.


This, I liked.


Still, I didn't want to be a teacher anymore.


When I returned from Africa, I did teach for a bit, but neither high schools nor colleges held much attraction (and I tried both). I tried to transfer what I had learned to the American classroom, but I couldn't find enthusiasm for what I was doing.


I couldn't find a real role for me, outside of keeping order and providing evaluation. What I wanted was for the students to take control of their own learning, the only approach I had found effective in Togo, leaving me to point them to the appropriate tools and the means for learning to use them.


But I found, back home, that doing that was boring, and I wanted to have fun. Besides, it only really worked for motivated students—just as development projects only work when the motivation precedes the aid worker.


So, I left teaching for most of a decade, only returning to it (and then just part time for the first three years) seven years ago.


When I entered the classroom again, something was different. Though I still wanted to facilitate what my students could do, I found I was trying to do this in ways I had never before considered. Without understanding why or how, I was attempting to motivate my students, not simply facilitate their learning.


Now, this is a task that would not have been appropriate in my role as a Peace Corps Volunteer. Effective (that is, lasting) motivation needs to come through those invested in the situation. It is something that needs to be done at home, and not for another culture.


As an American, I was (and am) upset by the direction of my country. I want to change that, and the only way I can successfully do so is by motivating others to join me in working to turn this Titanic around. This motivated me to do something other than I had before: Share, and not simply instruct. Share in a way I never could as an outsider in Africa.


That does not mean I now need to (or would want to) indoctrinate my students. Just the opposite.


Because I am invested in the idea of listening to my students and starting where they “are,” I don't lay out a life course for them, but still do facilitate their learning rather than trying to control it. What I want to do today (and what I had not done before) is share with them the enthusiasm for learning, for exploration, for discovery, that I have developed over a lifetime of wandering and wondering. I want to imbue my students with the optimism I feel—the belief I share with Thoreau that the sun is a morning star, that the day is always unfolding. Then, enthused by possibility, they can take control of their own education, even turning their enthusiasm to personal gain.


And why do I do that?


I love the country I live in and the exploration I am making, and that leads me, like all enthusiasts, to want to share.


And that, ultimately, is why I teach.

Saturday, January 19, 2008

Gaming the System


It's a huge world out there—even bigger, now that we have the Internet expanding its boundaries and hoarding its history. So vast is it that we all miss things. Nobody, no matter their field or narrow specialty, can any longer say that they are on top of all that's going on. Anyone who believes otherwise will sooner or later be hit by a gigantic surprise, will be forced to discover that what they thought was an entirety of a discourse was merely one small clique.


As a dabbler, someone who looks into a number of fields for my primary research (the intersection of culture and technology), this doesn't bother me particularly. In fact, I like it, for it assures me that my work will never be finished, let alone up-to-date, making it an endless game. And I love playing.


One recent surprise was a man called James Paul Gee, newly ensconced as the Mary Lou Fulton Presidential Professor of Literacy Studies at Arizona State University. Once a linguist, he is a founder of the Games, Learning, Society group based in Madison, WI (where Gee used to teach).


Immediately after hearing him speak at the City University of New York Graduate Center last December, I ran home and ordered his most recent book, a “revised and updated” version of his 2003 What Video Games Have to Teach Us About Learning and Literacy (New York: Palgrave Macmillan, 2007).


This is one of those “duh” books. That is, there's nothing startling in it, just things that many of us teachers will say, on reading, “I should have thought of that.” Thing is, we didn't; Gee did.


He writes, in his Introduction, that:


you cannot play a game if you cannot learn it. If no one plays a game, it does not sell, and the company that makes it goes broke. Of course, designers could make the games shorter and simpler. That's often what schools do with their curriculums. But gamers won't accept short or easy games. So game designers keep making long and challenging games and still manage to get them learned. (3)

How could that obvious, though generally unnoticed, fact not say something about how and why we learn—and how we design our classes? But few of us have been paying attention. Rather than steeping ourselves in studies and pedagogical theory, we should have been watching what young people actually do when they want to learn something. And, today, of course, one of the first things they want to do is succeed at the video games they play. After all:


learning is, I would argue, learning to play “the game.” For example, literary criticism and field biology are different “games” played by different rules. (They are different sorts of activities requiring different values, tools, and ways of acting and thinking; they are different domains with different goals and different “win states.”) To learn either one at any deep level requires learning to play the “game” or, at least, to appreciate the sort of “game” it is. (7-8)

Gee organizes his book around what he identifies as 36 'principles of learning' that he has drawn from the design of video games. He ties these to current research on “situated cognition,” “New Literacy Studies,” and “connectionism,” but his is not so much a work for scholars as for teachers, most of whom (being too old) haven't had the pleasure of learning to learn through video games.


As I do in Blogging America: The New Public Sphere, Gee believes that we are now dealing with a new type of literacy—which I call “neteracy”—based on “multimodal” (mixing words, images, and (I would say) sound) texts. Gee takes his definition of literacy in a direction that I have not explicitly followed in talking about “neteracy,” except when I speak of the cultural signals that the literate or neterate pick up, clues they have learned to spot through becoming literate or neterate. Gee expands far beyond my signals of value and significance:


Because literacy requires more than being able to “decode” (words or images for instance) and because it requires people to be able to participate in—or at least understand—certain sorts of social practices, we need to focus on not just “codes” or “representations” (like language, equation, images, and so forth) but the domains in which these codes or representations are used, as well. (18)

This requires, Gee argues, the ability to take on (and link to) new identities for the sake of learning (and succeeding in) new situations, be they games or otherwise:


if children cannot or will not make bridges between one or more of their real-world identities and the virtual identity at stake in the classroom... —or if teachers or others destroy or don't help build such bridges—then learning is imperiled. (57)

One of the most important of my tasks when I teach freshmen at New York City College of Technology, where the students rarely have a developed identity as “college student,” is to build just these bridges, between identities in the community and in high school and the ones students must build to succeed in college. Children of the college educated begin to build such identities—with the help of their parents—as early as their junior year in high school, when they begin to think about college, begin to watch their elders already in college, begin to visit campuses, and (often) are even introduced to the ways of college classrooms in the Advanced Placement courses. My students lack this advantage. So, a part of my job is to help them build the new virtual identity that will take them through four years of college and to a degree. If an understanding of the building of identities in video games can help me in this task, then I would be a fool not to turn to them. After all, I am introducing my students to an environment as alien as that of any game.


As Gee writes:


One good way to make people look stupid is to ask them to learn and think in terms of words and abstractions that they cannot connect in any useful way to images or situations in their embodied experience in the world. Unfortunately, we regularly do this in schools. (72)

My students, when they enter City Tech, are completely open to such abuse. It is our duty, as teachers there, to ensure that this doesn't happen, that they quickly get the experience in the world of academia that will keep them from looking—and feeling—stupid. Which they absolutely are not.


Gee makes a good case for using video games to teach one how to understand “cultural models” through those the players assume in order to negotiate the universes of the games:


Certain circumstances can... force us to think overtly and reflectively about our cultural models. We certainly don't want or need to think overtly about all of them. But we do need to think about those that, in certain situations or at certain points in our lives, have the potential to do more harm than good. (154)

In other words, video games help us to think flexibly, and to examine the assumptions we generally act on without thought.


There is much, much more in this book, including depiction of the video game as more than an individual pursuit. One of the most interesting points that Gee makes has to do with the breaking down of communications barriers, the old media-coming-at-passive-consumer model:


Good video games allow players to be not just passive consumers but also active producers who can customize their own learning experiences. The game designer is not an insider and the player an outsider, as in school in so many instances where the teacher is the insider and the learners are outsiders who must take what they are given as mere consumers. Rather, game designers and game players are both insiders and producers—if players so choose—and there need be no outsiders. (208-209)

Clearly, video games, as much as any of our other techniques, can be an important tool in moving us away from what Paulo Freire described as the 'banking model of education.' They can also help prepare students for operation in the non-hierarchical environment the Web may continue to generate—or even help them learn to negotiate the vertical structures of our corporations by showing them that passivity, in most any situation, isn't going to get them anywhere.


Though it would have been useful to me to see this book five years ago, when the first edition appeared, I can't really complain. What Gee has done, even if it needs updating every few years as games and technology change, is present a challenge for us teachers that will be ongoing for quite some years. If nothing else, he is showing that simply putting technology in the classroom is never going to be enough. We need to learn from the technology (and from our students) even as we begin to adapt the technology for our own narrow definitions of “learning.”


Maybe, with Gee's help, we can even broaden that.

Waking the Dead

[Crossposted from ePluribus Media Journal]



A Review of The Grateful Dead and Philosophy: Getting High Minded About Love and Haight, edited by Steven Gimbel (Chicago: Open Court, 2007).

Appropriately enough, I learned about The Grateful Dead and Philosophy: Getting High Minded About Love and Haight not through any academic conference or high-minded scholarly journal but through Daily Kos, the premier liberal group blog... a place of popular discourse well beyond the academy. Editor Steven Gimbel, aside from being a Philosophy professor at Gettysburg College, is a dedicated Kossack. Following the philosophical lead of The Grateful Dead, he wants to move his work beyond library walls and book covers, just as the Dead did, opening their work, providing accessibility beyond concert halls and album jackets.

Part of a series called “Popular Culture and Philosophy” that covers everything from Seinfeld to Star Wars, this volume contains 19 essays by a motley group of academics who seem prouder of their histories as Deadheads than of their academic credentials, impressive though the latter are. All of them are attempting the tricky task of writing inclusively on an academic topic, for, yes, even though this volume is tied to the Dead, the discussions of philosophy are serious, not ironic or simplistic. As Gimbel writes:

When you put the words “philosophy” and “Grateful Dead” in the same sentence, you run the risk of invoking precisely that sort of image—vapid, silly statements that collapse into the triviality of something you'd find in a fortune cookie when you take the time to think about it with a sober mind. (xvii)


But trivial this book is not. As Gimbel goes on to say, some of those Deadheads who argued all topics into the wee hours while listening to traded tapes of Dead shows went on to study philosophy seriously. This volume is the result.

The first essay, “Keep Your Day Job? Tie Dyes, Veggie Burritos, and Adam Smith in the Parking Lot” by Gimbel and Brendan Cushing-Daniels, explores the brisk marketplace that surrounded just about every Dead show, even as early as the mid-1970s. What statement about capitalism was made? About the larger corporate culture? As background, they write:

Capitalism began as a far out left-wing notion, as an economics of liberation. In European societies which were agriculture-based with long-standing monarchies, where the property was owned and controlled by a few nobles, but worked by serfs and generation after generation there was not even the possibility of economic, social, or political mobility, the idea that just anyone could make money, and lots of it, was quite radical. (4)


One of the greatest contributions a scholar can make is to provide context for the events of our world—and everything does reflect the past and the broader world, whether those involved know it or not. Someone selling a home-made tie-dyed tee-shirt may have thought they were simply trying to pick up a couple of extra bucks to put some food in the stomach and provide a ticket for the next show. But they were involved in a much larger continuum—we all are.

One of the most unfortunate aspects of the ivory tower is that it concentrates knowledge that should be available to all of us. But there are cracks in the walls, letting some actually useful knowledge escape into the broader public discourse. Socrates may have said that the unexamined life is not worth living, but he might have wanted to add that the examination means nothing without context. And it is books like this one that attempt to provide that for all of us, not simply for fellow academics.

In “Buddhism Through the Eyes of the Dead,” Paul Gass writes:

We cling to a sense of self-identity and believe there is a permanent self or soul that persists not only through this life but through our many reincarnations. But, ultimately, there is nothing there to cling to, and this becomes the root of our suffering. We detect this disjointedness between perception and reality and feel an uneasiness because things are misaligned. (130)


This is no problem just for philosophers, but for all of us, and its implications temper our personal belief systems, whatever they may be. That The Grateful Dead can be used to crystallize discussion should not be surprising: A band with such a wide range of music, one that cared enough about its lyrics to make songwriters essentially part of the band, and that sustained success over decades, could be expected to do no less. Without some sort of real grounding—and not just in the notes—The Grateful Dead would be as forgotten as The 1910 Fruitgum Company.

But the Dead are not forgotten, remaining with us through the music, yes, but also through memories of the experience of listening—live, and through cassette tapes that often surpassed the band's recorded albums in quality.

There's something else that makes The Grateful Dead last where The 1910 Fruitgum Company does not: Judgment of quality. This is a complicated topic for philosophers, as Mary MacLeod shows in “You Don't Need Space,” but significant to the rest of us, too, whether we want it to be or not:

Unless you're a music critic or an art critic, aesthetic judgments may not matter much to you, but moral judgments do, so before you relegate Realism to the trash, you may want to consider recycling. With music, Subjectivism doesn't leave much of a bad taste in your mouth, but with ethics, it does. We don't think that action choices are simply a matter of personal preference. Rather, we think moral errors are possible. There may be no errors in musical taste, but we think people can be wrong in their judgments about the moral status of actions—which kinds are morally right and which kinds wrong. (196-197)


The point is that there is no way of leaving things 'as a matter of taste' without making implied statements about ethical issues, too. Even if you posit a dividing line, you are stuck with defending both the existence of the line and its particular placement.

The book ends, appropriately, with an essay entitled “Death Don't Have No Mercy: On the Metaphysics of Loss and Why We Should Be Grateful for Death” by Ian Duckles and Eric M. Rubenstein. It ends with this:

Though it may be mere coincidence, we can nevertheless see an important respect in which the name “Grateful Dead” makes sense. If we weren't mortal, if we didn't someday die, we wouldn't have the freedom and control over our lives and our values that an authentic confrontation with death provides. Thus, we should be grateful for death, or, at the very least, grateful for our mortality. Without it, we wouldn't be what we are, and we wouldn't be capable of doing the things we can. (238)


There are lessons from any cultural phenomenon as powerful and long-lasting as The Grateful Dead. There certainly are. All it takes is looking to find them. The value of this book is that it opens the exploration to everyone, not simply to scholars hiding away in ivied halls. It is a volume that most any Deadhead will want, something to pull down and browse at three in the morning while “St. Stephen” plays in the background. But it can be of use to others as well. Not only can it open up an 'alien' subculture, but it can also lead to new personal explorations, Dead or no Dead.

Wednesday, January 16, 2008

Why Do I Teach?

Why do I teach? Because I'm not learn'd, not in the sense of Whitman's astronomer. I'm more like his listener, who fidgeted:


Till rising and gliding out, I wander’d off by myself,
In the mystical moist night-air, and from time to time,
Look’d up in perfect silence at the stars.

A lover of the meaning, and not simply the word, I want to walk and lead through the beauty of the word to the beauty of the night.


Why do I teach? I hated school, but love exploring, and felt terribly misunderstood as a student in a classroom that always seemed in agreement, seeing something other than what I was trying to convey:


I was much too far out all my life
And not waving but drowning.

I hate schools, hate the walls of academe, hate that we decide upon knowledge while looking from a distance. But, human, I am drawn to what I hate—to best it, to change it.


Why do I teach? Lust for knowledge and beauty, I had and have, often unfulfilled, but fulfilling in their being and in the knowing of them:


We have lingered in the chambers of the sea
By sea-girls wreathed with seaweed red and brown
Till human voices wake us, and we drown.

But don't be depressed. There's something beyond us, something we don't know, sure. Something we can strive for and yet be warmed even by our failure.


Why do I teach? Because teaching is life, and:


To be alive is power,
Existence in itself,
Without a further function,
Omnipotence enough.

And learning wakes us to it.


Why do I teach? Because:


A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring;
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.

The spring, the source of knowledge and inspiration. And inspiration.


Why do I teach? Perhaps to be more than an individual ecstasy, but part of a continuity:


The body dies; the body's beauty lives,
So evenings die, in their green going,
A wave, interminably flowing.

Teaching makes it impossible to be simply one alone.


Why do I teach? When I watch others learning, I learn, and re-learn myself:


He is learning, well behind his desperate eyes,
The epistemology of loss, how to stand up
Knowing what every man must one day know
And most know many days, how to stand up

I teach to learn to stand up myself.

Inspired by Free Exchange on Campus.

Tuesday, January 15, 2008

Elephants!

As I've written about the elephant that charged me in 1990 and about the death of the small herd that elephant came from, I couldn't resist posting these, pictures of members of that herd, taken within sight of my house in Tambaong, Togo... where, some months later, that one elephant let me walk away:




From Tambaong

Monday, January 14, 2008

Tuaregs

The first Tuaregs I saw were shadowy figures in long robes, sometimes wearing turbans, trying to sell cheap, blunt swords and knives sheathed in leather, cassette cases and other boxes, also leather covered, and jewelry made in imitation of what I would later find to be desert finery. Most of the bars I frequented in Ouagadougou were separated from the street only by low walls (if that), and the Tuaregs would glide up to the table, never making eye contact, laying out their wares until the bar owners shooed them away.


They were lighter skinned than most of the Burkinabe, with hair a lot straighter. Later, I would imagine them somehow related to the Roma of Europe, for they aren’t much darker and are also nomadic. The Tuaregs I saw in Ouaga, or so I was told, could not really go home again—except to purchase more tourist geegaws—for they had abandoned the traditional life.


They had done so, usually, of necessity.


It was a time of drought, or just the end of it. Worldwide attention had been focused on the area just a year earlier, with the Live Aid concerts of 1985. Things were better, but not much more so.


The next March, a friend named Brent and I decided to go up to Tombouctou overland. The Niger river was now too low for the boats, and the first truck of the season—a Land Rover—was going to take up the mail instead, defining the road over what had been, until the week before, submerged land.


The only foreigners in the Land Rover were the two of us and a Japanese man who was planning on buying a camel and joining a caravan across the desert. Everyone else was Malian.


For some reason, I ended up sitting right next to the 55-gallon drum holding spare gasoline. It had not been sealed well; at any good bump, gas sprayed up and over me. Add to that the dust that flew in from the open back, and I was soon quite a sight.


When we got to Niafunké (the home, though I did not know it then, of Ali Farka Touré), two of the Tuaregs motioned for me to follow them. I did, and they took me to a market stall and quickly made a deal for a long swath of loosely woven, synthetic cloth—which I still have.


One of them tied it into a turban around my head, undid it, and ordered me to do it myself. It took a couple of times, but I soon could do myself up “like a Tamachek,” as he said. Back in the truck, I imitated them and covered my face with the loose end of the turban that had hung in front of my left shoulder, draping the rest over my right shoulder and down my back. The two men who had helped me nodded when I had it correctly in place and ignored me for the rest of the journey.


Though the gas still splashed and the dust continued to swirl, my lips and nose were now protected—and the turban proved surprisingly cool.


Over the next week in the desert, I learned to love that turban. I took it back with me to Ouaga, and it returned to Africa with me some years later when I joined Peace Corps. It folds quite small, though it's over two meters long, is light, and takes little space.


While we were in Tombouctou, Brent and I hired a couple of camels and Tuareg guides to visit one of the nomad encampments outside of town. The guide (it was his encampment we went to) said they weren't really nomads any longer, for they had no goats. No more was there vegetation enough to support them away from the wells of the few towns--and they weren't welcome in the towns. He showed us his garden with disgust and resignation, telling us this was no way for a Tuareg to live.


Below is a picture I took at that Tuareg encampment, colored a bit with pencil a couple of years later:



Tuareg resilience is legendary for a reason. People of the desert—the whole desert—they had been parceled into a number of countries, including Mali, Algeria, Niger, and even Burkina Faso, though feeling allegiance to none. Hit hard by the drought of the eighties and stung by governments reluctant to recognize their nomadic ways, it looked like the Tuaregs were a vanishing people. It seemed as though they would drift to the cities, abandoning their traditional lifestyle, selling trinkets and even begging. Yet that is not what happened. Not completely, at least.


For a while, it seemed as though the Tuareg had decided instead to go out in a blaze of gunfire, fighting the governments of Mali and Niger. While that may still happen, the Tuareg are getting smart. They are taking a lesson from those who have been selling pale imitations of their art (the cross of Agadez, as a silver pendant, can now be seen almost anywhere in the world) and are using their culture to reach the rest of the world as something other than supplicants. The desert may no longer be able to support them, but the skilled artisans and musicians among them just might.


And, the Sahara being the Sahara, not many are going to want to move in on them, as they disappear back into the sands, having stocked up on necessities through trade.

Wednesday, January 09, 2008

Again, with the Narrative

Once more, the American commercial news media have created a story and then reported it as “news.”


How long, just how long can this go on?


Polls, the day before the New Hampshire primary, show Hillary Clinton suddenly dropping far behind Barack Obama. Clinton has an emotional moment before the primary. She wins the primary. Suddenly, we have a story, a narrative the news media can get their teeth into. Rather than simply reporting the news, they can create it.


If there is a story at all, here, it is in the unquestioned acceptance of the polls. Looked at rationally, no one in their right mind could have believed that Clinton dropped from a lead to a double-digit deficit in five days, days during which she performed well and during which Obama did nothing spectacular. The polling data should have raised alarm bells, leading the news media to start asking why the polls were proving so volatile in what was—for all the media hoopla over Obama’s win in Iowa—a reasonably stable political situation.


It didn’t. Instead, the news media have latched on to the easy story, the one they can project forward and back in a nice, simple form. The New Hampshire voters are ornery, I heard one commentator say, and refused to be buffaloed by the Iowa results.


Say what? People don’t vote just to send that kind of message. To claim so is to be ridiculous on the face of it. But that’s the kind of narrative that the members of the commercial news media like to build, for it is simple and understandable—silly though it might be, on serious examination.


By uncritically accepting the polling data, the news media were deliberately setting up a number of stories, the one used depending on the actual results. Had Clinton lost, the story would have been of a fall from grace. That she won becomes a “comeback.”


But she was never falling, and had never gone away.


When, oh when, will the commercial news media begin to simply report the news, rather than twisting it into stories that it can manipulate over a series of days or weeks? When, rather than taking on the entertainer’s role, will they recognize that their job should be to find and share information, not to tell stories?

Monday, January 07, 2008

ABC's Digital Correspondents

Last fall, ABC News began an experiment with “digital correspondents,” young reporters who were handed equipment and sent to live in under-covered areas of the world. The program is a cheap way of replacing the old, unwieldy news bureaus that have all but disappeared, and a substitute for “parachute” journalism, where reporters land, get a story, and leave.


“Cheap,” however, does not mean inferior journalism—any more than it means bad movies (Clerks, for example, cost less than $30,000 to make), nor does it mean ABC isn't committing a substantial chunk of change to the program—over a million dollars for the first year. Much more, if it proves a success (as I suspect it will).


The most important factor in journalism is the reporter, the person “on the ground.” The best of these are those who have developed real expertise in the subject matter or location. In terms of foreign news, this is what we have lost over the past twenty years: With the lack of permanent bureaus, foreign correspondents have become grasshoppers (maybe “locusts” would be a better analogy), jumping from place to place and story to story, relying on local stringers and not on their own knowledge for information.


For a corporate entity like ABC News to recognize that it needn't spend tons of money to get quality news from around the world is certainly unusual. What we'll have to see is whether or not ABC is willing to continue to spend what (to it) is a paltry amount of money as the “digital correspondents” learn (or invent) their trade and become familiar with the environments they are covering—something that can take years (months, at least, before they become conversant enough to produce substantial stories). There may be failures in the program—reporters who find they are not able to live for substantial lengths of time in unfamiliar environments, others who simply cannot work with the lack of supervision of “outpost” work—but the idea is sound, as Dana Hughes, the “digital correspondent” in Nairobi, is proving during the current crisis in Kenya.


Judge for yourself. Here's a link to a look at one of her recent reports:
Humanitarian Crisis in Kenya

Sunday, January 06, 2008

The New Hampshire Debates

Last night, for the first time, I watched the debates on television. Aside from the Facebook fooforall and something irrelevant called the “Spin Room,” I even found the presentation reasonable—though the guy from ABC (the host of their evening news—which I never watch) clearly knows little about the lives or incomes of much of America (he raised derisive laughter when he said that two married college professors would be making around $200,000 combined) and has a naïve view of causality in Iraq (he believes that the “surge” is responsible for the drop in violence, that it is “working”).


What I am finding unreasonable is post-debate coverage.


Though there was a little scuffling, I found the Democrats to be a forceful quartet, agreeing on a number of issues and presenting a cohesive vision of the direction our country needs to take. Yet the headline in The New York Times, reflecting most of what I heard on radio last night, was “At Debate, Two Rivals Go After Defiant Clinton.”


Little in the article, or in any other commentary from the mainstream media that I have seen, focused on the fact that we now have an emerging Democratic consensus on where the country should go. On global warming, on health care, on nuclear proliferation, and on Iraq, the four set out programs that differ only in the details. The big difference was Edwards and his desire to take lobbyists out of politics and return the government to the people (Obama agreeing on this point).


In fact, it is this that may prove to be Clinton's Achilles heel: She is perceived as “in bed” with entrenched interests, while Edwards and (to a lesser extent) Obama argue that the only way real change will happen is through real movement away from such “interests.”


The big story of the debate, one that the commercial news media seem to have missed, is that the Democrats, if Obama or Edwards wins the nomination, will have a platform that can be expressed coherently and quickly, built on the long process of refinement through debates and, now, primaries, and that the difference between them and Clinton has more to do with conception of how to govern than with the goals of governing. The debate, for all the little bickering, showed that the party is developing, for the first time in a long, long time, a real series of goals—and ones that will be hard to resist, if one of the Democrats wins the presidency.


As I am not going to be voting for any of the Republicans, I was not able to watch them with the same enthusiasm I had for the Democrats. What was interesting, though, was seeing faces I had only heard before on the radio—I hardly knew what Mitt Romney looked like. Now I have seen his charm, but I also saw him come across as little more than a pretty face in a pack of much more substantial (even if I don't like them that well) candidates. Except for Fred Thompson, who seemed little more than a cypher, the others each seemed to have a well-thought-out vision to get across. Even Giuliani, whom I loathe. You could tell where they stand, and what makes each individual.


What they lack is a feeling of any sort of growing cohesion. Unlike the Democrats, they don't seem to be coming together through their competition. Their bickering, rather than being put aside when the real issues need to come to the fore, seems to dominate—as it did when they went back and forth about the meaning of “amnesty.” They all want to build a wall around the United States, sure, but they could not come to any broad agreement about what to do then, about the “illegals” they would be locking in.


All in all, outside of Thompson and Romney (who came off as lightweights), I gained a little respect for each of the Republican candidates.


They are doomed to lose, though, unless they can imitate what the Democrats are doing (and what Republicans in the past did so well), and that is build a consensus vision of the future.


Personally, I hope they don't.

Friday, January 04, 2008

The "Race and New Media" Conference

On May 3, 2008, the first “Race and New Media” conference will take place on the campus of New York City College of Technology (CUNY) in downtown Brooklyn, close to both the Brooklyn and Manhattan bridges. Though in an academic setting, the conference is intended to draw from the broader community as well, bringing in “outsiders” as both presenters and participants. My colleague Annie Seaton and I are the organizers, and we are exploring a number of ways of expanding the conference beyond college walls. One panel, for example, will be a walking tour of a small section of Brooklyn while relevant topics are discussed.



Brooklyn native (and Harvard PhD candidate/Rockefeller Fellow) Omar Wasow has agreed to provide the keynote talk. Wasow is the co-founder and ongoing strategic advisor to BlackPlanet.com and an on-air technology analyst. Under Wasow's leadership, BlackPlanet.com became the leading website for African Americans, reaching over three million people a month. Wasow also works to demystify technology issues through regular TV and radio segments on NBC's Today Show and public radio's Tavis Smiley Show.



Tentatively, Natasha Dow Schull, Assistant Professor in Science, Technology, and Society at MIT, is also scheduled to speak. Professor Schull has won fellowships from the National Science Foundation, the Alfred P. Sloan Foundation, and the Woodrow Wilson Foundation. Joel Rainey, a cultural historian from the Harvard History Department, has agreed to be another featured speaker.



City Tech physics professor Dr. Reginald Blake, also a Visiting Research Scientist at Brookhaven National Laboratory and director of the CUNY-wide Black Male Initiative (BMI), will provide the opening remarks.



Panel papers will deal with a range of aspects of the relationship between "race" and "new media." Included may be questions like the following: Does race work differently in the "new" media than it did in the "old" media? Network news, for instance, was widely derided as a nearly diversity-free zone. Is the blogosphere different? How do video games, blogs, chat rooms, and other forms of "new" media and "digital" or "virtual" spaces construct or reflect notions of race? What kinds of "new" identities and/or communities exist in these "new" digital spaces? How is new media being used to make connections, to empower communities, and/or to control, colonize, or dominate them? In other words, are there digital forms of "cultural" imperialism? Anyone interested in participating on a panel should send a 250-word proposal to raceandnewmedia@gmail.com.



City Tech students will be drawn in in a number of ways. Students in my Advanced Technical Writing course will be creating and presenting one of the panels; coverage of the conference will be provided by my City Tech journalism students, who will be involved in a cooperative venture with ePluribus Media to supply information about the conference to the wider world. ePMedia will offer mentoring in such things as ethics, fact checking, and editing.



We are open to experimentation of many sorts. Contact us through the raceandnewmedia@gmail.com email address with any suggestions or proposals.

Thursday, January 03, 2008

Looking Back to See

[I've also a post today on Free Exchange on Campus, "Thoughts for the New Year," on faculty responsibility under the umbrella of "academic freedom."]

Sixty-six years ago, the Athenæum Press of Ginn and Company published Readings for Our Times, edited by Harold Blodgett and Burgess James of Union College. As both a cultural historian and a teacher of introductory literature surveys and first-year composition courses, I find it interesting to look back on college readers of earlier times. Assumptions of value change, and new modes of teaching often require new types of texts--and those changes can be traced by a look at what has been cast aside.


The book precedes the split between composition and literature that, I believe, so damages the teaching of both. Perhaps Blodgett and James would agree:


The first business of a college English course—or any other college course, for that matter—is to equip students for the business of living. The immediate objectives of the English teacher are to arouse an intelligent interest in good literature and to train students in effective written expression. If these two things are to be taught by the same teacher, and in the same classroom at the same time, they must somehow be so taught as to strengthen each other. (iii)

When we decided that they needn't be taught together, we somehow also started to assume that they weren't needed, each by each. That was a mistake. The synergy the editors allude to is quite real, as most composition teachers have discovered, providing stimulus, example, and more.


The editors go on to say that:


It is not essential, even in a composition classroom, that all reading should have a vocabulary limited to words and phrases in common use today. Any intelligent reader can substitute in his own mind a current term for one that is obsolete or obsolescent, and hardly lose momentum as he reads. And it is worth while to be reminded that all contemporary writing which is worth preserving owes a vast deal to writing of the past. (iii)

To me, this sounds extraordinarily contemporary. Until recently, there has been an unspoken assumption (growing from the 1960s on) that we can't challenge our introductory students overly, in terms of subject matter, style, or vocabulary. Today, many of us see that our students can stretch beyond their immediate experiences and can be trusted to grapple with texts themselves.


What really interests me, when I pick up an old anthology, is the selection. Who, I want to see, has gone out of style? Who remains? Who, ubiquitous today, was once ignored completely? The answer to these questions can tell a great deal about the cultural shifts from one time to another.


Blodgett and James were aware of this, even as they made their selections—and sometimes felt they had to defend themselves:


When critics make their lists of England's great novelists of recent years, the name of Sir Arthur Conan Doyle [whose “A Study in Scarlet” is included] is likely to be absent. We are reminded of Hardy, Conrad, Wells, Bennett, and Galsworthy—indeed England's greatest. (5).

The first three are still considered “great” (Wells maybe just on the list), but Bennett and Galsworthy and not Forster, Huxley, Waugh, Ford, Lawrence, Maugham, or Greene, all of whom, though younger than Bennett and Galsworthy, were writing successfully before World War II? The question of just why our conception of “great” changes is always intriguing. Satire, sexual intrigue, and suspense, for example, may not have been acceptable parts of the “greats” to the earlier time—though any blanket statement like that, of course, should be met with a great deal of skepticism.


Some of the writers from this book who I rarely see in anthologies today, but miss, are O. Henry, Ring Lardner, Rudyard Kipling, George Ade, Don Marquis, Ogden Nash, Carl Sandburg, Archibald MacLeish, and Sidney Lanier. Many, today, may consider them “lightweights.” I think that's as much a reaction to past popularity as real analysis of their work. Some who will still be found are Porter, Jewett, Hemingway, Frost, Harte, Parker, cummings, Eliot, Dickinson, and Schwartz. Poor F. Scott Fitzgerald is nowhere to be found. Same with Ezra Pound. And James Joyce... who's he?


We like to think that we know what is good, and that it is good for all time. What looking back at a volume like this does for me is remind me that taste does change—and that mine has no more permanence or solid grounding than that of any other. I am reading Wilfrid Sheed's The House that George Built (With a Little Help from Irving, Cole and a Crew of About Fifty) where he posits a golden age of songwriting from the twenties up to the advent of rock and roll. That reminds me of the arguments within the science-fiction community about the genre's golden age. The combatants finally all gave up, resigned to the truth that 'the golden age of science fiction is 13.' Each generation defines new golden ages for its own needs and experiences. To me, the golden age of songwriting is the 1960s when lyrics pushed way past anything the “songbook” writers imagined. But that's as open to argument as the idea that the Edwardians Arnold Bennett and John Galsworthy deserve mention as recent English greats in 1942 while E. M. Forster does not.

Wednesday, January 02, 2008

Kenya... Whither Goest?

My interest in Africa began at the age of eleven, when I met a Kenyan who was studying at Antioch College. Alphonse Okuku was living with the Ernest Morgan family, founders of the little boarding school I was attending, The Arthur Morgan School (named for Ernest's famous and influential father). Ernest and Elizabeth's son Lee brought Alphonse down to the school sometime in the fall of 1963. I was fascinated by the young man, who seemed so idealistic and pure.


The next spring, perhaps having taken note of my fascination with Alphonse, another Antioch student, one of those working at the school as part of their college studies, gave me a book, Alan Paton's Cry, the Beloved Country. I took it with me as my family left for a year in Thailand—and devoured it.


Over the next few years, other books on Africa came my way, including Jean-Pierre Hallet's Congo Kitabu and a book on ballooning over Kenya, Anthony Smith's Jambo. But my interest in Africa eventually waned—there was growing turmoil at home. I did take note, however, of the assassination of Alphonse's brother, Tom Mboya, in 1969. A Pan-Africanist and independence leader, Mboya was of the generation that had come to adulthood after World War II in Africa, many of them educated abroad and bringing their skills back home to countries just beginning to try to take control of their own destinies.


I didn't know it, but Alphonse had been with his brother just minutes before the assassination.


They had been close—and I've been told that Alphonse looked very like his brother. They grew up about an hour from Nairobi, in Thika, made famous in Europe and America by Isak Dinesen's Out of Africa and Elspeth Huxley's The Flame Trees of Thika. They were Luos, though, from a family originating on Rusinga Island in Lake Victoria, not far from Homa Bay. Mboya is buried there, in a mausoleum meant to celebrate the promise he exemplified.


Alphonse took me there in 1990—and is now buried there.


Though my interest in Africa had been eclipsed by, well, the 1960s, it reasserted itself in the 1980s, thanks to a girlfriend who dumped me in favor of a Peace Corps posting in Benin. In a vain attempt to rekindle the relationship, I spent a month visiting her in 1984. Though the affair had died, my fascination with the continent was rekindled.


So, a little more than a year later, I arrived in Ouagadougou, Burkina Faso, as a Fulbright lecturer. I loved it there, and signed up for a second year. On my return to the United States, all I could think of was getting back to Africa, so I followed my ex-girlfriend once more, joining the Peace Corps, which sent me to Togo, a country bordering both Benin and Burkina Faso.


While there, I began to wonder about Alphonse—and looked him up during one of my trips to Lomé (the country's capital), finding that he had gone into politics, even becoming a member of Daniel arap Moi's cabinet. I found an address for him and wrote him.


He wrote back, inviting me to visit him if I ever crossed the continent to Kenya.


And so I did.


By the time I arrived in 1990, Alphonse had left government. He told me that KANU (the Kenya African National Union—arap Moi's political party) ran in his veins but that he just wanted to attend to his new business, a lumber concern assisted by a land-usage grant from the government. Political tension was high: A Luo politician had died in an auto “accident” shortly before I arrived in Kisumu, where I met Alphonse's wife before leaving with him to visit Rusinga and see his new business.


Alphonse impressed me quite a bit. He had worked for the UN for years before entering Kenyan politics, was clearly a kind man, and was knowledgeable about world and development issues. I didn't say so, but I regretted that he was out of politics—and suspected his own Luo heritage had something to do with it.


The only thing I didn't like about Alphonse was his driving. He had a little Isuzu SUV that he spun along dirt tracks at outrageous speed, leaving my knuckles white.


The political corruption of Kenya was quite clear during my visit, and ethnic rifts were obvious causes for concern. But the country is beautiful and bountiful. If it continued to produce men like Alphonse, I thought as I left, it would probably make its way. Sure, Alphonse was taking advantage of political privilege for his business and, yes, he responded to the strengths of his ethnic ties, but Alphonse was a Kenyan first. Thika, after all, is Kikuyu country—and his wife was from Mombasa. I doubt she could have been Luo. Compared to the West African countries where I had spent most of the last five years, however, Kenya seemed poised for an explosion of success.


But hope, all too often, gets overtaken by events. In 1994, Alphonse died in an auto accident. It may have been a simple accident (Alphonse's driving, after all, made every experience with him at the wheel an adventure), but he had recently broken with KANU, joining a Luo-based political party. As “accidents” have been a tried-and-true way of removing opposition in Kenya, the timing of his death so soon after his political “conversion” raises questions, to say the least. Whatever the cause of the accident, however, Kenya lost another of the leaders with the ability and dedication necessary for changing the course towards self-destruction it seemed to have decided upon.


Seems to have decided upon.


Today, they are saying that hundreds have been killed in the disturbances following contested election results. In Eldoret yesterday, a church where Kikuyus had gathered for protection in that Luo-dominated town was torched, burning dozens to death and raising specters of Rwanda.


I don't know if there is much we in America can do to stop the violence and change Kenya's course, but we have stood by far too often over the years since I moved back to the United States. Liberia, Sierra Leone, Rwanda, Sudan, Congo, Zimbabwe, and more: the situations in Africa continue to devolve, though there are bright spots (Benin and Mali, for example).


Hand-wringing of the sort I am doing right now, as I listen to what is happening in Kenya and dread what might happen tomorrow, will do no good. I know that. But if enough of us make it known that we are paying attention, it might become clear that our governments will have to pay attention as well, putting pressure on the Kenyan government to take the radical action that may be the only way to save the country. Possibly, a continuation of the crisis can be averted.


Maybe.


Do what you can, even if it is as little as a blog post.

Tuesday, January 01, 2008

Value and Choice

MIRANDA: O, wonder!
How many goodly creatures are there here!
How beauteous mankind is! O brave new world
That has such people in't!

PROSPERO: 'Tis new to thee.
--Shakespeare, The Tempest

What follows, coming from one who has been involved online since before the World Wide Web, may seem rather strange. Just remember, though, what I'm talking about is not the value of the Internet but the value of choice.


On NPR's All Things Considered today, “futurist” Syd Mead (known to me for his contribution to the “look” of Blade Runner) predicted microchip implants in humans. He's not particularly comfortable with the idea—nor am I.


Yet just imagine it: Recipients of the implants would have “freedoms” that, today, we can hardly begin to comprehend. They could go into stores and, if their bank balances were healthy, simply pick up items and walk out with them. They could avoid almost all lines, a built-in “EZ Pass” zipping them through every time—even onto airplanes. And more: Like I said, we can't even imagine the advantages of such an implant any more than we really imagined the Web, twenty years ago.


Evangelists for the implants will extol their wonder, urging those without to cross the “implant divide” just as we do today, with the “digital divide.” Programs will be instituted in the schools, bringing children from deprived backgrounds their own microchips. Aid will be given to poor countries, a modified, cheaper, and somewhat restricted version of the implant provided so that people from the “developing world” are not kept on the “outside.”


A “consensus” having evolved among the rich, the educated, and the insiders that the implants, on balance, make human life better, no one will ask whether everyone really wants to be part of this new web. Well, in the developed countries, opting out will be tolerated: eccentrics, back-to-the-land folk wishing to live off the grid. But the poor? They need to be pulled in—for their own good.


The idea that the poor, or those living in the poorest countries, should make their own decision about whether to join in this new network would not really be considered. They don't have enough information, anyway. They need these things. Those who have dropped out have made their own decisions, but the poor, never having dropped in, needn't be granted that option.


I don't need to list the dangers that such implants will entail—you can imagine them for yourself quite easily. Probably (like me), you shudder at the very thought. Implants will make freedom contingent on the sufferance of whoever controls the central processing. Our freedom is limited enough today; with implants, it will become nothing more than illusion.


We neterate folk generally shake our heads in pity at the sight of those who have not yet crossed today's digital divide. We see only that those who don't join us are in danger of being left behind. We don't see the dangers that the Web itself represents, dangers just as real as those the implants will present. “We can handle it,” we believe. But can we really? And should we be making the decision about the value of the Web for others, some of whom might prefer to decide that they would rather not join in?


Or have their children join in.


When we start exporting our belief in the value of our digital world through things like Nicholas Negroponte's One Laptop Per Child (OLPC) program, we've made the decision about joining in for families and communities, not allowing them the choice that really should be their own. Some make the elitist argument that the people in the communities OLPC is designed to help are in no position to make an informed choice on their own—but that's not really true today, not anywhere in the world. The poor can opt in or out just as intelligently as your cousin who sold everything and moved to northern Arizona to live “off the grid.”


Though I love the Web, and see it as a ray of hope for the future, I want to entice people into it through interest, not force them into it by herding. When we talk of the “digital divide,” we should be discussing ways of making it easy to cross, not developing means of shoving people across. We need to convince people, not push them.


Why? So that, fifteen years from now, I can feel I am making my own decision, my own choice, when faced with getting an implant or not.