Richard Beck

Spring 2009


 



In 1964, two years after the publication of his debut novel, *One Flew Over the Cuckoo’s Nest,* Ken Kesey left California on a cross-country road trip. He took recording equipment, hallucinogenic drugs, and a dozen friends. Kesey drove east in part to escape the loneliness of novel writing, but he was also putting both physical and symbolic space between himself and the creative writing program at Stanford, from which he had graduated in 1962. He took his education with him, though: the Merry Pranksters drove a school bus.



There is a provocative irony in the fact that *One Flew Over the Cuckoo’s Nest*—one of the twentieth century’s loudest anti-institutional novels—was written for class credit. That irony, Mark McGurl writes in his book *The Program Era: Postwar Fiction and the Rise of Creative Writing,* lies at the heart of the last half-century of American fiction.



McGurl’s argument that the advent of creative writing “stands as the most important event in postwar American literary history” is an important one, and it is going to be controversial. The first graduate-level creative writing program began in 1936, and there are now more than three hundred of them scattered throughout the country, with some four hundred more programs offering degrees to undergraduates. Admissions are competitive; as McGurl writes, students love creative writing “suspiciously much.” The estimated success rate for creative writing graduates—that is, how many actually go on to write for a living—is roughly one percent. (It is ninety percent for medical school graduates.) Each year, thousands of students voluntarily put themselves in debt for what amounts to a slightly modified extension of their college education. It is hard to imagine what the country’s literature would look like without it.



***



Anyone can write alone for free on weeknights; what creative writing students go broke for is the workshop. We think of writing as a private struggle, but that is now out of date. Creative writing is both social and theatrical. Seated at a large seminar table with a dozen peers, students receive advice, respond, and debate the wording of a particular line, all under the parental gaze of the creative writing instructor. McGurl writes that the workshop’s sociability is alternately “supportive and savage.” Like colleges, creative writing programs always present themselves as nurturing communities, but students know that every word and attitude is subject to brutal scrutiny. Their teachers have been happy to dish it out. Flannery O’Connor believed that teaching was a negative exercise: “We can learn how *not* to write.”



O’Connor graduated from the country’s most prestigious writing program, which was also the first. The Iowa Writers’ Workshop began in 1936, and it was a long time in the making. The clearest point of origin for creative writing as an academic discipline is Barrett Wendell’s course in Advanced English Composition, which he began giving at Harvard in 1884. Students who had previously been asked to compose themes on given topics—“Can the immortality of the soul be proved,” say—were now asked to write daily assignments on whatever they wanted, the only criteria being “that the subject shall be a matter of observation during the day when it is written . . . and that the style shall be fluent and agreeable.” With his Vandyke beard, walking stick, and spats, Wendell was not only a teacher of creative expression but also a charismatic model of creative *being.* By its second year, his course was attracting one hundred and fifty students.



Wendell described his class as an “educational experiment,” and within thirty years his formulation could be found at the heart of the progressive movement in American education. In 1894, a thirty-five-year-old philosophy professor named John Dewey arrived at the University of Chicago. The Pullman Strike, which was taking place at the time, brought him into contact with the social scientist Jane Addams. One of the goals of Addams’ social settlement Hull-House, founded in 1889, was to help prepare the poorest members of Chicago’s rapidly expanding immigrant communities for life as Americans. Addams frequently found that the residents’ ideas for guest lecturers and other educational programs were more useful than her own. Hull-House became a model for collaboration and pluralism that would resonate throughout the first half of the twentieth century.



By 1894, Hull-House was resonating with Dewey as well. “There is an image of a school growing up in my mind all the time,” he wrote. “A school where some actual & literal constructive activity shall be the centre & source of the whole thing.” Two years later, Dewey founded his own educational experiment, the University Elementary School of the University of Chicago, which would eventually become the still-famous Laboratory School. The children there did not learn anything that they did not also do. They cooked lunch, for example, which provided an occasion for teaching arithmetic by requiring students to weigh and measure ingredients, and they also built little smelters and worked with iron. One of the consequences of Dewey’s experiments was the conceptual marriage of learning and doing, and his ideas on education, in slightly modified versions, are how creative writing programs explain and justify themselves. They are literature laboratories, and stories are experiments in creativity.



It wasn’t until the decades following World War II, when the student populations of American universities began to resemble the ethnically diverse residents of Hull-House, that creative writing really found its place. In 1946, the President’s Commission on Higher Education advocated a radically expanded public role for the university in American life. The GI Bill sent millions to college, presenting universities with what Paul Buck would describe in *General Education in a Free Society* as an “unimaginably varied” set of tasks: “How can general education be so adapted to different ages and, above all, differing abilities and outlooks, that it can appeal deeply to each?” Creative writing, with its emphasis on ways of knowing instead of bodies of knowledge, was part of the answer, and it was folded into the expanding multiversity.



Across the American cultural landscape, the value of creativity was appreciating. Throughout the 1950s, a C.I.A.-funded organization called the Congress for Cultural Freedom mounted exhibitions of Abstract Expressionist art across Western Europe, where they were intended to dazzle the nearby Soviets. The painter-cowboy Jackson Pollock, a drunken monument to the very idea of the lone hero-artist, led the charge. We think of the 50s as the great era of American conformity, but no other decade did more to make individual artistic expression a specifically American value. By 1961, creativity had secured its place as a public good; Raymond Williams wrote that “No word in English carries a more consistently positive reference than ‘creative.’” He was right (and still might be). In the 1960s, the number of graduate writing programs increased tenfold.



***



Paul Engle was a poet, editor, and translator, but he is mostly remembered for the Iowa Writers’ Workshop, which he ran for twenty-four years. He believed that “good poets, like good hybrid corn, are both born and made.” Others have had doubts about “made.” Flannery O’Connor wrote that “the ability to create life with words is essentially a gift.” That’s not very encouraging, and it gets worse. Philip Roth, who taught at Iowa in the 60s, believed that one of the creative writing instructor’s responsibilities is to “discourage those without talent.” In the most radical critiques, the creative writing program actually smothers talent rather than encouraging it. What the Iowa Workshop lacks in creativity, wrote Nelson Algren, the leftist novelist and one-time lover of Simone de Beauvoir, it makes up in “quietivity.” Like many people who worry that creative writing programs are useless or even harmful, Algren taught at a creative writing program. (He is also rumored to have lost some $35,000 playing poker in his spare time on the Iowa plains.)



Throughout the last half-century, two American writers have embodied the two halves of Engle’s equation for good fiction: Ernest Hemingway (made) and William Faulkner (born). The *Des Moines Register* once published a photograph of Engle typing away with a whip curled within easy reach, and one imagines that when he used the whip it was to make students write more like Hemingway. One creative writing commonplace is “Show, don’t tell,” and Hemingway, whose characters seem to live by the even simpler motto, “Don’t tell—drink,” is the patron saint of well-disciplined fiction. No genre fits creative writing as well as the minimalist short story—in large part because it is possible to cover the whole thing in a two-hour seminar—and Hemingway lore has it that he once wrote a complete short story with only *six* words: “For sale: baby shoes, never worn.” While Hemingway never enrolled in college, he believed that the best way to improve was to put in the hours, to simply go away and write. He sent himself away to a kind of MFA in miniature in 1921, taking private tutorials in Paris from Sherwood Anderson and Gertrude Stein. (Frank Conroy once called his own MFA experience “sort of a Fascist version” of the Lost Generation’s European heyday.)



Even as the Program era was turning out accomplished minimalists like Raymond Carver, however, it was also preserving the Faulknerian impetus to let loose. With less than one full year of post-secondary education, Faulkner was the ideal model of self-realized individual talent, and his legacy is at the heart of another Program era cliché: “Find your voice.” As identity politics began to take shape in the 60s and 70s, a flood of previously underrepresented ethnic, racial, and social voices began to make themselves heard. Philip Roth was one of them; his *Portnoy’s Complaint,* an electrified current of neurotic sexual hysteria, ends by wordlessly asserting its protagonist’s voice:



"Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa



aaaaaaaaaaaaaaaaaaaahhhh!!!!"    



Roth was yelling on behalf of Jewish maleness, but other groups took inspiration from his example. In 1975, Anchor Books published *AIIIEEEE! An Anthology of Asian-American Writers*.



A writer can find too much of his voice. Thomas Wolfe, who studied playwriting at Harvard before beginning to produce enormous autobiographical novels in 1929, described himself in the University of North Carolina yearbook as a “young Shakespeare,” and also a “genius.” The Hemingway-Faulkner dialectic usually finds a more comfortable middle ground, resulting in what McGurl calls “fine writing,” the lively but not overbearing prose of John Updike, among others. It is precisely the tasteful brilliance of Updike’s style that made him a kind of father figure to an entire generation of Program writers. Fine writing is the quintessential Program style, and Updike is the quintessential fine writer. It’s funny that he never attended a program himself.



Another quintessential Program writer who never actually enrolled in one is Joyce Carol Oates, who now teaches creative writing at Princeton. Updike is frequently called “prolific.” He has no idea. Oates has published thirty-six novels, with three more on the way in the next year or so. There are also thirty-three collections of short stories, as well as eight more novels written under the pseudonym Rosamond Smith, three written as Lauren Kelly, and eight novellas. Twelve volumes of essays and criticism, eight plays, and ten books of poetry. When she was first discovering her own poetic voice, she once produced twenty-eight poems—“some of them rather long,” according to her biographer—in one day. In 1982, Oates published her fifteenth novel, which James Wolcott reviewed with an article titled “Stop Me Before I Write Again: Six Hundred More Pages by Joyce Carol Oates.” She seems to write the way other people blog.



Writing is easy for Oates—some think *too* easy. “The novels of most so-called serious writers are usually exercises of craft and care,” writes Wolcott, before going on to describe *A Bloodsmoor Romance* as “word-goop.” The term “genre fiction,” always a pejorative in a serious literary setting, sometimes hovers around Oates’ name. Isn’t it supposed to be difficult, even for geniuses?



The idea that creativity stops being creative if it happens too regularly: now *there’s* something to make people uncomfortable with creative writing programs. Oates, who has turned out not just a large body of work but an extraordinarily varied body of work, is like her own private Iowa, and she helps to dismantle the myth that the production of art is anything other than a completely predictable human activity. That predictability is what keeps critics in business: there will always be something to write about. This is not to say that the name on the spine doesn’t matter—as McGurl writes, Oates’ output is “inimitable”—but just that individual and institution are not mutually exclusive terms. Kesey would have told you otherwise, but think how many teachers have used *Cuckoo’s Nest* to lure adolescent rebels back into the educational fold. After his merry trip east and a stint in jail, Kesey returned to the family farm in Pleasant Hill, Oregon. In 1987, he went back to his alma mater to teach creative writing.



Applicants to creative writing programs offer to institutionalize themselves in exchange for the chance to flower as individuals. All institutions of higher education now make this promise, to a greater or lesser extent: You can be whatever you want. Nothing is closed to you. Growing up, I was told so often that I was unique, that I could choose any career path, that the opportunity began to look more like an assignment. Colleges do not just open the door to self-realization. They make you walk through it. As an undergraduate, I have been told more times than I can count that my college learning experience is not limited to the books I read and the essays I write, but sometimes a learning experience limited to books and essays has sounded pretty attractive. Engle once dreamed that he was an inmate, and that Iowa was a concentration camp.



That may be a little melodramatic, but McGurl’s insight is to see what Engle’s anxiety has done to American fiction in the last half-century. Again and again, postwar novelists have written institutional allegories, thinly veiled retellings of their own MFA days. “No sooner did American institutions (in many cases begrudgingly) open their doors to outsiders of various kinds,” McGurl writes, “than these newly minted insiders often wanted to get back out, if only in a spiritual sense.” It’s why we have the campus novel, the usually satiric portrait of university life; writers like to remind themselves of the possibility of outside, that they can transcend the university that hands them a paycheck. And yet the reality of institutionalized creative writing—with its deadlines, workshops, and departmental meetings—does not melt away and disappear in the face of publication, and vague dreams of freedom haunt postwar fiction. “Hardly Big Nurse,” Kesey wrote of his Stanford creative writing seminar. “But hardly City Lights Bookstore up the freeway, either.”



Some are worried by the fact that creative writing happens at a university, but they are missing the point. The issue is what doesn’t happen at a university, and the answer is, “increasingly little.” In 1959, George Stoddard, then the dean of the New York University School of Education, observed that “slowly we are becoming a nation of college alumni, as we are already one of high school graduates.” That process continues today. Some of those college alumni will produce the next half-century of great fiction, but it’s impossible to predict which ones. Likewise, an MFA promises only that you’ll write well, not that you’ll make it onto future syllabi. “Not even in America,” wrote John Barth, “can one major in Towering Literary Artistry.”



 



Commencement 2009


In 1930, after receiving dozens of letters complaining about their “ultra-modern” radio broadcasts, the British Broadcasting Corporation published a set of listening instructions in *Radio Times*:



 



Listen as carefully at home as you do in a theatre or concert hall. You can’t get the best out of a programme if your mind is wandering, or if you are playing bridge or reading. Give it your full attention. Try turning out the lights so that your eye is not caught by familiar objects in the room. Your imagination will be twice as vivid.



 



If you only listen with half an ear you haven’t got a quarter of a right to criticize.



 



Operating under the direction of Edward Clark, the BBC had been doing its best to bring the musical avant garde to the masses, programming brutal dodecaphonic operas alongside stand-up comedians and patriotic marches. Lord Reith, who had founded the BBC after replying to an ad in *The Morning Post*, believed that audiences would accept new musical works with “comparatively little effort.” He was wrong. Complaints continued to pour in, and before long the BBC was lashing out. “Many of you have not even begun to master the art of listening,” a programming director wrote in *Radio Times*. “You have not even begun to try.” In 1936, the BBC gave up the fight. Tonality returned to the airwaves.



The situation is not much different today. As David Stubbs notes in his new book *Fear of Music: Why People Get Rothko but Don’t Get Stockhausen*, avant-garde and experimental music remain cultural punch-lines. Starting somewhere around 1907, when Arnold Schoenberg began to overhaul Western tonality, compositional music completely abandoned the theoretical anchors that had grounded it for centuries. It is impossible to overstate just how radical a break this was. The composer Anton Webern was not exaggerating when he gloated over tonality’s corpse: “We broke its neck.”



Twentieth-century modernism had other casualties, though, including the visible world in visual art. In the space of about fifty years, representation completely broke down. By the 1950s, Jackson Pollock and the Abstract Expressionists had made New York—not Paris—the cultural capital of the West, and today their paintings bring in tens of millions of dollars. They are dorm room posters. Stubbs’ question is a really good one: “Why has avant-garde music failed to attain the audience, the cachet, the legitimacy of its visual equivalent?”



 



*Fear of Music* is an intelligent book, but it doesn’t deliver on the promise of its subtitle, and part of the reason is Stubbs’ idea of what qualifies as an equivalent avant-garde. The visual arts had seen Impressionist painters rubbing at the textures of representation throughout the late nineteenth century, but it wasn’t until Picasso that the structures began to crumble. Once you look at a woman and see an arrangement of geometries, you are no longer simply seeing the woman. You are seeing your own seeing, and from Picasso to Pollock seeing itself became the proper subject of painting. In 1958, the artist and critic Allan Kaprow wrote that Pollock had made it possible for painting to confront the senses—and therefore life—directly. Not to see things. Just to see.



Modernist music was after directness as well, but not through any new abstraction. Western music was abstract already, which was an important element of its status. Musical mimesis—flute trills for bird calls, for example—was lowbrow, and composers who did use the devices of program music usually ended up defending themselves. Beethoven insisted that his Pastoral Symphony was not programmatic: it was “more an expression of feeling than tone-painting.” This doesn’t prevent us from hearing the brook in the second movement, which Beethoven titled “By the Brook,” but it shows us how music in the West made a hierarchy of itself.



What modernism ended in music was the idea that music consisted of organized pitches, tones vibrating at particular frequencies that could be written down and then performed by any musician capable of reading the language. In 1913, the painter and composer Luigi Russolo laid out the new criterion: “[Music] comes ever closer to the noise-sound.” Russolo believed that the best model for modern listening was the battlefield, a place where the ear is much more privileged than it is in daily life. “From noise,” he wrote, “the different calibers of grenades and shrapnels can be known even before they explode…There is no movement or activity that is not revealed by noise.” Decades later, John Cage would agree: “It had been clear from the beginning that what was needed was a music based on noise, on noise’s lawlessness.”



Lawlessness is a good word for it. When we talk about sounds, we are talking about the domesticated part of the audible world. Sounds have names. They can be controlled by the people who hear them. They are often nice to listen to. Noises, though, only refer to themselves, and they are what we would rather not be hearing. Until the end of the nineteenth century, “musical” meant pitched sounds produced by certain kinds of instruments harmonizing with one another in certain ways. What modernism built was a much bigger tent of musical sounds—it let the noises in.



Looking to replicate war in the concert hall, Russolo designed and built his own instruments. None of his originals survived, but his diagrams show large wooden boxes with a metal cylinder on the side. One of them was called “the roarer.” Another one was “the scraper.” The moans, whirrs, skronks, and thuds that came out of them would have been impossible to write down using traditional notation, but what helped noise along in the first half of the twentieth century was the rapidly increasing sophistication of recording technology. The phonograph and the studio opened up the very textures of sound itself to composition, and they also cut out the middleman—the performer—allowing composers to exercise complete control over the music they wrote. For all the talk of “revolution” and “opening up” that accompanies the musical avant garde, the composers themselves were often domineering types. A musician in rehearsal once asked Karlheinz Stockhausen, who claimed to have visited the star Sirius, “How will I know when I am playing in the rhythm of the universe?” “I will tell you,” he replied.



The final triumph of noise in music was announced by silence. John Cage’s most widely known and most radical composition is *4’33”*, in which the performer sits at the instrument mute and motionless for the period of time identified by the title. (There are at least six recordings available. I got mine on iTunes, with Wayne Marshall at the piano. It’s a good performance.) Of course, the world is never completely silent, and so what Cage’s piece highlights are the ambient sounds in the concert hall, which turns out to be a noisy place even when people are doing their best to be quiet. Whether *4’33”* opens your ears to the music of everyday life is a matter of taste. What’s simply fact, though, is that for the first time the idea of music had been inflated to encompass the whole sphere of audible phenomena. All noises became sounds.



Noise has created a lot of confusion among music historians, and it partially explains why Stubbs gets his equivalences wrong. For roughly five hundred years, the history and development of Western music had been charted along the lines of tonality. Instrumentation, industry, and aesthetics all played important roles, but the ever-changing list of permissible tones and harmonic progressions is what made it possible to connect Palestrina to Handel to Bach to Mozart to Beethoven to Schubert to Wagner to Strauss to Schoenberg. Cage broke that causal chain for good, and so a lot of people thinking and writing about the development of music in the last half-century really aren’t sure what it is they mean by “development” (you can count me in with that group). Developing, yes, but with respect to what?



 



Institutions can help to make these histories make sense. Stubbs, who is British, opens *Fear of Music* in the Tate Modern, instinctively—if not explicitly—recognizing that modern art would be completely impossible without the museums that house it. People on weekend trips to New York don’t go to de Kooning’s *Woman I*. They go to MoMA, where they will pay their respects to whatever the curators have put on display—or make quiet, cautious criticisms of it. As Stubbs writes, the crowds wandering Britain’s second most popular tourist destination (the British Museum wins by about one million visitors per year) look “rapt with boredom.” And still they come by the thousands. The Tate is what lends the works in its collection their cultural power. It is appropriately housed in a former power plant.



The modern museum era began in 1929, when Abby Rockefeller, Lillie Bliss, and Mary Quinn Sullivan—“the adamantine ladies,” as they came to be known in New York society circles—thought up the Museum of Modern Art. It opened on November 7th, nine days after the stock market crash. It moved from place to place in its early years, but in 1940 MoMA secured its position as the authority on modern art with an enormously successful Picasso retrospective, the first ever in the U.S. By the end of World War II, the museum’s authority was beyond dispute.



By that point the museum had also begun to collect a large number of works by Abstract Expressionist painters: Gorky, Calder, Stella, Motherwell, Gottlieb. In 1944, it sold off some nineteenth-century paintings in order to buy Pollocks. The meteoric celebrity of these avant-garde painters looks surprising until you realize just who was working on their behalf. It was Alfred Barr, MoMA’s founding director and later the director of its Museum Collections, who persuaded the publisher Henry Luce to put Jackson Pollock in the pages of *Life* magazine. The profile, with its full-page photos of Pollock straddling the canvas, paint flying off his brush in electrified currents, turned him into a culture hero: America’s manly painter-cowboy.



The connection between modern art and a healthy American virility is everywhere in this period. One book on American museums, written in 1948, addressed the “problem” of modern art in this way:



 



How easy it is to see why a forward-looking period like ours should have an immense interest in its own image as projected by modern art. The term applies to all the products of the modern period, good and bad. (To be sure, when they are very bad, we simply do not classify them as art, just as we do not think of mentally or morally defective persons as ‘Americans,’ although, by nationality, they are that.)



 



As if the financial and cultural support of the world’s most powerful modern art museum were not help enough, avant-garde visual art was also heavily promoted by the US government. As the Cold War intensified, the CIA and State Department developed a vast cultural propaganda operation that Alfred Barr described as “benevolent propaganda for foreign intelligentsia.” The centerpiece of this campaign was the Congress for Cultural Freedom. As Frances Stonor Saunders writes in *The Cultural Cold War: The CIA and the World of Arts and Letters*, the CCF became involved in almost every sphere of American arts and letters: “journals, books, conferences, seminars, art exhibitions, concerts, awards.” It is missing the point to ask who was involved with the CCF because everybody was involved with the CCF. In the span of a few years, the government successfully weaponized American art.



One of the campaign’s goals was to cement the link between artistic freedom and American ideology. At the 1952 opening of “Masterpieces of Twentieth-Century Painting,” a CIA-funded exhibition in Paris, James Johnson Sweeney said, “On display will be masterpieces that could not have been created nor whose exhibition would be allowed by such totalitarian regimes as Nazi Germany or present-day Soviet Russia.” The show exhibited works by European painters like Matisse, Cézanne, and Kandinsky, but everything on display was owned by American individuals and institutions. The implications were clear: the cultural center of the West was no longer Paris. It was New York. With Pollock’s primal works at the leading edge, the American assault on European sensibilities was a huge success. Some European viewers were not so much impressed as terrified. “This strength, displayed in the frenzy of a total freedom, seems a really dangerous tide,” one Barcelona critic wrote. “Our own abstract painters, all the ‘informal’ European artists, seem pygmies before the disturbing power of these unchained giants.” Sweeney argued that modern art, with its thrashing individualism, “is useless for the dictator’s propaganda.” It turned out to be just great for the President’s.



Music had its own role to play in the American culture campaign. (It may have been a Cold War, but as Saunders notes, it was a total war.) The Masterpieces of the Twentieth Century festival had a musical segment as well, and more than seventy twentieth-century composers saw their work performed there: Berg, Schoenberg, Hindemith, Debussy, Mahler, Bartók, Poulenc, Copland. Much of the CCF’s musical programming was organized by Nicolas Nabokov, a handsome Russian émigré who circulated freely among friends, wives, and political agendas. Nabokov had been working for the government since 1945, when, as an employee in the music wing of the Information Control Division, he was told to “establish good psychological and cultural weapons with which to destroy Nazism.” It was a line of work that suited him, and he handled the transition from Nazis to Reds with ease. In 1954, he organized the International Conference of Twentieth Century Music in Rome, with a heavy emphasis on atonal composition. But while similar festivals of visual art generally drew enthusiastic reactions from audiences and critics, the music was not so warmly received. “We were deferential,” Susan Sontag wrote. “We knew we were supposed to appreciate ugly music.”



It is hard to imagine Susan Sontag talking about Mark Rothko with such dismissive condescension, but she is hardly the only cultural figure to talk down to experimental music. Norman Mailer once stumbled into a performance by the free jazz musician Sun Ra and his “Arkestra.” Sun Ra, working in a well-established tradition of black music which has been identified as “afro-futurism,” regularly claimed to have visited the planet Saturn. Stubbs and others have written eloquently about Ra’s radical brand of musical freedom, but Mailer credited the concert only with clearing out his head cold. He then called the music “strangely horrible,” which is even more damning than just “horrible.”



What Sontag and Mailer needed was an institution that could make sense of what they were hearing. The bad luck for them—and especially for composers—was that the American musical avant-gardists were almost all post-war figures, and this is exactly when the power and influence of the symphony orchestra began to wane. Throughout the second half of the twentieth century, orchestras have struggled with increasing desperation to maintain funding and relevance, and their mostly cosmetic efforts have failed. In the mid-nineties, London’s Royal Philharmonic announced big plans to update its image. An article in *The Independent* reported that concerts would include a video screen with close-up images of the conductor’s face, “drama, lasers, and maybe a camera right down the clarinet.” What the spokesman didn’t mention was any plan to change the music, which is nothing new. By the end of the nineteenth century, symphony orchestras had already settled on programming that heavily favored Classical and Romantic composers like Mozart and Beethoven. Most major orchestras today make some concession to twentieth-century experimental music, but those works are very rarely the centerpiece of a given performance. Many orchestras give off the impression of wishing that the last sixty years had simply never happened.



It was a different story in the nineteenth century. The orchestra as we know it today began to take shape in 1842, when the New York and Vienna Philharmonics were founded. Before then, the majority of orchestras were basically house bands, funded by and held accountable to monarchs and aristocrats. As the aristocracy crumbled in the wake of industrialization, however, orchestras became civic institutions, and they grew right along with the cities that provided the money. By the turn of the century, the best orchestras were internationally famous institutions. It became possible to distinguish between two groups playing the same piece. People talked about “the Philadelphia sound.”



Fame and funding invested orchestras with the power to direct musical taste. In *The Rest Is Noise: Listening to the Twentieth Century*, the critic Alex Ross evocatively depicts the respect accorded to famous composers by European audiences. In Vienna, Gustav Mahler could not walk down the street without cab drivers whispering excitedly to their passengers, and when Richard Wagner presented the first complete performance of his massive Ring cycle, the audience included emperors from two continents. The celebrity status of classical musicians was an American phenomenon as well. When the celebrated tenor Enrico Caruso was arrested for groping the wife of a baseball player in New York’s Central Park Zoo, it made front pages around the country. Caruso said that a monkey did it.



Like MoMA does today, orchestras at the turn of the century helped audiences to make sense of a hundred years of instrumental music. That the orchestra was not equipped to make sense of the post-war musical avant garde has as much to do with unfortunate timing as anything else. Institutions succeed by cultivating relationships with the right group of individuals. Beethoven, Wagner, Strauss—these people needed orchestras, and orchestras have needed them ever since. Likewise, Pollock and Rothko are foundations of the modern art museum. The real question isn’t why orchestras failed to embrace John Cage. That’s easy: they were already committed to the nineteenth-century composers who had made them famous. John Cage was opposed to that kind of music in every possible way. It is hard to let go of a first love.



The real question is what MoMA will do when the Abstract Expressionists fall out of favor, which they inevitably will. Institutions have many virtues, but agility is not one of them, and already there are fractures in the world of contemporary art. Visual artists looking to get rich and famous don’t go to MoMA anymore; they hit the road, showing and selling at the dozens of biennial art festivals that have scattered themselves around the globe in the past decade. These festivals occasionally find their way into the news cycle if a work sells for enough money, but they have not produced anything close to a household name. You need muscle to get into the history books. Stubbs thinks it’s strange that avant-garde music isn’t widely popular, but that’s not strange at all. Avant-garde art is confrontational, difficult, obscure, and deliberately opposed to the currents of mainstream taste. The real anomaly is the popularity of abstract visual art. As grand as Rothko’s luminous color fields may be, they didn’t do it alone. They had institutions to back them up.



 



 



 









Winter 2009


It wasn’t until about the fourth time that I drove back from New England to the Philadelphia suburb where I grew up that I began to notice the thickening light. It’s a subtle phenomenon. The visual recalibration doesn’t really kick in until I’m past New York City, driving down the New Jersey Turnpike, and watching pine trees rush by on either side of the car. In Boston, the sunlight is pure and thin, and in the late afternoons it comes slanting in at a low wintry angle and turns white steeples the color of cantaloupe flesh. Mid-Atlantic sunlight is more substantial stuff––yellower, too––and it’s rich with dust or pollen or rain vapor. I like each kind at different times. If I’ve been home for more than a few weeks I get anxious to leave, and the splashes of northern light that set the maples ablaze every fall are refreshing to the point of disorientation or even joy. But the light holds more in Wallingford––more heat, more water, but also some twenty years of layered familiarities, all of which fan out and crowd in every time I drive back. Home is uncomfortable, but it’s my everyday jacket.







The first time I heard the band Neutral Milk Hotel was in a high school philosophy class. Each student brought in a recording of what they thought was a good pop song, specifically good in that it measured up to David Hume’s criteria for aesthetic excellence (for public school, this was a pretty creative class). I brought Bruce Springsteen. My friend Peter brought Neutral Milk Hotel. He played “Holland, 1945,” the sixth song from their second album, *In the Aeroplane Over the Sea*. It begins with a second or two of upbeat strumming, and then there’s a goofy little count-off––“two, one-two-three-four”––at which point the song really begins with what sounds like a fuzzy explosion. The low-fidelity sound is completely intentional. Released in 1998, *In the Aeroplane Over the Sea* was recorded on a four-track tape machine, and there are moments on many of the album’s tracks when a high vocal line or screaming brass chord will overload the apparatus and set the whole sonic space buzzing, which is exactly what happened to my seventeen-year-old brain as I sat in philosophy class listening to “Holland, 1945.” I could not have articulated the transformation, but I was aware that something inside had come wonderfully unhinged.



I bought the CD as soon as I could, and for a year or two *In the Aeroplane Over the Sea* was the greatest music I had ever heard. My mother, who sometimes tried to be interested in the music I liked, hated Neutral Milk Hotel, which was an added bonus. She hated everything about it that I liked: Jeff Mangum’s keening vocal overconfidence, the way the instruments whip every song into a frenzied volume contest, the insane melodicism. My father heard Neutral Milk Hotel one time that I can remember, and then only the first thirty seconds or so before he reached for the volume dial on our car stereo. I don’t think we said a word about it, and future car trips were soundtracked by an innocuous mix of Bob Dylan, The Who, and Steely Dan. The album did what it was supposed to do to my parents: it pushed them away. Liking music that makes the veins bulge in Dad’s forehead is one of the things that makes being a teenager worthwhile.



Pop music is generationally specific, much more so than other kinds of culture. Moviegoing is a lifelong habit, and children actually spend a lot of time trying to sneak into the violent, sexy films that their parents go see on the weekends. (The MPAA ratings system, authoritative and opaque, adds to the mystique; I remember shutting my eyes in the theater when Kate Winslet disrobed in *Titanic*, thinking that PG-13 made it literally illegal for 12-year-old me to be seeing her breasts.) Literature is either neutral or shared ground. *The Catcher in the Rye* and *On the Road* have been teenage bibles for more than 50 years. I have the sense that I missed out on the real golden age of teenage rebellion, the ’50s and ’60s. Then, adolescents were members of the first generation to be fed on the explosion of pop culture that took place in the wake of WWII. As the Hollywood Production Code withered until its death in 1968, movies, which had always managed violence, began for the first time to get bloody. The Sexual Revolution had literature to match. Philip Roth and *Bonnie and Clyde* stirred up levels of parental outrage and indignation that I do not know and cannot really imagine. My bloody movies look like my parents’ bloody movies, only a little more so, and as a teenager my books were mostly theirs to begin with.



Pop music is the exception. The last three cultural figures who really made parents angry––the rapper Eminem, the rock musician Marilyn Manson, and Britney Spears––are all pop musicians. This has something to do with the nature of music, which, as sound, can never be completely ignored (there is no aural equivalent to shutting your eyes). The phenomenon isn’t only physics, though. People tend to figure out their musical preferences by the time they can’t go on calling themselves “young”––somewhere around age thirty––at which point the individual canon gets closed down. New music agitating for admission runs up against ears that are not only deaf but hostile: “Turn that crap down”––or, as in my father’s case, the parent turns it down himself.



 He would never have unplugged the DVD player, and my mother never snatched a book out of my hands. For kids growing up in the last years of the twentieth century, pop music was the only available way to piss off your parents with culture.



The other part of what gets parents so mad about music is the fact that pop songs inspire teenage devotion in ways that books and movies can only dream about, and although taste differs wildly from teenager to teenager, everybody falls in love the same way. A 17-year-old who gets into Japanese art rock and a screaming tween at the Jonas Brothers concert are going to be completely different kinds of cultural adults, but they have more in common than they realize, and neither one could begin to articulate or justify their taste in a meaningful way. Teenagers who can explain what makes the music they like good are improvising liars. Their words are vague units of praise that filter in from friends and media, and they should not be taken as anything more than affirmations of love.



But despite the fact that adolescents and language have a hard time getting along, teenagers are very good pop music listeners (they are also frequently great pop music makers). People become better readers, watchers, and lookers with age, but the ability to hear pop songs honestly decays over time and, almost always, eventually disappears completely. What makes adolescence and pop so well suited to each other is that inarticulate love, because pop songs are also bad with words. Lyrics matter to an extent––and, because they can be quoted, they’re much easier to describe and talk about than beats and tones––but even the greatest songs turn into crude sentiment when reduced to words on a page. Some musicians don’t realize this and become awful poets. (Joni Mitchell, from her poem “Bad Dreams Are Good”: “We have poisoned everything / And oblivious to it all / The cell-phone zombies babble / Through the shopping malls.”) What matters are the sounds. The explanatory power of a pop song operates largely outside language, and teenagers, who have no idea how to articulate the biological and emotional lurchings going on inside them, were made to have their lives explained by music. In the little office where my high school newspaper was edited and laid out, I worked with three guys who became some of my closest friends, and when they began to listen to The Magnetic Fields, I knew that I had to do the same. When I began to actually like The Magnetic Fields, I knew that something good was happening. Pop music told us that we fit together, which is something we could never have told ourselves. When adolescents fall in love, they make mixtapes. Pop music is teenage talking.



Jeff Mangum has said that *In the Aeroplane Over the Sea* was loosely written around dreams he had about Anne Frank and her family during WWII, but lyrically this almost never gets more specific than on the first song I heard, “Holland, 1945,” where Mangum sings,







The only girl I ever loved



Was born with roses in her eyes



But then they buried her alive



One evening, 1945



With just her sister at her side.







That the album would be about Anne Frank makes sense in that the catastrophe of her adolescence was reflected in the catastrophe of history that trapped her in that little attic. On the album, this collection of personal and historical earthquakes leaps out of the speakers like a circus. Mangum loved the carnivalesque aesthetic of early-twentieth-century penny dreadfuls, the kind of vulgar gothic atmosphere suggested by woodcuts and hand-set type, semi-handmade products of an industrial age. The album’s lyrics were printed on a single sheet of paper––broadside style––rather than in the traditional tidy booklet. The record’s influence has turned out to be huge. In the late eighties and early nineties, the sound of indie rock was a kind of self-conscious laziness, an awkward, witty ugliness that viewed the tidy traditions of pop music as lame jokes. Neutral Milk Hotel, at the very end of the century, took that sound, that racket, and wrote pop songs with it, songs with melodies and sing-along choruses. What’s followed in the last 10 years has been a much prettier kind of indie rock. Many of the bands following in Neutral Milk Hotel’s wake, however, have nostalgically treated history as a simpler time when communities were close-knit and knitting was an everyday practical task. Neutral Milk Hotel looked back and saw a chain of disasters, one following another in horrible inevitability. They channeled that kind of history––itself trapped in a perpetual, thrashing adolescence––through sex, confusion, and longing.



None of this occurred to me at 17 because none of it needed to. Nobody ever asked me to justify my musical preferences, and as long as *In the Aeroplane Over the Sea* kept explaining me to myself on a day-to-day basis I was happy. The everydayness of pop music is built into the medium; since most albums are a little less than an hour long, it’s possible to listen to a record hundreds of times in a single year, at which point the music enters into a kind of symbiotic relationship with the people and situations it renders intelligible for the listener. People pay an enormous amount of attention to the specifics of their listening: cracks in a CD jewel case, abrasions on the sleeve of a vinyl LP, the way a car stereo’s cheap speakers warp the sound of a particular song. This turns into fetishism at a certain point, but what lies at the root of all this obsessing is the fact that music refuses to be heard in a vacuum; it lives in its contexts. When I listen to *In the Aeroplane Over the Sea*, part of what I’m hearing is this:



The house where I grew up is a colonial pile of fieldstone with a big lawn, and it was my job to mow it during the summer. For the parts tucked in close to the house, I had a gray push mower with a black nylon bag hooked to the back which trapped the cut grass. Depending on how long it had been since I last mowed, I could walk back and forth for about 15 minutes before the bag filled up. Then I would roll the mower to a patch of woods that sloped down from the edge of our driveway and tug out clumps of grass with my free hand. This usually made me sneeze, and sometimes I would itch all over. For the more open spaces of the yard, though, it was possible to use our riding lawnmower. It had a yellow seat and a green chassis, and although it was also equipped with bags and a black chute which could be set up to collect the grass, I was usually allowed to leave them in the garage, which meant I could mow whole sections of lawn in big, uninterrupted swoops. I did this once or twice a week.



By the time Mr. Adams’ philosophy class and the school year had ended, I had memorized every word of *In the Aeroplane Over the Sea*. The difficult part was finding a time and a place to sing along. Singing in the house, where any family member could have heard me through my bedroom door, was out of the question. I could sing while driving a car, but everyday drives never lasted for more than 10 or 15 minutes, and what I wanted was to yell my way through the album from start to finish. At the beginning of the summer, I bought an iPod.



Once I wheeled the John Deere out of the garage onto the driveway, once I made sure to fill it with gasoline from the plastic tank with the yellow nozzle, once I was perched up on the seat, I threw a pair of enormous headphones over my ears, the kind with big foam half-spheres that do a fair job of blocking out noise. With the album beginning to play, I stuffed the iPod into a cargo pocket and drove around to the back of the garage, where I began to mow the lawn.



The riding mower’s engine was not quiet, and with the blades turned on, the machine made an incredible noise, which meant that I had to jack the volume all the way up in order to hear the music. More importantly, the mower’s racket made my own singing inaudible to anyone other than myself. It was actually more like halfway melodic screaming. Mangum’s voice peaks two or three whole steps above the highest part of my vocal range, but the lawnmower gave me the luxury of not having to quiet down when my high baritone gave out. I sang in choirs in high school and never quite learned not to jut my head forward and tighten up my neck trying to sing high notes. Yelling incomprehensible code at nothing in particular, with those big headphones strapped from ear to ear, I probably looked unhinged up close. From any distance, I was just somebody mowing the lawn, the big machine’s insect hum as appropriate and unremarkable in the suburbs as traffic in a city or surf on a beach.



Even delivered in a voice like a fractured air raid siren, Mangum’s melodies are muscular, and so the songs on *In the Aeroplane Over the Sea* are desperate but sexy. It is not easy for teenagers to feel sexy, and Mangum is wonderfully gentle and strange about how two people learn to use their bodies. On “Two-Headed Boy” he sings, “Catching signals that sound in the dark / We will take off our clothes / And there’ll be lacing fingers through the notches in your spine.” He knows that sex is a threat, ominously repeating the words “semen stains the mountaintops” on the song “Oh, Comely,” but he also understands adolescent sex as an alternate world, a woods, an attic. My girlfriend and I would set alarms all senior spring, waking up at four in the morning in her basement or our guestroom to drive one another home, our fathers both inconveniently early risers. My house is old—it makes noises when you breathe—and so I had to work out a way of climbing the stairs slowly, resting my feet on the outer edge of each step and trusting as much weight as possible to the less creaky railing. I don’t have to sneak out of people’s houses anymore, and I don’t get grounded for staying out late. But fear and magic are partners, and one disappears with the other. I learned this riding the lawnmower, with headphones.



I haven’t listened to *In the Aeroplane Over the Sea* very much since I finished high school. I hear “Holland, 1945” at parties a lot, almost always near the end of the night when everybody is dancing and wasted. Usually I dance along and yell the words, but one time I threaded my way out of the crowd and found my coat, and it wasn’t troubling at all. I had something else to do. What makes you dance is a song’s visceral relevance, and when I hear Neutral Milk Hotel now I almost always think about mowing the lawn. My listening doesn’t free-associate anymore; it has lost its creativity. Nostalgia always does this: think of Humphrey Bogart in *Casablanca*, angry and boring, drinking his way through “As Time Goes By” over and over. Listening to *In the Aeroplane Over the Sea* now feels almost tacky, which is not to say that I don’t do it. Listening to it in Wallingford would be nearly vulgar.



I was at a recent holiday party when I heard that Neutral Milk Hotel’s guitarist, Julian Koster, would be showing up to sing Christmas carols. He walked in wearing a festive sweater––ironically festive?––and sat down near the piano, and then presented us with his saw, which he had named Badger. Using a bow, he played Christmas carols on the saw at an easy tempo. Between songs he would bend Badger towards the audience––his way of “bowing” to our applause. In an incredibly soft, childlike voice, Koster told us that Badger was eight years old, and that, like all saws––and surely we knew this, being educated college students––Badger would stay the same age for his entire life. He also told made-up stories about each of the carols, how they had been smuggled across the borders between European countries, cherished by clockmakers and peasants, and how the songs had eventually made their way into his possession. Right before I left, at the end of Badger’s third song, I thought to myself, “This may be related, in a distant way, to the music you made back in 1998, but it has nothing to do with the music I heard.” I left with two friends, and I walked back to my room through the cold to study. When I hear the album now, and especially when I hear the song that inaugurated my relationship with it, I hear it through the wavy haze that comes off the summer suburban asphalt. I see myself drenched in mid-Atlantic light. And I wake up from the adolescent fever dream––if only halfway––and I listen to something else.



 


