In Warfare State: World War II Americans and the Age of Big Government, James T. Sparrow brings to life a lost world of liberalism (and its lingering discontent)
The following review was posted recently on the Books page of the History News Network site (and is currently its home page for the 8/29 edition).
In the lifetime of most contemporary Americans -- in the lifetimes of most Americans, period -- the prevailing opinion has been that when it comes to federal government intervention in the lives of ordinary citizens, less is more. Those of us with even a passing familiarity with U.S. history are aware that this has not always been so, and think of the middle third of the twentieth century in particular as a time when Big Government did not simply prevail, but was the prevailing common sense -- and that this common sense took root during Franklin Delano Roosevelt's New Deal of the 1930s.
In this important new book, however, University of Chicago professor James T. Sparrow corrects that perception in a significant way. It was not FDR's New Deal that really transformed Americans' relationship with their government, he says. It was FDR's Second World War. In the words of the title, what we think of as the welfare state was really a warfare state. Sparrow is not the first person to make such a case; scholars like Michael S. Sherry (In the Shadow of War, 1995) and Robert Westbrook (Why We Fought, 2004) have explored similar terrain. But Sparrow traverses it with a touch that is at once deft, informed, and imaginative. Rarely is so comprehensive an argument delivered in so concise a manner (about 260 pages).
The facts of the case are relatively straightforward. When it comes to things like the scale of government spending, the breadth of federal taxation, and the role of bureaucracies in shaping realms that ranged from advertising to surveillance, World War II dwarfs any previous moment in American history. One of the great ironies of this development, as Sparrow makes clear, is that it occurred at the very moment the New Deal was headed for political eclipse. Even more ironic, as he also makes clear, is that this assertion of state power was made by people who simultaneously affirmed liberal values of individual aspiration and political rights and who plausibly contrasted themselves with totalitarian powers whose hold on their citizenry was absolute.
But Sparrow's case is more subtle still. In a mode of analysis that harkens back to insights of Italian Marxist Antonio Gramsci, he's interested in the complex interaction between a national state that seeks to mold opinion and a public which resists and adapts, as well as accepts, the logic of a ruling elite. The U.S. government made demands on people -- by drafting them, regulating what they could be paid, and rationing what they ate. But to a remarkable degree, it enforced assent to such policies by relying on a combination of volunteerism, peer pressure, and propaganda.
At the same time, however, the government and its people were involved in a complex negotiation over the price of such assent. That price could be understandable, even laudable, when it took the form of expectations that war veterans would be well cared for, literally and figuratively, when they came home. It could also be much less laudable, as when the government condoned racism against African Americans in the South or Asians in the West by avoiding fights over such issues in the name of Getting the Job Done.
Sparrow illustrates his argument with approaches that combine political, economic, and, especially, cultural history. He does a nice job with a 1943 Kate Smith broadcast, explaining why the singer was uniquely positioned, by virtue of her experience and persona, to persuade millions of Americans to defer gratification by buying war bonds. There's a particularly good chapter on how racial and ethnic humor gave those who indulged in it a way to voice criticisms of the government that might otherwise be considered unfair or even unpatriotic ("You kiss the niggers / and I'll kiss the Jews / and we'll stay in the White House / as long as we choose," went one piece of anti-FDR doggerel). He also does a lot with the House of Labor, tracing the way workers aligned themselves as extensions of soldiers at the front, even as they parried criticism at home -- and from many of those soldiers abroad -- that they were overpaid, greedy, or both.
Sparrow concludes the book by asserting that while the end of the war also meant the end of some of the most expansive dimensions of government intervention in the economy and U.S. society, its legacy would prove profound in shaping the collective persona of mid-twentieth century Americans, particularly a strong sense of institutional commitment that would be the touchstone of Baby Boomer as well as Neoconservative rebellions later in the century.
There are aspects of Warfare State with which one could quibble. Actually, Sparrow's argument is so nuanced that there are times he seems to flirt with capsizing it -- one could use much of the same evidence to show the limits of adherence to the federal government rather than emphasizing the degree to which it took root. (This, in effect, is the argument Barry Karl made in his 1983 book The Uneasy State.) It might also have been helpful had he done just a bit more with a comparative dimension -- how, for example, affirmations of war workers in the United States were similar to or different from virtually simultaneous Stakhanovite celebrations of labor in the Soviet Union. But one finishes Warfare State with an appreciation of how beautifully wrought a piece of scholarship it is -- meticulously researched, gracefully written, and politically resonant. Notwithstanding the drawbacks of the era Sparrow chronicles with scrupulous attention, it is nevertheless hard not to be moved, if not nostalgic, about a moment of national purpose and hope whose absence has given way to a worrisome, and worsening, ache.
Thursday, August 25, 2011
Elephants' memory
In Carthage Must Be Destroyed: The Rise and Fall of an Ancient Civilization, Richard Miles surveys the rise and fall of a superpower.
England and France. Greece and Persia. Hapsburgs and Ottomans. Imperial rivalry is as old as history itself, but some rivalries can truly be said to have changed the world. The great contest between the Mediterranean city-states of Rome and Carthage falls into that category. At the end of three Punic Wars stretching over a century (264-146 BC), Carthage was literally wiped off the face of the earth. But in this fascinating new history, University of Sydney historian Richard Miles reconstructs a civilization whose memory continues to stir imaginations -- particularly among those who suspect that their own is not immortal.
History, as we all know, is written by the victors (or the ancient Greeks). As Miles explains, most of what we know about Carthage is second-hand, and most of that is anti-Carthaginian. But he is deft in deconstructing such sources. As he also makes clear, he doesn't always have to: the truth is that the Romans needed the Carthaginians, at no time more than after they had been vanquished. There could be no myth of Roman power without a legendary adversary against which to justify it. If you're careful, patient, and epistemologically humble, the truth has a way of surfacing, like pottery fragments from an archeological site.
For the lay reader, one of the more surprising aspects of Carthaginian civilization is its syncretic character, deeply rooted in the Levant. The North African city was founded by Phoenician traders who had deeply imbibed Greek as well as Persian culture. Heirs to a maritime people whose trade stretched from modern-day Lebanon to Spain, the Carthaginians occupied a peninsular position right smack in the middle, just about ideal for dominating the Mediterranean oval. For centuries, the island of Sicily was a key staging base for such operations.
Perhaps inevitably, such a position engendered conflict with the Greeks. The Carthaginians were fortunate that the Macedonian Alexander the Great looked east rather than west when he began his colossal string of conquests. But they didn't need him to Hellenize them; that process had begun long before. Miles pays close attention to the mythology surrounding the Greek god Heracles, who was fully integrated into a religious order alongside Levantine deities like Melqart and Baal.
For a long time, Carthage and an ascendant Rome -- which also enjoyed a fortunate slot in that Mediterranean oval -- cooperated in trade as well as in navigating the geopolitics of Magna Graecia (particularly the Corinthian colony of Syracuse). But by the third century BC their antagonism led to the First Punic War (264-241). This conflict broke Carthaginian naval dominance of the central and western Mediterranean and destabilized Carthage from within, but was not completely ruinous. In its aftermath, the so-called Barcid faction launched a highly successful Iberian adventure that effectively became a new power base -- and a new threat to Roman hegemony.
It was the Second Punic War (218-201 BC) in which the legend of the Roman-Carthaginian rivalry really took root. And it was this war that gave the world one of the most remarkable leaders it has ever seen: the Carthaginian general Hannibal, who achieved the stupefying feat of leading a huge army, complete with elephants, over the Alps and embarking on a 15-year occupation of greater Italy. As one might expect, Miles explains how Hannibal achieved military mastery in the most catastrophic defeat in the history of the Roman republic, the Battle of Cannae (216 BC). But he also does an exceptionally good job of illuminating Hannibal's political gifts. A Hellenically educated Greek speaker, Hannibal brilliantly exploited the religious mythology of the ancient world in ways that challenged the basis of Roman power ideologically no less than militarily. Miles pays careful attention to Hannibal's rituals, pronouncements, and evidence like coinage to document his strategy, vividly bringing him, his countrymen, and the notably multicultural society that spawned both into focus.
Ultimately, however, Hannibal was unable to break the Latin hold on Italy, or to crash the gates of Rome. Partly this was a matter of predictable logistical strains. Partly, too, it was a matter of internal Carthaginian politics, in which Hannibal's provincial power base in Iberia proved to be a handicap. But his ultimate defeat was also a matter of the worthy adversary who learned from, and adapted, Hannibal's own tactics. In carrying the war back to Carthage, this general pried Hannibal out of Italy and earned the title that made him famous: Scipio Africanus.
Unlike the first two, the Third Punic War (149-146 BC) was a tawdry afterthought that generated significant internal dissent within Rome. Unwilling to tolerate the truly remarkable resilience of its former rival, expansionist senators consistently sided with Numidian aggression on the African coast and ultimately demanded capitulation so draconian that the Carthaginians effectively felt they had no alternative but to fight to the death. Miles argues that it was not coincidental that the similarly storied city of Corinth was also destroyed by the Romans in 146 BC; a voracious hegemon would no longer contemplate the existence of even a symbolic rival.
Ironically, this victory would haunt the Romans long afterward. They had pledged to destroy Carthage (in the famous words from which this book takes its title) and swore it would never be resurrected. But Julius Caesar considered founding a new Roman city there before his assassination, and his adopted successor, Augustus, followed through on the plan (cleverly displacing his ambitions beyond the politically fraught terrain of Italy). By that point, the history of Rome was being written by Romans in Latin, not Greek. And by the end of the second century, a bona fide African, Septimius Severus, would found a dynasty within what had become an empire. The Mediterranean world was Roman. Everyone else just lived in it.
Would Western civilization have turned out differently had Carthage prevailed rather than Rome? Yes, but not that differently. In part, that's because, as Miles shows us, Carthage was far from the Other that historians like Polybius and Livy would have us believe. It's also because, as an empire that also began as the colonial pod of a seafaring people, the United States is less exceptional than we might imagine. Hail, Hannibal.
Saturday, August 20, 2011
Bush tales
In Bush's Wars, Terry H. Anderson tells a very familiar story (but keeps it brief)
The following review was posted recently on the Books page of the History News Network site.
It is often said that journalism is the first draft of history. Bush's Wars is presented as the first major comprehensive study of the U.S. wars in Iraq and Afghanistan, an effort to weigh the legacy of President George W. Bush. This is how the blurbs and publicity for the book position it, and the way Terry H. Anderson puts it in his introduction: "to 'figure out,' in Bush's words, the history of the defining policies of his presidency -- and to do it during his lifetime."
But Bush's Wars is more a report of the journalism on those wars than a scholarly assessment in its own right. Strictly speaking, a piece of academic scholarship would draw on primary source research and advance an argument that had never been systematically articulated before. Bush's Wars distills an already voluminous literature into a 240-page narrative (whose footnotes are batched a little too aggressively to track sources all that easily). Its point of view, that the Afghan war was bungled, and that the Iraq war was both launched under false pretenses and bungled, has long been the conventional wisdom in U.S. society at large. So the book doesn't really have a lot to offer in the terms on which it presents itself.
Perhaps I should be praising it with faint damnation. Bush's Wars is actually a useful little volume that may well have a long shelf life for two reasons. The first is that there is indeed nothing like it: a piece of one-stop shopping that surveys its subject in a way that manages to be both wide-ranging and succinct. The second is that while there's little here that your garden-variety news junkie wouldn't already know, there are undoubtedly a large number of people who lived through the era without knowing much about it, and a growing number of people who were too young to really remember it. It is with those people -- i.e., college students -- that the book should find a home as what it really is: a course adoption text.
To wit, Anderson, who teaches at Texas A&M, starts the book off with two introductions: the first a 16-page overview history of the Islamic world; the other a longer one that covers the regime of Saddam Hussein, the rise of the Taliban and al Qaeda, and U.S. policy in the region. From there, he offers chapters on 9/11 and the Afghan War, the efforts of the Bush administration to justify the overthrow of Hussein, the invasion itself, and the rise of an insurgency. Only after that does he return to Afghanistan, which gets much less attention than Iraq does. This is nevertheless a well-paced narrative that touches on all the major bases.
What it doesn't do, and what we still need, are studies that are less about what Bush did than ones which examine why his administration was able to get away with it. Did changes in the structure of American journalism allow the administration's mendacity to succeed in ways that it might not have otherwise? (Consider, for example, the record of the BBC relative to that of U.S. networks and newspapers.) Was the American electorate more credulous than it had been since the Vietnam era? What larger geopolitical shifts occurred while the United States exercised its unipolar hegemony? What does the way the war was ginned up and fought suggest about the state of the U.S. armed forces? My guess is that we will get ambitious efforts to answer such questions. But they will probably take more time than Bush's Wars took to write. "The Iraq story post-2003, this is still chapter one," former U.S. ambassador to Iraq Ryan Crocker said (metaphorically) in 2009. "This is a very long book."
In the meantime, we have Bush's Wars. It should prove handy.
Monday, August 15, 2011
Jim is on his annual summer family vacation, which, as usual, involves some corner of New England (this year it's slated to be Massachusetts and Vermont). One of the pleasures it affords, in addition to the structured opportunity for leisure activity with friends and loved ones, is the chance to catch up on reading that's not work-related, in particular books that have slipped through the cracks amid a teaching/writing/reviewing regimen. This year, that means the opportunity to finally get to at least one of the three books in Stieg Larsson's celebrated "Millennium trilogy," featuring the mysterious Lisbeth Salander and her sidekick, Mikael Blomkvist, two investigators of civic corruption. Larsson, who never lived to see the global phenomenon his work has become (after being rejected by multiple publishers, the books have sold nearly 30 million copies), is clearly a master of the thriller genre.
Best to all for a relaxing respite as the intimations of summer's end emerge over the calendrical horizon. Future posts on this site will include a new book about the ancient Carthaginian empire, the Great Railroad Strike of 1877, and a continuation of the "Sensing the Past" series on the work of Jodie Foster as historian.
Thursday, August 11, 2011
Machine dreams -- and nightmares
In Alone Together: Why We Expect More from Technology and Less from Each Other, Sherry Turkle explores the downside of our networked lives
Over the course of the last quarter century, Sherry Turkle of MIT has become the sociologist-cum-philosopher of human-computer relations. This inquiry began in 1984 with The Second Self: Computers and the Human Spirit, which was published just as personal computers were entering the collective bloodstream. Life on the Screen: Identity in the Age of the Internet arrived in 1995, and was again ahead of the curve, talking in depth about the "Multiple User Domains" (MUDs) that we've come to know as chat rooms. Alone Together is presented as the final installment of a trilogy on what Turkle calls "the inner life of devices." It works well as a point of entry to Turkle's body of work in tracing the questions -- she's less good on answers -- raised by the advent of our digital lives. It also suggests that in some ways, she's played out the string.
Alone Together is really -- and may well have been best published as -- two books. The first is in effect an inquiry into the coming age when robots will be a practical, and, perhaps, pervasive, part of our everyday lives. As she's done all along, Turkle pays particular attention to children's toys, not only because devices like Tamagotchis and Furbies were harbingers of more sophisticated devices, but also because she's keenly aware that the technological socialization of the young will have important implications for society as a whole. But she's (now) especially attentive to the other end of the demographic spectrum: the use of robots as devices, particularly psychological devices, for the care and company of the old. At least superficially, the logic seems irresistible: machines can perform tasks more efficiently and cheaply than people, and in many cases (like that of Alzheimer's patients, for example), artificial care, and caring, makes sense.
Turkle, however, is deeply skeptical of this approach. She notes a kind of slippery slope logic: technological options that seem like they're better than nothing become positive goods, and then inevitable. She wonders whether such devices will let younger generations off the hook emotionally and corrode our collective sense of humanity. And she worries that even raising such questions will increasingly fall into the realm of understandable but unrealistic, before they become simply irrelevant.
In the second half of the book, Turkle shifts her gaze away from humans' interactions with machines and toward their mediated relations with each other. So much of her work has involved peering around corners from her perch at an elite institution at the cutting edge; here she seems immersed in the world of Blackberries, texting, Facebook, and the related phenomena that seem thoroughly embedded in contemporary life. Here her concerns parallel those about robots: that tools like texting that once seemed like useful substitutes for direct communication have now replaced it. That social networking is a mere shadow of the real thing. That innovations designed to make our lives easier have instead become the source of slavish addictions. Turkle frets that young people don't like to make phone calls anymore. She frets that Facebook, which presumably connects people, actually fosters loneliness. She frets that people take refuge in games and avatars and chat rooms rather than deal with their problems. She frets . . . .
"There is a danger that we will come to see these reductions in our expectations as the new norm," she writes of our tendency to displace our interactions with people through social media. "There is the possibility that chatting with anonymous humans can make online robots and bots and agents look like good company. An there is the possibility that the company on online bots makes anonymous humans look good." To which one finally feels compelled to say: "Duh." It's sort of like reading a book about about the impact of the automobile that has chapters on the dangers of car crashes, high insurance costs, the impact on the environment, and teenage entitlement. All real enough, and worth talking about. But the picture here seems a bit lopsided, and after a certain point one becomes impatient for concrete suggestions, which are largely lacking. Turkle thinks workers in geriatric care should be better paid. But that's not exactly dazzling public policy advice. Actually, I'm less worried about robots providing unsatisfactory medical care to the elderly or disabled than a where regime decides such people are just too much trouble, period. Such scenarios are hardly unimaginable, because they're rooted in history, not futurology.
And history is change over time. That almost always means trade-offs. At times it seems Turkle, trained as a psychoanalyst, believes that technological change should not have a potential negative impact on people. But of course it must; the power for good, which is stinted here, almost always means the power to do harm. For the moment at least, we have the freedom to act on our own behalf as it concerns our networked life. Discretion has always been and remains a deeply human attribute, albeit one difficult to achieve. Let the surfer beware.
Monday, August 8, 2011
Symbolic revolt
In Clothed in the Robes of Sovereignty: The Continental Congress and the People Out of Doors, Benjamin H. Irvin describes an American Revolution that didn't quite work out
Clothed in the Robes of Sovereignty tells the story of an unsuccessful experiment: the attempt by the infant government of the United States to create a semiotics of the American Revolution. We all know that the Founding Fathers were masters of the English language (one part of their patrimony they could never forsake). The attendant attempt to create a national system of symbols and rituals to go along with manifestos like the Declaration of Independence preoccupied figures no less than John Adams and Benjamin Franklin. From the mid-1770s to the mid 1780s, a period spanning the formation of the First Continental Congress to the Treaty of Paris, government leaders declared holidays, struck medals, built monuments, created currency, and took other steps to culturally institutionalize their government. But while some of these steps toward creating what Benedict Anderson has famously called "imagined communities" had an effect temporarily, very few of them ever took root.
As Benjamin Irvin, assistant professor of history at the University of Arizona, explains, there are a number of reasons why. Perhaps the most important in his view is that the people -- make that the People -- had an unofficial vote in the adoption of collective symbols, and didn't passively accept what their leaders handed them. So a parade might turn into a riot, for example. Ordinary people could also send messages of their own. Irvin's first chapter describes an episode in which Congressional leaders were forced to cancel a ball to be held in Philadelphia to mark the arrival of Martha Washington: such frivolity appeared to contradict Congress's own pronouncements about frugality, and it provoked threats that the tavern where the event was to be held would be attacked. Irvin uses the phrase "the people out of doors," which has become something of a buzz phrase among scholars of the period, to describe such dynamics.
Perhaps the most vivid of Irvin's case studies involves Franklin's creation of U.S. currency. A former engraver, he was the obvious choice for the task, and one to which he brought a distinctively Poor Richard sensibility. Franklin's bills included hands trying to uproot thorny bushes (of oppression), harps with thirteen strings, busy beavers, and thrift-minded phrases like "Mind Your Business," all reprinted in the book. Alas, the American people didn't buy them, literally or figuratively. Continental currency depreciated rapidly, and ultimately had to be replaced (under contentious terms) because its image had decayed so severely.
A second reason why Revolutionary symbolism came up short is that opponents of the new regime had resources of their own. Irvin describes the efforts of clergymen, poets, and other Tory figures who satirized, lampooned, or otherwise attacked efforts to create language and symbols for the new nation-state. Such opponents often resorted to racial, class and gender imagery, casting Patriots as henpecked husbands, uncouth hillbillies, or people little more civilized than savage Indians. (An early emblem of the United States had a frontiersman with a tomahawk; that was soon dropped.)
Finally, the leadership elite of the Revolution had their own internal tensions, even contradictions. Though Irvin believes it was sometimes exaggerated, lingering Puritan strains in New England culture lent a spartan air to Revolutionary imagery, and led to injunctions against cultural practices like theater and gambling, which were more common and acceptable in the South. And republican notions of simplicity often clashed with the imperatives of foreign policy, where the Americans grappled with unfamiliar French opulence and a desire to be taken seriously that required demonstrations of grandeur on their part.
By the closing years of the Revolution, Congress had largely exhausted its resources, financial and otherwise. Its attempts to buy goodwill from the Continental Army with swords and commendations proved no substitute for adequate pay, and the strains between the military and politicians were dangerously near the breaking point. When victory came, it was the army and the French, not Congress, that led the way in celebration.
In years to come, the defining symbols of American life, such as the National Anthem, imagery of Uncle Sam, and other pieces of iconography would emerge more indigenously. Ironically, the one durable cultural practice of the Revolution -- commemorating Independence Day -- had gone into eclipse before the war was over, and there was uncertainty about just when to celebrate it. (Adams believed July 2 would be the day that would be immortalized.) It would have been helpful for Irvin to run this story forward a bit more, and help us understand a little more clearly how a national semiotic order finally did take shape. He also might have done more with a surprisingly obvious omission here: the evolution of the American flag.
Clothed in the Robes of Sovereignty is nevertheless a carefully researched and elegantly executed book. The individual chapters are usefully segmented and can stand on their own, but the whole also adds up to more than the sum of its parts. This is a fine addition to the cultural history of the American Revolution.
Thursday, August 4, 2011
Non-fond farewell
In A Long Goodbye: The Soviet Withdrawal from Afghanistan, Artemy M. Kalinovsky explains that wars sometimes take a while to end, because occupiers can afford to take their time
There appear to be people who would like this book to be, in effect, Why the U.S. Will Fail in Afghanistan. Such people include the publicity department at Harvard University Press, whose press release for the book touts the "suspiciously familiar" set of reasons Artemy Kalinovsky cites for the Soviet debacle there. They also include investigative journalist Seymour Hersh, whose blurb for the book suggests he did not actually read it, since his remarks focus on this angle, to which the book devotes about three (very good) pages. There are of course very good reasons, in both marketing and intellectual terms, for viewing A Long Goodbye through that lens (it is, after all, why I picked it up). But such a perspective also distorts what this book is and why it is valuable.
A more relevant, if still somewhat nationally narcissistic, historical analogy is that of the U.S. and Vietnam. Until a few years ago, the comparison was downright proverbial: the Soviet decade-long (1979-89) adventure in Afghanistan was the USSR's Vietnam, the imperial incursion that brought a hegemon to its knees. Some would say it was actually worse, since it precipitated the end of the Soviet Union itself.
Kalinovsky does engage this analogy (a little). And he sees merit in it. Certainly, he would agree that both Afghanistan and Vietnam posed knotty military problems (though he is among those who believe that the Soviet 40th Army acquitted itself well). And that both generated dissent at home and disenchantment abroad. But the emphasis here is the reverse of what one typically sees in discussions of Vietnam: for the Soviets, maintaining credibility with their allies and the Third World was primary, while managing public opinion was not a serious issue until the war was almost over.
Kalinovsky notes that the Soviet regime of Leonid Brezhnev was ambivalent at best in its decision to intervene in an internecine quarrel between two leftist factions in Afghanistan in late 1979, neither of which commanded a popular majority in that fractious country. Ironically, most of the Soviet experience there involved trying to temper the excesses of the ruling regime and steer it in a more pragmatic direction that involved power sharing among themselves and outsiders. The Soviets realized almost immediately that they'd made a mistake, and by 1982 were already formulating strategies to leave. But -- and this is important to Kalinovsky -- it took a long time for the withdrawal to actually take place because the Soviets could afford, literally and figuratively, to bide their time. The war may have been, in the memorable words of Mikhail Gorbachev, "a bleeding wound," but it was not a fatal one.
Actually, Gorbachev is the pivotal figure in A Long Goodbye. He inherited the war when he came to power in 1985 and was determined to end it. But "determined" in this context is a decidedly relative term: the pace of withdrawal was contingent on other circumstances, primary among them the U.S.-Soviet relationship, which took a frosty turn in the years after the invasion but began to improve once Gorbachev took the helm in the second half of the 1980s. Soviet diplomatic strategy had been to tie a withdrawal from Afghanistan to an agreement that Pakistan and the United States would revoke their (overlapping) support for the mujahadeen forces attacking the Afghan regime. But in the negotiations that led to the breakthrough Geneva Accords of 1988, Gorbachev decided in effect to make Afghanistan a loss-leader, to announce his seriousness in resolving general Cold War tensions in the form of a unilateral Soviet withdrawal.
It would be too strong to suggest that this strategy backfired, because it's far from clear that the alternative was any better. It's safe to say, however, that it didn't turn out the way Gorbachev hoped. The U.S. interpreted the Soviet move as a sign of weakness, and refused to budge on the mujahadeen. Gorbachev, in turn, felt compelled to slow his pace and make counter-moves in withdrawing, both to shore up the sinking morale of allies in Europe and (especially) the Third World, and to parry the thrusts of a building conservative reaction on his right in Moscow. He also had to contend with long-term tensions between the military and the KGB in Afghanistan, whose conflicting agendas became more acute and paralyzing in the closing years of the war. But while Afghanistan was paradigmatic of Gorbachev's problems generally, it was never really more than a pawn in his losing game before he was pushed aside by Boris Yeltsin, who felt no obligation to prop up Afghanistan once the USSR dissolved in 1991.
Kalinovsky believes Gorbachev pretty much did the best he could under the circumstances, though he thinks the last Soviet leader got something of a free pass as domestic criticism of the war as a senseless waste of life grew in an increasingly free media environment. He also believes that the Soviet invasion was, as imperial adventures go, relatively moderate and understandable: Afghanistan, after all, shared a border with the Soviet Union, in a Muslim corner of the world that was increasingly volatile in the last quarter of the twentieth century. (He does not, however, think that Muslim minorities within the Soviet Union itself were ever really a serious problem.) Moreover, the Soviet occupation was hardly the ideologically driven crusade Cold War conflicts sometimes appeared to be. Real nation-building went on, even if the Soviets did more harm than good.
Still, at the end of the day, the moral of his story holds as true for the twentieth century as it does for the thirteenth, nineteenth, or twenty-first: you just can't win in Afghanistan. In his conclusion, Kalinovsky notes that like Gorbachev, Barack Obama is a reformer who inherited a war he wants to end, and who, like Gorbachev, is dallying in doing so. If ever there was a case where history is destiny, this would appear to be it.
Monday, August 1, 2011
Wild, Wild Universe
Foster in a zesty Western, and a soggy sci-fi flick
The following post is part of a series on Jodie Foster specifically, and Hollywood actors generally, as historians.
Foster followed Sommersby with another historical movie, this one the comic western Maverick (1994), based on the old TV series from the late fifties and early sixties that starred James Garner as the title character. This time Garner, who acted with Foster in One Little Indian, plays the father of that character, whose role is filled by Mel Gibson. Maverick was written by a past master, William Goldman, and it’s exceptionally deft in its light, quick pace and a series of plot twists that resemble a card trick, which is appropriate, since the plot concerns a set of poker-playing grifters. Foster plays Annabelle Bransford, a sexy con artist who keeps pace with a fast pack of players that also includes Alfred Molina as Gibson’s antagonist. Indeed, she wins the trump card in the final scene of the movie, a good-natured feminist ending for a movie that is by far the lightest of Foster’s career, and one that she has remembered with great affection ever since. It was during this movie that she befriended Mel Gibson, a man she cast, and stood beside, seventeen years later amid the publicity surrounding The Beaver.
Maverick was a popcorn flick; Foster’s other movie of 1994 was considerably more ambitious, and ranks among her best performances: the title role in Nell, for which she received her third Academy Award nomination. (In a bravura performance, Foster manages the arresting task of emptying her face of expression for much of the movie.) Though this was a pedigree project with major talent – including Liam Neeson, his wife Natasha Richardson, and the esteemed British director Michael Apted – Nell is a fairly predictable undertaking most notable for the way it once again manifests Foster’s emphasis on the malignant forces that subvert the designs of even the best intentioned people. Nell is the story of a young woman who has lived her whole life in the remote woods of North Carolina, insulated from contact with modern civilization. She was conceived as a result of rape, and had a twin sister who died in childhood. When her mother dies, and her death is reported to an unscrupulous grocery delivery boy (Jeremy Upham), Nell’s existence is discovered by local authorities. A local doctor, played by Neeson, tries to assist her, and to do this, he enlists Richardson, a specialist in autism. They realize, however, that Nell is not mentally handicapped; instead her strange language is a combination of emulating her stroke-stricken mother and the special language she shared with her twin. Yet their attempt to serve as surrogate parents for Nell is complicated by medical professionals who seek to institutionalize her, as well as people (like the grocery boy) who wish to exploit her vulnerability. In other words, institutions not only do more harm than good, they also fail to protect the vulnerable from those who operate outside and against them. After a dramatic hearing in a trial-like setting, Nell wins her freedom and can return to the wild with her new family, the Rousseau-like logic of the film finally affirmed.
Foster essentially repeated that message in a different form three years later in Contact, a tedious science-fiction movie in which she stars as an astronomer who teams up with Matthew McConaughey, unconvincingly cast as a theologian. They battle the skepticism – and, more importantly, the greed and hatred – of those, inside and outside their fields, who reject their fervent belief that alien life exists. The soggy New Age overtones of the film, which include the astronomer’s contact with her long-dead father, suggest something of a dead end in Foster’s career-long assertion that goodness and decency are locked in mortal combat with irrational malice. Fortunately, in the ensuing years, her work began to texture this paradigm and portray characters and situations with more internal complexity.
Next: The King and Jodie