I phoned in comments to the Kojo Nnamdi show about the historical context of the net neutrality debate. They took my call about 18 minutes into the show. You can listen here.
Jordan Shapiro studies how video games help us think. According to his op-ed on Forbes.com — adapted from a speech — one thing video games can do is help our schools teach students to think more about their place in society:
What we do in schools–the fundamental reason for education–is to transmit, from one generation to the next, the systematic descriptions of the universe that our civilization finds most useful.
But, in his estimation, schools are organized in a way that actually prevents students from turning that knowledge into actual citizenship. The basic problem is that schools are still organized along a ‘heroic’ model, with strict hierarchies between teachers and students. This is, for Shapiro,
A kind of top down thinking that is, at its core, undemocratic. If we want to educate citizens to participate in a democratic society, it seems counterproductive to do it in undemocratic ways.
I find this ironic because, in my opinion, the key reason we educate is to provide citizens with the ability to look at themselves and see how they function within a larger system. The only reason we even bother with school at all, is not to create laborers or workers, but rather to create democratic citizens. In other words, school is about social impact.
I tend to agree. In fact, this is a well-established critique of our education system, and has been for at least a hundred years. John Dewey expressed more or less the same concerns in “My Pedagogic Creed” in 1897. Paulo Freire made the same arguments against the Brazilian education system in Pedagogy of the Oppressed in 1968. Where Shapiro differs from these accounts is in his remedy:
The promise of game based learning is that it may help us create intelligent, thoughtful, kind, caring, compassionate, and engaged participants in this experiment we call American Democracy.
Here I disagree. I don’t think game-based learning can help us create engaged participants — or, at least, I think it can help in the weakest sense, in that game-based learning is neither necessary nor sufficient to create engaged participants.
The problem, as I see it, is that game-based learning merely substitutes one set of rules for another. The rule structure of the video game replaces the rule structure of the classroom. I play video games: I know it’s possible to have games with flexible, expansive rulesets; it’s also possible to have rulesets that are terrible (that means you, Goldeneye). But the thing about playing the game is that you don’t have much say in what those rules are.
What democratic citizens-in-training need is the opportunity to create their own rules. Shapiro almost gets this, saying “we forget that we have the power to construct our own rules” — but he does not explain how video games will help teach students to construct those rules.
As it happens, I have just finished the first draft of my book, Politics (how people rule). I spend most of a chapter talking about how people learn rules, how kids learn to make rules, and how our schools fail to teach rules.
One of the most important aspects for children’s rule-making ability — their social creativity — is unstructured play. When kids make up their own games, with their own rules, they learn to shape social facts. Video games compete directly with unstructured play. It may be that I have never seen the kind of video games Shapiro has in mind, but I have a hard time seeing how they could improve upon unstructured play.
If we want to make schools less hierarchical, to teach students social creativity and rule-making, there are cheaper, proven ways to do so. One thing we can do is protect recess: give kids time during the day to run around like maniacs making up their own crazy games. A lot of schools are cutting recess, even though it might be just as important as math to kids’ development as active citizens.
Second, we should be encouraging teachers to let students help make rules for the classroom, rather than imposing their rules unilaterally. I describe in the book how I let my high-school students design their own syllabus for a civics class I taught. The class was no easier for them than if I had imposed a syllabus, but the students were more invested in the rules.
Third, there has been some good research on the effect of student governments, mostly by Dan McFarland and colleagues at Stanford. Weak student governments make weak citizens. We can help students learn democratic engagement by giving them real governments with real power and real accountability. Most of our schools fail to do this.
None of these is sufficient; all are probably necessary. Moreover, each of the three does more to foster students’ ability to create rules than any video game. I don’t think video games are bad, and I think learning games can help students, but to teach democratic citizenship we need to rethink how we teach rules, and what rules we teach.
From Everett Rogers’s Diffusion of Innovations (3rd ed.), p. 184:
During the third cycle of hijackings, in late 1971 and 1972, the FAA gained a marked superiority over each new wave of re-invented hijacking techniques, and the success rate dropped to only 29 percent. This occurred because the mass media voluntarily agreed to black out details of hijacking techniques. Psychiatrists who studied hijackers found that notoriety was one of their major motivations, so the media stopped publicizing the names of the air pirates. Once the desire for national publicity was blocked, the rate of attempted hijackings began to fall off.
The Daily Beast has an article by Ted Gioia titled, Why Do We Hate Hipsters So F’ing Much? — which is a good question, but Gioia mostly gives up at the end. “There are no hipsters anymore,” he writes, “…they are merely a figment of your imagination.” Okay, sure — but who keeps leaving PBR cans everywhere?
The sad fact is, there are hipsters, they are annoying, and we do hate them — and rightly so. It’s not garden gnomes drinking all that PBR. Gioia gets off on the wrong foot by tracing the origins of the word to the 1940s and the black community. There were hipsters then, but those aren’t the hipsters we have now. They are different animals: the meaning of the label has changed.
The difference is that 1940s hipsters had a clear and original sense of what it meant to be cool. Jazz was cool. Marijuana was cool. The Man was not cool. That was then.
To understand today’s hipsters, we have to understand what it means to be cool — and especially how hard it is, lately. For starters, figuring out what it means to be cool is confusing. In the last twenty years, the mainstream sources of cool — music, movies, TV — have lost their grip as the Internet has allowed millions of people to express their own ideal of cool.
So on one hand, technology is allowing all different kinds of cool — but that technology also allows big corporations and media interests to jump on that cool and make it uncool. What’s cool today might be a sellout tomorrow — to wit, OK Go. This means the cost of keeping up with cool has soared: you have to see the latest bands, buy the latest clothes, and go to the best places — and it all might change next week.
The cacophony of would-be taste-makers and cool merchants means it can be really hard to figure out what’s good and what’s crap, and there will always be somebody arguing the opposite. There is so much different cool out there it can be exhausting.
In some places, this problem is especially acute. Those places tend to be second-tier neighborhoods in cities, where lots of people from different places with different ideas about cool — but not a lot of money — end up living. These people end up wearing shitty clothes, drinking shitty beer in shitty bars, listening to shitty bands, because that’s what they can afford. But at the same time, all these competing notions of cool gnaw at them, plaguing them with restless anomie and leaving them confused and uncertain of their place in the world.
So they decide: I know this stuff isn’t cool, but the fact that I’m doing it anyway is cool. I’m not invested in it — I’m just doing it because it’s kind of funny. I’m doing all this… ironically.
Now they aren’t drinking PBR because it’s cheap — but because it’s ironic. They’re not wearing stupid thrift store t-shirts because they’re cheap — but because they’re funny. They’re not listening to a band that nobody has heard of because it’s free — but because it’s cool. Which is fine, insofar as the hipsters do all this because they can’t afford better.
But the problem is that where irony makes everything cool, nothing is ever really cool. So the rest of us, who might want to wear nice clothes or drink good beer or listen to bands that do not suck, can’t be cool unless we do so ironically. Hipsters really do ruin everything.
For hipsters, there is no authentic cool — the genuine enjoyment of anything that brings pleasure is not cool. Hipsterism says to the world: if you like something because you think it’s good, that’s not cool. But wear an insanely stupid mustache or an ugly tweed vest — that’s cool, because it’s ironic how stupid it is. Irony as cool is almost a form of social protest, but without any discipline or integrity to make it substantive.
Worse, irony becomes a cover for all sorts of things that are clearly bad and stupid. Everyone knows PBR is not a good beer. Everyone knows Urban Outfitters’ clothing is poorly made and over-priced. Everyone knows big mustaches are kind of stupid looking.
But the black hole of irony is not limited only to things that do not matter. If it were, nobody would hate hipsters. Instead, it sucks down whole swathes of social meaning. We all know racism is uncool, but if you’re doing it ironically — well, that’s just hipster racism: “it’s a JOOOOOKE”. Duh. Everyone knows sexism is bad, but if Lena Dunham does it — well, that’s ironic. She’s not on TV. TV isn’t cool. She’s on HBO.
At root, holding up irony as the ultimate cool means you can’t genuinely give a shit about anything or anyone. Being earnest is uncool. Sincerity is uncool. Caring is uncool.
Which, at last, is why we hate hipsters so f’ing much: their ‘irony is cool’ ethos frees them to condemn the rest of us as uncool, without ever putting their own sense of cool up for scrutiny. It gives them all of the benefits of being cool, without requiring any of the bravado or authenticity that made 1940s hipsters cool.
The modern hipster affect is dishonest, cowardly, and it is what makes the rest of us want to round up all those hipster weasels and, I don’t know, send them to Haiti to plant trees or play with orphans. Make them do something authentic and genuine, at least; free them to live their lives where cool doesn’t matter.
With any luck, the meaningful life lesson they would learn is: it’s okay to like things that other people do not like. It’s okay to like things that everybody likes. It’s okay that I like tweed jackets and listen to Dr. John and think Love Actually is a really fantastic film. It’s cool to not chase someone else’s definition of cool.
What’s not cool is pretending that things you know aren’t cool are in fact cool, just because you don’t know how to find your own way of cool — or worse, pretending that the things you genuinely think are cool aren’t, hiding behind ‘irony’ so someone else won’t think you’re uncool.
If you are one of those people, you are a hipster and you suck — even if you won’t admit it, as Gioia discovered. We know you’re out there, and we really do f’ing hate you.
And now you know why.
Lindsey Stocker is a 15-year-old Canadian high school student who recently became famous for her short shorts:
Grade 11 student Lindsey Stocker and her classmates at Beaconsfield High School in Quebec were told to stand up in class last week so a pair of school officials could look at their outfits.
Put your arms by your sides, and if your shorts or skirts don’t reach your fingertips, you’re in violation of the school’s dress code, the officials told the students.
In protest, Ms. Stocker put up posters around her school criticizing the dress code policy with exactly the mix of self-certainty and naivete that makes fifteen such an awkward age. Said posters:
Don’t humiliate her because she is wearing shorts. It’s hot outside. Instead of shaming girls for their bodies, teach boys that girls are not sexual objects
This act of civil disobedience got her suspended. My initial reaction to this story — which many people will share — was: “kids these days…” and a sense that maybe she earned it.
As it happens, I’ve been thinking a lot about fashion lately, and the rules that govern fashion, and why those rules exist and what they mean. And I think there is more going on here than just a bratty kid being petulant.
I mean, why would it be important to a 15-year-old girl to wear a specific length of shorts? Why would it matter enough to spur protest?
As a point of fact, it was not hot outside. The above-linked report says the incident happened the week prior, the week of May 30th: the high temperature that week in Beaconsfield, Canada was 75° — and most of the other daily highs were in the 60s.
In any case, if the weather is hot, tight jean shorts are not usually a good choice to keep cool. A better choice would be athletic shorts (e.g. Umbros), which usually have longer inseams, anyway. So this is not about the weather.
Part of it, I learned on further reading, was the ham-fisted and humiliating way the school officials went about enforcing the policy, with none of the tact and delicacy you’d expect from, say, German train police hunting freeloaders.
But imagine that Ms. Stocker was wearing a t-shirt that said something obscene — say, “Fuck The Mounties”, or something like that. We generally believe in a right to free speech (even in Canada), yet nobody would contest the administrators’ right to ask her to change clothes.
The rights that adults enjoy in society do not usually extend to students on a school campus, for more or less good reasons. Students face limitations on the time, place, and content of their speech that would otherwise be unacceptable. This is all legal and reasonable, in that protection of children is the school’s first responsibility.
Ms. Stocker’s argument is that the administration’s actions did not protect her, but instead sexualized her body inappropriately. That is, she believed her shorts said nothing about sexuality, until the school officers told her classmates as much — and that this was humiliating and wrong.
While I can sympathize with her to some extent, it is also worth asking whether Ms. Stocker’s assumption holds water: do her shorts say anything, even symbolically, about her sexuality? Barring the weather, why would it be important to her to wear that particular pair of shorts?
I think the answer has to lie in a combination of identity and politics, embedded in Ms. Stocker’s fashion sense — perhaps so deeply she doesn’t see it.
As most people who survived high school will attest, how you dress is maybe the most important means of self-expression available. School is a tightly regulated environment, and you simply aren’t allowed that much leeway — except in your clothes.
When high school students get dressed — or even shop for clothes — they are performing a subtle and complex calculation about how those clothes will look to their classmates. This is not a conscious process, necessarily — but it is part of the deep social entrainment that young people learn from a very early age in the microsociety of school. This is so common that clothes are instant indicators for high school archetypes in popular culture: the prep, the goth, the jock, the nerd, the skater, etc — and most people can readily picture what each kid is wearing.
The most obvious reason Ms. Stocker wanted to wear her short shorts is that her friends were probably wearing the same thing. A certain style of short is part of her social identity and sense of self.
If that were it, the administrator’s actions would be inexplicable. But there is something else here, in what those shorts symbolize about their wearer.
For a teenage girl, a short inseam can be a signal of independence and self-definition. It says ‘I dress myself’, that the wearer has a degree of autonomy from her parents (who would like to know if she is really leaving the house in that outfit?) — much the way the miniskirt advertised a degree of liberation to modern women in the 1960s. Short shorts tell the world: nobody controls my body, nobody tells me what to wear. And yes — Ms. Stocker should be allowed the right to wear clothing that says as much.
But the reason this works is precisely because the female body — including the thighs — has been so sexualized by society. Short shorts — or miniskirts — could not carry the message about independence and self-determination otherwise. The exposed thigh announces the owner’s rejection of society’s traditional and restrictive ideas about feminine modesty and sexuality, and her independence in forming her own ideas about her body.
Which is awesome — really. But…
It’s also undeniably a message — a form of symbolic speech — that the school has the authority to regulate. And it may be the case that while Ms. Stocker is absolutely dead-to-rights about the inappropriateness of her school’s response, it still isn’t her fight to fight — as a student and a minor.
Imagine, as a reductio ad absurdum, that Ms. Stocker came to school bare-chested to protest the sexualization of her body. For a grown-up, we ought to allow that sort of demonstration, but I doubt anybody would want or even expect a high school to tolerate it. Somewhere between burqa and total nudity there is a line at which high school dress codes become a good idea, for justifiable child-protection reasons.
The fact that we see that line (even if we draw it variously) means it is not the school’s fault for sexualizing Ms. Stocker’s body. They were, in their clumsy and authoritarian way, trying to protect her — which is part of their job, even if they’re really bad at it.
I think most adults get, at a pretty basic level, that a 15-year-old girl’s body is already sexualized — at least in North America. To some extent, that is a necessary consequence of recognizing teenagers as humans with a capacity for sexual expression. This capacity is legally recognized insofar as Ms. Stocker will be able to give consent (at 16) to any sexual relationship; in the meantime, she can legally consent to sex with anyone ages 13 to 20. In fact, until 2008, the age of consent in Canada was 14. And it changed not to protect girls, but because a teenage boy was caught having (legally consensual) sex with a child predator from Texas.
Granted — however — that the sexualization which teenage girls (and women generally) endure is in gross excess of whatever is intrinsic to their ordinary human sexuality. And I do mean ‘gross’. It’s really, really unfortunate — but it’s also hard to undo. I can’t see how letting 15 year-olds wear whatever they want would solve the problem.
I think Ms. Stocker is already quite aware that she is coming of age in a world where her legs, thighs, shoulders, breasts, and buttocks are implicitly sexualized — and I can see how that alarms her. It is massively unfair: women did not make these rules, and do not even get a choice, yet those rules are a cultural minefield every woman must navigate. I want to say that North American Anglophone society (Canada and the U.S.) has made great strides in this department, but we have a long way to go.
Worse, I do not think there are any real strategies to solve this problem in a large-scale way. Women (i.e. adults) should wear whatever they want — but even though clothes can have symbolic meaning, those messages are not always clear. So every woman ends up having to worry about whether her outfit will be misinterpreted by some troglodyte asshole walking past her on the street — or, for that matter, a school administrator.
I will guess that even 50 years since its introduction, for every man who sees the miniskirt as a symbol of female empowerment, there are three more thinking, “look at the stems on that one!” And, to be fair, there are also still educated, independent women who see the miniskirt as a symbol of extreme slutitude. Those people are out there, and they totally suck, but I don’t know what to do about them apart from waiting for them to die out.
So I totally agree with Ms. Stocker that the sexualization of the female body — especially teenagers’ bodies — is a problem in society, and one we should all be working to solve. But I think she was maybe being a bit disingenuous, or naive, about where her shorts fit into that puzzle. As stupid as her school was, I end up on their side: they should and do have the authority to regulate the length of her shorts. I don’t think the underlying problem would be solved by their surrendering that authority.
I do think the problem will be solved by brave, smart adult women — the sort of woman that Lindsey Stocker is going to be very, very soon. And I think we can help her by paying more attention to what our clothes say about our bodies and our identities, and the many, many mistakes we make in inference and assumption about how others dress — and especially by rejecting the current default assumption that a lightly-dressed woman is advertising her sexuality.
I am sorry for Ms. Stocker’s humiliation, and sorry that — as a grown-up — I have not done more against this problem, but I agree that this is not her problem to solve.
The recent acquittal of George Zimmerman has focused attention on Florida’s violence problem, especially with respect to the “Stand Your Ground” law. The Tampa Tribune produced an excellent investigation of the use of the statute in criminal cases, showing that the law has been used to free drug dealers and gang members involved in fatal violence. The law has certainly not produced a decrease in lethal violence: Florida’s murder rate has gone up since its enactment. Stevie Wonder has announced he will boycott the state unless it is repealed.
“Stand your ground” is a bad law, but there is a clear logic to it: the 2nd Amendment makes it legal to own guns, and self-defense is a deeply held legal principle. Once possessed of a right to own weapons, why should you not be allowed to use them in self-defense? For a legislator who believes in the broadest interpretation of the 2nd Amendment, it makes obvious sense. The point of this post is not to debate what the 2nd Amendment means, but to describe the consequences of that broad interpretation for society. Those consequences are generally negative.
Two years ago Steven Pinker published The Better Angels of Our Nature, in which he argues that human history has seen a dramatic curtailment in the use of violence. He attributes this to civilizing factors which we might simply call ‘governance’, and describes how those factors make violence unthinkable. The principal vehicle for this change has been the nation-state, famously defined in part by Weber as the political organization that “upholds the claim to the monopoly of the legitimate use of physical force in the enforcement of its order”. ‘Physical force’ translates to violence — it is this monopoly on violence which helps make violence unthinkable. As Pinker demonstrated, this is an ongoing process, by which the scope of legitimate private violence is more and more restricted. The state has a strong incentive to do so, because any private violence is ultimately a threat to the state.
What Pinker missed, as I have argued, is that this process is then reflected back on the state. As the government restricts citizens’ violence, the citizens restrict the government’s violence. Witness the decrease in the scope of legitimate state violence: public executions are gone, and increasingly executions are banned. Even in Florida, the electric chair was replaced with (presumably) more humane lethal injections. The ‘civilizing trap’ is a virtuous cycle that has helped make ours the most peaceful era in human history, outside of perhaps Eden. This has been true across the developed world, and much of the developing world, as it is true generally for humanity. There is still plenty of violence in the world, but much less than our ancestors endured.
In the U.S., however, the cycle has moved more slowly. One reason, perhaps the most important, is that the 2nd Amendment is an effective drag on the reduction of violence by the government. Private violence is not only thinkable, but explicitly condoned by the U.S. Constitution — at least in the broad reading of the amendment. The reluctance of American lawmakers to address the 2nd Amendment head-on means that violence remains rather thinkable for private citizens, and every victory for 2nd Amendment supporters is a sanction for further violence in our society. When lawmakers stop a gun control law or enact a “Stand Your Ground” law, the message sent to citizens is that private violence is okay.
Florida’s law-makers are apparently surprised that this happened in their state. The logic of “Stand Your Ground” was clear; few seem to have guessed that the law would lead to an increase in legitimate fatal shootings, if not an increase in killings overall. That is, I doubt the legislators who voted for the law believed it would legitimize many of the sorts of killings where it is now used as a defense. But the tacit message was also clear: in Florida, private violence is legitimate. The boundaries of that legitimacy are now proving more and more fuzzy. “Stand Your Ground” seems to have been designed to solve a narrow problem, without consideration for the broader implications for private violence (although some people in Florida did warn that it would get out of hand).
The interaction between gun law and private violence is not at all straightforward. Concealed-carry laws are widely seen as pro-gun; a Federal court recently struck down Illinois’ ban on concealed carry, in a decision hailed as a victory for gun rights. That is true, up to a point, but concealed-carry laws do two important things: first, they strongly encourage people to hide weapons; second, they usually come with a certification requirement. These can both serve to diminish the thinkability of violence, at least for persons not carrying weapons (although the classes tend to encourage the person carrying the weapon to think even more about violence). Absent an open-carry rule, concealed-carry may serve to hide violence, in effect to make it less obvious and thus less conscionable: “out of sight, out of mind”. This is a much slower, more subtle process — and not at all the same causal mechanism — for reducing violence than direct deterrent effects posited by concealed-carry enthusiasts, whose claims are not supported by evidence. Florida is not an open-carry state — which is probably the next step in its devolution.
This is not just a problem for Florida, but for the country as a whole. When gun advocates argue for their right to protect themselves, they are arguing that they have a legitimate right to use violence. I suspect that many of them do so in good faith, believing sincerely that this is simply about their own protection and their own rights. That much is reasonable and decent. However, it does not acknowledge the social consequences of that message: to say that I have a right to violence is to deny the state its monopoly on legitimate violence. Absent the government’s ability to narrowly draw the limits of legitimate violence, the ‘right to bear arms’ becomes a license to violence as broadly construed as the arms-bearer wishes, and an individually-rational decision about violence becomes a societal problem.
Meanwhile the return leg of the cycle, by which government is restrained from violence by its citizens, also grinds slowly. A citizenry accustomed to thinking about violence may tolerate a great deal of it from government: wars, death penalty, massive imprisonment, pervasive surveillance, economic ruin. It is no surprise that the people most vocal about gun rights tend to be most approving of government violence — the very horrors the 2nd Amendment was intended to protect them from. And of course there are racial and other factors at work, but the underlying legitimacy of violence as both a private and public activity is crucial.
Thus the 2nd Amendment, in its broad interpretation, hinders the peace and harmony we wish for our society. It is the critical glitch in our operating system, the fatal fault that we cannot fix. The apparent intent of the amendment — to protect people from their government — is now laughably obsolete, even as it prevents government from protecting people through a robust monopoly on legitimate violence. The 2nd Amendment denies the government its monopoly as it denies citizens the tranquility they expect from their government. It is no coincidence that Florida is compared to the ‘Wild West,’ another place where government authority was weak and citizens could not rely on its protections.
Florida is the current poster-child for this problem, but only that. To the extent that the arc of history has bent away from violence, and government monopolies on legitimate violence have made civilization possible, every expansion of the 2nd Amendment and every victory for ‘gun rights’ is a step back from civilization, a step backwards in history. We cannot fix the Floridas in this country without a full reckoning of the costs and consequences of the 2nd Amendment for our society.
Since the demise of DOMA made gay marriage a thing, Matt Lewis of the Daily Caller has come out for polygamy, backed up by the Economist’s Democracy in America. So if you can marry anybody you want, why not marry more than one anybody you want?
The short answer: equal protection. There is no way to arrange polygamy that does not lead to inordinate legal complexity, unless the parties are legally unequal. And the government has a very real and legitimate interest in keeping marriage as simple as possible.
The first thing to realize is that there are two possible forms of polygamy. In the first, the unit of marriage is the relationship between two people, but each person can enter into multiple marriages. So when a husband marries three wives, that represents three legally distinct marriages, each between the husband and one wife.
Alternatively, a marriage is a single contract, joined by multiple people, in which there is only one relationship among those people. The above husband has a legal relationship with each of his wives, but each of the wives also has a legal relationship with each of the other wives. I am being heteronormative, but there is no reason why there could not be multiple husbands and wives in the same marriage, of course.
We’ll call these the ‘unit’ and the ‘contract’ versions of polygamy. Contract polygamy ends up being simpler, so I’ll deal with that first.
It seems easy enough to say that as many people as want to can enter into a single ‘contract’ marriage. In this model, the contract allows for equality because each member has the same power to enter into the contract. The problem comes when one of those people wants to exit: does that dissolve the contract entirely, or just that person’s responsibilities under the contract? What if that person is the primary earner? What if there are children involved? Imagine the whole thing collapses — what obligation do the spouses owe to one another, in terms of alimony and child support? What do members of the marriage who are not biological parents owe to the children of other members? Do they get visitation rights?
All of this would have to be negotiated, legislated, or litigated. I am not saying it’s impossible, but however much effort went into the front end of such an arrangement — the contract spelling out everyone’s obligations — the likelihood that it would explode in court, with each spouse getting a lawyer and the result dragging out until the heat death of the universe, is very high, and probably exponentially higher the more people are party to the contract.
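To put a rough number on that complexity, here is a back-of-the-envelope sketch in Python (purely illustrative) of how the count of pairwise legal relationships grows with the number of spouses in a single contract:

```python
from math import comb

# Each pair of spouses in a 'contract' marriage is a potential legal
# relationship to litigate: support, custody, visitation, property.
# With n spouses there are n * (n - 1) / 2 such pairs.
for n in [2, 3, 4, 5, 10]:
    print(f"{n} spouses -> {comb(n, 2)} pairwise relationships")
```

Strictly speaking the growth is quadratic rather than exponential, but the practical point stands: a two-person marriage has one relationship for a court to untangle; a ten-person contract has forty-five.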
And that’s the easy version of polygamy.
In the ‘unit’ marriage, we have to remember that equal protection means that any law allowing multiple wives must also allow multiple husbands (yes, this discussion is heteronormative). And if everybody can marry anybody, the math gets complex very quickly — and the law gets intractable.
The basic problem is that if I can marry two women, and they can marry two men, and those men can marry two women, et cetera et cetera — then, in theory, there is nothing to keep everybody from being linked to one another through a network of marriages.
Let’s keep it simple: imagine the network of marriages is a square: you are married to two women who are also married to the same other husband. You want to divorce one of your wives: how does that affect her relationship to her other husband, and his other wife (who is still your wife)? What legal obligations does that entail? Do you owe her support, even if her other husband makes three times what you do?
In social network analysis, a group of people who are connected to each other, but not to anyone else in the study population, is called a “component”. Components can be “giant” — that is, include most of the people in the population. Here’s a graph from Kara Makara’s research on high school social networks; the cluster in the center is a “giant component”, which accounts for most people in the high school, along with a few scattered smaller components around the periphery.
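The component idea can be sketched in a few lines of code. Here is a minimal, pure-Python illustration (the names and marriages are invented) that finds the components of a hypothetical marriage network, including the ‘square’ described above:

```python
# A minimal sketch of finding "components" in a hypothetical
# marriage network. Names and edges are invented for illustration.
from collections import defaultdict

marriages = [
    ("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"),  # a 'square' of four spouses
    ("E", "F"),                                      # a separate, unconnected couple
]

# Build an adjacency list: who is married to whom.
adj = defaultdict(set)
for x, y in marriages:
    adj[x].add(y)
    adj[y].add(x)

def components(adj):
    """Return the connected components as sorted lists of people."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first traversal
            person = stack.pop()
            if person in comp:
                continue
            comp.add(person)
            stack.extend(adj[person] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(components(adj))
```

The square of four spouses comes out as one component, and the ordinary couple as another; a real marriage network with a giant component would put most of the population in a single list.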
Now imagine this graph represents a network of marriages — and now one person near the center of the giant component wants to get a divorce. Does he have any obligation to his wife’s children by another husband? Do his wives have any obligation to his ex-wife? Does he have any obligation to his wives’ other husbands?
Once again, the legal implications become immensely complicated. No government wants to deal with this — probably no government can afford to deal with this. That goes for unit and contract polygamy, both. Polygamy leads to prohibitively complicated legal relationships, and that quickly becomes a massive problem for legislatures and courts.
Incidentally, these sorts of graphs can also be used to model epidemics: to the extent the government has an interest in preventing sexually transmitted diseases, and to the extent that the privileges of marriage are an incentive for monogamy, there is also a public health rationale for monogamy that informs the government’s case against polygamy.
As far as I can tell, the only way to make polygamy legally simple is to greatly restrict women’s rights in those relationships — or men’s rights, but in practice polygamy always ends up restricting women’s rights. It’s partly because polygamy is an intrinsically sexist idea, but also because the restriction of women’s rights is necessary to making polygamy a manageable practice. You don’t get legally feasible polygamy with equal protection for women (and men).
Equal protection means no polygamy — that the government has a strong and compelling interest against recognizing the practice. That does not mean it should be illegal, necessarily (I think it probably should), but that it should not be legally recognizable.
So let’s stop imagining that the end of DOMA opens the door for polygamy. The alternative is likely to create a quagmire of legislation and jurisprudence in which even the Supreme Court will find no traction.
The following are a few thoughts I did not want to see die with my academic career. I offer them for the benefit of former classmates and colleagues…
I spent a lot of time in my IR training talking about paradigms, but came to the conclusion that it’s mostly crap. Our ‘paradigms’ are really just theories of the state (see below). There is an obvious incentive to inflate the importance of those theories, to pretend they are paradigms and incommensurable, but they are not — and in fact, they generally share a common paradigm.
That paradigm is ‘policy-relevance’ — the idea that political science (and especially IR) has as its referent and primary responsibility the state (here and throughout I mean primarily US political science). Realism, Neo-liberalism, the English School, Constructivism, etc. — all IR paradigms — share this investment in the state. It’s why we call it International Relations — although we really mean inter-state relations. In political science more broadly, you see this same commitment as an indifference to micro-politics — local government, non-official governance, all those nooks and niches where people do politics without getting paid for it.
The alternative is a political science focused on people — that is, ‘people-relevance’. Jim Rosenau was already mapping out this new paradigm twenty years ago, and I think it’s going to eventually take over the discipline. His view was labelled ‘post-internationalism’ by others, but he no longer felt the word ‘international’ was of much use; he preferred to think and write about ‘world politics’, by which he meant that just about anybody could participate. People-relevance gives a broader, richer, deeper, and — frankly — more interesting understanding of politics.
A couple years ago I wrote a note to myself summarizing my understanding of politics as follows:
1. Politics are something people do.
2. Politics are ideational — that is, composed of ideas held by people.
3. The ideas in politics are rules fundamentally concerned with conflict and regulating violence.
This is, needless to say, endlessly incommensurate with the way politics are presently understood. Yet to me, this is the only account of politics that makes sense, and the only account adequately responsive to the demands of democratic society. More on the second point in a moment; the third is discussed at length in my paper, Violence in International Relations.
As far as I can tell, political scientists adopted this paradigm — ‘policy-relevance’ — from German academics of the early 20th century. I believe Woodrow Wilson, among others, has written about the importance of political science to a well-functioning government. Weber and Morgenthau also have a preoccupation with politics as government. The tension between idealism — now discredited in IR — and realism is really a tension between English and Prussian understandings of the place and purpose of political science with respect to the state. I wanted very much to write a paper tracing that tension thoroughly, but never got around to it.
The principal problem with this import is that it comes from one of the most toxic societies in history: Prussia. The totality of the demands the Prussian polity made on its society is hard to overstate, but its consequences for the world are most visible in the two world wars of the last century. To draw our understanding of politics from such a society seems like a terrible mistake, especially if we seek to be relevant to a democratic society.
This is not to say that theories of the state have no place in a people-relevant political science. We should not, however, pretend these theories are primarily descriptive. They are rather strongly prescriptive, as well: neo-realism tells us not just what matters to governments, but also what should matter, which is why Mearsheimer and Walt felt empowered to explain the detrimental effects of the Israel lobby on American politics. Likewise constructivism tells us that norms do matter, but also that norms should matter — that governments should pay attention to those norms. Without that implicit prescriptive aspect, these theories would be policy irrelevant.
When it looked like I might be allowed to teach an MA class in Intro to IR theory, I was already well into ‘people-relevance’, so I thought about how to organize the course appropriately. I decided that each student probably entered the class with a latent theory of the state — what states do, what they should do, etc. — and that IR theory mattered to MA students only insofar as it helped them understand each other’s understandings of the state. That is, in the policymaking community it matters far more what the people around you think of the state than what the state really is or does in any objective sense. So the class would spend a week talking about the theories they already had, before we went on to official IR-approved theories — with the likely result of significant overlap. By the end of the class, students should have been able to think rigorously about their own theory of the state, as well as assess the latent (or explicit) theories used by policymakers, without sweating the artificial barriers those theories usually have in academic IR.
Point 2 in my summary above uses the word ‘ideational’ to mean ‘of or relating to ideas’. You can also use the word ‘ideal’ to mean the same thing, but ‘ideal’ is discredited in IR (by Prussianists). I knew at some point in my career trajectory I would be described as a neo-idealist, and that it would drive me nuts, but it is an obvious short-cut.
There is a persistent misconception in the scientific community that ideas are not real, and therefore not accessible to science. But there is a scientific position which allows ideas to be part of reality: scientific realism. Here is one definition, from Godfrey-Smith’s Theory and Reality:
We all inhabit a common reality, which has a structure that exists independently of what people think and say about it, except insofar as reality is comprised of thoughts, theories, and other symbols, and except insofar as reality is dependent on thoughts, theories, and other symbols in ways that might be uncovered by science.
[…] One actual and reasonable aim of science is to give us accurate descriptions (and other representations) of what reality is like. This project includes giving us accurate representations on aspects of reality that are unobservable.
It takes only a moment’s thought to realize that the social world is exactly that reality comprised of thoughts, theories, and other symbols — which I summarize as ‘ideas’. The trick is to appreciate that these thoughts, theories, and symbols also provide structure to the world, and in ways which might be accessed by science. And once one accepts that fact, the importance of people in generating and sustaining ideas becomes manifest; only through people do the structures of the physical world map onto the social world. Going back to the theories point above, this means that states are made up of people’s theories about the state; what states can and cannot do depends on people’s (or at least someone’s) ideas about what states can and cannot do.
Which is to say that I do not think you can be a ‘political scientist’ and not subscribe to scientific realism at some level; the cognitive dissonance will fog your mind. And you can’t be a scientific realist without appreciating the centrality of ideas to the social world. We are all idealists — or should be.
Another way to appreciate the difference between policy-relevance and people-relevance is by looking at what we teach. People-relevant political instruction actually has a name: civics. And it is the bastard stepchild of political science, usually locked in the basement and rarely talked about. Look at the last APSA program: how many panels had ‘civics’ in the title?
Imagine a chemistry class taught in a lab full of equipment — beakers, bunsen burners, test tubes, chemicals — none of it ever used. You would think something was wrong, right? Yet most intro political science classes do exactly that. Politics are something people do, and they start doing it well before college; even the most remedial student brings to class some mental equipment for forming, parsing, and maintaining rules. Yet political science classes usually do not use this equipment at all.
Classrooms are highly political environments: there are rules, both formal and informal. There are layers of rules — University rules, class rules, social norms, etc. There are people who can think about those rules, and even make new rules. There is even a very solid, anti-democratic hierarchy. If we wish to teach people about politics, the classroom is the obvious place to start. I have spent a lot of time thinking about how to use that equipment to good effect — how to make political instruction more people-relevant, more civic.
One of my favorite ideas is the negotiated syllabus. Of course, every class has a certain scope of material that should be covered, but little about that material demands one form of assessment over another. Most teachers simply hand their students a syllabus — in effect saying, ‘here are the rules, take it or leave it’. That may be entirely appropriate for a science or literature class, but in political science — where learning about rule-making is the whole point — it is a wasted opportunity. Instead, students should be allowed to help make the rules, and one way to do that is to negotiate the syllabus.
I tried this when I taught civics to high school students: I gave them a partial syllabus that showed what content I expected to cover, but told them they needed to work out the rest of the syllabus — tests, quizzes, attendance policy, etc — in negotiation with me. These were all bright, well-educated kids taking an elective in civics; I thought this would be an easy assignment. Instead, they flipped out — they hated it, were terrified of it, thought I was trying to ruin their lives. One student — a self-described anarchist — scolded me in class for ‘playing politics’. It was all I could do not to laugh. Another student dropped the class. Their proposal was 50% participation, plus some quizzes and written assignments, and they were afraid I would veto it. I would have accepted 100% participation.
In retrospect, I could have explained the assignment better initially, and I should have started the class by saying I was not out to ruin their lives. Nonetheless, I think the students benefited from the negotiated syllabus. It forced them to think about the rules of the classroom from the very first day, and to take an active role in shaping those rules. It forced them to participate in making the rules, and I would do it again — along with a number of similar active-learning ideas I tried in that class (syllabus here).
This way of teaching politics is more challenging to the students, who are already experts at passive learning by the time they get to college. It is also less work (and crazy fun) for the professor, because making better use of the existing equipment means you don’t have to bring as much stuff into the classroom. Finally, it is more responsive to what young people should be learning about politics, and how politics relates to their lives.
The Nobel Prize
With all deference to Elinor Ostrom’s intellect, the hullabaloo over a political scientist winning the Nobel Prize was misplaced. Political scientists don’t need the Economics prize, because we already have the Nobel Peace Prize. Several people have won the prize for doing things that are firmly within the purview of a political scientist’s career (especially those invested in policy relevance), even if they did not call themselves political scientists. Ralph Bunche is the best example, but Elihu Root and Norman Angell are arguably good examples; Henry Kissinger, less so; Frédéric Passy was a French political economist who won for his work on international peace.
Of all the Nobel Prizes, the Peace Prize is the lowest hanging fruit. The depth of recent winners’ actual contributions to world peace is not all that impressive (which is another argument for it being a politics prize). I believe part of the reason is that ‘policy-relevance’ (at least in the U.S.) blinds us to the possibilities of peace, and more so to the potential for our own agency and our discipline to further that peace. The only reason Kissinger won the Peace Prize was because he was told it was in the national interest to end the Vietnam War; had Nixon continued the war, Kissinger would have abetted him. Peace was only policy-relevant after the US had lost that war.
Which is to say that a political scientist, with hard work, only a little moral vision, and some measure of luck, could well find himself or herself in a position to win a Nobel Prize. Had Jim Rosenau ventured outside academia earlier in his career, he would have been an easy winner. Certainly, a trained political scientist should be better equipped than a non-specialist to do the right sort of work. Of course, the discipline does not provide any incentive to do that work, and is maybe even suspicious of ‘activism’ — but a Nobel Prize would trump those suspicions. In the worst case, the also-rans would have spent their careers doing meaningful (if unrecognized) work for peace. It seems eminently doable, but few political scientists are even trying, as far as I can tell.
My post on Eagle Scouts returning their badges has led to my being more involved in the efforts to overturn the BSA’s ban on gays and lesbians. The aims and methods of those efforts are, of course, somewhat sensitive, but I think it important to discuss what is not being done, and why not, to clear up some misconceptions. What follows is true for the organizations I am involved in — others may have different views, of course.
1) Nobody is asking the Boy Scouts as an organization to end discrimination.
The current proposal before the Scouts, and to which the advocacy organizations are directing their efforts, is called the ‘local option’. It would allow troops which discriminate to continue discriminating, which in practice means most Boy Scout troops would ban gays and lesbians. It would also allow troops which do not discriminate to not discriminate; this means the many troops chartered to UCC, PC-USA, and other relatively open denominations would be allowed to determine their membership as their confessions require. It would also mean that MCC and Reformed Jewish congregations would be able to charter units — both denominations have severed ties with the Scouts because of their policies.
There are some people who think the Scouts should ban discrimination entirely, but this is misguided for a number of reasons. Let me say that I think discrimination against gays and lesbians is wrong, and should be illegal in most contexts. So I am against discrimination.
However, the crux of the ‘local option’ argument is that the BSA national leadership has violated the troop-charter relationship. Advocating for a blanket non-discrimination rule essentially concedes that the BSA has the right to override that troop-charter relationship. This is not in keeping with the traditional model of Scouting; troops were always supposed to have a specific relationship with their chartering organization, and to reflect to some extent the values and identities of those organizations. This is an important point, which I will come back to.
Moreover, a blanket non-discrimination policy would likely destroy Scouting as an institution. In particular, Scouting depends especially on more conservative churches like the LDS and Catholics; these faiths will never accept a blanket non-discrimination policy. If they pulled their troops out, Scouting as a national movement would be over.
The only way to effect a ban on discrimination in Scouting would be through the Congressional charter — that is, the law that makes the Boy Scouts of America the only Boy Scouts of America. You could, in theory, have inserted into the charter a requirement that the Boy Scouts not discriminate against gays and lesbians. Our society has to come a long way before that is even remotely feasible. For starters, there is no Federal law against discrimination based on sexual orientation — although I believe there should be.
Despite my views on discrimination, I happen to think it is right that the Boy Scouts reflect our general backwardness on this issue. The Boy Scouts are a civic organization primarily, and bigotry is unfortunately part of our civic life. The great strength of the Boy Scouts system — the troop-charter relationship — is that it exposes young people to a broad spectrum of American life. If we think young people will be exposed to bigotry later in life, I can think of no better place for a person to encounter that bigotry and learn how to respond to it constructively than the Scouts. It isn’t going to happen in schools, much less in churches. So perverse as it may seem, there are good reasons to allow bigotry to persist in Scouting — that is, to allow troops to discriminate against gays and lesbians if that is their desire. Only the ‘local option’ allows Scouting to reflect our society in full.
That said, the ‘local option’ is not perfect. There are still some unanswered questions as to how non-discriminant troops would interact at the Council and national level with those which do discriminate; would openly gay leaders be allowed in summer camps, or at Philmont? And units which do not discriminate will need to do some work to decide how they are going to teach their kids to respond to bigotry within the Scouts. But these are resolvable issues.
2) Nobody is asking the Boy Scouts to allow nontheists.
The ‘local option’ has a necessary logical consequence: chartering organizations which are not churches, especially public schools and civic organizations, have no real reason to prohibit nontheists from participating. But Scouting quite prominently invokes a “duty to God”, which is not part of the discussion concerning the local option.
The reason ‘duty to God’ is not on the table is that it would further antagonize the religious elements resisting gays and lesbians. A secondary reason is that in practice ‘duty to God’ means very little. To meet the incredibly low threshold of religious sentiment necessary in Scouting, one can profess belief in the Force, or tree fairies, or the Great Master of All Scouts, so long as one recognizes a ‘higher power’. You might think that actual practicing theists would find this incredibly insulting, but it does not come up. By analogy, one might be tolerably gay in the Scouts so long as one professed an indeterminate interest in female anatomy.
I happen to think that there is far more in common between my beliefs (practicing Presbyterian) and those of a principled agnostic (or even atheist) than between my beliefs and, say, those of a Scientologist (allowable by Scouting). I also think there is a very easy way to integrate nontheists into Scouting, which I will demonstrate:
Scout Oath:

On my honor, I will do my duty
to God and my country,
to obey the Scout law,
to help other people at all times,
to keep myself physically strong,
mentally awake,
and morally straight.

Scout Oath (nontheist):

On my honor, I will do my duty
to my neighbor and country,
to obey the Scout law,
to help other people at all times,
to keep myself physically strong,
mentally awake,
and morally straight.
See how easy that was? The point being that although this is a very real question, very much affected by the ‘local option’ logic, with a strong legal case against discrimination, and a fairly straightforward solution, the problem of nontheists in Scouting is not at stake. The conservative churches in Scouting can rest assured that the ‘local option’ debate is not in any way concerned with ‘duty to God’.
3) Nobody is asking the Boy Scouts to allow girls to earn ranks.
There are already girls and women in Scouting — as Den Mothers and other leaders, but also as members of Explorer Units and Venture crews. The girls are there, in small numbers: they just can’t earn ranks. Girls can’t be Eagle Scouts.
Obviously a church or a school which does not discriminate against gays and lesbians is not likely to discriminate against girls and women; the ‘local option’ has a necessary logical implication here, as well. Nor is a society which seeks to end homophobia well-served by an institution that promotes sexism; the connection between the two is more than tangential. Yet the problem of gender equality is not a question at stake in the present debate.
The lack of interest in gender equality in Scouting is probably due to the existence of the Girl Scouts, which are their own thing in sort of a separate-but-equal kind of way. (Quiz: what’s the Girl Scout equivalent of the Eagle Scout?) The Girl Scouts were of course the result of the Boy Scouts’ sexism, and their separate organization made sense in pre-World War I America, when women generally were not part of American civic life. Things are a bit different now — which is not to say that the Girl Scouts are part of the problem, but that our civic institutions should reflect the society we live in, if not those we wish to live in. Women, much more than gays and lesbians, have come into important roles in American society, and the Boy Scouts do not reflect that fact.
Which is to say the Boy Scouts are still part of the problem: still a very sexist organization. Some people will tell you that it is important that young men develop manliness with other young men in a space protected from the influence of women (and here I am thinking of young men as ages 13 and up). This is an understanding of gender at least as ill-informed and mystifying to me as homophobia, but as practiced in Scouting simply means tolerating and encouraging sexism. It’s not even that sexism is actively promoted, but that most Scouters don’t even recognize it when it occurs. I have seen girls in Scout units subject to harassment, and young women working in Scout camps treated as second-class workers, with none of the responsible adults viewing this as a problem. The problem is not just that the women suffer, but that the boys learn that such behavior is socially acceptable. I learned the joke about not trusting anything that bleeds for seven days and doesn’t die at Scout camp, well before I understood what it meant (and even longer before I realized it isn’t funny). As long as girls are prevented from full participation in the Boy Scouts, that organization will be turning out male leaders conditioned to treat women as their inferiors.
The only way to really eliminate sexism in Scouting is to fully include girls in the program. I am not arguing for the end of the Girl Scouts, and I do believe there may be value in having a protected space in society for girls, and even boys under the age of 13. I am rather arguing for the destruction of the protected male space, which necessarily requires introducing girls and women as equal participants. So long as women are not allowed to earn ranks, and non-sexist charter organizations are not allowed to enroll girls in their troops, that space will continue churning out young sexist men.
If we see the primary benefits of Scouting as civic and social, addressing the sexism in Scouting is at least as important as ending the homophobia. Yet nobody is arguing the Boy Scouts should allow girls to earn ranks. The reason is that most of the people advocating for the ‘local option’ are men, especially gay men, and generally former Scouts. I am also a former Scout, male, albeit straight, and I can say that it is very difficult for men who love Scouting to come to grips with the organization’s sexism (much like white people have a hard time seeing racism, especially those who enjoy whites-only country clubs). This, more than the homophobia, is the problem that makes me pause when imagining my own (hypothetical) son in the Scouts.
It is also the case that the two denominations advocating against the local option, the LDS and the Catholics, are also committed to sexist practices, and so advancing the cause of women’s equality might be seen as further antagonism of those faiths. In any case, allowing girls to earn ranks is not part of the debate.
The debate over Scouting right now is focused very narrowly on the ‘local option’ — whether churches and other organizations should be allowed to enroll gay Scouts, and gay and lesbian Scouters. The debate does not involve a blanket ban on discrimination, nor does it concern the place of nontheists and girls in the organization. The decision to allow the ‘local option’ is incredibly narrow, and only a marginal improvement in the Boy Scouts’ relation to society at large. It is such a small change that it should not be controversial, but it is. For those seeking to understand the controversy, it is important to appreciate what is at stake — and especially what is not.
Watching Downton Abbey with Norbert Elias
My wife watches “Downton Abbey”, therefore I watch it, though I find it vaguely annoying. Recently she asked me, because of my obvious expertise in all things under the sun, “how many aristocrats are left in Britain?” No idea, darling; I believe Labour ate most of them.
A few days later, I happened to meet a Welsh woman, and so I put the question to her. “Oh, lots,” she said. And what do they think of Downton Abbey? “Well, it’s not as popular in Britain as it is in the US. I don’t watch it, but then I don’t care much for soap operas.” Soap opera? But it’s on public television — Masterpiece Theatre, no less.
I have been thinking about that exchange — now, of course, it seems so obviously a soap opera — and I am prepared to hazard the guess that British and American viewers have a very different experience of the show. The difference lies primarily in attitudes towards class, and especially in Americans’ comparative unfamiliarity with the British class system. That is, I think most Americans who enjoy the show end up unintentionally endorsing some ideas which they should find — and some Brits do find — abhorrent.
This discussion focuses primarily on the most recent episode, with some spoilers, so do yourself a favor and watch that episode first, then come back to this post, then go back and watch the episode again.
So in the most recent episode, the big event is Sybil’s death (the way-hot one, for the Spike viewers out there). You probably felt sad watching her die, as you’re supposed to. And you probably were annoyed that her father insisted on taking the advice of the upper-class physician over the middle-class physician who correctly diagnosed her pre-eclampsia. And you probably suspected that, to some extent, Sybil was killed by the British class system, and that’s a horrible tragedy. Now ask yourself: how did the class system treat Sybil any worse than it treated Ethel? You remember Ethel, right? We’ll come back to her.
Your reaction to Sybil’s death was probably very normal: her father made a horrible mistake relying on the advice of a social peer rather than deferring to someone lesser in status, and her death was a senseless, needless tragedy. But there is another way to understand Sybil’s death: as a necessary sacrifice to order. Keep in mind that her father is as much a creature — and protector — of the class system as anybody else in the show. And Sybil posed a horrible threat to that system by her actions: marrying a chauffeur, running off to Ireland, being tangentially involved in the independence movement there, being thoroughly happy, et cetera. In order to preserve social order, Sybil had to die. (This author, I think, misunderstands the episode.)
If this seems harsh, consider the politics of the show’s creator, Julian Fellowes: he’s a Tory, and “Downton Abbey” is his paean to ‘better’ days. This is, after all, a show in which the central tension is whether some rich people will have to move to a somewhat smaller house. But of course, the house and the family’s plight are symbolic of the social order in which they live — a grossly inegalitarian order anchored by extremely wealthy people in obscenely large houses. In the show that social order is shown to be rapidly eroding, and yet time and time again the family is saved by the hand of God (that is, Fellowes), and allowed to stay in their gigantic house.
Sybil — by being the bravest, kindest, most free-spirited character in the show — was a threat to the Downton order; her death was punishment for allowing viewers to think against that order. Every other character who poses a threat to the existing order is depicted negatively: Branson, the chauffeur who marries Sybil, is depicted as hapless, angry, and often sulking. Edith (the other one), who writes a newspaper column suggesting — gasp — women might vote, is played as a dimwit. Matthew Crawley, heir to the estate and aristocrat pretender, is kind of a dolt.
And then there’s Isobel Crawley, the resolutely middle-class mother of Matthew, played as a frumpy busybody by a woman 15 years older than Elizabeth McGovern (Lady Cora — the MILF, for the Spike viewers). That actress, Penelope Wilton, was born in 1946; McGovern was born in 1961. Maggie Smith (the chicken lady), who is understood to be so old as to be paleontologically interesting, was born in 1934. And granted, actors play roles older than they are — usually because that character is supposed to be hot and awesome. The great difference between Isobel Crawley and Lady Cora is deliberate: it’s not just chance that one is a MILF and the other is a grandmother.
Fellowes does this because he wants to convince you the world “Downton Abbey” represents is better than what came next. This is him in an interview on Fresh Air, pining for the old ways:
“Well, I mean, we live in an era where there are sort of no rules for anything anymore. But of course the good thing about rules is you always know what you’re doing. You always know what you should wear. You always know (unintelligible), when you’re supposed to get there, what you’re supposed to do when you do get there. You know, we’ve lost that kind of security. I think that that is one reason why, you know, the show appeals because it seems to show a more ordered and kind of ordained world. In fact, of course, that is largely a myth. It was a world where all sorts of, as I’ve said, things were bubbling just beneath the surface. But nevertheless in terms of your daily life, what you wore when you got up, what you called people, what you did next, I think it was sort of easier to follow the plot than in our own time.”
Rules equal security. Give credit to the man for admitting it is a myth (even if he rolls that back in the very next sentence), but in “Downton Abbey” the myth is very much front and center. And in that myth, the worst thing you can possibly do is try to change the rules. It is simply not true, as Fellowes claims, that we have no rules today. Instead we have different rules, rules that took the place of the rules in “Downton Abbey”, and it is incumbent on the viewer — especially the American viewer — to decide whether those rules are better or worse than the rules of “Downton Abbey”. Fellowes tips the scale against the present rather heavily.
Which brings us back to Ethel: a subplot in the series, something to show how meddlesome Isobel Crawley is. Ethel was a servant at Downton, fell in love with an officer being treated there during the war, had sex with said officer, was then spurned (I’m not sure how that played out — I missed an episode or two), fell into prostitution, felt obliged to give up her son, and now has been hired as a cook by Mrs. Crawley. The incumbent cook refuses to work with a fallen woman, and so quits; the Downton servants then forbid any junior servants (footmen, maids) to enter Mrs. Crawley’s house. You are meant to feel that Ethel’s fall, while perhaps unfortunate, is entirely deserved and her would-be redemption intrusive and offensive. So would you say that the rules provided Ethel with any security?
And in fact, “Downton Abbey” grossly underplays the extent to which that social order oppressed members of the lower classes, and also women. This excellent essay explains that Downton joins a long list of British cultural exports in understating that oppression. Trying to grasp the extent of that injustice by watching Masterpiece Theater is like trying to understand slavery by watching “Hell on Wheels”. Americans, I think, simply don’t understand that system, whereas many British viewers get it at a visceral level — and find its cloying depiction in Downton off-putting. (In my experience, most Britons are likewise clueless about American race relations.)
It is not simply that Americans should identify with the lower classes. In fact, I expect most American viewers are in the middle to upper middle class. But most Americans would also identify themselves with democracy over aristocracy, and here things get more complicated. The rules that exist in Downton — the rules Fellowes pines for — exist to sustain the aristocracy. They do not promote social welfare more broadly, but rather the interests of the very rich and very powerful, over the rest of the people in Britain (and at the time, much of the world as well).
Which brings me to Norbert Elias, German sociologist and later British citizen. Elias argues that what we consider ‘civilized’ behavior in fact owes its origins to the struggle between feudal lords and monarchs (that is, struggle within the aristocracy). As monarchs became more powerful, they also became important referents for manners and customs, as aristocrats adopted monarchical habits and fads in an effort to ingratiate themselves with the king. Some of these habits persist today, most notably in the arcane rules surrounding polite dining (the silverware, the rules about using napkins, the order of dishes served).
Everything about the rules governing “Downton Abbey”, and thus nearly everything we are supposed to admire about Downton — the clothes, the furnishings, the parties, the manners — derives from the Crawleys’ desire to position themselves in aristocratic society. You can hardly overstate the extent to which monarchical attitudes and values pervade the characters in the show. If there were no king, nobody would know what to wear. Moreover, these rules depend rather strongly on some degree of oppression: if you are going to change jackets for every meal, you had better have a valet and someone to do your laundry, at minimum. And to the extent that we admire the trappings of aristocracy unreservedly, we are endorsing a system that largely sucks for everyone not in the aristocracy (and for most women in the aristocracy as well).
Consider Ethel once again: why would it matter to the servants of Downton whether their former colleague is hired as a cook somewhere else? Because Ethel the prostitute is the farthest a person can get from the King, and the servants of Downton know they are paid to do everything possible to protect and advance the social position of their employers. Those most committed to the social order (e.g. Carson) are most adamant in their refusal to help or even acknowledge Ethel. According to the Downton rules, the people upstairs don’t get the fancy clothes unless the Ethels are kept far away from the people downstairs. Ethel is effectively dead to the staff at Downton, and hardly anybody mourns her.
So now the American viewer has to choose: do you stand with Fellowes and the monarchical order? Or do you think things are better? If you’re a certain kind of Briton, you can do the former just fine. But if you’re an American, your people have very famously rejected reliance on monarchy as an ordering principle for social life. From that position, it becomes very hard to defend the perspective Fellowes is promoting; the cognitive dissonance involved approaches violence.
The good news is that you can still watch and enjoy “Downton Abbey”: you simply have to choose your heroes. For the American viewer, Isobel Crawley should be the hero of Downton — the person doing the most to address the injustice and oppression of the class system. Branson is another hero, as is Edith. These are people challenging the order, trying to change it for the better. Fellowes doesn’t intend these people to be the heroes — they’re all portrayed as twits and slugs — but the viewer can root for them nonetheless. What Fellowes intends as flaws in these characters are virtues in democratic society: Mrs. Crawley’s earnest disregard for pomp; Branson’s bravery and awkward candor; Edith’s developing insistence on self-expression. To the American viewer, the plight of Ethel the prostitute should be every bit as tragic as that of Sybil the aristocrat — or what’s a democracy for?
This perspective also lets us explain the fairly obvious: Lady Mary (the hot one), the would-be heroine of the show, is a raging bitch. Her dedication to the social order is how we are told she is the heroine, but that dedication is absolute to the point of tyranny. She will do anything, trample anyone, to preserve Downton as is. Any hint her husband gives of allowing or — God forbid — encouraging change at Downton is met with a temper tantrum. Her affection for said husband apparently exists only to legitimize her, so that she is free to play her real part in the show: the staunchest protector of Fellowes’ fantastic vision. When you reject that vision, Mary is obviously the villain.
Joining the ranks of baddies are Carson the butler, Lady Cora, and Lord Grantham. These characters are all committed to the old order, and taking pains to preserve it. The dowager countess (the chicken lady) gets a pass the way old people get a pass on racism: one expects nothing else from her.
I think the attraction of “Downton Abbey” for many Americans is that we long to live beautiful, graceful lives, like the upstairs characters in the show. While there is much that is beautiful in “Downton Abbey”, we should watch mindful that this beauty comes at a staggering cost. Nor is it the case that aristocrats alone possess the ability to live beautiful lives. The show is fundamentally an infomercial for aristocracy, a slant American viewers should find unpalatable. Yet many people who should know better flock to it, because we simply don’t grasp the rules of that world, and don’t appreciate what goes unsaid and unseen in the show.
We don’t have to celebrate a world in which rules are absolute, redemption is impossible, and women have no real place, in order to watch “Downton Abbey”. We can watch while rejecting Fellowes’ vision, rooting for the underdogs — Mrs. Crawley, Branson, Edith, Ethel — as they struggle to make beautiful, or at least meaningful, lives of their own. We can hope that someday the Crawleys will lose their great big house — or at least be forced to open a gift shop. We can aspire to our own lives of beauty precisely because the rules which govern that world are so unknown to us.