You might have seen the recent New York Times story about proposals that “trigger warnings,” a popular term for descriptions of potentially disturbing subject matter, be added to college syllabi to alert students before exposing them to such material, often in order to prevent post-traumatic responses. Examples cited in the Times of the sort of subjects that might merit a trigger warning include the anti-Semitism in The Merchant of Venice and the suicide in Mrs. Dalloway; one might also mention, as Jay Caspian Kang did in a later essay arguing against such warnings, Lolita as, in one interpretation, “the systematic rape of a young girl” by a much older man. You can likely come up with some examples from your own reading.
Well, as that article details, there are objections galore to this; some critics have declared trigger warnings an imminent threat to academic discourse, forcing professors to retreat to “safe,” harmless texts (whatever those are supposed to be). One free speech advocate quoted by the Times argues, “It is only going to get harder to teach people that there is a real important and serious value to being offended. Part of that is talking about deadly serious and uncomfortable subjects.”
The underlying logic of that statement, however, is that professors are the ones who know best when it’s time to talk about those subjects, and that students should have no choice but to have those conversations on the professors’ terms. If a student doesn’t feel emotionally ready to confront the subject matter, too bad; the experience, this argument proposes, will ultimately be to his or her benefit, no matter how traumatic it feels in the moment.
As a rhetorical position, that’s not entirely unsympathetic. Stepping outside the classroom for a moment, I’m reminded of Sam Fuller’s film Verboten! (1959), set in post-WWII Germany. In one of the film’s pivotal scenes, a German woman takes her teenage brother, who is falling under the sway of neo-Nazis, to the Nuremberg trials, where documentary footage from liberated concentration camps is shown as evidence against the leaders of the Third Reich.
Fuller, who saw firsthand the atrocities at Falkenau, was deliberately confronting American audiences with raw images of the Holocaust, refusing to let them minimize the extent of what happened. “It’s something we should see,” the woman declares when her brother tries to look away. “The whole world should see.”
But who decides what the whole world should see? And by what criteria?
23 May 2014 | theory |
The argument over people writing for online media outlets without compensation has been going on for a long time, but it recently became more pronounced thanks to a highly publicized email exchange between freelance journalist Nate Thayer and an editor at the Atlantic website. TL;DR: She asked if he’d be willing to edit down a piece he published elsewhere so she could run it as an Atlantic blog post—noting, “We unfortunately can’t pay you for it, but we do reach 13 million readers a month”—and he strongly objected to that offer; to paraphrase his subsequent comment to an interviewer, exposure doesn’t pay the bills.
Over the next few days, it’s felt like everybody’s had a response to this incident. Another digital editor at The Atlantic, Alexis Madrigal, sympathizes with Thayer—having been a struggling freelance writer himself—but argues that, right now, the best business model online media’s been able to come up with is one that puts writers at a serious disadvantage. “In most cases, even great reported stories will fizzle, not spark,” Madrigal writes, speaking specifically of the traffic those stories generate and the extent to which they sell ads. “They will bring in 1,000 or 3,000 or 5,000 or 10,000 visitors. You’d need thousands of these to make a big site go.” And who can afford to pay for, and publish, thousands of those stories?
“Even a small blog, with one person at the helm, is going to need, say, 100-150 posts a month,” he continues. I think that figure is debatable, but it’s definitely a model that’s out there for a certain type of news/issue-oriented blog, so let’s go with it. Next, rather than use his specific numbers, I’ll toss out some of my own: say a 250-word blog post is worth $40-50, going up to $100-150 for a longer (500-600 word) piece, of which you’ll run one a day, and assume 20 publishing days in a typical month. If you relied strictly on freelancers, that would put your monthly editorial budget anywhere between $5200 and $9500—although since you’d be likely to set aside at least one-third of the posts to be produced in-house, call it $3500 to $6300 a month. Can you guarantee your advertisers $6300 worth of visibility each month? And keep in mind: I’m only talking about pieces no longer than a typical magazine sidebar or, at most, a one-page article—we haven’t even come close to the longform journalism of which Thayer’s article would have been an example.
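For anyone who wants to check my arithmetic, the budget math above can be sketched out in a few lines of Python. Everything here is an assumption from the paragraph itself (the per-post rates, the 20 publishing days, the one-third in-house share); the function name and structure are mine, purely for illustration.

```python
# Back-of-the-envelope monthly editorial budget for a small blog,
# using the hypothetical figures from the post (not real rate data).
LONG_POSTS_PER_MONTH = 20  # one longer (500-600 word) piece per publishing day


def monthly_budget(total_posts, short_rate, long_rate, in_house_share=0.0):
    """Freelance spend for one month, in dollars.

    total_posts    -- Madrigal's estimate: 100-150 posts a month
    short_rate     -- pay for a 250-word post ($40-50)
    long_rate      -- pay for a longer piece ($100-150)
    in_house_share -- fraction of posts produced by staff instead
    """
    short_posts = total_posts - LONG_POSTS_PER_MONTH
    gross = short_posts * short_rate + LONG_POSTS_PER_MONTH * long_rate
    return gross * (1 - in_house_share)


# All-freelance: the $5200-$9500 range cited above.
print(monthly_budget(100, 40, 100))  # low end:  5200.0
print(monthly_budget(150, 50, 150))  # high end: 9500.0

# With a third of the posts produced in-house, the range drops to
# roughly $3500-$6300, matching the rounded figures in the text.
print(round(monthly_budget(100, 40, 100, in_house_share=1 / 3)))  # 3467
print(round(monthly_budget(150, 50, 150, in_house_share=1 / 3)))  # 6333
```

The point of running the numbers isn’t precision; it’s that even the cheap end of this model costs thousands of dollars a month before a single piece of longform reporting gets commissioned.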
Madrigal explains the shortcomings of this model well, and as the conversation gets around to “well, what if we didn’t pay some of the writers?” he offers some justifications, including exposure. Later in the week, in a separate Atlantic post, Ta-Nehisi Coates admitted he’d accepted exposure in lieu of cash for his earliest appearances at that blog, and was upfront about why it worked for him: “I could not convince editors that what I was curious about was worth writing about. Every day I would watch ideas die in my head… What the internet offered was the chance to let all of those ideas compete in the arena, and live and die on the merits. And [The Atlantic] was offering a bigger arena.”
9 March 2013 | theory |