June 8, 2009

100,000,000 Missing Women

Posted in Global discourse, Women's issues at 11:09 am by Maggie Clark

Two years back I happened upon the Global Media Monitoring Project, a survey conducted every five years to determine who makes the news, and who makes it into the news, on the basis of gender. The 2005 iteration of this survey drew on data from 76 countries: 12,893 news stories (radio, TV, and print), featuring 25,671 sources and presented by 14,273 news personnel. The results were profound:

  • Women are dramatically under-represented in the news.

  • Only 21 percent of news subjects — the people who are interviewed, or whom the news is about — are female. Though there has been an increase since 1995, when 17 percent of those heard and seen in the news were women, the situation in 2005 remains abysmal. For every woman who appears in the news, there are five men.

  • Women’s points of view are rarely heard in the topics that dominate the news agenda.

  • There is not a single major news topic in which women outnumber men as newsmakers. In stories on politics and government only 14 percent of news subjects are women; and in economic and business news only 20 percent. Yet these are the topics that dominate the news agenda in all countries. Even in stories that affect women profoundly, such as gender-based violence, it is the male voice (64 percent of news subjects) that prevails. [emphasis mine]

  • As newsmakers, women are under-represented in professional categories such as law (18 percent), business (12 percent) and politics (12 percent).

  • In reality, women’s share of these occupations is higher. For instance, in Rwanda — which has the highest proportion of female politicians in the world (49 percent) — only 13 percent of politicians in the news are women.

  • As authorities and experts, women barely feature in news stories.

  • Expert opinion in the news is overwhelmingly male. Men are 83 percent of experts, and 86 percent of spokespersons. By contrast, women appear in a personal capacity — as eye witnesses (30 percent), giving personal views (31 percent), or as representatives of popular opinion (34 percent).

  • Women are more than twice as likely as men to be portrayed as victims:

  • 19 percent of female news subjects, compared with 8 percent of males, are portrayed this way. News disproportionately focuses on female victims in events that actually affect both sexes — accidents, crime, war. Topics that specifically involve women — sexual violence, domestic violence, cultural practice — are given little coverage.

    And the list goes on.

    Now, I have read much in the past two years that confirms women’s issues are not solely the domain of women writers — that men can, in fact, write stories about matters that profoundly affect womankind. Jeffrey Gettleman’s “Rape Epidemic Raises Trauma of Congo War” was a devastating and desperately needed piece about the under-reported incidence of rape as a weapon of war. Alex Renton’s “The Rape Epidemic” provided an outsider’s account of systemic abuses in Haiti. And for all The Globe and Mail’s sensationalizing of the case, articles like Robert Matas’ “Week 24: Pickton demonstrated how he strangled prostitutes, witness says” made sure we knew full well who Robert Pickton was, and just how many lives he destroyed.

    Moreover, for all the benefits of having a woman talk to other women about sensitive cultural and personal matters, there are the practicalities of a war-torn world to consider, too: Some regions are simply not safe for foreign women (let alone local women) — and though all journalists can be expected to run grave risks when visiting difficult countries (as Euna Lee and Laura Ling, sentenced to 12 years hard labour in North Korea, recently discovered), those risks are markedly higher for women — both in terms of being targeted in the first place, and in the context of just what can be done to a woman, once targeted. We stand out. We’re generally smaller, with less comparative strength. We can become the personal property of our captors, married off or forced into lives of prostitution. And we can be raped into pregnancy, or else gang-raped for months until we perish. These aren’t just sickening possibilities: they’re maddening ones. And if the gentlemen’s club of inside intel didn’t already make reporting on many parts of the world hard enough, these facts make it damn near impossible to have women representing women with any degree of equality in matters of extremely gendered global conflict.

    But as I read yesterday’s cover story for The Toronto Star, “How did 100,000,000 women disappear?”, I found myself too numb for anger, too numb for tears. 100 million women — not all lost at birth, no, though so many cultures kill off female children as often as they can; and not all lost to “accidents” inflicted by families forcing newlyweds to pay their dowry debts; and not all lost to violence most heinous and inhuman; but so many lost over the course of a lifetime to basic, gendered neglect, and to the practice of giving males priority access to aid.

    Such sweeping and senseless losses, in such sweeping and senseless numbers, make the true message of the GMMP all too clear: If our primary coverage of women is as victims, then all we will find are more victims. Many, many, many more victims.

    And while there are justifications, yes, for why women do not do more to report on the suffering of fellow women worldwide, there is absolutely no justification whatsoever for why we do not do more to report on the empowerment of women worldwide. It needn’t be so blatant as this: one needn’t write that a woman’s career was a win for all women — but talk, at least, of that career. Follow it. Report on it. Introduce more female experts. Cover subjects that preoccupy women throughout the world. It’s not rocket science, but it requires dedication and patience.

    It’s so simple, in fact, it’s almost painful to state it: Women are victims because of how little they are valued, and how easy it is to devalue them.

    Change this perception, and you change the world — too late, perhaps, for the 100 million dead and gone in the world today.

    But in time, perhaps, for the next 100 million. Or more.

    May 30, 2009

    Twitter: Cons and… pros?

    Posted in Business & technology, Global discourse, Public discourse at 1:20 pm by Maggie Clark

    It’s no secret that I’m not a fan of Twitter. I have no account, and despite the number of friends who “tweet” with vigour, no desire to acquire one. If I can conveniently ride out this latest bandwagon to the next, Google Wave, I’ll consider myself very lucky.

    From this vantage point, it’s very easy to seize upon any awful news about Twitter and twist it to further my stance. Which is what I was quick to do when I learned that Ashton Kutcher and his wife Demi Moore (with 3 million Twitter followers between them) had tweeted last week that they would have to leave the site in protest if Twitter pursued plans to make a reality TV show out of the website.

    Yes, you read that right: Twitter has in many ways usurped the role of paparazzi, allowing celebrities more direct control over their interaction with fans (so we can all follow the tedious minutiae of their day-to-day lives) — and even leading celebrities to do the unthinkable: post pictures of themselves in less than flattering lights. They’ve become, in other words, almost human.

    But, hey, there’s no money in that sort of social convergence, right? So why not turn that nigh-on-egalitarian collective into citizen paparazzi, pitting twitterers against one another in an epic competition to stalk celebrities through the website? Wouldn’t that be fun?!

    Do I have a deep and abiding concern for Ashton Kutcher and Demi Moore? No. Do I find it typical of the application to progress actively in directions that yield financial gain at the expense of the community itself (and the welfare of members therein)? Yes.

    Heaven knows, Twitter wouldn’t be the first website to invade people’s privacy. One need look no further than the origins of Facebook — the initial website was “Facemash,” a vicious Harvard version of Hot or Not that drew from the official photos of students at the university and tasked site visitors with deciding which student in a pair was hotter — to realize that, even in our purported age of enlightenment, technological advancements don’t always emerge from altruistic roots.

    So yes, many a time the social benefit needs to be generated by those participating therein. But there’s fighting tendencies towards elitism and exclusion in supposedly egalitarian circles, and then there’s fighting a company seeking to change much of its original premise.

    Users of LiveJournal, for instance — a blogging site that has remained conspicuously off the grid despite the readiness of most sites to link up through Facebook, YouTube, Digg, del.icio.us, VodPod, and other aggregation modules — know the latter fight all too well. Though founded on a pro-user model wherein developers promised to listen to the needs of actual users, and to protect them from the pressures of outside interests, LiveJournal eventually found itself compromising these promises time and again — and not just for financial gain.

    Many of these changes arose from a simple transition of ownership: for instance, when Six Apart first bought Danga Interactive, LiveJournal’s operator, it introduced a sponsored ad system — despite the site’s earlier promise of remaining advertisement-free — and eliminated basic accounts for half a year, so that only paid users could be assured of ad-free space, before eventually reversing the decision. (The above link has a far more nuanced list of these compromises.)

    But Six Apart’s real mistake was mass-suspending a slew of fan fiction accounts, among other accounts deemed in conflict with the obscenity category in its Terms of Service. Had the company issued warnings, so that the affected communities could properly label and restrict access to controversial content, there might not have been such an uproar; as it was, however, this scandal most assuredly played some role in Six Apart’s decision to sell LiveJournal to SUP, a Russian company interested in the product because of LiveJournal’s huge Russian contingent — and one which has since carried on the tradition of trying to get users to pay for products they’re used to receiving for free.

    And yet, oddly enough, the case of LiveJournal allowed me some measure of perspective in response to Twitter’s misfiring play at a reality TV show — because when LiveJournal was sold to SUP, it wasn’t added costs users feared: it was the possibility of the new owners censoring and curtailing the expansive voices of Russian dissent that had gathered on the website. As SUP’s owner is closely tied to the Russian government, many feared that the sale would serve to break down the freedom of speech and, well, the kind of freedom of assembly that had emerged within LiveJournal’s walls.

    Similarly, Twitter has done incontestable good in providing a public forum for countries that otherwise lack the same extensive rights to freedom of speech and assembly. In countries like Moldova, for instance, Twitter provided a means for outsmarting government censors, allowing protesters to co-ordinate a rally against “disputed legislative elections.”

    And you needn’t ask Jean Ramses Anleu Fernandez whether governments are starting to realize Twitter’s democratic power: For a single tweet urging citizens to withdraw all their money from the state-run bank in response to charges of government involvement in a series of related murders, the Guatemalan faces a ten-year sentence for “inciting financial panic.”

    Even Starbucks has reason to dread Twitter, the make-up of which allowed a promotional topic (#starbucks) to be “hijacked” by critics of the company’s union-busting tactics.

    Of course, no new technology is completely safe from censorship — especially from pros. So, yes, China censors Twitter content — big surprise there! Nonetheless, Twitter’s use and reach in many other regions is quite striking, and deserves to be taken into account.

    At the end of the day, though, I still chafe at the direction in which Twitter leads journalistic narrative. It especially dismays me that while we as a society claim awareness of the complexity of contemporary socio-political and cultural issues, members of the media have nonetheless latched on to a medium that allows no more than 140 characters to summarize the gist of any one story.

    As a big proponent of the philosophy that writers teach readers what to expect of the media (i.e. that an excess of short pieces acclimatizes readers to shorter attention spans), I find this an agonizing exercise in the death of sustained interest. Studies like this one, amply represented in graph form, serve only to confirm the frenzy with which Twitter allows people to latch on to, and then drop off from, topics of note.

    So, no, you won’t find me on Twitter. Like I said at the start, I’m hoping to ride out this service to the next big thing. But in the meantime, is Twitter really all that bad?

    Like so much of Web 2.0 technology, it depends what its users make of it.

    May 20, 2009

    Participatory Government Online: Not a Pipe Dream

    Posted in Business & technology, Global discourse, Public discourse at 8:13 am by Maggie Clark

    In an undergrad political science course a few years back, I recall being challenged to present explanations for public apathy in Canadian politics. Out of a class of some thirty students, I was the only one to argue that there wasn’t apathy — that low voter turnout among youth was readily offset, for instance, by far higher youth turnout in rallies, discussion forums, and the like. Youth were absolutely talking politics: they just weren’t applying this talk in the strictest of official senses.

    My professor always did love such counterarguments, but my classmates never seemed to buy them. Rather, many argued that the “fact” of disengagement was not only accurate, but also healthier, because it meant that only those who “actually cared” about policy would set it. (We were working, at the time, with figures suggesting that only 2 percent of the Canadian population were card-carrying party members.) Many of these same students likewise believed that economics was not only the ultimate driving force in our culture, but also the only driving force that could lead; and also that true democracy was unwise because only a select few (I could only assume they counted themselves among this number) were able to govern wisely.

    At the time, Facebook was two years old. YouTube was one. And the online landscape, though unfurling at a mile a minute, was still light years from its present levels of group interaction. My sources for the presentation in 2006 were therefore an uncertain medley of old and new media: news articles and statistics; online party forums and Green Party doctrine.

    I didn’t have at my disposal, for instance, incredible videos like Us Now, a documentary encapsulating the many ways in which average citizens — seeing truly accessible means of interacting on a collective level with their environment — are achieving great success breaking down the representative government model into something much more one-on-one.

    Nor did I have The Point, which provides anyone with an account and an idea the means to start a campaign, co-ordinate fundraising, organize group activities, and otherwise influence public change. (Really, check it out — it’s fantastic.)

    And most regrettably of all, I didn’t have the Globe and Mail’s Policy Wiki.

    This last I discovered just yesterday on BoingBoing.net, which noticed the Globe and Mail’s newest project on the website: the creation of a collectively developed copyright law proposal, to be sent to Ottawa for consideration on July 1, 2009.

    As a huge policy geek, and a member of the new media generation to boot, I saw this as a goldmine of opportunity — and yet there is plenty else on the website for other policy development, too: discussion forums and wiki projects alike. So of course, in my excitement, I sent the link to a few members of the old generation — only to receive a curious collection of responses, dismissing the above as an exercise in anarchy, while simultaneously criticizing old-school committees as never accomplishing anything properly.

    Well, old guard, which is it? Is our present model of representative government failing us in certain regards, such that we should try to engage different policy-building models? Or is the model which, despite early challenges to its legitimacy, created an online encyclopedia as powerful as the Encyclopedia Britannica unfit — by its very nature as an open-source community project — for political consideration?

    Us Now makes the point that the internet’s promise of a more dynamic and accessible global community has had many false starts (spam, scams, and the proliferation of child pornography rings personally come to mind). But long before we became cynical of the internet’s capacity to improve our social impact, we as a society were already well used to doubting the potential of our fellow citizens to act intelligently and in pursuit of the communal good. You can thank Machiavelli’s The Prince, Elias Canetti’s Crowds and Power, and bastardized readings of Adam Smith’s The Wealth of Nations in part for this.

    A little while ago, however, I got around to reading John Ralston Saul’s The Unconscious Civilization, a CBC Massey Lecture Series essay collection about the rise of the management class, and about an inversion of the democracy/free market equation so utter that the notion of democracy itself has suffered massive political distortion. Written just before the first real explosion of online communal projects — be they open source software, open-access socio-political groups, or information-dissemination tools — Saul’s work wasn’t able to account for the balancing force of technology itself. Rather, when he wrote these essays, technology was still very much a cornerstone of continued economic distortions in lieu of real democracy. Now, though, it’s clear that technology created through the corporate model has itself emerged as a platform for participatory government — and thus also as the undoing of those same hierarchical economic forces. Coming full circle is fun!

    So, to get back to this matter of “trusting in the intelligence of individuals, and their capacity to act in the common good,” yes, there is a lot of circumstantial evidence to the contrary on the internet. Heaven knows, for instance, that the low-brow interactions which inspired CollegeHumor.com’s We Didn’t Start The Flame War are in fact a daily, persistent reality online, and make up a substantial percentage of commentary therein.

    Yet any parent will tell you that the way to raise a responsible child is to give her responsibilities to live up to; a child entrusted with none will invariably continue to act like one. So rather than using, as a test of our group potential online, those sites that in no way engender a sense of responsibility for our actions, why not look at those sites that do — like ThePoint.com, and the Globe and Mail Policy Wiki?

    Furthermore, if our current model of representative government no longer yields the level of public engagement we crave (read: in the ways the government wants to see), maybe it’s because citizens at large haven’t been given the opportunity to feel like real participants at all levels of the democratic process. And maybe, just maybe, the internet not only can change that perception, but already is.

    After all, those same students who, in the comfort of a political science classroom just three years back, so boldly proclaimed that collective decision making was a waste of time? You’ll find every last one on Facebook and LinkedIn today.

    May 6, 2009

    Calm before the swine

    Posted in Global discourse at 9:59 am by Maggie Clark

    There is reason to think positively about the strength of citizens en masse. There is reason, too, to think positively about the benefits of our new networking technologies. And one need look no farther for proof of this than the confrontation between panic and perspective in relation to the swine flu epidemic.

    Swine flu had, and still has, all the earmarks of a perfect shock story: The strain, H1N1, afflicts the healthy, the strong, by over-stimulating the immune system’s response. It’s an inter-species mutant, so you can imagine the inference that it must surely be three times as strong as its avian, human, and swine strain predecessors. And the outbreak has been tied to Mexico — just one more illegal immigrant to worry about, right? (It’s even being called the “killer Mexican flu” in some circles.)

    As I write this, according to the Public Health Agency of Canada, there are 165 reported cases of this H1N1 strain in humans in Canada. The U.S. claims 403 cases, and between the two of us we have exactly two confirmed deaths. According to WHO statistics (current to May 5), Mexico has 822 cases, with 29 deaths; in the whole world, 21 countries share a collective case count of 1,490, with no other confirmed deaths.

    If scientists declare that the strain has established itself outside of North America, the flu will reach pandemic status. In theory, that sounds terrifying, but really, the meaning extends no further than the fact that the illness can be found across the globe. The term pandemic says nothing, for instance, about how lethal or non-lethal said condition is; and though some sources are fond of speculating about worst-case scenarios, the death rate so far remains very low. How low? Let’s take the U.S. numbers to illustrate: Annually, there are some 200,000 hospitalizations due to typical flu types in the U.S. — and 36,000 deaths. By this measure, swine flu has a long way to go before being anywhere near as serious a threat as its local, home-grown competitors.
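
    For a sense of scale, here is a quick back-of-the-envelope check of the fatality rates implied by the case counts quoted above — rough figures only, since confirmed counts were moving targets at the time of writing:

    % All numbers are from the tallies cited above (May 2009):
    % Canada + U.S.: 165 + 403 = 568 cases, 2 confirmed deaths; worldwide: 1,490 cases, 29 + 2 deaths.
    \[
    \frac{2}{165 + 403} \approx 0.35\%
    \qquad\qquad
    \frac{29 + 2}{1{,}490} \approx 2.1\%
    \]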

    And yet all this, for me, isn’t where it gets interesting. Not even close. Rather, what continues to surprise and impress me is our capacity for self-regulated response to the initial panic invoked around this illness. Yes, the media was talking up a storm about Influenza A H1N1. Yes, doomsday speculation was abounding. And yes, many industries — sanitation and pharmaceutical groups especially — have profited greatly in terms of market shares and business from all this panic.

    But also abounding was — and still is — a countering force of calm. And it takes some truly extraordinary forms: For instance, mainstream news articles taking other articles to task for the lack of coverage about all the good news we have about Influenza A H1N1, and ethical deliberations about whether or not laughing at this illness (its name, its origins) is acceptable. And then there’s the really fun stuff: Stephan Zielinski applying the amino acid sequence for Influenza A H1N1 to ambient music. Gizmodo posting a hauntingly beautiful video demonstration of how the virus gets released. xkcd.com aptly encompassing the typical range of responses to Swine Flu on Twitter.

    In other words, for all the panic we’ve had thrown at us about this illness, many have responded with a measure of fearlessness at least a hundred times as infectious. Does this mean everyone is rid of that panic? No, of course not: these reactive trends are often regional and compartmentalized, due to varying interests and complex investments. The mass killing of all pig herds in Egypt, for instance — a perfectly rational response to a disease that, at the time, had produced no cases of pig-to-human infection anywhere in the world, and absolutely no cases of human infection in the country itself — carries huge consequences for the pig farmers, who, with 300,000 animals killed, have lashed back at the government in protest: doubtless this panic attack on the part of officials will leave a long list of social consequences in its wake.

    But think back, for comparison’s sake, to our global reaction to SARS — the extreme panic, the devaluation of tourism in heavily affected cities and regions, the dramatic quarantining procedures. Globally, the disease racked up 8,273 cases, with 775 direct deaths (a death rate of roughly 9.4 percent, weighted heavily toward seniors). Though SARS was clearly a more serious disease than Influenza A H1N1, the overall death toll among Americans due to seasonal influenza was still much higher; and yet our panic was long-standing and far-reaching, in large part because we were given no room for questions of doubt: only more panic.
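
    That SARS rate, for the record, follows directly from the tallies just cited:

    % SARS, worldwide: 8,273 cases, 775 deaths (figures as cited above).
    \[
    \frac{775}{8{,}273} \approx 9.4\%
    \]

    — four to five times the worldwide rate sketched above for Influenza A H1N1, and nearly thirty times the North American one.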

    Similarly, I’m not convinced the relative calm in this case emerged from the ground up: rather, I suspect news articles first had to plant seeds of doubt about this issue, as forwarded by scientists reacting to the extent of media spin. I think room for doubt had to emerge from these sources first; then the average reader, artist, and blogger could follow after — in turn creating more room to manoeuvre, rhetoric-wise, in future works by the mainstream media. But regardless of speculation about just how, and in what order, these groups fed off each other — the scientists, the media, and the participatory citizenry as a whole — what’s more striking is that they fed off each other at all to produce this ultimately calming effect.

    We have, in the last 8 years, kicked ourselves over and over again for allowing flimsy excuses for war-mongering to stand; for allowing freedoms to be stripped from us in the name of security; for permitting, in general, the hard polemics of with-us-or-against-us to divide the population. And rightly so: When we go along with fear-mongering, we can be, en masse, pathetic excuses for an advanced and critically thinking civilization.

    But cases like our reaction to swine flu should likewise give us cause for hope — and should be treated as such, with praise for measured response wherever it emerges. For as much as we can act like sheep if treated like sheep, it nonetheless takes precious little in the way of tempered social rhetoric for us to realize our own, independent engagements — fearless, inquisitive, and inspired alike — with the world instead.

    May 1, 2009

    Death by any other name

    Posted in Military matters, Public discourse at 9:57 am by Maggie Clark

    Major Michelle Mendes, a Canadian soldier stationed in Afghanistan, was on her second tour in the region when she was found dead in her sleeping quarters at Kandahar Airfield. Hers marks the third death of a Canadian woman, and the 118th fallen Canadian, in Afghanistan since our involvement in the conflict began. The media has done an exemplary job of presenting Mendes in the respectful light afforded all Canadian soldiers lost in this conflict — and perhaps with extra care, too, because hers is the second female fatality in as many weeks — but one word is pointedly absent from all talk of her “non-combat death”:

    Suicide.

    According to the Canadian military, an investigation into the circumstances of her death is still ongoing: evidently the possibility of her firearm accidentally discharging has not been entirely ruled out, though The Globe and Mail reports that “a Canadian government source said ‘all evidence points toward a self-inflicted gunshot wound.'”

    The prominence of this story, and the blatancy of the aforementioned omission, have piqued my interest. The debate about whether or not to talk about suicide in newspapers, and in what ways, with which emphases, has been waged for decades. The argument ultimately centers on two points: the quest for greater public understanding, and the fear of inducing a copycat effect among readers. To this end, there are fierce defenders of different approaches — each backed by their own body of research and professional opinion. Last year The Newspaper Tree wrote an editorial responding to reader concerns over the term’s use in relation to one case: therein they noted that certain organizations of mental health professionals agreed it was better to tell readers the cause of death, but that the stories needed to be presented with the “valuable input of well-informed suicide-prevention specialists” in order to be effective. In that same year, Media Standards Trust published a firm condemnation of suicide stories, citing the high statistical correlation between published stories and copycat suicides.

    My problem with the omission approach, however, is its selectivity: Suicides are deemed taboo, but the publishing of violent domestic deaths? murder-suicides? school shootings? isn’t — and all of these stories arguably pertain to people in even more disturbed mindsets (one, because I do not hold that everyone who commits suicide is “disturbed” in the sense of having lost their ability to reason; and two, because their acts take the lives of others, too). A recent Times article asked if the copycat effect was being felt here, too, pointing to the lone study that has been completed to date on the theme. The article also developed a short history of the copycat effect in media, which reads as follows:

    The copycat theory was first conceived by a criminologist in 1912, after the London newspapers’ wall-to-wall coverage of the brutal crimes of Jack the Ripper in the late 1800s led to a wave of copycat rapes and murders throughout England. Since then, there has been much research into copycat events — mostly copycat suicides, which appear to be most common — but, taken together, the findings are inconclusive.

    In a 2005 review of 105 previously published studies, Stack found that about 40% of the studies suggested an association between media coverage of suicide, particularly celebrity suicide, and suicide rates in the general public. He also found a dose-response effect: The more coverage of a suicide, the greater the number of copycat deaths.

    But 60% of past research found no such link, according to Stack’s study. He explains that the studies that were able to find associations were those that tended to involve celebrity death or heavy media coverage — factors that, unsurprisingly, tend to co-occur. “The stories that are most likely to have an impact are ones that concern entertainment and political celebrities. Coverage of these suicides is 5.2 times more likely to produce a copycat effect than coverage of ordinary people’s suicides,” Stack says. In the month after Marilyn Monroe’s death, for example, the suicide rate in the U.S. rose by 12%.

    Journalists have a responsibility to the living. We have a responsibility to give readers the best means necessary to make informed decisions about the world around them. This also means doing the least amount of harm. In the case of suicide, this measure of harm is difficult to assess at the outset, as even the very language of the event is against us. To “commit suicide” bears with it the gravitas of an age when suicide was deemed a crime, not a tragedy — and not, in some cases, a release from untreatable pain. To “take one’s own life” is a step up — dramatic, but delicately put — though it is unclear if one term is preferable to the other in keeping the copycat effect to a minimum.

    That effect itself also plagues me, because I have to wonder if it occurs in part because there isn’t enough reporting: if all suicides were listed as such (3,613 in Canada in 2004; 32,439 in the U.S. — roughly 10 per 100,000 for each population), and those suicides were contextualized by similar tallying of all deaths (drownings, the flu, and other causes of death with much higher population tolls), would that copycat effect drastically diminish over time?
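
    That per-capita figure checks out, assuming 2004 populations of roughly 32 million for Canada and 293 million for the U.S. — my own approximations, not numbers from the tallies above:

    % Populations (~32 million, ~293 million) are rough 2004 estimates, not from the cited statistics.
    \[
    \frac{3{,}613}{32{,}000{,}000} \approx 11 \text{ per } 100{,}000
    \qquad\qquad
    \frac{32{,}439}{293{,}000{,}000} \approx 11 \text{ per } 100{,}000
    \]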

    I can only speculate. Meanwhile, another telling question has a more interesting answer: Can the news provide the requisite depth and breadth of coverage on mental health issues without the direct mention of suicide? In answer, I refer you to this piece from The Globe and Mail, which delicately tackles mental health in the Canadian military as a hot topic arising from Mendes’ “non-combat death,” while the Canadian Press approaches the issue from the vantage point of the female chaplain who presided over Mendes’ ramp ceremony.

    There are, then, ways to nod to the issues surrounding suicide without using that word directly. But are they enough? Or does the omission of the word, in conjunction with so much open commentary about related issues, create a different reality — one in which suicide, lacking its public face, becomes at best a vague and theoretical matter?

    These are difficult questions, and they grow more difficult when addressing systemic suicides — as exist among many Aboriginal communities in Canada, as well as among military personnel — and when suicide strikes the very young. To whom does the journalist owe her ultimate allegiance: the grief-stricken families, the immediately affected communities, or the public at large? How can we use the fact of suicide to better our understanding of this world we live in? Are we forever doomed to make things worse by the mere mention of suicide’s existence?

    Two days ago I watched Rachel Getting Married, a film about a woman who comes home from rehab to take part in her sister’s wedding. A great many difficulties unfold as this woman struggles with guilt and self-hatred, coupled with depression and suicidal tendencies. Watching this film, I registered numerous “triggers” in myself, and cycled for a day and a half back into certain, terribly familiar mental routines. It was then, as I reminded myself that most people likely wouldn’t have had the same reaction to this stimulus, that it struck me: I will never be completely rid of these thoughts, these propensities to cycle between contentment and depression. Anything — a movie, a newspaper article, an off word from a close friend — might trigger them, and then it will be my responsibility to take control of these impulses: acknowledge them, experience them, and move past them.

    I know, too, that eight percent of Canadians live with depression, and that at least 16 percent will experience a period of depression at some point in their life. I know I’m on the lucky side of this spectrum: I’ve learned how to counter the anxiety that often pushes depression to the brink, and after years of very extreme engagements with my mental health issues, they are manageable for me. I know this isn’t the case for everyone. I think to myself, what if someone in a much more agitated or suggestible state of mind watched this film instead — or others, with far more tragic endings? What if that was all it took, and the film pushed them to the brink?

    Yes, a film or song or book could move someone to suicide. Most likely, it has already happened a lot. In short, anything could be a trigger; anything might be the last straw. But art, like the media, has as its higher purpose the construction of conversations about the world we live in, and how we live within it. So if there is a way to address suicide directly in the news — with the aid of suicide prevention experts; with a fully conveyed understanding of the context in which suicide operates; and with absolute respect for the families and friends each instance affects — I think we need to take it. To do otherwise, for me, is to leave each victim as alone in death as they surely felt in the lives they chose to end.

    And honestly, that’s just not something I can live with.

    April 18, 2009

    The Heart of the Matter: A Shifting Social Discourse

    Posted in Global discourse, Public discourse at 2:57 pm by Maggie Clark

    A very important transition is occurring in North America, and I suspect it will be another year or so until we grasp its full implications. Just a few weeks back, Chinese financial leaders suggested changing the world’s standard currency from the dollar to a global reserve currency, and UN economists have since backed this proposition. This move would mark a shift away from the U.S. as the source of global financial stability, and towards a preexisting global discourse that will at last be given its own voice, even if North America still plays a large role in the debate.

    I suspect the same is very much true for socio-religious discourse: While George W. Bush was in office, the rise of right-wing Christianity in conjunction with the U.S.’s wars in Afghanistan and Iraq launched a polemic debate between Christians and Muslims — a West meets Islam, “U.S.” vs. them affair. Moreover, the rise of a particular brand of Christianity — politically-motivated Evangelical Christians — created in its own right a series of related conflicts on the home front, such that Evangelical resistance to the theory of evolution in classrooms, global warming in government policy-making, expansive rights for women and the LGBT/IQQ community, and various issues pertaining to “morally acceptable” content on national airwaves garnered excesses of media attention and political sway.

    Now, though the politically-motivated Evangelical Christian community still amounts to a sizable social force, the media portrays a very different, more long-standing socio-religious battle: the conflict between Israel and the Arab world.

    In this ideological warfare, North America undoubtedly still plays a crucial role, but in the last few years this role has shifted from one of proactive engagement to one of passive response. The U.S. has always been deemed pro-Israel, regarding the country as a beacon of hope for stability and the eventual spread of democracy in the Middle East. However, the U.S. simultaneously relies upon strong business relations with nations in the Arab world, and to this end has equally supplied many such countries with arms, money, and the maintenance of dictatorships that suited U.S. interests. This has always made its involvement in the region self-motivated.

    Post 9/11, that involvement necessitated a stronger alliance with those who would fight against U.S. enemies in Afghanistan; later, it also meant stronger alliances with those who would support Americans in Iraq. But times have changed. Immigration from the Arab world into Europe created stresses from which controversial national leaders and extreme anti-foreigner stances have emerged. The two-state solution between Israelis and Palestinians, once a viable discourse with its very own “road map” to peace, is no longer a welcome solution for many in the region. And here in North America, every political decision is becoming increasingly mired in questions of perceived Islamophobic, Zionist, anti-Semitic, pro-Israeli, pro-Palestinian, anti-Israeli, anti-Palestinian, pro-terrorist, and anti-terrorist allegiances.

    This is not by any stretch of the imagination to argue these terms weren’t bandied about before — of course they were. But what has been lost in recent months, from a socio-religious context, is a sense of North American values having any measure of relevance in the debate. Even terrorism is not being engaged as something feared again on home soil; rather, those terms, like their aforementioned brethren, time and again reroute discussion to the matter of the Middle East.

    An excellent example of this arose quite recently, in the matter of George Galloway. Galloway is a five-time British MP expelled from the Labour party for extremely controversial comments made in response to Britain’s invasion of Iraq. He has toured Britain and the U.S., working with many causes: some clearly humanitarian, many others complicated by statements that have brought UN condemnation upon him, and by actions that have blurred the lines between humanitarian aid and front organizations for personal gain. (I won’t make a habit of this, but there are so many controversies pertaining to his views, actions, and travels that I’m going to recommend reading his Wikipedia entry — no one mainstream article on the man comes anywhere close.) On March 20, 2009, he was denied entry into Canada on the basis of his ties to Hamas: though he has gone on record stating that he does not agree with Hamas, Galloway gave the Hamas-led government $45,000. As Hamas is on Canada’s list of terrorist organizations, this was enough to deny him entry, though Canadian immigration ministry spokesman Alykhan Velshi’s comment on the issue is a little more dramatic than that:

    The Telegraph — Immigration ministry spokesman Alykhan Velshi said the act was designed to protect Canadians from people who fund, support or engage in terrorism.

    Mr Velshi said: “We’re going to uphold the law, not give special treatment to this infandous street-corner Cromwell who actually brags about giving ‘financial support’ to Hamas, a terrorist organisation banned in Canada.

    “I’m sure Galloway has a large Rolodex of friends in regimes elsewhere in the world willing to roll out the red carpet for him. Canada, however, won’t be one of them.”

    Galloway contested the ban and lost, but got around the ruling by being broadcast via video-link from New York to Canadian locations. And so life went on, with the news turning to “Tea Parties” in the U.S. and Canadian outrage towards the Afghan rape law. Yes, we have plenty of political matters to attend to at home; there is no shortage of issues. But the question posed by the high-profile case of Galloway — to say nothing of audience reactions to North American portrayals of recent Israeli-Palestinian disputes and Somali pirates — remains: Which is the greatest? Not in the world at large, per se, as so many cultural wars are played out on that stage every day — but here, at home, in North America? Does our ultimate socio-political investment lie with home turf, and all the multicultural challenges upon it, or quite literally with foreign lands, and the conflicts waged there instead? If the latter, does this tie our future directly to their outcome? What are the implications (not necessarily negative!) of a national discourse set primarily by happenings on foreign soil?

    April 17, 2009

    Making allowances for human nature

    Posted in Public discourse at 8:28 pm by Maggie Clark

    What better way to spend Easter than reading the Bible — am I right?

    It’s not the most likely thing for an atheist to say, but I’ve been mulling over the application of Bible verse to contemporary beliefs: in particular, as it pertains to Evangelical stances on issues like climate change. As a friend sagely reminded me, in all religion culture comes first — then canon is interpreted to fit it. This is why so many Bible verses might be accepted in one generation, and ignored in the next: Other aspects of human culture change, and with those changes, our engagement with the original texts is also transformed. (One need look no further than the treatment of slavery in the Old Testament to recognize that Abrahamic faiths pick and choose which “hills” they’ll defend in public practice; there are other, social factors that play into the application of faith.)

    And yet, alongside reading the Old Testament, this past weekend I picked up a CBC Massey Lecture series installment — five short lectures from the 1980s by Nobel Laureate Doris Lessing, known best for literature with strong political and feminist leanings. This volume of hers, Prisons We Choose To Live Inside, tackles a most curious social juxtaposition: the fact that we are, as a civilization, more aware than any generation before us of overarching trends, tendencies, and themes in human nature — and yet just as unable as individuals to apply this knowledge to our everyday lives. Lessing herself was swept up in a Communist party as a young woman; this was in direct response to the egregious abuses of power enacted in her childhood nation, apartheid-torn Zimbabwe (then Rhodesia). But as Lessing develops in her lectures, there was just as much propaganda and groupthink at work among her chosen group as among the corrupt society those Communists were striking out against.

    From such personal experiences and relevant academic experiments, Lessing develops the argument that all groups have this propensity towards thinking themselves in the right, and all dissenters in the wrong; and that this righteousness furthermore flies in the face of the temporary nature of all human resolution. However, argues Lessing, if we were only to make ourselves more aware of the transience of our beliefs — and more willing, too, to accept as human nature the inclination to various trains of thought (polemic argument, for one; and with it an “us vs. them” mentality) — we might be able to maintain more critical thought even as time entrenches us in one camp, or one label, above all else. We might even be able to make a greater difference in the world: Lessing writes at one point about how the broad condemnation of war will never suffice to eradicate its existence if we don’t acknowledge and accept that some people do, and always will, actively enjoy the exercise of war itself. These more complex analyses are harder, yes, but likely more useful in effecting real-world change, and so at the very least merit an attempt.

    But to return to the Bible: Lessing notes that religious and political beliefs share a common propensity towards absolutism and fanaticism — an observation we are all too often loath to make, though accepting this similarity might help us learn to better converse with those whose viewpoints differ from our own. The depressing truth is that most people are so long trained in empty rhetoric, so short on the experience and tools needed to engage in formal debate, and most of all so comfortable in their own righteous certitude as to see no reason to second-guess their way of thinking, that even getting everyone to engage in open dialogue is a pipe dream in and of itself.

    And yet, let’s say it could be done. What would that look like? How would it be achieved?

    These are the questions I was asking myself while poring through the Old Testament this Easter, because I’m still holding out hope that some measure of formal debate might be attained if we in the media are willing to engage believers on their “home turf.” The problem is: is that home turf the religious texts themselves, or the empty rhetoric that often passes for argument in public spheres? (I’m referring here to the singing of songs in response to critical inquiry, the rattling off of catch-phrases, and all in all the extreme use of circular and straw man fallacies to avoid scholastic scrutiny of the verses themselves.)

    In the case of climate change, my starting point was simple: Is there any reason Christian Evangelicalism can’t be united with theories of climate change? For many years now, a culture of vehement denial has been maintained in these communities, but why? Does climate change necessarily threaten the precepts of Christian belief? Is it necessarily a challenge to the faith of so many Americans?

    From what I’ve been able to discern, there are a few places — some obvious, some less so — where the existence of climate change seems, at least on the surface, to be a threat. The most obvious is a sense of entitlement: Many believe their god gave them this land, and all that exists upon it, to do with as they would. This permits a rather regal lifestyle upon the earth — one in which the fruit of one’s labour may be applied to whatever one deems fit. If climate change has a human origin, and with it comes the cry for the curtailing of excess, this would to many seem a direct challenge to that entitlement. Worse still, it threatens a sense of hierarchy on the planet: God, then man, then the beasts of the earth, then everything else. If the preservation of one species suddenly trumps man’s full enjoyment of god’s gifts, how can that not be considered a threat?

    This is where Bible-reading comes in: I wondered if that entitlement were as textually concrete as many Evangelicals make it out to be. True, in the Genesis story the world is created with man its crowning achievement… but that’s Eden. And humankind gets kicked out of it. Much of the New Testament ennobles man’s place at the top of the planetary food chain, but there’s really nothing to suggest that man should feel entitled, after the Fall, to a world as stable and nurturing as Eden. And, after all, Christian nihilists (those who see no intrinsic good in humanity, or this life, without the presence of a god) already regard this world as bleak and secondary — so why can’t the instability of the environment, and human responsibility for the quality of the land they live on, be reconciled with Evangelical thought?

    I suspect the answer lies in a deeper threat felt by Evangelicals: namely, that climate change — and with it, the threat to the stability of human life on Earth — has grave consequences for proponents of intelligent design. Evolution presents elements of the world, and all who dwell within it, as “just good enough” — first successful drafts, as opposed to perfect creatures. But intelligent design is argued from a position of precision and perfection, with the human eye especially (bewilderingly, too, for it has many weaknesses and blind spots) used to argue for the “impossible complexity” of the world we live in. From this standpoint, it’s easy to see where climate change can be threatening: If humankind could so easily tip the balance so as to make the world inhospitable, so much for that perfect construction!

    And yet, here too, it’s so easy to spin the message so as to fit Evangelical parameters: God gave us a world built so that its fate is determined by human action. Gay marriage = hurricanes, floods, and stabbing deaths on Greyhound buses (okay, that last is a little extreme). Gluttony and greed = deforestation, unchecked industrialization, and climate change. Causal, not just correlative, relationships are the lifeblood of much religious thought: in a sphere of argumentation that already permits leaps of faith to fill in where empiricism fails, there is no intrinsic reason for Evangelical belief to side against the existence of climate change.

    So where does this leave the matter of critical discourse? Well, if it were possible to foster open dialogue about such issues, the aforementioned route seems the likeliest to succeed. But more importantly, I think it has to succeed: in the last week alone we’ve seen much in world news highlighting the need to address intersections between religion and human rights, but still the topic remains taboo. Why? Is it really impossible to talk about the differences between religion and culture, group and individual, or contextual and universal rights without brewing a maelstrom of polemics, empty rhetoric, and broad accusations of various -isms and -phobias from the general public?

    Lessing would argue that it is impossible to avoid these manifestations of human nature — but that even then, it is still possible, with an awareness of past behaviours and social constants, to react to these inclinations in a way that counteracts what would otherwise have us forever defining ourselves, and others, in uncompromising blacks and whites.

    I really hope she’s right.

    March 14, 2009

    Why Aren’t We Standing Up To Ad Hominem Attacks?

    Posted in Public discourse at 11:42 am by Maggie Clark

    In the wake of the Jon Stewart / Jim Cramer controversy, which I feel was not so much overly hyped as, in its polemic framework, erroneously hyped, a striking point remains unmade: Where was the condemnation of mainstream ad hominem attacks?

    Specifically, Joe Scarborough of Morning Joe, by launching a heated attack on Stewart on his own show, provided a good example of the kind of argument held by Stewart’s critics all throughout the week of this controversy: Many called him out for being a comedian, and condemned him for having critical opinions, in that capacity, about the statements of others.

    People have responded to this condemnation, yes. They have done so by arguing Stewart isn’t just a comedian, noting his strong history of media criticism and notable appeals to journalistic ethics. Not one person in the mainstream media has said, however, “Even if he were just a comedian, would that make his criticism any less valid?”

    And that’s a problem. It’s a problem because while many fallacies are very difficult to police (being of the subtler variety), ad hominem attacks are pretty straightforward. Moreover, the ad hominem fallacy in this case is an attack on freedom of speech (in the U.S.) and freedom of expression (in Canada): by its very nature it implies that some people’s arguments are less valid simply because of who is making them — and here it’s being made by people in positions of power, people who, as members of the media, should be empowering everyone to hold them accountable for failure.

    So while Scarborough was attacking Stewart for daring to critique CNBC while simultaneously being an entertainer, he (and others like him, in print as well as on TV) was also encouraging the unquestioned use of this fallacy. And that’s dangerous, because Scarborough has privileged access to both the airwaves (which gives him access to millions) and, from his association with a notable news organization, a measure of legitimacy (which gives him an edge over pundit bloggers). He is part of a system which sets a standard for casual, daily discourse in North America — and he, like many of the people in these roles, is failing to promote fair, reasoned, empowering conversation in this realm.

    The ways in which print, TV, and online articles have used this fallacy are often indirect: Headlines reading “the clown won” after the Stewart/Cramer conversation on The Daily Show are as damaging to the cause of coherent, empowering media discourse as any direct, unchecked statement of “What right does a comedian have to criticize?” could ever be.

    And that’s where things get confusing: Why on earth are these statements going unchecked? Where is the dominant culture of critical analysis that curtails, both institutionally and on a case-by-case basis, statements that feed into this “dis-empowerment” of individual viewers?

    There was a time when we had few on-air personalities: now we have an excess of them, and the depressing catch-22 is that if the bulk of these personalities don’t regularly remind their viewers about formal argumentative structures, fair comment, and journalistic ethics (which they don’t), said viewers will come to view the kind of argument that exists instead as the right one — fallacies and all. And why should these on-air personalities do otherwise? They were hired because their companies know that entertainment sells; but those companies don’t grasp why Jon Stewart is so successful at providing both entertainment and analysis, so they treat their forms of entertainment with all the gravitas of serious journalism, even when they merit none. And if a comedian — someone who readily acknowledges that he’s doing entertainment, but maintains a core sense of journalistic right and wrong distinct from his role as entertainer — calls them on it? Well, they’ve got ample public access where they can condemn him for speaking in the first place, instead of addressing his comments, to their hearts’ content. And no one will call them on it, because they’re the ones setting the discourse in the first place, and the discourse they’ve set is one of refusing the legitimacy of a comment on the basis of the person who makes it.

    … Except that there are people who do ostensibly toil for the protection of U.S. citizens in relation to media abuses. The FCC is vigilant about calling out “public indecency” wherever it (or rather, the loudest of the interest groups that pressure the FCC) perceives indecency to occur. And so we see justice meted out swiftly when a woman’s nipple is shown on national prime-time television, or a children’s cartoon has a character with two moms, or an expletive is used in the wrong time-slot. In all these ways, the general public is kept safe from the excesses of media.

    But the unchecked use of fallacies that, by implication, strips an awareness of power from viewers — pushing the essence of American discourse away from what was said and towards who said it, and encouraging others to do the same? This is let stand.

    I’m not saying the FCC should fine people for unsatiric use of the ad hominem fallacy. I’m just saying: Christ, wouldn’t it be great if someone in a position of media authority at least condemned it?