May 27, 2009

A New Search Engine for an Old Problem

Posted in Science & Technology at 11:25 am by Maggie Clark

Yes, this is about Wolfram|Alpha. For those of you who’ve heard nothing of this search engine yet, let me answer your first question upfront: Wolfram|Alpha shouldn’t be compared to Google; they’re apples and oranges in the world of internet data-trawling.

What, then, is Wolfram|Alpha, and why on earth would it be useful when we already have Google? I’d usually tell people to go look for themselves: from the main page, for instance, it’s clearly identified as a “computational search engine.” But what does that mean? Doesn’t Google already use algorithms for its searches? And though the About page provides a little more insight, it still stymied a few people I’ve already introduced to the website. Such confusion isn’t surprising, either, when you take a good look at how expansive the language is:

Wolfram|Alpha’s long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.

I have to smile at this kind of language: it reminds me very much of my own writing, which, though intended to convey a lot of information, might be considered so complexly worded as to limit, instead of enhance, general knowledge about the topic at hand. (I’m working on it!)

And, alack, there is no Simple English Wikipedia entry to explain this new site in layman’s terms. Even Wolfram|Alpha itself, though designed in part for comparative queries, lists only a few rudimentary details when asked to explain what makes it different from Google.

So. If you’ll permit the blind to lead the blind, here’s Wolfram|Alpha in a nutshell:

  • It does not search web pages. You will not get top hits. You will not get related searches. At present, while the system learns, even misspelling something will give you limited returns.
  • It provides, instead, listings — singular or comparative. If you want to see which of two buildings is taller, or what gravity is on Jupiter, or the basic facts about lithium, Wolfram|Alpha is for you.
  • It is, in other words, just the facts. No blog commentary. No video response. No forums or wikis in sight. Pulling from hard data sources, Wolfram|Alpha provides the basics about anything that can be quantified and computed, in whatever ways are available for said thing to be computed. Truly, anything: Here’s the entry for god.
  • Pursuant to this, Wolfram|Alpha can respond to questions that have concrete, fact-based answers. For instance, it can answer with relative ease “Why is the sky blue?”, “What is the gestation period of a human?”, and “What is ten times the surface area of the moon?” (a query the sketch just after this list works through by hand). And for those of you wondering if this means Wolfram|Alpha doesn’t know the meaning of life, think again.
  • So what on earth is this good for? In an age of sprawling participatory encyclopedias, interactive learning through participation on internet forums, and a whole slew of multimedia ventures — to say nothing of Google itself, which commingles basic search functionality with meta-searching, specialized searches (books, shopping, blogs, news), interactive maps and more — do we really need a website that provides us with “just the facts”?

    Heck. Yes.
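
    To make concrete what “computational” means here, take the moon query from the list above: the answer is derived from stored constants and formulas, not found on a web page. Below is a minimal sketch of the same calculation, assuming an approximate lunar radius of my own choosing (not a figure taken from Wolfram|Alpha):

    ```python
    # A rough version of the "ten times the surface area of the moon"
    # query: one stored constant, one formula. The radius below is my
    # own approximation, not a value taken from Wolfram|Alpha.
    import math

    MOON_MEAN_RADIUS_KM = 1_737.4  # approximate mean lunar radius

    surface_area_km2 = 4 * math.pi * MOON_MEAN_RADIUS_KM ** 2
    print(f"Surface area of the moon: {surface_area_km2:,.0f} km^2")
    print(f"Ten times that: {10 * surface_area_km2:,.0f} km^2")  # ~3.8e8 km^2
    ```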

    It may come as a surprise, but there are still a great many websites engaged in the whole search engine struggle for survival. There’s Google, of course, and Yahoo, but also Cuil.com — an underdog created by a former Googler who grew dissatisfied with how big the company had grown. (I, personally, have trouble believing anyone would give up access to their incredible catering services.)

    Cuil.com is said to accumulate and store information more efficiently, by grouping similar subject hits on the same computer, so future searches can find the bulk of their results in one location. It also has a different layout, prioritizing the presentation of more content from each search result on the search results page. The comments on this Slashdot entry, however, match my own feelings of being underwhelmed by the quality of its search results.
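
    If I understand that pitch correctly, it is a claim about data locality: shard the index by subject rather than scattering pages across machines, so that a single server can answer most of a query on its own. Cuil never published its architecture in any detail, so the following is a purely hypothetical sketch of the idea, not a description of its system:

    ```python
    # Hypothetical topic-based sharding: pages on the same subject land
    # on the same server, so one machine holds the bulk of a query's
    # results. Illustration only; not Cuil's actual design.
    from collections import defaultdict

    NUM_SERVERS = 4

    def server_for(topic: str) -> int:
        """Map every page on a given topic to the same server."""
        return hash(topic) % NUM_SERVERS

    index = defaultdict(list)  # server id -> list of (topic, url)
    for topic, url in [
        ("jazz", "example.com/miles-davis"),
        ("jazz", "example.com/coltrane"),
        ("chess", "example.com/sicilian-defence"),
    ]:
        index[server_for(topic)].append((topic, url))

    # A query about jazz now touches a single machine:
    print(index[server_for("jazz")])
    ```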

    And then there are niche market tools like Regator.com, which searches the internet for blog posts relating to any topic in question, and Google minus Google, a filter for web searchers tired of seeing Google subsidiary sites (YouTube, Blogger, Knol, etc.) prioritized in their searches. Further refinements, like filtering Wikipedia out of search results, are also available — which in my opinion is a nice touch.

    In short, a great many websites are geared towards making the vast stores of information on the internet as accessible as possible by ranking other websites on the basis of information quality and relevance. In doing so, however, the very definitions of quality and relevance have changed dramatically from the notions they embodied years earlier, in the heyday of Altavista and AskJeeves. How could they not, when, far from just being a tool for education, the internet exploded into the complex social realm it now is?

    So now, perhaps, the most accurate information about a subject is not foremost on a search list about it: now, perhaps, it has been supplanted by the most popular website other people visited in relation to that topic. And the most relevant information about another topic might easily be displaced by the most popular piece of entertainment riffing off its theme. And, of course, there still remains the question of Wikipedia: Should it come first in Google searches? Is it always the most accurate response to whatever query you may have typed in? Are more accurate responses buried farther down the list?

    While there is no discounting the incredible developments we’ve made in expanding internet functionality — day in and day out adding to the human element of online operations — it also cannot be denied that there will always be a need for straight answers, too. Think of Wolfram|Alpha as a reminder, then, that for all our dallying in the online realm there still exists a real world — a concrete place with numerous quantifiable attributes just waiting to be described.

    A world that will always await us, should we ever go offline.

    May 26, 2009

    War Journalism vs Reporting on the Military, Part 2

    Posted in Business & technology, Military matters at 1:09 pm by Maggie Clark

    When I wrote last Friday that all investigative reporting carries with it a measure of risk, but no kind more so than war journalism, I hope I stressed enough that almost every other subject still carries a quite considerable ability to destroy reputations, companies, job prospects, jobs themselves, property values, and even whole livelihoods. Even in these realms, lives too are sometimes lost.

    Thus it was with little surprise, but great sadness, that I read this past weekend of South Korea’s former president Roh Moo-hyun, who threw himself off a mountain after enduring what has been characterized as “relentless” pursuit by the media following allegations of bribery in the past year.

    Sadness, because I do generally believe in the redeemable life: someone so sensitive as to realize and react to the weight of his indiscretions must especially be seen as having had in him the capacity, also, to apply awareness of the past to more positive future actions. More disconcerting by far, for me, are those who will not even make allowances for the possibility of error, and who thereby bear out the argument that, as George Santayana once wrote, “Those who cannot remember the past are condemned to repeat it.”

    Yes, this matter of reflection brings me right back to my original thesis, on the need to treat war journalism and reporting on the military as distinct tasks, and in this way overcome the limitations on truth-telling during a time of armed conflict.

    But so too does a quotation from the aforementioned article on Roh:

    NYT — “It has become a bad political habit for presidents in South Korea to try to gain support by punishing the former president,” said Kang Won-taek, a politics professor at Seoul’s Soongsil University. “What happened to Roh Moo-hyun shows that it is time to break this habit.”

    The tendency to define a presidency by the failings of the one that came before took root as the country struggled to redefine itself in the early 1990s as a young democracy after years of dictatorships. Many Koreans were exhilarated as the first democratically elected governments punished the men who had resisted democracy for so long.

    No good, in other words, comes either from denial of the past or the outright demonization of all that came before: The former leaves us no room to learn from our actions; the latter, no room to accept that the same seeds of indiscretion and abuse lie in us just as much as they did in those who came before.

    What remains, then, is the need for nuance; and any journalist will tell you nuance only emerges when there is consistency and longevity to the issue being addressed. Here, then, lies the primary distinction between war journalism and reporting on the military: war journalism exists so long as the conflict does, while reporting on the military would extend across conflicts, and through the long stretches of peace besides.

    An analogy might lie in the chronicling of small mining towns: During production booms there would be plenty to report upon, in terms of speculation, quality, corporate practices and corruption, union issues, housing markets, immigration, and emergent family issues pertaining to social services, community development, opportunity costs, and secondary job fields. But at times of little to moderate production output and community growth there would seem to be fewer dramatic matters to comment upon. And yet there are still issues — there are always issues: from the impact of employment and poverty levels on drug and domestic abuse rates, to the disintegration of a social net, to the rise of hunting to offset low wages, to reduced educational opportunities, health matters, religious communities, and impossibly high relocation costs.

    So it also is with the military, and I would even go so far as to say that what’s omitted from our reports on the military during peace time, or what systemic comparisons we neglect to construct between different armed conflicts, considerably weakens our overall understanding of the role and culture of defense in contemporary society.

    Take, for example, our treatment of military rape — a topic much on my mind since the New York Times’s Bob Herbert wrote an opinion piece entitled “The Great Shame” back in March. Herbert notes many of the most difficult aspects of military rape that go under the radar in current reporting on the wars in Iraq and Afghanistan — specifically, that soldiers rape both civilians and their own. Last year alone, according to Herbert, saw a 25 percent increase in reported rapes of female soldiers. Considering that rape is one of the most under-reported crimes in our society, it chills me to the bone to wonder how much deeper these offenses go.

    The column furthermore put me in mind of a piece I read in 2007, on Salon.com, entitled “The private war of women soldiers.” Though a strong, culture-building piece, its position in an online, sociology-leaning magazine sadly made sense at the time: I had difficulty imagining the same emblazoned as a feature news story on the cover of most mainstream print newspapers — even though, were we to cover rape with the same seriousness we give business, it would be.

    Which is why Herbert’s piece was so striking. How was Herbert able to tackle an issue this demoralizing and potentially demonizing to troops presently stationed in Iraq and Afghanistan, when so many suicides, friendly fire incidents, and criminal behaviour in the same context and region were barely addressed outside of hard news reports?

    The answer, I’m convinced, lies in his approach: Herbert started with an incident with no clear date stamp, and few to no concrete details. He wrote broadly, wedging a couple of pertinent facts about current rises in rape statistics amid a vaguer, more expansive discourse about how rape in the military manifests, why, and what can be done about it. By couching the subject in so many generalizations, he was able to draw this stinging conclusion:

    NYT — The military is one of the most highly controlled environments imaginable. When there are rules that the Pentagon absolutely wants followed, they are rigidly enforced by the chain of command. Violations are not tolerated. The military could bring about a radical reduction in the number of rapes and other forms of sexual assault if it wanted to, and it could radically improve the overall treatment of women in the armed forces.

    There is no real desire in the military to modify this aspect of its culture. It is an ultra-macho environment in which the overwhelming tendency has been to see all women — civilian and military, young and old, American and foreign — solely as sexual objects.

    Real change, drastic change, will have to be imposed from outside the military. It will not come from within.

    And you know what? I’m okay with this approach, so long as it produces serious discussion and follow-up. After all, do we really need to drag every rape victim in the military out into the open in order to bare the truth of its existence? I should think not — especially as that in and of itself can impose undue added harm on the victims. Similarly, do we need to parade every suicide case in order to prove it happens? Must every soldier who accidentally shot one of his own in a high-stress combat position be splayed across the papers of the nation?

    No. The rules of war journalism are understandable: In reporting on any immediate conflict, writers and photographers need to minimize their negative impact on the sources at hand — the soldiers, primarily, but also any alternative sources they might seek out from the region, civilian or otherwise — while simultaneously conveying the essential facts of any one news story.

    But we journalists still have meta-data, spanning this conflict and many others besides, at our disposal. And to report once a month on suicide rates, reported rapes, friendly fire incidents, mental health walk-in clinic figures, tour extension numbers, and other such statistics — both at home and abroad, and kept in close relation to a study of historical statistics as well — would in and of itself go a long way to entrenching a dialogue on military culture that no one can perceive as a direct threat to our soldiers overseas. No extensive parade of bodies and names needed!

    Because, really, all this reporting on the military isn’t meant to be a threat: rather, it’s meant to help eliminate those threats most often propagated by ignorance; and ultimately, to help the rest of us truly understand. Not, perhaps, so that one day the entire sub-culture will no longer be needed — that’s far too much a pipe dream for even a young’un like me to humour. But at least so that, one day, we can apply this distinct sub-culture to the relief of inevitable global conflicts with the full knowledge of just what it is we’re giving up in the pursuit — we hope — of a greater common good.

    Post Script:

    *Apologies for the lateness of this entry: In all honesty, my cat deleted the original yesterday — ironically leaving in its place only the letters “un.” She evidently doesn’t quite agree with my position on this issue, a disagreement I hope she understands I can end swiftly by denying her supper. … Then again, who knows what she’d delete in retaliation. Best not to chance it!

    May 22, 2009

    War journalism vs reporting on the military, Part One

    Posted in Military matters at 10:31 am by Maggie Clark

    War journalism has to be the toughest media gig around. You go out, you get the facts, you tell a very complex story as best you can. And then you have to sit on it. Or the censors get to it. Or your editor just tells you to take it down a notch. Why? Because if you’re too detailed — about intentions, about army locations — you put more lives at risk. Finding, every day, the balance between two difficult end-goals (telling the whole story, and doing as little harm in the process as possible) carries much greater risks than just about any other kind of news work.

    It’s not as though plain old local investigative reporting doesn’t come with its own risks: damaging an individual or a community’s reputation can have very dire consequences in and of itself. But in a war, on the ground, those consequences are much more immediate, and lie almost invariably in further casualties.

    So it is as well with reports on the human element in war, as I referenced in relation to the late Canadian soldier, Major Michelle Mendes, dead of a self-inflicted injury in late April — and as I find myself returning to in the case of American Sergeant John Russell, who opened fire two weeks ago in a stress clinic while stationed in Iraq, killing two attending medical officers and three patients, and injuring four others, with a stolen gun. Sgt Russell had six weeks remaining on his third tour of Iraq; the stolen weapon came from a fellow soldier, whom Sgt Russell violently assaulted some time after his own weapon had been removed.

    For all these stories, whether they be about suicide, rape, vandalism, brutality and torture, corpse mutilation, unnecessary civilian casualties, or “friendly fire” incidents, anything that casts our own soldiers, or their allies, in a poor light during war time is immediately deemed a danger to their safety, either through internal morale issues or the provocation of heightened aggression from enemy combatants. And often this status leads to more delicacy, more omission, and more neglect in the realm of story updates.

    This is a problem.

    It’s a problem when incidents keep happening that, with or without the help of the media sphere, make it to the public consciousness — creating in their wake a mythology that, in its vagueness, ends up implicating the good right along with the bad. And after all the horrific military abuses that emerged during and after Bush’s presidency, I highly doubt further censorship, in the aim of keeping a damper on such rumours, would either be effective or without backlash. So what options are we left with?

    The story of Sgt Russell had a news cycle of a scant two days; I’ve given it over a week, and no follow-up exists. To be fair, though, the media’s had its hands full in the last couple of days especially, with the case of Steven D. Green, the “ex-soldier” who instigated the gang rape and murder of a 14-year-old Iraqi girl, alongside the murders of her father, her mother, and her younger sister, while a private with Bravo Company, First Battalion, 502nd Infantry, Second Brigade Combat Team of the 101st Airborne Division. The case is too sick to recount in any more vile detail; I raise it because the news broke just yesterday that Green is getting life in prison for his role in this heinous attack; he, along with four other soldiers implicated in this incident, will be up for parole in ten years:

    New York Times — The March 2006 murders in Mahmudiya, 20 miles south of Baghdad, were so bloody that American and Iraqi authorities first thought they were the work of insurgents. The American soldiers were implicated after at least one acknowledged to fellow soldiers a role in the crimes.

    At the time, the Iraq insurgency was near its violent apex, and American forces were suffering heavy casualties. Private Green’s unit, Bravo Company, First Battalion, 502nd Infantry, Second Brigade Combat Team of the 101st Airborne Division, was sent to a particularly violent area that soldiers called the Triangle of Death soon after arriving in Iraq in the fall of 2005.

    The battalion quickly suffered casualties, including a sergeant close to Private Green. In December, Private Green, along with other members of his platoon, told an Army stress counselor that he wanted to take revenge on Iraqis, including civilians. The counselor labeled the unit “mission incapable” because of poor morale, high combat stress and anger over the deaths, and said it needed both stronger supervision and rest. It got neither, testimony at Mr. Green’s trial showed.

    On March 11, 2006, after drinking Iraqi whiskey, Private Green and other soldiers manning a checkpoint decided to rape an Iraqi girl who lived nearby, according to testimony. Wearing civilian clothing, the soldiers broke into a house and raped Abeer Qassim Hamza al-Janabi. Soldiers in the group testified that Private Green killed the girl’s parents and a younger sister before raping and then shooting the girl in the head with the family’s own AK-47, which it had kept for self-defense.

    Two things came to mind when I read this story: First, and most prominently, was the blatant labelling of Green as an “ex-soldier” in the headline: “Ex-Soldier Gets Life Sentence for Iraq Murders.” Well, yes, clearly the army would dishonourably discharge him after such an incident. I could see that getting a sentence or two inside the actual article. But as the primary fact in a headline about the heinous crime, its consequences, and the systemic mental health issues it brings yet again to the surface? Not on your life: Green was a soldier when he committed those acts — a soldier whose entire unit was deemed unfit for duty, and yet was left by its superiors without adequate resources for stress and grief management. The moment we veer from these facts, even for a second, we start shifting our attention from the continual immediacy of mental health issues on the ground in Iraq, and permit the build-up to more — more killings, more rapes, more suicides.

    … Which leads me to the second thought this article prompted — a throwback to something I’d read last week in relation to Sgt Russell. “At a Senate hearing Tuesday,” ABC News reported, “Army Secretary Pete Geren and chief of staff Gen. George Casey diverged from a discussion of the Army’s budget to weigh in on what is being done for soldiers like Russell. … Casey said it isn’t true most soldiers suffer from post traumatic stress disorder following combat, instead making the point that ‘the vast majority of people that go to combat have a growth experience because they are exposed to something very, very difficult and they succeed.'”

    Honestly, I don’t know quite how to take this argument: I’m sure there are plenty of people who cope perfectly with the taking of enemy lives, the knowledge of civilian casualties, children or otherwise, an awareness of the brutality wrought by others in their ranks, and exposure to the deaths or crippling injuries of their comrades. I’m just not entirely sure I’d be comfortable around them.

    The fact is, war is not meant to be pretty, and it cannot be managed with the board-room efficiency of a business. Nor should it be: No amount of spin and rhetoric should ever take away from the importance of protecting human life, and the gravity of its loss in a time of war. Sadly, it looks very much as though each generation needs to live through a time of conflict before that lesson truly hits home.

    And yet, surely we can do better. Surely there is a way, with all of the channels available to us today, to be better in our reporting. Better by our fellow civilians, who are represented to the world by the actions of our troops, and our public condemnation (or lack thereof) of any wrongdoing on the field. Better to the civilians whose lives we claim we’re trying to protect from insurgency and tyranny in the war zones we’re fighting in, by holding military abuses on their soil to higher account. And better still to the soldiers themselves, who for better or worse place themselves in the line of fire — external and internal, in the course of duty — in search of a better peace than the one we already know.

    I think the road to this goal lies with a stronger division between war journalism and reporting on the military. But I also think this argument is one for another day — Monday, to be specific.

    Today I just want to end off reflecting on the five lives ended by Sgt Russell, and the four, equally innocent, lives cut short by Ex-Private Green. How much future bloodshed could we ward off, I wonder, if we truly gave ourselves over to the solemn remembrance of all that’s come before?

    May 20, 2009

    Participatory Government Online: Not a Pipe Dream

    Posted in Business & technology, Global discourse, Public discourse at 8:13 am by Maggie Clark

    In an undergrad political science course a few years back, I recall being challenged to present explanations for public apathy in Canadian politics. Out of a class of some thirty students, I was the only one to argue that there wasn’t apathy — that low voter turnout among youth was readily offset, for instance, by far higher youth turnout in rallies, discussion forums, and the like. Youth were absolutely talking politics: they just weren’t applying this talk in the strictest of official senses.

    My professor always did love such counterarguments, but my classmates never seemed to buy them. Rather, many argued that the “fact” of disengagement was not only accurate, but also healthier, because it meant that only those who “actually cared” about policy would set it. (We were working, at the time, with figures like only 2 percent of the Canadian population being card-carrying party members.) Many of these same students likewise believed that economics was not only the ultimate driving force in our culture, but also the only driving force that could lead; and also that true democracy was unwise because only a select few (I could only assume they counted themselves among this number) were able to govern wisely.

    At the time, Facebook was two years old. YouTube was one. And the online landscape, though unfurling at a mile a minute, was still light years from its present levels of group interaction. My sources for the presentation in 2006 were therefore an uncertain medley of old and new media: news articles and statistics; online party forums and Green Party doctrine.

    I didn’t have at my disposal, for instance, incredible videos like Us Now, a documentary encapsulating the many ways in which average citizens — seeing truly accessible means of interacting on a collective level with their environment — are achieving great success breaking down the representative government model to something much more one-on-one.

    Nor did I have The Point, which provides anyone with an account and an idea the means to start a campaign, co-ordinate fundraising, organize group activities, and otherwise influence public change. (Really, check it out — it’s fantastic.)

    And most regrettably of all, I didn’t have the Globe and Mail’s Policy Wiki.

    This last I just discovered yesterday on BoingBoing.net, when they noticed the Globe and Mail’s newest project on the website: the creation of a collectively developed copyright law proposal, to be sent to Ottawa for consideration on July 1, 2009.

    As a huge policy geek, and a member of the new media generation to boot, I saw this as a goldmine of opportunity — and yet there is plenty else on the website for other policy development, too: discussion forums and wiki projects alike. So of course, in my excitement, I sent the link to a few members of the old generation — only to receive a curious collection of responses, dismissing the above as an exercise in anarchy, while simultaneously criticizing old-school committees as never accomplishing anything properly.

    Well, old guard, which is it? Is our present model of representative government failing us in certain regards, and should we thus try to engage different policy-building models? Or is the same model which, despite early challenges to its legitimacy, created an online encyclopedia as powerful as the Encyclopaedia Britannica, unfit for political consideration by its very nature as an open-source community project?

    Us Now makes the point that the internet’s promise of a more dynamic and accessible global community has had many false starts (spam, scams, and the proliferation of child pornography rings come personally to mind). But long before we became cynical of the internet’s capacity to improve our social impact, we as a society were already well used to doubting the potential of our fellow citizens to act intelligently and in the pursuit of the communal good. You can thank Machiavelli’s The Prince, Elias Canetti’s Crowds and Power, and bastardized readings of Adam Smith’s The Wealth of Nations in part for this.

    A little while ago, however, I got around to reading John Ralston Saul’s The Unconscious Civilization, a CBC Massey Lectures essay collection about the rise of the management class and the utter inversion of the democracy/free-market equation, to the extent that the notion of democracy itself has suffered massive political distortion. Written just before the first real explosion of online communal projects — be they open-source software, open-access socio-political groups, or information-dissemination tools — what Saul wasn’t able to account for in his work was the balancing force of technology itself. When he wrote these essays, technology was still very much a cornerstone of continued economic distortion at the expense of real democracy. Now, though, it’s clear that technology created through the corporate model has itself emerged as a platform for participatory government — and thus also as the undoing of those same hierarchical economic forces. Coming full circle is fun!

    So, to get back to this matter of “trusting in the intelligence of individuals, and their capacity to act in the common good,” yes, there is a lot of circumstantial evidence to the contrary on the internet. Heaven knows, for instance, that the low-brow interactions which inspired CollegeHumor.com’s We Didn’t Start The Flame War are in fact a daily, persistent reality online, and make up a substantial percentage of commentary therein.

    Yet any parent will tell you that the way to raise a responsible child is to give her responsibilities to live up to; a child entrusted with none will invariably continue to act like one. So rather than using, as a test of our group potential online, those sites that in no way engender a sense of responsibility for our actions, why not look at those sites that do — like ThePoint.com, and the Globe and Mail Policy Wiki?

    Furthermore, if our current model of representative government no longer yields the level of public engagement we crave (read: in the ways the government wants to see), maybe it’s because citizens at large haven’t been given the opportunity to feel like real participants at all levels of the democratic process. And maybe, just maybe, the internet not only can change that perception, but already is.

    After all, those same students who, in the comfort of a political science classroom just three years back, so boldly proclaimed that collective decision making was a waste of time? You’ll find every last one on Facebook and LinkedIn today.

    May 18, 2009

    To Pay or Not To Pay: The Internet’s Most Intricate Crisis

    Posted in Business & technology at 10:24 am by Maggie Clark

    Within two months of Last.fm, a music streaming service, signing a partnership with four major record labels, Amazon.com saw a 119 percent increase in online music sales. Through an ad-based revenue model, Last.fm was able to offer free access to a database of songs numbering in the millions, and to group them into “stations” wherein your tastes would yield similar artists or songs in that vein. The catch was that after three plays of one song, Last.fm would display an advertisement directing listeners to affiliate partners selling the tune. All in all, it was a sweet deal: we got free music, the big labels got paid, the small labels got exposure, and, contrary to popular wisdom about downloaders detracting from music profits, online sales were through the roof.

    So, of course, Last.fm switched to a subscription model on April 22, 2009: now international users have to pay “three” every month — three euros, three dollars: whatever is regionally appropriate. And honestly? This makes tremendous business sense: Last.fm has to pay for every track you listen to from a major label, and when it can’t negotiate adequate payment terms with a label, sometimes that label just pulls out.

    Nonetheless, as part of the Napster generation I can’t help but note how, the more things change online, the more they’ve ultimately stayed the same. From Napster to Pandora to Muxtape to Seeqpod and, of course, a slew of others, the introduction of free big-label music under any number of guises has always, invariably ended in a curtailing of services (at best), or else a complete redirection of the site’s aims and/or bankruptcy.

    Notice anything funny there? Take a look at how this cycle begins: with the desire to give something away for free. Not to make a profit on it; just to scrape by — and only when profit margins drop deep into the red to impose fees on the consumers. Yeah, you might say, it’s easy not to try to make money on something you didn’t create (the music). But… if history is any guide, it’s not. People just don’t pass up the opportunity to exploit the work of others for their own profit. So how is it that models like the ones listed above ever existed in the first place?

    The answer perhaps lies in our generation’s unique conditioning: if as individuals we still demanded that our own creative output be viewable solely through a pay system (as Amazon is proposing in blog subscriptions for Kindle), we’d be hypocrites to demand free content from others. But growth on the internet has proven instead too nuanced for such hypocrisy: while some services have always tried to charge for content, the blogosphere, YouTube, GoogleVideo, MySpace, DeviantArt, Flickr, news aggregators, and other such websites have always run on a free viewing model. In short, by now we’re more than used to posting a piece of writing, a photo, a video, or a song online and expecting nothing monetary from it. Art and entertainment have entered into a free-for-all creation domain, and while this doesn’t mean we don’t still hold in high regard those artists and entertainers who dedicate the whole of their lives to such work, it certainly means we have different expectations for our engagement with them.

    As such, the story of those aforementioned music services means just what it seems to mean: that our first push out into the world of the internet is just as likely to be in the pursuit of free access as it is to be about exploitation — and thus, that we as consumers can forever expect to find ourselves latching on to free content, taking it for granted, and having subsequent power plays or business models then wrest that freedom away. A cry of foul will emerge, we’ll flood a comments page with angry protests… and then most of us will clear off, find a new free music service, and repeat.

    Rest assured, this isn’t as hard to stomach as it sounds: we’re already quite used to learning to pay for goods we’d always taken for granted — how else can you explain bottled tap water? But the story of free music is a fast-paced tale that also speaks volumes about deeper, more complex payment issues at work on the internet.

    Because while the struggle for survival of music streaming services caters to our more immediate fears about The Man, there is a longer, more drawn-out battle being waged in turn for the whole of the internet. Yes, I’m talking about the attempts of Internet Service Providers to make heavy internet users pay more, or to divest the whole medium of its level playing field by allowing some companies to pay for prioritized access, effectively shutting small companies and websites out of the mass market. Or what about Bell Canada, which last year found an ally in the CRTC when the Canadian Association of Internet Providers complained that Bell was “throttling” access for peer-to-peer applications — a direct challenge to net neutrality? When the CRTC sided with Bell in the case, they likewise permitted, and set precedent for, the legality of an ISP interfering with an individual’s use of the service he’s paid for, through “traffic-shaping.”
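
    For those wondering what “traffic-shaping” actually looks like under the hood, the standard mechanism is a rate limiter such as a token bucket, applied per protocol class. Here is a minimal, purely hypothetical sketch of the idea (nothing below is taken from Bell’s actual practice):

    ```python
    # A token-bucket rate limiter, the textbook building block of
    # "traffic-shaping." Hypothetical sketch only; it does not describe
    # any particular ISP's implementation.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
            self.rate = rate_bytes_per_sec   # sustained throughput allowed
            self.capacity = burst_bytes      # short bursts above the rate
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes: int) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False  # packet is delayed or dropped: the "shaping"

    # Cap peer-to-peer traffic at ~50 KB/s while leaving web traffic alone:
    shapers = {"p2p": TokenBucket(rate_bytes_per_sec=50_000, burst_bytes=100_000)}
    ```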

    And then, of course, there is the anti-piracy bill passed by the French National Assembly on May 12, 2009: anyone caught downloading or sharing copyrighted files three times can now be suspended from the internet for two months to a year on that third notice. Chillingly, the law would not require a trial or court order: All the ISPs need do is send you your warnings, making this a huge win for corporate control of the medium.

    This, then, is the real conflict of the internet — an on-going negotiation being fought in a much more protracted, expansive way than any music streaming service need fear: but a negotiation, nonetheless, that will shape the future of the internet for us and those to come.

    For now we take our freedoms and equality online for granted — just as we do our free music moment by moment. The question is, if the lesson of music streaming services has taught us anything, what can we really say about how free or equal the internet as a whole will be just ten years down the line?

    And what, right now, can we do about it?

    May 6, 2009

    Calm before the swine

    Posted in Global discourse at 9:59 am by Maggie Clark

    There is reason to think positively about the strength of citizens en masse. There is reason, too, to think positively about the benefits of our new networking technologies. And one need look no farther for proof of this than the confrontation between panic and perspective in relation to the swine flu epidemic.

    Swine flu had, and still has, all the earmarks of a perfect shock story: the strain, H1N1, afflicts the healthy, the strong, by over-stimulating the immune system’s response. It’s an inter-species mutant, so you can imagine the inference that it must surely be three times as strong as its avian, human, and swine predecessors. And the outbreak has been tied to Mexico — just one more illegal immigrant to worry about, right? (It’s even being called the “killer Mexican flu” in some circles.)

    As I write this, according to the Canadian Public Health Agency, there are 165 reported cases of this H1N1 strain in humans in Canada. The U.S. claims 403 cases, and between the two of us we have exactly two confirmed deaths. According to WHO statistics (current to May 5) Mexico has 822 cases, with 29 deaths; in the whole world, 21 countries share a collective case count of 1,490, with no other confirmed deaths.

    If scientists declare that the strain has established itself outside of North America, the flu will reach pandemic status. In theory, that sounds terrifying, but really, the meaning extends no further than the fact that the illness can be found across the globe. The term pandemic says nothing, for instance, about how lethal or non-lethal said condition is; and though some sources are fond of speculating about worst-case scenarios, the death rate remains very low. How low? Let’s take the U.S. numbers to illustrate: annually, there are some 200,000 hospitalizations due to typical flu types in the U.S. — and 36,000 deaths. By this measure, swine flu has a long way to go before being anywhere near as serious a threat as its local, home-grown competitors.
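
    For scale, here is the arithmetic on the figures cited above, as a back-of-the-envelope sketch only (lab-confirmed case counts early in an outbreak are notoriously incomplete, so treat the ratios loosely):

    ```python
    # Arithmetic on the counts cited above (current to May 5, 2009).
    # Illustration, not epidemiology: confirmed-case tallies undercount.

    swine_deaths_worldwide = 29 + 2   # Mexico, plus US/Canada combined
    swine_cases_worldwide = 1_490     # across 21 countries, per WHO
    us_seasonal_flu_deaths = 36_000   # typical annual US toll

    print(f"Confirmed swine flu deaths worldwide: {swine_deaths_worldwide}")
    print(f"Annual US seasonal flu deaths per confirmed swine flu death: "
          f"{us_seasonal_flu_deaths / swine_deaths_worldwide:.0f}")  # ~1161
    ```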

    And yet all this, for me, isn’t where it gets interesting. Not even close. Rather, what continues to surprise and impress me is our capacity for self-regulated response to the initial panic invoked around this illness. Yes, the media was talking up a storm about Influenza A H1N1. Yes, doomsday speculation was abounding. And yes, many industries — sanitation and pharmaceutical groups especially — have profited greatly in terms of market shares and business from all this panic.

    But also abounding was — and still is — a countering force of calm. And it takes some truly extraordinary forms: For instance, mainstream news articles taking other articles to task for the lack of coverage about all the good news we have about Influenza A H1N1, and ethical deliberations about whether or not laughing at this illness (its name, its origins) is acceptable. And then there’s the really fun stuff: Stephan Zielinski applying the amino acid sequence for Influenza A H1N1 to ambient music. Gizmodo posting a hauntingly beautiful video demonstration of how the virus gets released. xkcd.com aptly encompassing the typical range of responses to Swine Flu on Twitter.

    In other words, for all the panic we’ve had thrown at us about this illness, many have responded with a measure of fearlessness at least a hundred times as infectious. Does this mean everyone is rid of that panic? No, of course not: these reactive trends are often regional and compartmentalized due to varying interests and complex investments. The mass killing of all pig herds in Egypt, for instance — a perfectly rational response to a disease that, at that time, had manifested no cases of pig-to-human infection anywhere in the world, and absolutely no cases of human infection in the country itself — left huge consequences for the pig farmers, who, with 300,000 animals killed, have lashed back at the government in the form of protests: doubtless this panic attack on the part of officials will leave a long list of social consequences in its wake.

    But think back, for comparison’s sake, to our global reaction to SARS — the extreme panic, the devaluation of tourism in heavily affected cities and regions, the dramatic quarantining procedures. Globally, the disease racked up 8,273 cases, with 775 direct deaths (a death rate of roughly 9.4 percent, weighted heavily toward seniors). Though SARS was clearly a more serious disease than Influenza A H1N1, the overall death toll of Americans due to seasonal influenza was still much higher; and yet our panic was long-standing and far-reaching, in large part because we were given no room for questions of doubt: only more panic.

    Similarly, I’m not convinced the relative calm in this case emerged from the ground up: rather, I suspect news articles first had to present seeds of doubt about this issue, as forwarded by scientists reacting to the extent of media spin. I think room for doubt had to emerge from these sources first; and then the average reader, artist, and blogger could follow after — in turn serving to create more room to manoeuvre, rhetoric-wise, in future works by the mainstream media. But regardless of speculation about just how, and in what order, these groups fed off each other — the scientists, the media, and the participatory citizenry as a whole — what’s more striking is that they fed off each other at all to produce this ultimately calming effect.

    We have, in the last 8 years, kicked ourselves over and over again for allowing flimsy excuses for war-mongering to stand; for allowing freedoms to be stripped from us in the name of security; for permitting, in general, the hard polemics of with-us-or-against-us to divide the population. And rightly so: When we go along with fear-mongering, we can be, en masse, pathetic excuses for an advanced and critically thinking civilization.

    But cases like our reaction to swine flu should likewise give us cause for hope — and should be treated as such, with praise for measured response wherever it emerges. For as much as we can act like sheep if treated like sheep, it nonetheless takes precious little in the way of tempered social rhetoric for us to realize our own, independent engagements — fearless, inquisitive, and inspired alike — with the world instead.

    May 1, 2009

    Death by any other name

    Posted in Military matters, Public discourse at 9:57 am by Maggie Clark

    Major Michelle Mendes, a Canadian soldier stationed in Afghanistan, was on her second tour in the region when she was found dead in her sleeping quarters at Kandahar Airfield. Hers marks the third death of a Canadian woman, and the 118th fallen Canadian, in Afghanistan since our involvement in the conflict began. The media has done an exemplary job of presenting Mendes in the respectful light afforded all Canadian soldiers lost in this conflict — and perhaps with extra care, too, because hers marks the second female fatality in as many weeks — but one word is pointedly absent from all talk of her “non-combat death”:

    Suicide.

    According to the Canadian military, an investigation into the circumstances of her death is still ongoing: evidently the possibility of her firearm accidentally discharging has not been entirely ruled out, though The Globe and Mail reports that “a Canadian government source said ‘all evidence points toward a self-inflicted gunshot wound.'”

    The prominence of this story, and the blatancy of the aforementioned omission, have piqued my interest. The debate about whether or not to talk about suicide in newspapers, and in what ways, with which emphases, has been waged for decades. The argument ultimately centers on two points: the quest for greater public understanding, and the fear of inducing a copycat effect among readers. To this end, there are fierce defenders of different approaches — each backed by their own body of research and professional opinion. Last year The Newspaper Tree wrote an editorial responding to reader concerns over the term’s use in relation to one case: therein they noted that certain organizations of mental health professionals agreed it was better to tell readers the cause of death, but that the stories needed to be presented with the “valuable input of well-informed suicide-prevention specialists” in order to be effective. In that same year, Media Standards Trust published a firm condemnation of suicide stories, citing the high statistical correlation between published stories and copycat suicides.

    My problem with the omission approach, however, is its selectivity: suicides are deemed taboo, but the publishing of violent domestic deaths, murder-suicides, and school shootings isn’t — and all of these stories arguably pertain to people in even more disturbed mindsets (one, because I do not hold that everyone who commits suicide is “disturbed” in the sense of having lost their ability to reason; and two, because their acts take the lives of others, too). A recent Times article asked if the copycat effect was being felt here, too, pointing to the lone study that has been completed to date on the theme. The article also developed a short history of the copycat effect in media, which reads as follows:

    The copycat theory was first conceived by a criminologist in 1912, after the London newspapers’ wall-to-wall coverage of the brutal crimes of Jack the Ripper in the late 1800s led to a wave of copycat rapes and murders throughout England. Since then, there has been much research into copycat events — mostly copycat suicides, which appear to be most common — but, taken together, the findings are inconclusive.

    In a 2005 review of 105 previously published studies, Stack found that about 40% of the studies suggested an association between media coverage of suicide, particularly celebrity suicide, and suicide rates in the general public. He also found a dose-response effect: The more coverage of a suicide, the greater the number of copycat deaths.

    But 60% of past research found no such link, according to Stack’s study. He explains that the studies that were able to find associations were those that tended to involve celebrity death or heavy media coverage — factors that, unsurprisingly, tend to co-occur. “The stories that are most likely to have an impact are ones that concern entertainment and political celebrities. Coverage of these suicides is 5.2 times more likely to produce a copycat effect than coverage of ordinary people’s suicides,” Stack says. In the month after Marilyn Monroe’s death, for example, the suicide rate in the U.S. rose by 12%.

    Journalists have a responsibility to the living. We have a responsibility to give readers the best means necessary to make informed decisions about the world around them. This also means doing the least amount of harm. In the case of suicide, this measure of harm is difficult to assess at the outset, as even the very language of the event is against us. To “commit suicide” bears with it the gravitas of an age when suicide was deemed a crime, not a tragedy — and not, in some cases, a release from untreatable pain. To “take one’s own life” is a step up — dramatic, but delicately put — though it is unclear if one term is preferable to the other in keeping the copycat effect to a minimum.

    That effect itself also plagues me, because I have to wonder if it occurs in part because there isn’t enough reporting: if all suicides were listed as such (3,613 in Canada in 2004; 32,439 in the U.S. — roughly 10 per 100,000 for each population), and those suicides were contextualized by similar tallying of all deaths (drownings, the flu, and other causes of death with much higher population tolls), would that copycat effect drastically diminish over time?
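
    The per-capita figure checks out, for what it’s worth; here is a quick sketch of the arithmetic, using my own rough 2004 population estimates rather than anything from the sources above:

    ```python
    # Verifying the "roughly 10 per 100,000" claim. Population figures
    # are my own approximations for 2004, not numbers from the post.

    suicides_2004 = {"Canada": 3_613, "U.S.": 32_439}
    population_2004 = {"Canada": 32_000_000, "U.S.": 293_000_000}  # approx.

    for country, deaths in suicides_2004.items():
        rate = deaths / population_2004[country] * 100_000
        print(f"{country}: {rate:.1f} suicides per 100,000")  # ~11 each
    ```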

    I can only speculate. Meanwhile, another telling question has a more interesting answer: Can the news provide the requisite depth and breadth of coverage on mental health issues without the direct mention of suicide? In answer, I refer you to this piece from The Globe and Mail, which delicately tackles mental health in the Canadian military as a hot topic arising from Mendes’ “non-combat death,” while the Canadian Press approaches the issue from the vantage point of the female chaplain who presided over Mendes’ ramp ceremony.

    There are, then, ways to nod to the issues surrounding suicide without using that word directly. But are they enough? Or does the omission of the word, in conjunction with so much open commentary about related issues, create a different reality — one in which suicide, lacking its public face, becomes at best a vague and theoretical matter?

    These are difficult questions, and they grow more difficult when addressing systemic suicides — as exist among many Aboriginal communities in Canada, as well as among military personnel — and when suicide strikes the very young. To whom does the journalist owe her ultimate allegiance: the grief-stricken families, the immediately affected communities, or the public at large? How can we use the fact of suicide to better our understanding of this world we live in? Are we forever doomed to make things worse by the mere mention of suicide’s existence?

    Two days ago I watched Rachel Getting Married, a film about a woman who comes home from rehab to take part in her sister’s wedding. A great many difficulties unfold as this woman struggles with guilt and self-hatred, coupled with depression and suicidal tendencies. Watching this film, I registered numerous “triggers” in myself, and cycled for a day and a half back into certain, terribly familiar mental routines. It was then, as I reminded myself that most people likely wouldn’t have had the same reaction to this stimulus, that it struck me: I will never be completely rid of these thoughts, these propensities to cycle between contentment and depression. Anything — a movie, a newspaper article, an off word from a close friend — might trigger them, and then it will be my responsibility to take control of these impulses: acknowledge them, experience them, and move past them.

    I know, too, that eight percent of Canadians live with depression, and that at least 16 percent will experience a period of depression at some point in their life. I know I’m on the lucky side of this spectrum: I’ve learned how to counter the anxiety that often pushes depression to the brink, and after years of very extreme engagements with my mental health issues, they are manageable for me. I know this isn’t the case for everyone. I think to myself, what if someone in a much more agitated or suggestible state of mind watched this film instead — or others, with far more tragic endings? What if that was all it took, and the film pushed them to the brink?

    Yes, a film or song or book could move someone to suicide. Most likely, it has already happened a lot. In short, anything could be a trigger; anything might be the last straw. But art, like the media, has as its higher purpose the construction of conversations about the world we live in, and how we live within it. So if there is a way to address suicide directly in the news — with the aid of suicide prevention experts; with a fully conveyed understanding of the context in which suicide operates; and with absolute respect for the families and friends each instance affects — I think we need to take it. To do otherwise, for me, is to leave each victim as alone in death as they surely felt in the lives they chose to end.

    And honestly, that’s just not something I can live with.

    April 25, 2009

    New York City: A Fully Realized Social Discourse

    Posted in Uncategorized at 10:30 am by Maggie Clark

    Barthes’ Mythologies came quickly to mind as I rode into New York City for the first time this past week, recessed in the back of a cab. The seats were low, with loops of sturdy material bolted to the sides for people who needed help getting up from them. A thick plate of protective plastic separated the front from the back, with a credit and debit machine mounted on our side, along with a slot for hard cash. Before me a touch screen jumped to the local news, coupled with messages from the mayor about cab service developments.

    That’s when it hit me: I was looking at a fully realized social discourse — the kind I would see played out again and again over the course of my short visit to the Big Apple. Very clearly, in everything from the make-up of the car to the make-up of the whole cab fleet, you could see where different needs and wants were engaged and applied to real-world solutions: The plastic both for the protection of the cabbie and the privacy of the passenger; the low seats to further complicate robbery attempts and allow maximum room for the passengers; the touch-screen to help with long traffic times and, at least incidentally, to give passengers a sense of direct connection with the city they’re driving through. And you could see, too, the compromises in this arrangement: the loss of possible friendly interaction between driver and passenger, the loss of spatial control on the part of the passenger in lieu of driver empowerment. It was, in short, a semiotic wet dream. And it wasn’t alone.

    In fact, everything in Manhattan (and what parts of Brooklyn I encountered) bore with it the social history of its development. Far beyond the striking presence of Greek and Gothic Revivalism, to say nothing of other Neoclassical structures in the basic architecture of the towering down- and uptown cores, there were still other, humbler entrenchments of social discourse plainly visible for anyone on the street. These included the newsstands — permanent entrenchments on every other corner which highlight the centrality of print media and convenience items to pedestrian life — and the billboards, the digital displays, the street vendors — all of which made every space on the street a possible zone of interaction between individuals, other individuals, and commercial products. The digital displays especially had a striking engagement with their surroundings: in the subways and Grand Central Station they made use of existing dimensions or constraints on building — whether it be an ad fitted to the space allowed by a wide beam, or a digital image projected on a towering marble column — and in so doing minimally affected their surroundings. But much could also be said about the subways themselves, which, though complex, have developed in such a way as to attend to both the needs of local commuters and those who need to move quickly over a sprawling city landscape.

    And then there are the people. So much is made of the stereotype of angry or arrogant New Yorkers that I was truly humbled to discover how staggeringly polite and open to engagement so many are. I know I was only in the city for four days, so my impression is anecdotal at best, but when an old lady railing on about the crowd’s need to find Jesus stops long enough to help direct me to the entrance of a bank, I can’t help but take pause. People responded with patience and a friendly demeanour when I needed help finding my way; I got the occasional apology from people who had to cut into my path; and I struck up conversations with immaculately dressed strangers who responded with openness in turn. And everywhere there was talking, talking, talking — the city never loses that animation, that constant interplay.

    Yes, I also encountered the occasional arrogant, impatient, or just plain rude personality; I am in no way denying that they exist. But what I also discovered, in the way of courtesy, paints a much more intricate picture of New York life — and also, I suspect, highlights an underlying factor that allows both politeness and abrasiveness to flourish to such extremes in the same community: in a city so replete with fully realized social dialogues, individual self-realization is itself given tremendous life.

    I’d like to say that my home town, Toronto, has similarly realized dialogues — and I’m sure there are a few. But the great pleasure of travelling to a place so alike one’s own in many ways is that the ways in which the two differ become patently clear. And in the U.S. for the first time in my life — in New York, of all places, for my first visit to the U.S. — what I came away with most of all was a sense of a standard being set. Is New York perfect? Not at all. Are these social dialogues, so concretely established in city life, finished? Not even close. (As I especially discovered when matching up the mayor’s transit and cab ads for a New York public school survey with this article in the New York Times, about the mayor’s conflicting responses to the opinions of others.) But the arena for these dialogues is so concretely defined that I can’t help but think Toronto’s own engagement with its own issues — everything from homelessness to education to worker safety to commercialism to identity to multicultural interaction — a shadow of this southern self-realization.

    What am I asking for here — is it newsstands on our downtown street corners? Touch screens in our cabs? Mayoral addresses in the form of ads across the city? Maybe. I honestly feel one can do little better than New York has done in terms of the entrenchment of news media (and, with it, more direct and constant civic engagement).

    But beyond that, in a broader sense, I suppose I’m calling on a sense of self-confidence, made manifest in all our decision-making as a community. It is this self-confidence that I think creates the New York stereotypes — the loud-mouths, the arrogantly opinionated — but these are necessary extremes of a system that allows, as well, much in the way of a quieter, more nuanced self-assurance. And, however anecdotal my experience may be, it is indeed my experience: These more nuanced displays of self-assurance exist.

    It is, in essence, a matter of effective argumentation: Arguments do not need to be made loudly, or arrogantly, to achieve their full effect, but they do require confidence. Though many are indeed made loudly and arrogantly (especially among pundits!), there was just as much self-assurance in Socrates’ form of address — for though his route was a constant line of questioning, it was still his route, and by holding fast to the collection of beliefs and approaches that were his own, he enacted precisely the same self-realization.

    What would our writing look like, our cities look like, our social dialogues look like, if as individuals and as communities we were brave enough to make decisions and hold fast to them — and equally brave enough, too, to make other decisions, and hold fast to them, if the first decisions proved ineffective?

    I ask this especially because we as Canadians often pride ourselves on our humility, our politeness, and with it, our tact and discretion. That these values are only true to life at the best of times for us, and most other populations, is a moot point: The stereotype of the “nice” Canadian is what it is. And yet, it’s also not — for on matters of U.S. interest Canadians have absolutely no qualms about speaking our minds, and investing with a sense of higher-than-thou authority our “outsider’s” perspective on events therein. I was guilty of this. I am guilty of this. But after visiting New York, I hope not to be as guilty of this in the future.

    For after visiting New York, and seeing firsthand what a fully realized social discourse can look like, I feel more motivated than ever to realize the same strength of community conversation here at home — and not just about the U.S., heavens, no: About us. About Canada. About all the ways in which the whole wide world intersects with us and our own.

    I have no idea what such a fully realized discourse will look like. I only know — I only feel — that we’re not there yet.

    And I want to be a part of it.

    April 18, 2009

    The Heart of the Matter: A Shifting Social Discourse

    Posted in Global discourse, Public discourse tagged , , , , , at 2:57 pm by Maggie Clark

    A very important transition is occurring in North America, and I suspect it will still be another year or so until we grasp its full implications. Just a few weeks back, Chinese financial leaders suggested replacing the dollar as the world’s standard currency with a global reserve currency, and UN economists have since backed the proposition. This move would mark a shift away from the U.S. as the source of global financial stability, and towards a preexisting global discourse that will at last be given its own voice, even if North America still plays a large role in the debate.

    I suspect the same is very much true for socio-religious discourse: While George W. Bush was in office, the rise of right-wing Christianity in conjunction with the U.S.’s wars in Afghanistan and Iraq launched a polemical debate between Christians and Muslims — a West-meets-Islam, “U.S.” vs. them affair. Moreover, the rise of a particular brand of Christianity — politically motivated Evangelical Christians — created in its own right a series of related conflicts on the home front, such that Evangelical resistance to the theory of evolution in classrooms, to global warming in government policy-making, to expansive rights for women and the LGBT/IQQ community, and to various issues pertaining to “morally acceptable” content on national airwaves garnered excesses of media attention and political sway.

    Now, though the politically-motivated Evangelical Christian community still amounts to a sizable social force, the media portrays a very different, more long-standing socio-religious battle: the conflict between Israel and the Arab world.

    In this ideological warfare, North America undoubtedly still plays a crucial role, but in the last few years that role has shifted from proactive engagement to passive response. The U.S. has always been deemed pro-Israel, regarding the country as a beacon of hope for stability and the eventual spread of democracy in the Middle East. At the same time, however, the U.S. relies upon strong business relations with nations in the Arab world, and to this end has supplied many such countries with arms and money, and propped up dictatorships that suited its interests. Its involvement in the region has always been self-motivated.

    Post-9/11, that involvement necessitated a stronger alliance with those who would fight against U.S. enemies in Afghanistan; later, it also meant stronger alliances with those who would support Americans in Iraq. But times have changed. Immigration from the Arab world into Europe has created stresses from which controversial national leaders and extreme anti-foreigner stances have emerged. The two-state solution between Israelis and Palestinians, once a viable discourse with its very own “road map” to peace, is no longer a welcome solution for many in the region. And here in North America, every political decision is becoming increasingly mired in questions of perceived Islamophobic, Zionist, anti-Semitic, pro-Israeli, pro-Palestinian, anti-Israeli, anti-Palestinian, pro-terrorist, and anti-terrorist allegiances.

    This is not, by any stretch of the imagination, to argue that these terms weren’t bandied about before — of course they were. But what has been lost in recent months, in the socio-religious context, is any sense of North American values having relevance in the debate. Even terrorism is no longer engaged as something to be feared anew on home soil; rather, such terms, like their aforementioned brethren, time and again reroute discussion to the matter of the Middle East.

    An excellent example of this arose quite recently, in the matter of George Galloway. Galloway is a five-time British MP expelled from the Labour party for extremely controversial comments made in response to Britain’s invasion of Iraq. He has toured Britain and the U.S., working with many causes: some clearly humanitarian, many others complicated by statements that have brought UN condemnation upon him, and actions that have blurred the lines between humanitarian aid and front organizations for personal gain. (I won’t make a habit of this, but there are so many controversies pertaining to his views, actions, and travels that I’m going to recommend reading his Wikipedia entry — no single mainstream article on the man comes anywhere close.) On March 20, 2009, he was denied entry into Canada on the basis of his ties to Hamas: though he has gone on record stating that he does not agree with Hamas, Galloway gave the Hamas-led government in Gaza $45,000. As Hamas is on Canada’s list of terrorist organizations, this was enough to deny him entry, though Canadian immigration ministry spokesman Alykhan Velshi’s comment on the issue is a little more dramatic than that:

    The Telegraph — Immigration ministry spokesman Alykhan Velshi said the act was designed to protect Canadians from people who fund, support or engage in terrorism.

    Mr Velshi said: “We’re going to uphold the law, not give special treatment to this infandous street-corner Cromwell who actually brags about giving ‘financial support’ to Hamas, a terrorist organisation banned in Canada.

    “I’m sure Galloway has a large Rolodex of friends in regimes elsewhere in the world willing to roll out the red carpet for him. Canada, however, won’t be one of them.”

    Galloway contested the ban and lost, but got around the ruling by being broadcast via video-link from New York to Canadian venues. And so life went on, with the news turning to “Tea Parties” in the U.S. and Canadian outrage over the Afghan rape law. Yes, we have plenty of political matters to attend to at home; there is no shortage of issues. But the question posed by the high-profile case of Galloway — to say nothing of audience reactions to North American portrayals of recent Israeli-Palestinian disputes and Somali pirates — remains: Which of these cultural wars looms largest? Not in the world at large, per se, where so many are played out every day — but here, at home, in North America? Does our ultimate socio-political investment lie with our home turf, and all the multicultural challenges upon it, or quite literally with foreign lands, and the conflicts waged there instead? If the latter, does this tie our future directly to their outcome? What are the implications (not necessarily negative!) of a national discourse set primarily by happenings on foreign soil?

    April 17, 2009

    Making allowances for human nature

    Posted in Public discourse tagged , , , , at 8:28 pm by Maggie Clark

    What better way to spend Easter than reading the Bible — am I right?

    It’s not the most likely thing for an atheist to say, but I’ve been mulling over the application of Bible verses to contemporary beliefs: in particular, as they pertain to Evangelical stances on issues like climate change. As a friend sagely reminded me, in all religion the culture comes first — then canon is interpreted to fit it. This is why so many Bible verses might be accepted in one generation and ignored in the next: other aspects of human culture change, and with those changes, our engagement with the original texts is also transformed. (One need look no further than the treatment of slavery in the Old Testament to recognize that Abrahamic faiths pick and choose which “hills” they’ll defend in public practice; other, social factors play into the application of faith.)

    And yet, alongside reading the Old Testament, this past weekend I picked up an installment of the CBC Massey Lectures — five short lectures from the 1980s by Nobel Laureate Doris Lessing, best known for literature with strong political and feminist leanings. This volume of hers, Prisons We Choose To Live Inside, tackles a most curious social juxtaposition: the fact that we are, as a civilization, more aware than any generation before us of overarching trends, tendencies, and themes in human nature — and yet just as unable, as individuals, to apply this knowledge to our everyday lives. Lessing herself was drawn into a Communist party as a young woman, in direct response to the egregious abuses of power she witnessed under white-minority rule in her childhood home of Rhodesia (now Zimbabwe); but, as she develops over the course of her lectures, there was just as much propaganda and groupthink necessarily at work within her chosen group as within the corrupt society those Communists were striking out against.

    From such personal experiences and relevant academic experiments, Lessing develops the argument that all groups have this propensity towards thinking themselves in the right and all dissenters in the wrong — and that this righteousness furthermore flies in the face of the temporary nature of all human resolution. However, Lessing argues, if we were only to make ourselves more aware of the transience of our beliefs — and more willing, too, to accept as human nature the inclination to various trains of thought (polemic argument, for one, and with it an “us vs. them” mentality) — we might be able to maintain more critical thought even as time entrenches us in one camp, or one label, above all else. We might even be able to make a greater difference in the world: Lessing writes at one point about how the broad condemnation of war will never suffice to eradicate its existence if we don’t acknowledge and accept that some people do, and always will, actively enjoy the exercise of war itself. These more complex analyses are harder, yes, but likely more useful in effecting real-world change, and so at the very least merit an attempt.

    But to return to the Bible: Lessing notes that religious and political beliefs share a common propensity towards absolutism and fanaticism — an observation we are all too often loath to make, though accepting this similarity might help us learn to better converse with those whose viewpoints differ from our own. The depressing truth is that most people are so long trained in empty rhetoric, so short on the experience and tools needed to engage in formal debate, and most of all so comfortable in their own righteous certitude as to see no reason to second-guess their way of thinking, that even getting everyone to engage in open dialogue is a pipe dream in and of itself.

    And yet, let’s say it could be done. What would that look like? How would it be achieved?

    These are the questions I was asking myself while poring through the Old Testament this Easter, because I’m still holding out hope that some measure of formal debate might be attained if we in the media are willing to engage believers on their “home turf.” The problem is: is that home turf the religious texts themselves, or the empty rhetoric that often passes for argument in public spheres? (I’m referring here to the singing of songs in response to critical inquiry, the rattling off of catch-phrases, and all in all the extreme use of circular and straw-man fallacies to avoid scholastic scrutiny of the verses themselves.)

    In the case of climate change, my starting point was simple: Is there any reason Christian Evangelicalism can’t be united with theories of climate change? For many years now, a culture of vehement denial has been maintained in these communities, but why? Does climate change necessarily threaten the precepts of Christian belief? Is it necessarily a challenge to the faith of so many Americans?

    From what I’ve been able to discern, there are a few places — some obvious, some less so — where the existence of climate change seems, at least on the surface, to be a threat. The most obvious is a sense of entitlement: Many believe their god gave them this land, and all that exists upon it, to do with as they would. This permits a rather regal lifestyle upon the earth — one in which the fruit of one’s labour may be applied to whatever one deems fit. If climate change has a human origin, and with it comes the cry for the curtailing of excess, this would to many seem a direct challenge to that entitlement. Worse still, it threatens a sense of hierarchy on the planet: God, then man, then the beasts of the earth, then everything else. If the preservation of one species suddenly trumps man’s full enjoyment of god’s gifts, how can that not be considered a threat?

    This is where Bible-reading comes in: I wondered if that entitlement were as textually concrete as many Evangelicals make it out to be. True, in the Genesis story the world is created with man its crowning achievement… but that’s Eden. And humankind gets kicked out of it. Much of the New Testament ennobles man’s place at the top of the planetary food chain, but there’s really nothing to suggest that man should feel entitled, after the Fall, to a world as stable and nurturing as Eden. And, after all, Christian nihilists (those who see no intrinsic good in humanity, or this life, without the presence of a god) already regard this world as bleak and secondary — so why can’t the instability of the environment, and human responsibility for the quality of the land they live on, be reconciled with Evangelical thought?

    I suspect the answer lies in a deeper threat felt by Evangelicals: namely, that climate change — and with it, the threat to the stability of human life on Earth — has grave consequences for proponents of intelligent design. Evolution presents elements of the world, and all who dwell within it, as “just good enough” — with first successful drafts, as opposed to perfect creatures, being the product of evolution. But intelligent design is argued from a position of precision and perfection, with the human eye especially (bewilderingly, too, for it has many weaknesses and blind spots) used to argue for the “impossible complexity” of the world we live in. From this standpoint, it’s easy to see where climate change can be threatening: If humankind could so easily tip the balance so as to make the world inhospitable, so much for that perfect construction!

    And yet, here too, it’s so easy to spin the message to fit Evangelical parameters: God gave us a world built so that its fate is determined by human action. Gay marriage = hurricanes, floods, and stabbing deaths on Greyhound buses (okay, that last is a little extreme). Gluttony and greed = deforestation, unchecked industrialization, and climate change. Causal, not just correlative, relationships are the lifeblood of much religious thought: in a sphere of argumentation that already permits leaps of faith to fill in where empiricism fails, there is no intrinsic reason for Evangelical belief to side against the existence of climate change.

    So where does this leave the matter of critical discourse? Well, if it were possible to foster open dialogue about such issues, the aforementioned route seems the likeliest to succeed. But more importantly, I think it has to succeed: in the last week alone we’ve seen much in world news highlighting the need to address intersections between religion and human rights, but still the topic remains taboo. Why? Is it really impossible to talk about the differences between religion and culture, group and individual, or contextual and universal rights without brewing a maelstrom of polemics, empty rhetoric, and broad accusations of various -isms and -phobias from the general public?

    Lessing would argue that it is impossible to avoid these manifestations of human nature — but that even then, it is still possible, with an awareness of past behaviours and social constants, to react to these inclinations in a way that counteracts what would otherwise have us forever defining ourselves, and others, in uncompromising blacks and whites.

    I really hope she’s right.
