I didn't want to study suicide.
Mainly because suicide is a bummer of a topic. It stirred up unpleasant memories from adolescence. And whenever I talk about it, the first thing everyone does is get quiet - then they get concerned about my well-being. Which is nice and all, and I appreciate it. But after working on this stuff for a few years, I would forget the level of emotional charge the topic has, get really excited about some finer point of data analysis, and come off sounding callous when really all I wanted to share was this exciting little piece of the puzzle.
On the other hand, epidemiologic studies of suicide go way back (to Durkheim in 1897, and before him Morselli in 1881), and unlike most health conditions associated with sexual orientation, suicide has been measured in a consistent way across the whole population for an extended period of time. So, in a sense I was stuck with it as the only health outcome that had both geographic and temporal scope, which is what I needed to look at normative heterosexuality.
So anyway, as I mentioned before, I wanted to look at how heteronormativity (a shared set of assumptions about sex, gender, and who ought to be having sex with whom) affected suicide rates.
At first, I wanted to find a data set where I could compare gay men, lesbians, and bisexuals to heterosexuals. But death certificates don't carry that kind of information. And as I got to thinking about it, even if they did, how reliable could it be?
And that got me to thinking, maybe the sexual orientation of these people is really beside the point. Perhaps the stresses associated with dealing with assumptions of heterosexuality are greatest among people who don't identify as "gay" anyway.
So, the first study I did was to look at gay rights laws as a measure of heteronormativity, the idea being that in order to enact a gay rights law, politicians have to believe that public opinion is such that they'd be better off protecting sexual minorities from discrimination than not. The first gay rights laws were enacted in 1973, in San Diego and Austin, I believe. In 1981, Wisconsin was the first state to pass a gay rights law, and by 2003, most of the country's population lived in a jurisdiction with a gay rights law. (the gray map there has a nifty time-lapse).
I looked at three levels of gay rights protections, in order to get something like a dose-response curve - the red areas had no protections whatsoever, the green areas were protections for public sector workers only, and the blue areas had protections for both public sector and private sector workers.
And the results here are pretty compelling - at least for White males, particularly adolescents, young men, and the elderly.
Each color in this graph represents a different age group. So, among White males aged 15-19, suicide rates were 179 per million in areas with no gay rights protections, 155 in areas with protections limited to the public sector, and 131 in areas with protections for all workplaces. The only group without a step-wise dose-response was White men aged 45-64.
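For concreteness, the step-wise pattern can be restated as rate ratios, using areas with no protections as the referent category. This is just a back-of-the-envelope sketch (the function and variable names are mine, for illustration only; the rates are the ones quoted above for White males aged 15-19):

```python
# Suicide rates per million among White males aged 15-19,
# by level of gay rights protections (figures quoted in the text).
rates = {"none": 179, "public_sector_only": 155, "all_workplaces": 131}

def rate_ratio(exposed, referent):
    """Ratio of two incidence rates; a value below 1.0 means the
    exposed group has the lower suicide rate."""
    return exposed / referent

# Using areas with no protections as the referent:
rr_public = rate_ratio(rates["public_sector_only"], rates["none"])
rr_all = rate_ratio(rates["all_workplaces"], rates["none"])

print(f"RR, public-sector-only protections vs. none: {rr_public:.2f}")
print(f"RR, all-workplace protections vs. none: {rr_all:.2f}")
```

In other words, the quoted figures work out to rates roughly 13% lower in public-sector-only areas and roughly 27% lower in all-workplace areas, relative to areas with no protections - which is the dose-response shape described above.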
Among White women, the first thing to notice is that suicide is less frequent, and unlike among men, it doesn't rise among elderly White women. The decline in suicide rates with increasing levels of gay rights protections is also less pronounced, but there are declines in each of the age groups under 45.
Suicide is less common among Black men than White men in the US, but is still pretty high. And unlike White men, the peak incidence of suicide is in younger age groups. But what is strikingly different is that the highest suicide incidence among Black males is in areas with the highest levels of gay rights protections. This suggests to me that public opinion about homosexuality among Black populations may not track public opinion among White populations from the same area - and presumably the enactment of gay rights protections is, in most jurisdictions, reflective mostly of White public opinion. I'd love to do an analysis based on what might be a better measure of heteronormative assumptions in Black communities. Any ideas?
Among Black females, the incidence of suicide is lower than the other populations above, and like White females, declines among older women.
The differences between areas with and without gay rights protections are not large, but in general, suicide rates among Black women tend to be slightly higher in areas with gay rights protections. So these results also raise the question of whether gay rights laws are a good measure of heteronormativity for all populations. Alternatively, the social forces leading to suicide may not be identical among White and Black populations - perhaps heteronormative assumptions cause more distress in White populations, particularly among White males, while economic issues and racial discrimination play a larger role in Black populations.
Another consideration is that perhaps the stresses induced by heteronormativity are largely related to the performance of masculinity, which is why men turn violent against themselves under these pressures. Perhaps men under heteronormative pressures also direct violence outwards towards the women closest to them, and thus homicide, rather than suicide, might be a more strongly related outcome among women. That's foreshadowing to an analysis I'm thinking about doing next...
The patterns I noted are virtually unchanged after adjusting for a wide variety of potential confounders, namely population density, region of the country, unemployment rate, poverty rate, and measures of social isolation (proportion living alone, proportion who moved in the last five years).
Also, when I looked only at those areas that changed status (went from no protections to having gay rights protections), the same trends held up. So, to explain these results away, some other factor would have to be changing at the same times in the same places, which seems like too much of a coincidence to be plausible.
The trends above are very similar when I looked at how people vote on the restriction of marriage to "one man and one woman" as a measure of heteronormativity, but as I mentioned before, the strong trend towards people being less likely to endorse a restrictive definition of marriage makes this measure a bit more complicated, so I'm trying to figure out how best to represent it.
I'm Bill. These are my observations on queer health, and other things I care about for one reason or another. Tuna was my adorable dog, a companion of 16 years.
Thursday, December 17, 2009
Sunday, December 13, 2009
After I Left AIDS - Part II (Thesis)
So, after I left AIDS, I got thinking about how homophobia, as a societal norm, affects health. Not just queers' health, but how it also affects the health of the whole population.
In my last post, I talked a bit about my journey through thinking about health disparities, and how nobody seemed to be measuring the causes of these disparities. That leads directly to my doctoral thesis, which was about how to measure normative heterosexuality, and from there, estimating the impact of it on suicide. Not just on "gay" suicide, but suicide in the whole population, and also in various sub-populations defined by sex, age, and race/ethnicity.
So, following the lead of thinking about residential segregation by race/ethnicity, and income inequities, I began thinking about how to measure normative heterosexuality, the presumed cause of the health disparities that epidemiologists had begun to document with greater and greater precision.
How do you measure the degree to which a group of people (a large group of people) share a rigid set of beliefs about sex, gender, who ought to be having sex with whom, and how? My first thought was that the frequency of hate crimes directed against gay men would be a good measure. If this set of rigid beliefs dominated a social setting, then the informal "enforcement" of those beliefs would be enacted through the commission of bias-motivated crimes, presumably mostly by young men with "something to prove".
When I pulled the data down off the FBI's Uniform Crime Reporting (UCR) System, I quickly realized something was amiss. San Francisco had by far the highest number of anti-gay hate crimes in the country, and several Southern and Mountain states reported not a single one.
I've put more recent statistics by state in a table, based on numbers from 2004 to 2008, the five most recently reported. Basically the same trend holds - bias-motivated crime tends to be higher in places we think of as gay-friendly, and extremely low in the deep South. Then there are also strange jurisdictional oddities - Pennsylvania for example appears to have an extraordinarily low rate of bias-motivated violent crime.
The way I've come to understand this data is that it represents not the phenomenon of crime occurring, but rather two phenomena: 1) how comfortable victims feel about reporting a bias motivation to law enforcement, and 2) local law enforcement customs and legal constraints around recording and validating these reports. If it were just the first of these, then one could use the reporting of hate crime as a measure of homophobia at a societal level - that is, the more hate crime reported in an area, the less homophobia there is there, as perverse as that sounds. But alas, that second factor, particularly the jurisdictional quirks in how different local law enforcement agencies deal with the reports made to them, really throws the whole thing off.
So, I couldn't use hate crime statistics. But maybe I could use the presence or absence of a law for reporting hate crime statistics that specifically included sexual orientation. Or, how about the presence or absence of a law prohibiting discrimination on the basis of sexual orientation?
So, the next thing I looked at was which states had gay rights laws, and when they were enacted. Various states have enacted gay rights laws over the years, the first being Wisconsin in 1981, a few more in the late 1980's, and a lot during the 1990's. Recently, state-by-state gains have slowed considerably, as gay activists have pressed for a national law (ENDA), or been distracted by the marriage thingy.
The point for my purposes is that the enactment of state-wide gay rights laws has been a pretty hotly-contested issue, debated for years within each state's legislature, rather than by a small cadre of legalistic judges, or the flash of public opinion of a referendum. As a result, the enactment of a gay rights law represents something of a local watershed, the point in time at which the balance of adverse consequences for elected officials switches from a net negative to a net positive.
So, looking at the enactment of gay rights laws seemed to hold promise, at least from a theoretical perspective, as a good measure of the broad social environment of a State in regards to the level of normative heterosexuality.
Another potential measure of normative heterosexuality is public opinion polling. The gay rights law thing seems a bit crude - a yes-or-no variable to measure something which I claimed varied by degree from one place to another, and from one time to another within those places. Public opinion polling, on the other hand, offered the promise of a finely-tuned measure of normative heterosexuality. There are some relevant questions that have been asked the same way for decades. For instance, Paul Brewer has examined the time trends in how Americans feel about the "wrong"-ness of same-sex sex: disapproval increased during the AIDS years, then dropped precipitously in recent years, with the majority of Americans now saying it is not "always wrong" (small consolation, that!).
So, public opinion polling looks like it might be a better "thermometer" to gauge how people feel about homosexuality. And there is longitudinal data to work with, so I could look at changes over time.
On the other hand, public opinion polls, by design, ask the smallest number of people possible in order to get accurate results. Thus a "large" national poll might have only 500 respondents. The GSS from which the data above is generated is a good bit larger than that, but still it is only a few thousand in any given year. A few thousand sounds like a lot of people, but what I needed to do was compare across places, not just time. So a few thousand breaks down into a few dozen in some states, and in others, fewer than ten. It would be a stretch to characterize the whole State of Connecticut based on how 15 randomly chosen people answered a question (for the record, I'm pulling that number out of thin air, but that's about what it comes down to).
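To make that sample-size problem concrete, here's a back-of-the-envelope sketch (my own illustration with made-up numbers, not actual GSS figures): the approximate 95% margin of error for a sample proportion balloons as the state-level subsample shrinks.

```python
import math

def moe95(p, n):
    """Approximate 95% margin of error for a sample proportion,
    using the normal approximation (a rough guide only, and
    increasingly shaky for tiny n)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Suppose 60% of a state's respondents endorse some attitude item.
# Compare a full national sample to the handful left in one state:
for n in (1500, 50, 15):
    print(f"n = {n:>5}: 60% +/- {100 * moe95(0.6, n):.0f} points")
```

With 15 respondents the margin of error is on the order of 25 percentage points either way - which is why characterizing a whole state from such a sliver of a national poll is hopeless.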
So public opinion polling left me stuck, too. Good temporal trends, but lousy geographic specificity.
A related idea was to look at how people voted on anti-gay referenda, such as the Briggs Initiative in California in 1978, Measure 8 in Oregon in 1988, and Colorado's Amendment 2 in 1992. These explicitly anti-gay referenda had the advantage of high geographic specificity, presumably accurate down to the precinct level, but represented a snap-shot in time. Also, they represented a small number of states, and the questions addressed in each one were quite different.
While I was working on my thesis, though, another opportunity to think about voter referenda came up: same-sex marriage. Although same-sex marriage has been contested in U.S. courts since 1970, it had never gotten much notice one way or the other - the Christian right didn't feel threatened by it, and most gay activists thought marriage was a non-starter politically, or at any rate a horrid reminder of heterosexuality run amok that should not be emulated.
But in 1998, Hawaii and Alaska voters chimed in on same-sex marriage, a few more did in the 2000 and 2002 elections, and then the 2004 election was swamped with voter initiatives to restrict marriage, in part a cynical manipulation by Republican Party operatives in order to keep their guy at the helm.
These referenda share the problem that opinion polling data have, in that they are a snap-shot in time (except for a few states which have had multiple referenda on this issue), but there were major advantages. For one thing, the question being asked was nearly identical in every state, some slight variation on whether legal recognition of marriage should be restricted to "one man and one woman". As an aside, no state has yet offered to restrict marriage to "one woman and one man" - something to consider when thinking about marriage as a forum for liberty and equity. And, the geographic scope was huge, with most states chiming in on the issue one way or another. The map I made here shows how different areas voted, from strongly in favor of restricting marriage (red) to being against restricting marriage (dark green).
On the whole, this map comports more or less with what one would expect: there's more red in the rural areas, more green in urban centers and on the Pacific coast, and a trend towards more green in the Northeast. But there are some unexpected spots, too, such as South Dakota, which was substantially less in favor of restricting marriage than its neighbors Nebraska and North Dakota. And Arizona, which was the first state to reject restricting marriage, in 2006 (alas, they went to the dark side in 2008).
So, there are some tricky issues to deal with in using this data. I haven't quite figured out how to make it comparable across time periods.
The final method I've thought of for measuring normative heterosexuality is using counts of same-sex couples. The number of same-sex couples was counted (albeit inadvertently) by the U.S. Census in 1990. The 2000 Census did a better job of it, and the upcoming 2010 Census is expected to do better yet.
In any event, the number of people who identify themselves as married or unmarried same-sex partners in the Census is probably mostly a function of three forces: 1) how comfortable people in same-sex couples feel identifying themselves as such on the Census forms; 2) the degree of selective in-migration and out-migration of people in same-sex couples (or destined to join one); and 3) the degree of confusion by people in mixed-sex couples who inadvertently identify themselves as same-sex partners.
The first two of these factors (net migration and comfort identifying as a same-sex couple) are related to what I want to measure - how accepting an area is of homosexuality. The third factor is a pain in the butt, and not in a good way. I've discussed that issue at length before.
So, counting same-sex couples has two huge advantages: it uses the same methodology for the entire United States, and you can get comparable data down to the neighborhood level (census tracts). On the other hand, the data itself has some big caveats - it doesn't capture young people, single people, or couples living in separate residences, and it is essentially useless for older people (for reason 3 above). And although there will soon be three time points to compare, the methodology has changed with each Census, and it remains to be seen whether the 2010 Census data will be comparable to the 2000 Census data (probably not, though for the happy reason that the methods are becoming more accurate).
So, in the end, I decided to pursue three measures of normative heterosexuality further:
1) The enactment of gay rights laws,
2) How people voted on referenda to restrict marriage to one man and one woman, and
3) The proportion of same-sex couples identified in the Census.
More to come...
Wednesday, December 9, 2009
After I Left AIDS - Part I
About a month ago, I wrote about Why I Left AIDS, but didn't get around to what I'd moved into.
While I was working in gerontology, and started taking classes again in public health, I was trying to figure out what I wanted to do research on. I knew it wasn't HIV/AIDS, and most of the other health outcomes related to gay men (suicidality, depression, substance abuse) were kind of downers. The depression bit hit close to home, and the substance abuse felt completely foreign to me, so I didn't really know where to go.
At the time, in gerontology, I was working on a variety of measures of regional variation in social conditions to try to explain health disparities. We had noticed a big difference in the occurrence of pressure ulcers (bed sores) by racial identity. While it was interesting for me to crunch large datasets, and to work with colleagues to figure out a narrative that might explain the health disparity, documenting the disparity and theorizing about why it occurred seemed unsatisfying. I wanted to measure the cause, not just the effect.
I had also been a teaching assistant for the epidemiology class at Brown for many years at that point, and we always chose an article about the link between residential segregation along racial lines and some health outcome or another, usually birth weight or premature delivery. The idea was that racial segregation, the separation of people in space, reflected social segregation, or the history and current strength of racial hierarchical ideology.
So it was a natural connection to say, hey let's look at whether the health disparity in bed sores is larger in cities characterized by high levels of racial segregation than it is in cities where people are more evenly distributed.
Unfortunately, we never got around to writing that paper (I don't think we even got to the analysis stage before I moved on), but the point is, I spent long hours figuring out how to measure, in a quantitative sense, the racial segregation of where people live, and also the levels of economic disparity (gap between the rich and poor), and how these measures vary across the U.S.
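For readers curious what "measuring segregation in a quantitative sense" can look like, the classic starting point is the index of dissimilarity (Duncan & Duncan). I'm not claiming this is exactly the measure we used in that work, but here's a minimal sketch with made-up tract counts:

```python
def dissimilarity_index(group_a, group_b):
    """Index of dissimilarity across areal units (e.g., census tracts):
    the share of either group that would have to move to a different
    tract to make the two groups' distributions identical. Ranges from
    0 (perfectly even) to 1 (complete segregation)."""
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Hypothetical tract-level counts for two groups in a small city,
# with each group concentrated in a different pair of tracts:
black_pop = [900, 800, 50, 40]
white_pop = [100, 200, 950, 960]
print(f"D = {dissimilarity_index(black_pop, white_pop):.2f}")
```

A city like the hypothetical one above comes out heavily segregated (D over 0.8), while a city where the two groups were spread evenly across tracts would score near zero - which is what lets you rank cities from most to least segregated.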
Roughly at the same time, I began to realize that I wasn't so interested in documenting the extent of the health disparities between gay men and straight men, or lesbians and straight women (there was almost no literature on bisexuals, and even less on gender identity); I was interested in measuring what causes the health disparities that do exist.
At first, I tried to think about measuring homophobia in high schools. In my own life, high school was definitely the most homophobic environment I had survived, after all.
I figured that one way to do it was to ask a wide range of students who had graduated and gone on to college to rate their high school environment with regard to homophobia. Having gone to college, they would have at least one other environment to compare it to - some perspective. And by asking them about the school environment, rather than their personal experience, queer kids and straight kids alike would have valuable and relevant insights and perceptions on the issue.
I developed a 20-some odd page questionnaire and tested it on maybe 40 or so Brown undergrads, queer, straight, and in-between. At that point, it was an exercise for a survey design class I was taking, so I wasn't particularly interested in scientifically important questions like inter-rater reliability; I had a much more mundane purpose: did recent high school graduates feel like these questions made sense? Were they salient? Were they getting at what I intended them to get at? And was I missing anything?
It was a great experience (for me, anyway). And the questions did make sense (most of them), they were on target (most of them), and there were a few things I had missed. I was convinced that it was worth taking it to the next stage.
I began thinking about how to use it. It was one thing if one could describe the social environment, it was another to use it to predict health or health behaviors. In conversation with a student (Marc), we had an idea - which was to measure the homophobia at a series of high schools where more than one student had died of self-inflicted injuries to high schools where no student had died of self-inflicted injuries in many years, and to measure the extent to which the school social environment was infused with homophobia in both sets of schools.
And here's an important point - whether the students were queer, straight, or in-between was not relevant to our plan. It wasn't going to be a study about who had killed themselves, but about what sort of environment drives people to the point of ending their lives.
So maybe you're seeing a thread here already - the vast majority of research literature on queer health is about documenting the bad things that queers (and usually gay men specifically) are at higher risk for. But I wanted to take a different tack - I wasn't so concerned with what the specific health outcomes were, but the cause of them, and specifically, the cause in the sense of the social environment.
And this opened up a new possibility - examining the influence of the perfusion of homophobia in social environments not just on queer people, but on the whole population, on straight people too.
My involvement with ActUp/RI was highly influential in getting me to think about homophobia as a health hazard, but in that context, I thought about it as the reason the government was letting gay men die without saying a word, literally. Or when words were spoken, they would be words of condemnation, threats of quarantine, of judicial prosecution for having an infections disease, of punishment for exposing the "general population" to a scourge that we deserved but they did not.
Instead, I was now thinking about homophobia as a threat to the whole population.
More to come...
While I was working in gerontology, and started taking classes again in public health, I was trying to figure out what I wanted to do research on. I knew it wasn't HIV/AIDS, and most of the other health outcomes related to gay men (suicidality, depression, substance abuse) were kind of downers. The depression bit hit close to home, and the substance abuse felt completely foreign to me, so I didn't really know where to go.
At the time, in gerontology, I was working on a variety of measures of regional variation in social conditions to try to explain health disparities. We had noticed a big difference in the occurrence of pressure ulcers (bed sores) by racial identity. While it was interesting for me to crunch large datasets, and to work with colleagues to figure out a narrative that might explain the health disparity, documenting the disparity and theorizing about why it occurred seemed unsatisfying. I wanted to measure the cause, not just the effect.
I had also been a teaching assistant for the epidemiology class at Brown for many years at that point, and we always chose an article about the link between residential segregation along racial lines and some health outcome or another, usually birth weight or premature delivery. The idea was that racial segregation, the separation of people in space, reflected social segregation, or the history and current strength of racial hierarchical ideology.
So it was a natural connection to say, hey let's look at whether the health disparity in bed sores is larger in cities characterized by high levels of racial segregation than it is in cities where people are more evenly distributed.
Unfortunately, we never got around to writing that paper (I don't think we even got to the analysis stage before I moved on), but the point is, I spent long hours figuring out how to measure, in a quantitative sense, the racial segregation of where people live, and also the levels of economic disparity (gap between the rich and poor), and how these measures vary across the U.S.
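For readers curious what "measuring segregation quantitatively" looks like in practice, here is a minimal sketch of two standard summary measures of the kind described above: the index of dissimilarity for residential segregation and the Gini coefficient for income inequality. The tract counts and incomes are made-up illustration data, not figures from the original work.

```python
# Index of dissimilarity: compares each tract's share of group A to its
# share of group B. 0 = the two groups are identically distributed
# across tracts; 1 = complete segregation.
def dissimilarity_index(group_a, group_b):
    """group_a[i], group_b[i] are counts of each group in tract i."""
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Gini coefficient: summarizes the gap between rich and poor.
# 0 = perfect equality; values near 1 = one person holds everything.
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    weighted_cumulative = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_cumulative) / (n * sum(xs)) - (n + 1) / n

# Hypothetical city: four tracts, heavily segregated.
d = dissimilarity_index([900, 800, 50, 30], [40, 60, 700, 850])
g = gini([15_000, 22_000, 48_000, 250_000])
```

With these invented counts the dissimilarity index comes out high (close to 1), which is the point: a single number lets you compare cities, which is what a study relating segregation to a health disparity needs.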
Roughly at the same time, I began to think that I wasn't so interested in documenting the extent of the health disparities between gay men and straight men, or lesbians and straight women (there was almost no literature on bisexuals, and even less on gender identity); I was interested in measuring what causes the health disparities that do exist.
At first, I tried to think about measuring homophobia in high schools. In my own life, high school was definitely the most homophobic environment I had survived, after all.
I figured that one way to do it was to ask a wide range of students who had graduated and gone on to college to rate their high school environment with regards to homophobia. Having gone to college, they would have at least one other environment to compare to, some perspective. And by asking them about the school environment, rather than their personal experience, then queer kids and straight kids would both have valuable and relevant insights and perceptions on the issue.
I developed a 20-some-odd-page questionnaire and tested it on maybe 40 Brown undergrads: queer, straight, and in-between. At that point, it was an exercise for a survey design class I was taking, so I wasn't particularly interested in scientifically important questions like inter-rater reliability; I had a much more mundane purpose. Did recent high school graduates feel like these questions made sense? Were they salient? Were they getting at what I intended them to get at? And was I missing anything?
It was a great experience (for me, anyway). And the questions did make sense (most of them), they were on target (most of them), and there were a few things I had missed. I was convinced that it was worth taking it to the next stage.
I began thinking about how to use it. It was one thing to describe the social environment; it was another to use it to predict health or health behaviors. In conversation with a student (Marc), we had an idea: compare high schools where more than one student had died of self-inflicted injuries with high schools where no student had died of self-inflicted injuries in many years, and measure the extent to which the school social environment was infused with homophobia in both sets of schools.
And here's an important point - whether the students were queer, straight, or in-between was not relevant to our plan. It wasn't going to be a study about who had killed themselves, but about what sort of environment drives people to the point of ending their lives.
So maybe you're seeing a thread here already - the vast majority of research literature on queer health is about documenting the bad things that queers (and usually gay men specifically) are at higher risk for. But I wanted to take a different tack - I wasn't so concerned with what the specific health outcomes were, but the cause of them, and specifically, the cause in the sense of the social environment.
And this opened up a new possibility - examining the influence of pervasive homophobia in social environments not just on queer people, but on the whole population, on straight people too.
My involvement with ActUp/RI was highly influential in getting me to think about homophobia as a health hazard, but in that context, I thought about it as the reason the government was letting gay men die without saying a word, literally. Or when words were spoken, they would be words of condemnation, threats of quarantine, of judicial prosecution for having an infectious disease, of punishment for exposing the "general population" to a scourge that we deserved but they did not.
Instead, I was now thinking about homophobia as a threat to the whole population.
More to come...
Monday, December 7, 2009
Breast Cancer Screening Controversy
I'm going to be teaching two sections of epidemiology this Spring, one for grad students, one for undergrads.
The grad student version I'm pretty confident about, but I want to change a few things, especially the cumulative paper that I ask the students to write throughout the semester.
The other thing I'm thinking about is pulling in the breast cancer screening controversy, which seems to have long legs, re-appearing in the news on a regular basis. I had been thinking about H1N1, but to be perfectly honest, it hasn't been able to attract my attention (not the way the 1918 war-fueled epidemic did anyway).
For the undergrads, I'm trying out a new textbook (new to me, anyway), which has more pictures. I haven't been able to find a good textbook for undergrad epi, and the worst are the ones that claim undergrads as their target audience.
Anyway, back to breast cancer screening. I think it's a great issue to tussle with. It has a lot of emotionally laden content in addition to "the science". The science itself is complex and fascinating, and really engages all forms of epidemiologic study designs, from case-control studies to massive experimental trials, and concerns epidemiologists have about sources of error and misleading results.
Also, one of the pioneering epidemiologic researchers was Janet Lane-Claypon, who in 1926 did a case-control study comparing 500 women with breast cancer to 500 women without it, and confirmed most of the risk factors that we now know have a large influence on the development of breast cancer. I like having a historical focus in my class, and it bugs me that that usually means reading exclusively male writers in a class that's predominantly made up of women.
I'd also like to include more of the large corpus of early writing from Spanish language authors, but I'm not familiar enough with it, and the few pieces I have seen translated just wouldn't fit well into my curriculum. (Perhaps it's time to expand my curriculum, then!)
But back to screening. I myself didn't think much about breast cancer screening, until my mom got a positive mammogram. It pretty well freaked her, and me, out. Weeks of anxious anticipation were not erased after minor surgery removed what turned out to be perfectly benign calcified lumps. But still, what if it had been cancer, wouldn't it have been good to know earlier rather than later?
The more I've thought and read about it, the more I've come around to a different point of view - it probably wouldn't have been better to know about it earlier. I know that sounds harsh to anyone with breast cancer, and easy for me, given that it wasn't breast cancer. But I don't say it glibly. The unnecessary anxiety, the unnecessary (if minor) surgery - these are not benign side effects. They may be mild inconveniences compared to mastectomy, chemo and/or radiation. But really, how many unnecessary side effects are we generating with screening mammograms, compared to how many treatable breast cancers we detect (that wouldn't be equally treatable after they grew a bit and were diagnosed by other means)? How many breast cancers are detected and treated with highly toxic and invasive methods that, left alone, would never have caused a problem? Those are complicated questions that are technically challenging to answer.
Then, there's also an issue of where we, as a society, spend money. I don't think that costs should be a determinant of what health care people get. In a previous post, I lampooned the idea of doing a cost-benefit analysis of vaccination against HPV. The more effective a vaccine campaign is, the less cost-effective it would be, so it's just silly to do a cost-benefit analysis in the first place.
But at the same time, one wonders if all the attention paid to promoting mammograms as the one thing you can do to prevent breast cancer has crowded out other means of preventing breast cancer. Methods that may be less sexy, and less under an individual's control. Why does preventing breast cancer have to be something each woman does for herself? What about pesticides and environmental pollutants that probably have a very small influence on any one woman's risk of getting breast cancer, but by increasing all women's risks somewhat, have a large societal impact? What about the disparities in the levels of these pollutants that often mimic disparities in class and race in this country? What about addressing the structural poverty and disenfranchisement that keeps women from having symptomatic breast cancers dealt with early on when it's more treatable? These methods at least give men something to do!