Sunday, December 29, 2013

Paid to Take Another's Punishment

I am, by any rational measure, a product of extraordinary privilege. I have a prestigious job. I own my own house. I can walk into pretty much anywhere and be taken seriously.
And yet, even though I can see that those things are true, it often doesn't "feel" like that.
It's not because as a gay man, I feel like a second-class citizen. I don't.
It goes back to high school. Really before that, but high school makes a better story.

St. John's Chapel, Groton School.
I went to a very prestigious boarding school. The same high school as FDR, and half of JFK's cabinet. When I graduated, I was disappointed because I only got into one Ivy League school, one that I (and most of my compatriots) thought of as a "safety" school.
But I wasn't like most students there: I was the son of a teacher, a "fac brat". My parents paid pennies on the dollar for tuition, and everyone knew my place, including me.
One odd tradition they had there was that if you got caught doing something you weren't supposed to do (like skipping services in the lovely chapel shown here), you got assigned to various work duties. The lowest infractions were "punished" by cleaning up the dining hall: wiping down the tables and straightening up the chairs. One of the most severe punishments was washing dishes, a messy, hot, wet job that lasted for hours.
In 10th grade, I figured out that you could make a bit of money by doing other people's punishments for them. I started out charging $10 to do a night's worth of dishwashing; once I realized you could charge even more than that exorbitant rate, I raised my price to $20, and more still if it was a night I didn't want to do it, or if I thought the purchaser was a jerk.
I could (usually) get away with it because these jobs were also ones that everyone had to do on rotation, so the fact that I was washing dishes even though I rarely broke the rules didn't necessarily raise eyebrows. But occasionally, one of the faculty would notice and ask, "Hey Bill, didn't I see you washing dishes earlier this week?" and I'd have to lie low, not taking on any more customers until suspicions died down.
I loved washing dishes, I loved getting messy and wet, pounding the slop down into a trench where it would become feed for the local pig farm; piling the dishes as efficiently as possible into a washing rack; jamming it into the machine, and then yanking the clean dishes off and stacking them in the appropriate piles, throwing the plates airborne as much as possible to minimize skin contact with their scorching hot surfaces. I did it in college too, as a work-study job.
At the time, it didn't feel the least bit demeaning: I was making money and having fun doing it. I even felt a degree of pity for the jerks who paid me to work off their punishments.

I wonder whether they ever felt bad about it. I doubt it. Maybe a little. But they learned a valuable lesson too, one that you see every time a bank settles rather than accepts blame for screwing people over. Just pay up and move on. Maybe it's even the same guys.

Sunday, December 22, 2013

Firearm-related Deaths, United States, 1968-2010

A few months back, I wrote about trends in motor vehicle accidents, and then about trends in hate crime statistics. Now with all the talk about firearm-related deaths I figured I'd look into those a bit.
So, the first obvious thing from the chart below is that there was a large drop in the firearm death rate from 1994 or so to 1999, and it's been pretty level since then. There were ups & downs before that, too.
The next thing I see is that changes in the total firearm-related death rate are closely linked to homicides. The big drop in the late 1990's was due to declines in both homicides and intentionally self-inflicted injuries; trends in homicides and intentionally self-inflicted injuries often follow each other, but not always (especially 2006-2010).
If 1994 sounds familiar, that could be because that's the year the Brady Handgun Violence Prevention Act took effect, requiring background checks for the sale of handguns. I don't know what happened in 1999-2000 to stop that encouraging trend line.

This next chart is a lot busier than it should be, but a couple things stand out clearly when you break the time trends down by age.
First, there are really different trends over time by age. There's an obvious surge in 20-24 year olds dying from 1985 to 1999, but an even more dramatic surge among 15-19 year olds, who start out (and end up) with some of the lowest firearm-related death rates, but really cranked up during the late 80's to early 90's.
All age groups saw a decline during that critical 1994-1999 period.
But when you look a little closer, something else becomes clear: the firearm-related death rates for 35-64 year olds pretty much decline throughout the whole time frame, while rates for 75-84 year olds built up through the 80's, then declined through the 90's, and the 85+ year old group climbed through the 80's but hasn't really come down as much since then.
You may notice a sudden jump in firearm-related deaths among children in 1979; that's actually a fluke due to a change in the coding system (ICD-8 to ICD-9). But the subsequent rise, and dramatic fall, in children's firearm-related mortality from that point on is real.


One of the frustrating things about working with US mortality data is that it's always 3-4 years out of date. I don't know why that's the case; even before there were computers, the delays in getting the death data out were measured in months. But that's a topic for another day...

Monday, November 11, 2013

Where are the Food Deserts?

A food desert is an area where healthy food options are out of reach. You know if you're in one, but it's surprisingly difficult for the Ivory Tower crowd (like me) to figure it out. For one thing, there are at least three components in that definition. What's "healthy"? What's "out of reach"? And even what's an "area"?
When I first started thinking about this, I figured, well, you just measure the distance from where a person lives to the nearest supermarket.
image from Data Underload, flowingdata.com
Turns out, that's a lousy definition. Makes for pretty pictures, though, like the one on the right, from Nathan Yau of FlowingData.com (love your site by the way, Nathan).
The main problem with this approach is that it doesn't take account of social space. Using this approach, the biggest food deserts are in actual deserts. Which would be fine if you were plopped down in a random part of the country and had to figure out how to eat from scratch every morning.
But we tend to live, work, play, and "get by" in neighborhoods, neighborhoods that are highly structured in physical space in a way that reflects social relations.

When we looked for food deserts in Alameda county, we found that the "deserts" lay beyond the tony hills, in the outlying commuter suburbs, while the places we expected to see low food availability appeared to be chock-a-block full of supermarkets. Geographic space is part of the food desert picture, but somehow we need to get the idea of social distance in there as well to get at the idea of "out of reach".
And then, there's also the idea of "healthy" food. A supermarket may be short-hand for the availability of affordable healthy food options, but there are plenty of supermarkets whose produce aisle looks like the set for a horror movie, and there are also corner stores with gourmet appeal.
At the APHA meeting, I saw a bunch of posters where people had put a lot of work into figuring out food deserts in their communities, including a very ambitious project to describe food availability in great detail in New Orleans.
But I want to come up with a definition of a "food desert" that I can apply across the country, and without having to visit every supermarket, corner store and farmer's market. Lately, I've been thinking about coming up with some sort of relative distance measure, like the distance to the nearest supermarket, divided by the distance to the nearest outlet that sells tobacco or alcohol. So far, I've downloaded all the supermarket locations across the country, but the number of places that sell tobacco is just too huge. Hmmm.
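In case it helps to see what I mean, here's a rough Python sketch of that relative-distance idea. The coordinates, the little haversine helper, and the function names are purely for illustration - a real version would run over the downloaded supermarket file and whatever tobacco/alcohol outlet list I eventually get my hands on.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))

def nearest_miles(home, outlets):
    """Distance from home to the closest outlet in a list of (lat, lon) pairs."""
    return min(haversine_miles(home[0], home[1], lat, lon) for lat, lon in outlets)

def relative_access(home, supermarkets, tobacco_outlets):
    """Distance to the nearest supermarket divided by distance to the nearest
    tobacco/alcohol outlet. Values well above 1 suggest the unhealthy option
    is much closer at hand than the healthy one."""
    return nearest_miles(home, supermarkets) / nearest_miles(home, tobacco_outlets)

# Made-up example coordinates, purely to show the calculation:
home = (37.80, -122.27)
supermarkets = [(37.85, -122.25), (37.76, -122.20)]
tobacco_outlets = [(37.80, -122.26), (37.81, -122.28)]
print(round(relative_access(home, supermarkets, tobacco_outlets), 1))
```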

Then, there are other important aspects to the social space that defines a food desert. I've got a job, and I drive about 40 miles to get there, so in the course of my day, I come across many food shopping alternatives. But there are many places along my route where someone without a car would have a great deal of difficulty getting to decent food. I can walk into any food vendor and get great service, even while wearing a hoodie. But not everyone wearing a hoodie gets the opportunity to pay cold hard cash for food, let alone get decent helpful service in the aisles.
I'm also re-thinking food deserts: not as clearly delineated physical spaces, but as a condition that an individual or family experiences. I might not be in a food desert, but my neighbor might be.

A Glimpse of ActUp/RI

ActUp/RI at 'Stranvaganza. Photo by Tom Paulhus.
Image from archives at the John Hay library, Brown U.
I just came across this photo from the heyday of ActUp/RI; it's from a big bash at AS220 called 'Stravaganza. I'm struck by how comfortable I look in a 'teaching' role in this performance piece.
This past weekend I took part in a panel discussion about ActUp/RI after a screening of 'How to Survive a Plague'. It was very confusing to sit through the movie, try to make sense of it, try to make sense of my own feelings, all while trying to respond thoughtfully to an audience.
One thing that became very clear to me is that there was a huge void in my life after AIDS activism. For years, I had had an over-riding purpose, and then, after the protease inhibitors came on-line, and we had a somewhat less hostile President in the White House, it all fell apart. On this Veteran's Day, I'm struck by similarities to the stories I hear about veterans returning to civilian life after combat. You're completely consumed with the daily task of staying alive, and keeping your buddies alive, and then what? Raking leaves out of the driveway? It's impossible to replace that sense of urgency, and often really dangerous to try to.
Being HIV-, I had the privilege to be able to walk away from AIDS. And the new drugs made it seem like most of my HIV+ friends, well I could pretend that they were out of harm's way. Even when my friend Stephen called to let me know he was within a week of dying, I didn't want to burst the bubble, I said I was sorry to hear it, but the next time I saw him was at his funeral.

Wednesday, October 23, 2013

Interpreting Racial Disparities in Mortality

A few months back, I wrote about one problem plaguing interpretation of racial disparities in mortality rates: when comparing rates over time, it isn't clear whether we should divide or subtract, and you almost always get different answers depending on which you do.

Here's another problem facing how we interpret racial disparities.

It gets back to the difference between a fraction and a ratio - a ratio is just one number divided by another, but for a fraction, the numerator has to be a subset of the denominator. So, miles per gallon is a ratio, but not a fraction. Dividing the number of people who vote in a town by the number of people who live there is a fraction, because everyone who votes there also lives there. Dividing the number of people who shop at a particular store by the number of people who live in the town the store is located in is not a fraction, because some people who shop there don't live there. That "shopping ratio" looks a lot like a fraction, because you're dividing a target number of people by a broader baseline population, but it's technically a ratio because the numerator isn't a subset of the denominator.

In a similar fashion, race-specific mortality rates look a lot like fractions, but they are actually ratios. That's because we get mortality rates by dividing the number of people counted as dying by the number of people the census counted as living at the beginning of the year. It's true that the dead are a subset of the once living, but here the issue is that the way we classify race differs between the death records and the census records.
On the census form, the head of household usually fills out the form, describing the race/ethnicity of everyone in the household - people s/he knows intimately. On the other hand, death records are usually filled out by either the physician certifying the death or a mortician at a funeral home, so the classification is based on how someone outside the family perceives the deceased. In other words, denominator race (census) is self-defined, while numerator race (death records) is other-ascribed.

So, there are potential problems wherever there is a large discrepancy between self-defined and other-ascribed race - for example, someone with Mexican ancestry who appears to strangers as White. That's how we end up with a ratio instead of a fraction: we're dividing the number of people who died with a given other-ascribed race by the number who were living with the same racial/ethnic classification, but on the basis of self-definition.

Camara Jones incorporated a question into the BRFSS that gives us a handle on the difference between self-defined and other-ascribed race: "How do other people usually classify you in this country?". Among self-identified Whites, 98% were perceived by others to be White, and 96% of self-defined Blacks were usually seen as Black by others. So far, not bad.
But only 77% of Asians said others usually saw them as Asian. Among Hispanics, 63% said others usually saw them as Hispanic, while 27% said they were usually seen as White. And a mere 36% of American Indians said that others usually saw them as such; even more (48%) said they were usually seen as White.

So, on that basis, we'd expect that a large proportion of Asians, Hispanics, and especially American Indians would be counted as White on the death records, which would result in lower apparent mortality rates in these groups than one might expect, and somewhat higher mortality rates for Whites than is truly the case.

And when you look at overall mortality rates, that seems to make sense: Of the major racial/ethnic groups, only Blacks appear to have higher mortality than Whites. American Indians appear to have about the same mortality as Whites, while Hispanics appear to have about 75% of the mortality rate that Whites do, and Asians appear to have about half the mortality that Whites do. But is it credible that American Indian populations have no higher mortality than Whites, or that Asians are twice as likely as Whites to live to old age?

I've been thinking about how to adjust for this problem, perhaps using the estimates from the survey data above to shift people in the denominator from the census-derived self-defined groups so that they match the other-perceived rates that the death records are based on. That's unsatisfying for two reasons: first because the classification of race on death records is other-ascribed, but it's still by someone who (ideally) has gotten to know the deceased and/or their family at least somewhat, so it's not like a random stranger passing on the street, which is closer to how the survey question is framed. Second, it makes the assumption that the death record classification is "correct" and the census data needs to be adjusted to fit the correct numerator. So, a kludgy tool, but perhaps useful.
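To make the kludge concrete, here's a minimal sketch of the kind of denominator shift I'm imagining. The population and death counts are completely made up; the "usually seen as" percentages are the ones from the BRFSS question above, with the leftover shares dumped into an "Other" bucket just to keep each row summing to one.

```python
# Self-identified census counts (made-up numbers, for illustration only)
census_self = {"White": 800_000, "Black": 120_000, "Asian": 40_000,
               "Hispanic": 150_000, "AmericanIndian": 10_000}

# "Usually seen by others as..." shares, from the BRFSS question above.
# The unreported remainder for each group is parked in "Other" (an assumption).
perceived_as = {
    "White":          {"White": 0.98, "Other": 0.02},
    "Black":          {"Black": 0.96, "Other": 0.04},
    "Asian":          {"Asian": 0.77, "White": 0.23},
    "Hispanic":       {"Hispanic": 0.63, "White": 0.27, "Other": 0.10},
    "AmericanIndian": {"AmericanIndian": 0.36, "White": 0.48, "Other": 0.16},
}

# Shift the denominator from self-defined groups to other-ascribed groups.
adjusted_denom = {}
for self_group, n in census_self.items():
    for seen_as, share in perceived_as[self_group].items():
        adjusted_denom[seen_as] = adjusted_denom.get(seen_as, 0) + n * share

# Deaths classified by other-ascribed race on the death records (also made up).
deaths = {"White": 8_000, "Black": 1_500, "Asian": 250,
          "Hispanic": 1_000, "AmericanIndian": 120}

for group, d in deaths.items():
    naive = d / census_self[group] * 100_000
    adjusted = d / adjusted_denom[group] * 100_000
    print(f"{group}: naive {naive:.0f} vs adjusted {adjusted:.0f} deaths per 100,000")
```

With numbers like these, the adjusted rates for American Indians, Hispanics, and Asians come out higher than the naive ones, and the White rate comes out a bit lower - which is the direction you'd expect if the misclassification story is right.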

Another way to look at this issue is to think about where in the country the mismatch between self-defined race and other-ascribed race is likely to be greatest, then look at the mortality rate ratios across those areas. So, I figured that there wouldn't be much difference between Blacks and Whites anywhere in the country. Over 95% of both groups say they are usually seen by others as being Black or White, because there's such an extensive history of anti-Black racism in this country, and because there are very few places where Whites never see Blacks and vice-versa. But, the other three groups (Hispanics, Asians, and American Indians) are distributed very unevenly across the country, and there are many Whites who wouldn't run out of fingers before counting all the people they know from one of these groups.
So, I would expect that in places where Whites are least likely to be familiar with people of one of these groups, the level of mis-classification would be highest. And that's in fact what I saw when I crunched the numbers.
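The crunching itself is nothing fancy - something like this sketch (the death and population counts are invented; only the state percentages are the ones quoted below):

```python
import pandas as pd

# Hypothetical state-level table. "pct_minority" is the share of the state's
# population that is American Indian, Asian, or Hispanic (figures quoted below);
# the death and population counts are invented for illustration.
df = pd.DataFrame({
    "state":        ["WV", "ME", "CA", "NM", "HI"],
    "pct_minority": [1.7, 2.7, 48.0, 54.0, 73.0],
    "deaths_aian":  [10, 18, 4_500, 3_600, 55],
    "pop_aian":     [3_500, 7_500, 330_000, 210_000, 4_000],
    "deaths_white": [22_000, 14_500, 180_000, 15_500, 2_400],
    "pop_white":    [1_700_000, 1_250_000, 15_000_000, 1_000_000, 310_000],
})

# Within each state, compare the American Indian mortality rate to the White rate.
df["rate_aian"] = df["deaths_aian"] / df["pop_aian"]
df["rate_white"] = df["deaths_white"] / df["pop_white"]
df["mortality_ratio"] = df["rate_aian"] / df["rate_white"]

# Arrange the states from the fewest to the most of these groups -
# that ordering is the x-axis of the graphs described below.
print(df.sort_values("pct_minority")[["state", "pct_minority", "mortality_ratio"]])
```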

In this graph (just women for the moment), I've arranged the states from those with very few American Indians, Asians or Hispanics (like West Virginia (1.7%) and Maine (2.7%)) to those with the highest proportion of these groups (California (48%), New Mexico (54%), and Hawai'i (73%)). The colored dots indicate the apparent ratio between each group's mortality and that of Whites: the orange dots are for American Indians relative to Whites, the purple dots for Hispanics relative to Whites, the green dots for Asians relative to Whites, and the red dots for Blacks relative to Whites.
Looking at the red dots (Blacks vs. Whites), there's not a lot of difference from one side of the graph to the other, a little increase, but not much. Not surprising because there's not a lot of misclassification of these groups.
There's also not much difference for the Asians, despite the finding from the survey research that almost a quarter of Asians report usually being seen as another race by others. It is interesting to note that the apparent racial disparity is certainly the smallest in Hawai'i, a state where API populations make up the majority of the population.
But for Hispanics and American Indians, there are really big differences depending on which side of the graph they are on. Hispanics appear to have about 70% lower mortality than Whites in West Virginia, but nearly identical mortality to Whites in New Mexico. And although the national average for mortality between American Indians and Whites appears to be about equal, in states at the lower end of the graph American Indian mortality appears to be much lower, while on the upper part of the graph the picture is more mixed, some higher, some lower.
Looking more specifically at Native American populations, here I've ranked the states from those with the fewest American Indians to those with the most, and this graph is spectacularly clear. Most of the states with fewer than 1% American Indians appear to have lower American Indian mortality than Whites, while all but two states with American Indian populations above 1% show higher mortality. That picture is very consistent with the idea that "others" filling out death records in states with a low proportion of American Indians are more likely to classify them as White, while the "true" racial disparity in mortality between American Indians and Whites is likely to be quite a bit higher. (the darker circle is the national average).
And arranging the states according to the proportion Hispanic also shows a very strong gradient, suggesting that the true Hispanic-to-White mortality ratio is a lot closer to 1 than the apparent national average of about 25% lower.

Next steps: I figure that deaths from accidents and suicides, and deaths among younger people, are probably more prone to misclassification on the death records, because the people filling out the death certificates would have less connection to the decedent and their family, so I'd like to break the rates down that way, too.
Also, there are a bunch of datasets where they follow people up until they die, so in those cases the numerator really does come from the denominator. I could look in those to see what the "true" rates should be, even though they represent only a small sample of the whole population.

Monday, July 29, 2013

Hate Crime Statistics

My dissertation was about the impact of heteronormativity (a.k.a. societal homophobia) on suicide rates. Well, really it was about how to measure local variation in heteronormativity, and suicide happened to be a convenient health outcome: it's a "hard end-point", meaning that it is captured with little error, it's assessed pretty much the same way everywhere across the country and over time, and it's probably related to heteronormative societal attitudes.
One of the first ideas I had about how to measure local variation in heteronormativity was to look at hate crimes statistics. The logic is that hate crimes are a direct and extreme expression of heteronormativity. The FBI issues a report every year documenting the number of crimes reported as being bias-motivated, and also where they happen and against whom the violence is targeted.
But a strange thing happened when I looked at the data - there were a fair number of bias-motivated crimes reported from San Francisco and New York City, and virtually none from the places I expected to be havens of homophobia. The most likely explanation is that the number of hate crimes reported is a lousy measure of the number of hate crimes committed, and is a better measure of the degree to which a person reporting a hate crime to the police is taken seriously. So, in a way, hate crimes reporting may be a decent measure of heteronormativity, but in the opposite direction of what you'd expect at first: the more hate crimes reported, the friendlier the social environment is for TBLG people.
But, it gets more complicated. There are two ways not to have much conflict between dominant and subordinate groups. One way is for everyone to get along. Another way is for the subordinate group to "mind its manners" and steer clear of offending the sensibilities of the dominant group. So even if the incidence of hate crimes were a good measure of homophobia, it would be complicated because you'd expect the number of crimes to be low in areas where gay people have learned that the best thing to do is stay deeply closeted, or to get out of Dodge. And even though areas that are "gay meccas" allow us to express ourselves more freely, this can incite hardened haters in our midst to violence, like Dan White. "Gay meccas" can also attract hardened haters with violent intentions, and thus one often sees violent hate crimes centered around gay bars and cruising areas.

Anyway, it had been over ten years since I looked at the hate crimes data, and a lot has happened since then. So I was curious to see what has changed.
Not as much as I expected. There are more and more local and state police forces reporting hate crimes to the FBI, but the number of reported hate crimes hasn't changed much, except for a spike in 2001 related to the violent backlash against Arabs and Muslims. If anything, there's a downward trend when you take the growing population into account (which I have not done in these graphs).

I have to admit, I'm intrigued by data like this. I don't know what story they are telling. I anticipated that with the rapid change in societal attitudes about homosexuality, we'd see a steady growth in the number of reported anti-gay hate crimes. But, as you can see in the graph below, the number of reported anti-gay hate crimes rose pretty steadily until 2001, and has pretty much leveled off since then.

So maybe that's a good sign - of increasing tolerance, acceptance, and even celebration breaking out in some corners of the country. But it could mean a lot of things, and when you dig down into where these anti-BLG crimes are being reported from, it's still predominantly from the gay meccas - large coastal cities and also university towns all across the country. I suspect that there are lots of anti-gay crimes not being reported at all, especially in rural areas and the South.
Maybe the peak in 2001 highlights a shift in the attention of bigots, towards a new bogeyman. There's certainly plenty of evidence that anti-Arab crimes (much of the darker orange slice in the graph below) and anti-Muslim crimes (the bright green slice in the next graph down) spiked hard in 2001, and there has been a sustained increase in anti-Islamic crimes since then compared to the 1990's. But I think the idea of bigots turning away from the gays and towards the Muslims is at best a partial story. Also of interest in the graph below is that the number of anti-Black crimes reported by the FBI was definitely lower in the first two years of the Obama administration. Evidence of a post-racial America? I strongly doubt it - although the post-racial narrative might explain it if one considers that some of the more "post-racist" (emphasis on racist) police may be harder to convince that a bias-motivated crime has occurred, and thus less likely to report it as such. It would certainly be interesting to look at those trends in the wake of the 2010 retrenchment election.



So, another interesting thing to note in the graph above is the absolutely tiny number of hate crimes motivated by anti-atheist sentiments. As a hard-core atheist myself, I find it hard to believe that there are so few anti-atheist hate crimes reported. Maybe it's an issue of confusion - how do you classify a religiously-motivated attack when the recipient professes no religion? But I suspect another possible explanation: that theist (after taking the double negative out of "anti-atheist") biases are so entrenched that it is hard for police to see theist-motivated crimes as bias-motivated, so they go unreported as such.

Another interesting twist to the topsy-turvy world of hate crimes reporting is the biases for which no reporting category is even available. There were no crimes reported as being motivated by ableism before 1997. It's not that a glorious heyday of equanimity passed in 1996, but rather that there was simply no category available to describe these bias motivations in the FBI's system. Even today (or at least up to 2010), the number of crimes reported as being motivated by ableist biases numbers in the dozens per year, across the entire country. So here's another example indicating that the nature of the bias itself prevents it from being recognized and recorded.
So, that seems like a pretty exhaustive list: crimes motivated by bias on the basis of race, ethnicity, religious preference, sexual orientation, and ability. Or does it? Notice that there's simply no category to record crimes motivated by bias against transgender people yet, or intersex people, or even bias against women. I wouldn't be surprised if the number of reported hate crimes doubled if rapes motivated by misogyny were reported as such.
Also, in a nation where most sources of intolerance are weakening, intolerance against fat people is on the rise. Plug for a great article on anti-fat bias and media portrayals of disembodied depersonalized fatness.

I have to admit, I'm pretty ambivalent about organizing around hate crimes as a means to end prejudice. It's not for lack of trying. As my time with ActUp/RI wound down, I turned to advocacy around hate crimes - even made myself into a bit of a spokesmodel in the wake of being beaten about the head on Thayer Street in Providence (that's me standing and gesturing to another victim in that attack). I got involved in training a few police departments in Rhode Island, but I found that re-hashing my story as a hate crime "victim" was a source of re-victimization, and left me feeling dis-empowered and alienated, especially after some of the more intense police training sessions.

Friday, July 12, 2013

Allowing Gay Blood Would Increase Safety

The FDA still maintains a lifetime ban on gay and bisexual male blood donors. It is tempting to see this ban as overt homophobia, although I'd like to think that the decision-making body at the FDA has some other rationale in mind, at least in part.
Their claim is that the ban increases the safety of the blood supply.

And so we have the ideal set-up, pitting "Safety" against "Homophobia". A battle between Rights, with Science judging the fight.

Is a lifetime ban on gay blood donors safer than allowing gay blood donors to give without restriction? Sure, but that's not an alternative that anyone is advocating for.

Some advocates for changing the policy deferring gay/bi male donors claim that all the blood is tested anyway, so we don't need the ban.
All the blood is tested for HIV, but there are cases where blood tests negative even though it is infected. One of those circumstances is when a person has just been infected, and blood is often highly infectious during that "window period". So, it is judicious to defer gay/bisexual donors who might have been infected recently. I think the best solution would be to apply the same criteria used to defer anyone else who might have been infected recently: say, you can't donate for a year after sex with another man, even with a condom.
I've heard two logical arguments for excluding gay and bisexual men from donating blood for longer than a one-year window - one is that there are extremely rare cases where an established HIV infection would still test negative, and the other is that the blood is tested only for those viruses that are pretty common and that we have good tests for - it isn't possible to test for everything, certainly not things we don't even know exist yet. I think both of these arguments from the side of "Safety" are compelling, but they don't operate in a vacuum.

Nobody is arguing for gay and bisexual men to be able to donate without restriction, so the question is what restriction will maximize "Safety" while reducing the role of "Homophobia" in making blood donor deferral policy? Often this is portrayed as though it is a balancing act, where every reduction in homophobia compromises safety.

But there are good reasons to think that reducing the role of homophobia in blood donor deferral policy would actually increase safety. Notwithstanding all the discussion about "window periods" and emerging infections and so on, there are three important phenomena going on related to how people respond to a deferral policy that reeks of homophobia. How do people react when confronted with a policy that sounds, smells, and tastes like prejudice?

Frankly, some people are comforted by it. I'm sure there are lots of people who feel like the blood supply is safer because they believe gay and bisexual donors are excluded from it. They may make my stomach churn, but they don't make much difference in my argument.

Most gay and bisexual men are revolted by the policy, and as a result wouldn't touch blood donation with a ten foot needle. Again, not relevant to my argument.

Some gay and bisexual men, however, have learned that the easiest way to negotiate homophobia is to lie low. Keep your voice down and your wrists locked in position. The problem is that, given the choice between potentially outing oneself and deflecting the question about whether you've had 'sex with another man, even once', some men who should be deferred just slip past the question using the same techniques they've learned in dealing with other homophobic situations. Changing the policy so that it doesn't reflect homophobia (say, by making the deferral criteria the same as for other HIV risk factors) would actually make the blood supply safer in regards to this group.

The second group I'm thinking of is predominantly heterosexual, but really could include any donor. A deferral policy that sounds, smells, and tastes like rank homophobia "cheapens" the validity of the other deferral criteria, leading people to be less careful answering them. What I mean is that when the basis of one deferral policy is so obviously shaky, some potential donors will think that the other criteria (such as which drugs you've taken recently, or travel history) are also not strongly grounded in the need to keep the blood supply safe, and may be "encouraged" to give a less than honest answer, especially if they feel any social pressure to donate.

The third group I'm concerned about are the people who don't start giving blood at all. And the blood banks are worried about them too. Lots of people become regular donors for life after getting started in high school and college. But young people these days are especially sensitive to the acrid stench of homophobia. So by maintaining this policy that sounds, smells, and tastes like homophobia, the FDA is turning potential donors away in droves. Potential donors who are at very low risk for HIV and other blood-borne pathogens. Potential donors who otherwise would be likely to save dozens of lives over the coming years. There have even been organized efforts to keep blood drives off campuses until the policy changes.

The most dangerous pint of blood is the one that's not there when you need it.

Dear FDA, it's time to bring your deferral policy into the 1990's. Dump the homophobia and increase the safety of the blood supply.

Wednesday, June 19, 2013

Obesity a Disease? You can kiss my plump ass, AMA!

The American Medical Association voted to call obesity a "disease" on Tuesday - and I'm scratching my head to figure out why. It's obvious that obesity is not a disease. A "risk factor", sure. But a disease? Give me a break. What's next? Avarice? Impatience? Ugliness? Tanning?

I see prejudice in this vote. Prejudice against fat people. A prejudice that has barely been talked about at all in the public health debate about what to do about the rapidly growing number of fat people in this country, and around the world.

While Mayor Bloomberg and his interventionist allies in public health have been hounding us (everyone: fat, formerly fat, and fat-to-be), the major accomplishment has been to make generations of people feel bad about their bodies, bad about themselves, and ashamed to talk about it. Well, I for one am FED UP!

Despite our obsession with fat-free, sugar-free, xxx-free foods that are described primarily by what is absent from them, our collective waistline continues to expand. Perhaps it is that obsession that leads to the obesity "epidemic".

I don't pretend to know what's causing the obesity "epidemic", and I think anyone who purports to know is too confident of their own opinions. What I do know, though, is that there is a ton of irrational prejudice about fatness, and I've certainly got my share of it. I hate how my body looks. And I hate that I hate how my body looks.
I'm thinking of parallels to the gay rights movement - should I "come out" as fat and proud? I can see some theoretical benefit to that approach, but to be honest with you, I'm not the least bit proud of being fat, and I don't feel like faking it just to see some potential benefit on the other side.
But I'll tell you this - I for one really don't appreciate the AMA telling me my fatness is a disease, and I'm working up an appetite to do some research on how anti-fat prejudice affects people's health.

And the winner of the gay marriage debate??? Heterosexuals!

I'm sorry to say it, but there is already a clear winner in the gay marriage debate: heterosexuals. In the 1970's we absolutely and flatly rejected marriage as oppressive, not to mention the ultimate definition of "square".
We tried every possible alternative. Vigorously.
And yet, we came crawling back, hat in hand, saying we want in too. A major defeat for gay liberation, a major coup for normative heterosexuality.

But, while we were out sowing our wild oats, we learned a few things - you could say we picked up a few tricks. We do marriage differently, and if straight people have any sense, they'll be paying attention. I'm not the first to say it, but in many ways, gay marriage has saved straight marriage from passing into obsolescence.

A lot of heterosexuals are paying attention. A couple weeks ago, Slate's Double X Gabfest had a good discussion about what straight people can learn from gay marriage. They dove deep into all the stuff about gender roles, and differentiation of tasks within couples, and how "gay" marriage shows that those two ideas can be de-coupled, re-arranged, and yet there are often strengths to being different, even unequal, in a relationship. But I was surprised that the Slate commentators didn't want to touch monogamy - or rather the ability to discuss its alternatives - the biggest and best innovation we've brought into the marriage covenant.

Wednesday, May 29, 2013

Trends in Motor Vehicle Accidents

Every once in a while, I like to see what's going on with motor vehicle accidents. It turns out there's a lot going on. This data is from the Fatal Accident Reporting System. I haven't done anything special with it, just graphed the rather bland spreadsheet there on the home page.


 The obvious thing that jumps out at me is that after decades of increases in motor vehicle deaths (the trend goes back to the very introduction of the automobile), we seem to have hit a turning point in 2005, and there were huge drops in motor vehicle fatalities in 2008 and 2009 especially.
The other thing that jumps out at me is the increase in the number of motorcyclists killed on the roads (the purple bars), and perhaps a decline in pedestrian deaths (green bars), and certainly a decline in passenger deaths (bright red bars).
The timing of the precipitous drop in 2008 and 2009 certainly suggests a connection to the recession - fewer vehicles on the roads = fewer deaths. That decline in vehicles would presumably come from three sources: fewer trucks delivering goods, fewer commuters, and fewer errands and pleasure trips. But why would there be more motorcyclist deaths? Perhaps the aging of the baby boom generation? And I haven't got a clue about why there would be fewer pedestrian deaths. It would be interesting to see whether the decline in pedestrian deaths is also linked to the 2008-2009 drops - and could that be attributed to fewer commuters? Or fewer errand and pleasure trips? The drop in passenger deaths seems to be pretty strongly linked to the recession - so is that about less car-pooling among the remaining commuters?
At any rate, graphing the number of deaths is a bit misleading, because the population keeps growing.
So, when you divide the number of deaths by the population (and multiply by 1,000,000), the peak year isn't 2005, but 1995. Actually, if you trace these trends back, the peak year on a per-population basis is some time back in the 1920's, when cars were just mowing people down left and right, with very little effort to make the vehicles, the roads, or the drivers safer. What you see in the long-term trends is a long, slow decline in motor vehicle death rates, followed by a rapid decline in the 1970's, linked to that decade's recession, and also the high price of gas (much, much higher than today once you take inflation into account), speed limit restrictions, the imposition of seat belts, investments in improved road infrastructure (guardrails etc.), and a radical shift in how we viewed drinking and driving. The slower decline continued from the 1980's through the late 2000's, due especially to air bags, improved vehicle construction, lighter vehicles that do less damage to others, and continuing trends in driver, vehicle, and road infrastructure safety. But that drop in 2008/2009 is still really dramatic, and I have to wonder if it can all be attributed to the recession.
Presumably, if the decline is due to the recession, it should be directly related to how many vehicles are on the roads. So, if you divide the deaths by 'vehicle miles traveled' instead, it should smooth out the trend...
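The arithmetic behind both versions of the rate is simple enough - here's a sketch with made-up (but ballpark) numbers standing in for the FARS death counts, Census population, and vehicle-miles-traveled totals:

```python
# Illustrative figures only - the real numbers come from the FARS spreadsheet
# and the population / vehicle-miles-traveled series that go with it.
deaths = {2005: 43_500, 2008: 37_400, 2009: 33_900}             # motor vehicle deaths
population = {2005: 296_000_000, 2008: 304_000_000, 2009: 307_000_000}
vmt_billions = {2005: 3_000, 2008: 2_980, 2009: 2_960}          # vehicle miles traveled, billions

for year in sorted(deaths):
    per_million_people = deaths[year] / population[year] * 1_000_000
    per_billion_miles = deaths[year] / vmt_billions[year]
    print(f"{year}: {per_million_people:.0f} deaths per million residents, "
          f"{per_billion_miles:.1f} per billion miles traveled")
```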

And that seems to be the case. The long trend towards lower deaths per mile traveled dominates, but there is still an extra bump in 2008/2009, suggesting that the recession not only reduced the number of miles traveled, but also made the miles traveled safer, especially for passengers and pedestrians.

So, as we climb out of the recession, I'd expect to see the number (and rates) of motor vehicle deaths increase a little bit, maybe as high as 120 per million residents per year or 12 per billion miles traveled, and then continue the long slow decline.

So, here's another example of major improvements in health being made. Not as sexy a story as the latest fad in diet, but it's good to be reminded once in a while about what's going right.

Monday, May 20, 2013

Data Unicorns

How many unicorns are in your data? Sounds like a silly question. But there can be some major problems when we don't think to ask it. Because every dataset has what appear to be unicorns in it - impossible combinations of data made possible because of infrequent errors.

Rob Kelly, Blackout Tattoo Studio, Hong Kong
Usually it's not a problem because the unicorns make up a really small proportion of your sample. And if the data combination is in fact impossible, or makes up a tiny proportion of what you're really interested in, you can just ignore them, or even try to "correct" them if you have additional information. But when you're interested in a rare phenomenon, it can be hard to tell the difference between unicorns and the real cases you're interested in.

Gay Blood Donors

Take, for instance, a paper I've been working on for years about estimating how many gay blood donors there are.

If the American Red Cross's procedures were followed to the letter, there shouldn't be any because any man who has "had sex with a man, even once, since 1978" is supposed to be excluded. In other words, any apparent gay blood donors should be unicorns - impossible data combinations.

We know that there are some, because every once in a while, someone tests positive during the blood donation screening process, and when they go back to interview the donor, some donors admit to "having sex with a man, even once, since 1978". But we have no idea how many HIV- gay blood donors there are, how many men who are giving on a regular basis without incident, despite the ban.
So, I've been looking at various datasets trying to get a rough idea of how many gay blood donors there are, trying to make the point that the ban on gay male donors isn't just discriminatory, it's also ineffective. And if we could talk with the men who are giving blood regularly without incident, maybe we could develop new exclusion criteria based on what they are doing.

It sounds simple enough: look up how many gay men there are in these datasets, and count how many of them are giving blood. But here's the problem. There are errors in counting who's a gay man, and also errors in counting who gives blood. So, any heterosexual male blood donor who is inaccurately coded as gay or bisexual will appear to be a gay/bi blood donor. As will any gay/bisexual non-donor who is accidentally coded as a blood donor. Let's start out with some plausible (but made up) numbers to illustrate...

Let's give ourselves a decent-sized dataset, with 100,000 men in it. Suppose that 95% of the male population has not "had sex with a man since 1978", and 5% of them have given blood. That's 4,750 straight men who are blood donors.
In the 1970's the Census did a big study where they interviewed people twice, and found that in about 0.2% of the cases, the two interviews resulted in a different sex for the respondent - about one in 500. So, what if 0.2% of these 4,750 guys who are giving blood without bending the rules at all get mis-coded as gay or bisexual - that's about 9 cases of what appear to be excludable blood donors.
Let's just make a guess that, instead of the 5% of heterosexual men who give blood, 0.5% of gay/bisexual men do. Then we've got 100,000 x 5% x 0.5% = 25 cases of gay/bi men who are giving blood despite the ban.
So, all told, it looks like there are 34 gay/bi blood donors, but only 74% of them really are gay/bi blood donors.
But what if 0.06% of gay/bi men are really giving blood? Then there would be 3 real gay/bi blood donors, but there would appear to be 12, and only 25% of them would really be gay/bi blood donors. Most of the time, we'd be looking at unicorns.
What's frustrating is that I can't tell the difference between these two scenarios. I can't tell if my unicorn ratio is only 26%, or if it's 75%.

There's another problem, too - with the blood donation questions. Sometimes, people want to inflate their sense of altruism, and they'll say they gave blood in the last year even if it was closer to two years ago. That I can live with, but an even bigger problem is that people get confused by the wording of the question, and they say they've given blood even if all they did was have a blood test at the doctor's office. So, there are some surveys where the blood donation rate appears to be upwards of 25%.
Let's assume that 5% of the population (gay or straight) who haven't given blood say that they have because they mis-understood the question (or that the interviewer was inattentive and hit the wrong button).
Then the number of straight men who say they've given blood would be 10%, not 5% - about 9,500. And if 0.2% of them were mis-classified as gay/bisexual, that would be 19 men who appear to be gay/bisexual blood donors. Then, if we take 5% of the gay/bisexual men as being mis-classified as blood donors, that would be another 250 men who really aren't blood donors, but appear to be. In that case, if there are really 25 gay/bisexual blood donors, they would make up only 9% of the 294 men who appear to be gay/bisexual blood donors; and if there were really only 3 gay/bisexual blood donors, they would be 1% of the 272 who appear to be gay/bisexual blood donors - in other words, 99% unicorns.
And just to underscore the point, that's coming from errors of 0.2% and 5%.
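If you want to poke at those numbers yourself, here's the whole back-of-the-envelope as a little Python function. Every input is one of the made-up illustrative rates from above, not an estimate from any real dataset, and the outputs land close to the figures in the text (I rounded a bit more coarsely above).

```python
def apparent_gay_donors(n_men=100_000, pct_gay=0.05,
                        straight_donate=0.05, gay_donate=0.005,
                        sex_miscode=0.002, donation_misreport=0.0):
    """Return (true gay/bi donors, apparent gay/bi donors, share that are real).
    All inputs are the illustrative rates from the scenarios in the post."""
    n_straight = n_men * (1 - pct_gay)
    n_gay = n_men * pct_gay
    # Straight men who appear to be donors: actual donors plus question misreporters.
    straight_apparent_donors = n_straight * (straight_donate + (1 - straight_donate) * donation_misreport)
    # A sliver of those get mis-coded as gay/bi: unicorns.
    unicorns = straight_apparent_donors * sex_miscode
    # Gay/bi non-donors mis-coded as donors are unicorns of another kind.
    unicorns += n_gay * (1 - gay_donate) * donation_misreport
    true_donors = n_gay * gay_donate
    apparent = true_donors + unicorns
    return true_donors, apparent, true_donors / apparent

# 0.5% of gay/bi men donate, no misreporting of the donation question:
print(apparent_gay_donors(gay_donate=0.005))                           # ~ (25, 34, 0.73)
# Only 0.06% of gay/bi men donate:
print(apparent_gay_donors(gay_donate=0.0006))                          # ~ (3, 12, 0.24)
# Back to 0.5%, but 5% of non-donors also claim they donated:
print(apparent_gay_donors(gay_donate=0.005, donation_misreport=0.05))  # ~ (25, 292, 0.09)
```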

There is a way to sort through this mess. You'd just need to call the men who appear to be gay/bi blood donors and ask them to clarify on a second interview. The number who would be inaccurately coded twice would be really small, because the relevant error rates are small (0.2% and 5%). But it is unlikely that anyone will do that kind of call-back.

Unicorns Ahead

There are a number of other contexts where we should expect to see unicorns in LGBT health research.
One is transgender health. There are a number of States that have been asking BRFSS respondents if they are transgender, and it looks like about 1 in 500 say that they are. But we need to be very careful in researching this population, because if the 1970's Census estimates hold, it's probably not unreasonable to think that 0.2% of the population will inadvertently be coded as being transgender, and that could easily be most of the people identified as transgender in these surveys. Again, the easiest solution is to call people back to verify. But in the absence of a call-back survey, we won't know whether 70% of the people identified as trans are actually trans, or if only 7% are.
Another group heavily influenced by unicorns is married same-sex couples. Before 2004, almost all people identified as married same-sex couples in the United States were unicorns, because it wasn't a legal status available to anyone. Another analysis I'm working on shows that the proportion of people identified in surveys as married same-sex couples who are really married same-sex couples can be as low as 10%, and rarely gets above 50%, but it's getting better in states where marriage is legal.

Sunday, April 28, 2013

Research Directions

    Hey there blogfriends, I'm super excited because I'm going to have a first-author paper coming out in a few days - about the racial distribution of trees and pavement across the US - and exploring a few reasons that may explain it, like segregation (yes) and poverty (no). It looks like there's going to be some press on it, so keep an eye out.
    And my next first-author paper is getting really close to submission - so it's probably six months to a year from publication. That one's about the influence of living in more segregated cities on the probability of experiencing racial discrimination. That one's pretty interesting - lots of studies within one particular city or another have found that experiences of racial discrimination tend to be less common among Blacks who live in predominantly Black neighborhoods, and more common among Blacks who live in predominantly White neighborhoods. As far as I can tell, ours is the first to look at the degree to which the overall segregated character of the city (and her suburbs) affects reporting of racial discrimination experiences. We're seeing pretty dramatic results: more segregation means more experiences of racial discrimination, for Blacks, Hispanics, Whites and Asians.

    But what I'm stymied with at the moment is where to go after my most recent first-author paper - showing that gay men are more likely to be in excellent health than straight men... I'd love to get another paper on TBLG health out there relatively soon, but it's challenging, because I have to do the work on my own dime and my own time. So here are some ideas, and I'd love to hear your thoughts on what would be most helpful (helpful in any sense - informing policy, improving science, satisfying curiosity - whatever greases your gears).

ONE: Improving Identification of Same-Sex Couples in Large Probability Datasets
    I know. Boring title. But here's why this has been floating my boat lately. When I was working on gay men in excellent health, I looked at the biggest dataset I could lay my hands on, the BRFSS. There were a fair number of same-sex married couples, even before same-sex marriage was legal anywhere in the US, which struck me as odd. Another thing that was odd is that their demographics (how old they were, how many kids they have, whether they served in the military, etc.) were a lot like heterosexually married people. I figured that what was most likely happening was that a small number of heterosexually-married people were accidentally mis-coded - and ended up being counted as same-sex couples. So, I threw them out of the analysis.
    BRFSS is especially vulnerable to this kind of error, but the problem is ubiquitous in any of the large probability samples that get used for research on same-sex couples - and rarely acknowledged.
    So what this project would be about is systematically going through the major datasets and trying to estimate how many of the same-sex couples identified are really same-sex couples, and how many are mis-coded heterosexually-coupled people.
    The main reason that it's important to do this project is that there are a lot of publications out there claiming that same-sex married couples are "just like" heterosexually-married couples. That may be a comforting message, and there's probably something to it, but a likely explanation that is almost never discussed is that a lot of those same-sex married couples are in fact heterosexuals. If we want an accurate picture, we need actual same-sex couples.

TWO: BLG health in relation to voting on marriage restrictions
    OK, so my thesis (I never was able to get it published) was about the occurrence of suicide in relation to heteronormativity - the more heteronormative an area is, the higher the suicide rate there - especially for young men. I measured heteronormativity in three ways: the legal status of employment discrimination; how people voted on restricting marriage; and how many same-sex couples the Census counted in an area.
     Given that nobody seems to care about employment discrimination any more these days, I figure that I should focus on the voting thing. The way I see it, how people in an area vote on restricting marriage to "one man and one woman" is a pretty good heteronormativity thermometer. There are some complications in that the wording is different from State to State, and the change in public attitudes is so rapid that a 60% endorsement rate today probably corresponds to an 80% endorsement rate in 2004. But assuming I can figure out a way to handle that, the other part is finding a dataset that has good BLG health measures in it.
    For my thesis, I used the overall suicide rate, and I didn't particularly care whether the people who died of self-inflicted injuries were "gay" or not. In fact, I suspect that the highest suicide risk associated with being gay or bisexual is before one declares openly to anyone else, and even before having sex, so it would be kind of silly to try to figure out who's who after they're dead. But I think that's one of the reasons I had trouble getting anyone interested in publishing it - it seems like people want to know how BLG people are affected by homophobia. Well, I'm interested in how heterosexuals are affected also. I very much doubt that it's a zero-sum game where heterosexuals gain some advantage while BLG people pay the price. I suspect it's much more likely that heterosexuals, too, are harmed by heteronormativity. And since there are a lot more of them, it should be even easier to pin that down. But I digress.
    So, I need a dataset that A) is a probability (random) sample of the US, B) has a large sample size (ideally in the 10's of millions, but I'll have to settle for less), C) identifies who is gay, lesbian, bisexual, and heterosexual, D) has a high degree of spatial resolution so I can figure out what the local homophobia "temperature" is, E) has decent temporal resolution so I can figure out when people were sampled relative to important dates, and F) has decent measures of health in it.
    There are some datasets that come close to fitting the bill, but it's a challenge.

THREE: Transgender health from large population datasets
    There's only one publication out there about transgender health based on a probability sample - from the Massachusetts BRFSS. But there's the potential to do so much more. There are seven States that have asked about transgender identity on BRFSS. I'd love to collect the data from all seven, compare the basic demographics of transgender-identified people across the different question wordings & hypothesize about which questions work best. And then get into the health outcomes, much like the Massachusetts study did, but with much more data. I suspect that all of the question wordings are going to have a significant problem much like the same-sex married people identified in large population datasets - that is, even a very small number of errors in the coding of cisgender people is going to be a major headache. There's really only one way to handle that that I can think of - call them back to verify it - but I really can't see that happening anytime soon.

FOUR: The Real Blood Donors of Gaytown, USA
    There are just so many things wrong with banning gay blood donors. It made sense in 1985 (and frankly, it would have made even more sense earlier). But it doesn't make sense now, and everyone knows it. Including lots of gay men who donate blood anyway, and increasing numbers of young straight people who won't donate because they don't feel right about the discrimination. I'd love to be part of qualitative research on gay men who give blood. Why do they do it? How does it make them feel? What 'rules' about donating have they made for themselves to decide when they should and should not donate?
    There are a lot of interesting policy angles to wrangle through on this issue, but I think getting to know these guys would be really interesting - and informative in coming up with better deferral guidelines.

FIVE: Wage Gap and Death
    Strangely enough, there are only a handful of studies out there measuring how sexism affects health at a population level. Most of them use some sort of complicated mash of different ideas into an "index", and I hate indices - you never know what's really going on in there. So I took a simpler approach, just looking at the wage gap between men and women. It varies a lot - there are some parts of the country where women make almost as much as men, and some parts where men make about twice as much as women. What I expected to see was that women's mortality would be higher in areas where men make more. But I saw something completely different: where men make more relative to women, men live longer, but women's mortality is unrelated to the wage gap. I basically put this project on ice because I can't figure out a narrative that makes sense. But I could go back to it if y'all have fresh ideas.

So let me know, what do you think I should work on? And if you're feeling especially generous, for only $62,000, you get to decide.

Saturday, March 2, 2013

Origins of the Health Disparities Narrative

I recently did a guest lecture at Berkeley where the students asked me two questions that left me scratching my head...
1) When did the 'health disparities narrative' become dominant in public health, and 2) What dominant narratives about the health of socially-marginalized racial groups preceded it?
I don't know. But those are intriguing questions that deserve answers, so I'll ask for your indulgence as I flail around with some possible answers.

Defining the 'Health Disparities Narrative'
In public health, we tend to think about minority health in terms of 'health disparities'.
When we see that the health of a minority group is worse than that of socially-dominant groups, that is expected based on our narrative of how minority groups fit into social structures, and how these social structures influence health.
When we encounter exceptions to that general rule (cases that don't fit the narrative of health disparity) we tend to doubt the data and dismiss the findings. In those cases where the data shouts out over our attempts to silence it, we call it a 'paradox'.
So that's what I mean by the 'health disparities narrative' - an overarching narrative structure that strongly influences what we intuitively believe or doubt about the health of socially-marginalized groups. Which stories are 'easy' to tell, and which leave us tongue-tied and confused?

From the 'Sign of the Gene'...
I was first introduced to epidemiology in the mid-1980's. My recollection is that the go-to explanation for health differences between racial groups was that 'race' described biological distinction - that the environments of the various continents had 'bred' races of humans with differential susceptibility to disease. This go-to explanation was so ingrained that it was rarely stated explicitly. Implicitly, one message was that if racial difference reflects biologic difference, then an observed health disparity reflects something 'natural'. A racial disparity could be considered a 'risk factor', and be the basis for 'raising awareness', but would have little application in primary prevention (one would not 'prevent' someone from being one race or another).
The classic example of this was sickle cell anemia, usually quickly followed by cystic fibrosis, to demonstrate that every race had its unfortunate susceptibilities.

In the early-mid 1990's, running up to the sequencing of 'the' human genome, news stories hit hot and heavy linking any and all manner of diseases and even personality traits to genes. Almost none of these reports were confirmed in replication studies, but one thing became increasingly clear: the genes that were implicated in diseases were never the same genes that had different racial distributions. And in that handful of cases where there was some overlap, like in HLA markers, nothing panned out in further study in a way that explained racial disparities in health.

...and racism...
Despite the complete lack of evidence for the genetic basis of disparate health outcomes, genetic origins continued to be (and continue to be) the go-to explanation for many people in medicine and public health.

Fortunately, I was taught epidemiology by Sally Zierler, who countered the 'biologic distinction' interpretation of observed racial disparities, and offered instead the interpretation that racial disparities in health could be attributed to the relative social standing of those groups. Implicit in that interpretation was that a racial disparity should not be seen as 'natural': it should shock the conscience. It also leads to very different prevention strategies. Racism itself should be the focus, and Sally got a lot of heat for promoting that viewpoint.
A terrific example of this way of thinking is the ground-breaking analysis in 1997 by James Collins and Richard David, who pitted the assumption of genetic origins head-to-head against an alternative hypothesis: that something about living as Black in America, especially during childhood, was the cause of the high rates of premature delivery seen among African-American mothers.

...to 'health disparities'...
The early 2000's is when I'd say, based only on my gut, that the way we think of 'health disparities' today really blossomed. I'm going to try to do a historical word count type of analysis to check that gut assumption, but in the meantime, I think it's safe to assert that the dominant interpretation of 'health disparities' as reflecting social structure is a recent phenomenon.
The prevention lessons we draw from the 'health disparity' narrative today are pretty varied: access to care, cultural competency, and, for 'fundamental cause' people like me, racism itself - the rest is downstream of that...

Epidemiologic Transition
The epidemiologic transition refers to the shift in patterns of causes of death, from chiefly infectious diseases striking all ages (and especially those under 5 years old) to chronic diseases that are more restricted to older populations. Epidemiology itself also had a few major transitions, but a little out of sync with the shift in mortality patterns. After 60 years or so of social epidemiology, infectious disease epidemiology rose in stature in the early 20th century. Infectious diseases dropped dramatically during both phases, but once the shift to chronic diseases as the major killers was largely complete in the 1950's and 60's, a new epidemiology arose, a chronic disease epidemiology which stressed multiple risk factors rather than single bugs. So, I suspect that the transition to chronic disease epidemiology was not what drove the development of the health disparities narrative, but it was a pre-condition for it.

Civil Rights Movement
I think the civil rights movement probably played a big role in the development of the health disparities narrative, but not as directly as one might at first think. Landmark legislation in the mid-1960's led to the involvement of the courts in race relations by the 1970's in a new way. Rather than being limited to assessing discrimination in individual cases, a new statistical reasoning made its way into legal wranglings and regulatory frameworks: between affirmative action and desegregation orders, the quantification of inequity became a paramount consideration. My hunch is that these routinely quantified comparisons of racial groups played a big role in the development of the health disparities narrative. If one conceives of racial groups as being separate biological groups subject to evolutionary forces, then comparing racial groups to one another is like comparing apples to oranges - or really Granny Smiths to Cortlands. There are circumstances where comparisons make sense, but there is an assumption of difference built-in from the beginning. So I think the quantification of racial difference that the civil rights era ushered in certainly played a role, but other factors had to come into play as well.

Office of Management and Budget Standards for Data on Race and Ethnicity
As various Federal agencies struggled to enact regulations and enforce them, it became clear that there was no agreed-upon definition of racial categories. Rather than acknowledge that race is a complex characteristic, composed of many dimensions including self-identity and perception by others, the Federal Government tried to create a standardized set of categories that would be shared across all administrative purposes, with Office of Management and Budget Directive #15 in 1977.
That may seem like an obscure bureaucratic detail, but the reason I connect it to the rise of the health disparities narrative is that by requiring governmental agencies to use the same five categories to describe race, data from multiple sources became comparable in a way that they had not been before. Death records could be matched to Census data (or at least appeared to be comparable), so race-specific rates were easier to calculate.
The use of these standardized categories was diffused throughout the government, and in particular required for research grants, including medical and public health research grants. As a result, not only was it possible to use comparable racial groups for comparisons, but the requirement that racial breakdowns be reported back to granting agencies implied an importance attached to race that encouraged researchers to analyze their results using racial categories as well. Steven Epstein has written a lot about that whole process.

Healthy People 2010
When I showed the charts below to Rachel, she made a great observation: that the rapid rise in the term 'health disparity' after 2000 is probably linked closely to the release of the Healthy People 2010 document, which had the secondary goal of 'eliminating health disparities'. Why did they use that phrase? When did they start using it? I've tried to find the exact date that this phrase entered the Healthy People documentation, but it'll take more research to nail that down.

In sum...
I suspect that the main shift in interpreting racial disparities in health has been from revealing inherent racial differences in biology to mirroring social structure. I'm not sure exactly when this happened, but my gut tells me that this shift happened in public health mostly in the late 1990's, early 2000's - certainly there were vanguards who foreshadowed this shift much earlier, and just as certainly there are laggards who have yet to embrace it. I'll be curious to see what text searching through publication databases reveals...

addendum: here is a quick & dirty analysis - the proportion of articles indexed in WebOfScience.com with 'racial difference', 'racial disparity', or 'health disparity' in the topics field. 'Racial difference' (in purple) rises from 1991 to 2003, then plateaus or even drops in frequency. 'Racial disparity' (in green) was at low levels before 2001, then rises exponentially. 'Health disparity' (in red) was virtually non-existent in articles published before 2000, then rises even more rapidly than 'racial disparity' as a topic term.


Addendum 2: Another quick analysis of word counts in PubMed (which goes further back in time) shows identical patterns (unfortunately, I swapped the green and red between these two charts). It is interesting to note that there was a jump in articles using the phrase 'racial difference' in the mid-1970's, and potentially a second jump in the mid-1980's.
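If you want to poke at the PubMed numbers yourself, here's a rough sketch using NCBI's E-utilities through Biopython. The email address and search phrasing are placeholders, not the exact queries behind the charts above.

```python
import time
from Bio import Entrez  # pip install biopython

Entrez.email = "you@example.org"  # NCBI asks for a contact address

def pubmed_count(term, year):
    """Number of PubMed records matching `term` published in `year`."""
    handle = Entrez.esearch(db="pubmed", term=f"({term}) AND {year}[pdat]", retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for year in range(1970, 2013):
    total = pubmed_count("all[sb]", year)            # all records indexed that year
    hits = pubmed_count('"health disparity"', year)  # phrase of interest
    print(year, hits / total if total else 0)
    time.sleep(0.5)  # be polite to NCBI's servers
```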

Thursday, February 21, 2013

Insight on Why Gay Marriage is Threatening to Some Christians

New to the blog? Skip to the Highlight Reel.

To me, one of the great mysteries is why so many people feel "threatened" by similar-gender marriage.
I think I get why some people are skeeved by (male) homosexuality - frankly for the same reason I was before I tried it - the "ick" factor of imagining sexual acts themselves in the abstract before you have any idea what they actually feel like.
So there's the "ick" factor, and its close kin, rank homophobia. And that's probably 80-90% of it right there.

I listen to certain religious right commentators every day - one might say religiously. In particular Bryan Fischer and Tony Perkins. In part because I want to know what they're talking about - they drive so much of the political and social opposition to me just having a normal day - so I want to know what's coming next from them. But I also listen to them because I'm curious, and I really do struggle to understand how they see the world.

I start from the supposition that people usually try to tell the truth (to the degree it is apparent to them), and that people at their base nature are good-hearted. I want to believe that these rabidly anti-gay commentators are honestly representing their perspective. Very often people like Tony and Bryan get written off as being cynical, dishonest, hypocritical. But I don't think that's the case. I actually think that they are giving a full-throated defense of their deeply-held beliefs. We've certainly seen plenty of cases (Larry Craig, Ted Haggard, Eddie Long, George Rekers...) of vehemently anti-gay men who turned out to be turning tricks.
But I think there are also plenty of people, like Tony and Bryan, who aren't hypocritical - they're just critical.

So, that's the crux (so to speak) of the mystery for me - how could a man who is heterosexual to the core, and who does not hate homosexuals, feel so threatened by homosexuality? So threatened that he can't let a week go by without railing against it on a nationally syndicated radio program.

I finally had an insight about that. Unlike my assumption that people are basically good, and want to tell the truth, a key belief for many Christians is that we are born with a sinful nature, that without the restraints of morality, without the constraints of vows and pledges, we would naturally sin in any and potentially every way. In other words, without their faith and adherence to religious principles and practices, they would be unable to help themselves, and it would only be a matter of time before they finally got around to sinning in a homosexual fashion.

I know that sounds simple, and I can't believe it took me so long to figure it out. I have vague memories of a ninth grade teacher trying to explain "original sin" to me. It sounded like the weirdest work-around. In a lot of ways, listening to these guys is like being in a dream where you understand all the words someone is saying, but the meaning is absent. Except that I think I understand what they mean, but what they are really saying escapes me.

I'll keep listening - so you don't have to.

Friday, January 25, 2013

Are You Prepared for a Nuclear Detonation?

I'm not. And I doubt more than a handful of people in the US are. And I'm not sure whether that's a good thing or not. On the one hand, I don't want to be alarmist. On the other, if we as a nation value our nuclear weapons so highly that we won't get rid of them, can we reasonably expect that nobody will ever try to use them against us?

When the debate about the Seabrook nuclear power plant was happening, I was of an impressionable age. I lived in Amherst, New Hampshire, and Groton, Massachusetts, just across the border from one another. The gym at my school was labelled as a "fallout shelter" and looked like a bunker. I read a few books about the Manhattan Project and its terrible debut in Hiroshima. I was fascinated by the science and the scientists, and also by the majestic horror that they unleashed. I had dreams about running to that shelter and what might happen as we waited days and weeks for help to arrive.

Unlike people a few years older than me, I had never undergone a 'duck and cover' drill. By that point, they were mocked as silly - inducing unnecessary nightmares in our youth - not to mention useless in the face of a bomb capable of incinerating an entire city. I took some comfort in the fact that Fort Devens was in the next town over. In the event of an all-out exchange, I prayed we wouldn't have a chance of survival.

And yet, we as a nation still maintain an enormous arsenal of nuclear weapons, and devote considerable resources to them. Why? I'll leave that to others for the moment. A great discussion can be found on KQED's Forum here.

Although it has gone out of fashion to speculate about nuclear weapons use, I would argue that it is foolish to be as ill-prepared for it as we are. Unlike the nightmares of my youth, the prospect of an all-out exchange between the US and the Soviet Union has passed. The prospect of an all-out exchange between the US and any current nation is exceedingly remote. But with all the crazy people in this world, the more likely scenario is that someone, somewhere, will be able to put late-1930's technology together with evil intent to deliver a rudimentary nuclear weapon to our doorstep.

A Dirty Little Secret
Nuclear weapons are more survivable than we imagine. The 'duck and cover' drills of the 1950's and 1960's may have indeed been silly. But I suspect that the real reason they went out of fashion isn't because they were futile, or terrifying, but because continuing them made the idea of using nuclear weapons seem plausible. By preparing for nuclear war, we were countenancing the possibility that it might happen, and nobody wanted that. In that light, I want to be clear that claiming nuclear weapons are more survivable than we imagine should in no way make them easier to use.
In college, I read about an epidemiologic follow-up of survivors of the atomic bombings of Hiroshima and Nagasaki, to trace the longer term effects of nuclear weapons exposure (and from a more cynical perspective, to help set regulatory guidelines as to acceptable levels of x-ray exposures in the US). When I taught my epi classes at SFSU, I incorporated this article into teaching about cohort study design.
One thing that was shocking about this study was how many survivors there were. I don't in any way want to minimize the number of deaths, but it was shocking to me to learn that some people who were within 1 kilometer of the blast center survived, and relatively few people 10 kilometers away from the centers of the blasts were killed. Not only that, but the study treated these people beyond 10 kilometers as the 'unexposed cohort' - that the radiation dose one received from the initial blast at a distance of 10 kilometers was not much higher than background. Or in other words, if a Hiroshima-sized bomb went off over the Transamerica building in downtown San Francisco, we in our classroom at SFSU would be considered 'unexposed' in that study design.

Back during the debate about opening the Seabrook plant, there was a lot of scare-mongering using mushroom clouds to illustrate the risk of a nuclear meltdown. But a nuclear power plant can't explode like a bomb, so I think that hyperbolic representation may have really undercut the cause. What nuclear weapons do share with nuclear power plants in terms of risk is fallout. That is, a nuclear power plant is not going to blow up and cause the explosive damage of a bomb, but both a bomb and a plant have the potential to release a lot of fine dust particles carrying radioactive elements over a relatively wide area. That's the scary part.

The hopeful part is that with a bit of preparation, you can offer yourself a great deal of protection from the fallout. First, close your windows and seal them up with plastic. I know, it sounds silly, but the biggest danger fallout presents is if you breathe it in or swallow it. Your skin is pretty good at dealing with two types of radiation (alpha and beta), but your lungs and stomach are very susceptible. That's because we are constantly bombarded with radiation from the sun, so we've evolved pretty good external defenses. The third type of radiation, gamma radiation, gets less harmful the farther you are away from it. As an analogy, if you hold a light bulb up to your face, it is blinding. In the ceiling, it provides a nice glow, but the light from that bulb is practically useless if you are out in the driveway hunting for a dropped wallet.
So, making your home as air tight as possible makes it harder to breathe in or swallow fallout particles, and by keeping them outside the home, it keeps you farther from the gamma radiation.
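To put a rough number on the light-bulb analogy: for a point source, the dose rate falls off with roughly the square of your distance from it (and that's before any shielding in between, which only helps). A quick sketch:

```python
# Inverse-square falloff for a point source, ignoring shielding and absorption.
for meters in (1, 3, 10, 30):
    relative = 1 / meters ** 2
    print(f"{meters:>2} m away: {relative:.4f} of the dose rate at 1 m")
```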
Similarly, you want to stay in the middle of the house, away from ground level (where the fallout settles), but also not too near the roof (because it falls there too). And, if you can surround yourself with stuff, even better, because stuff absorbs radiation. Water is a great radiation-absorber, but books, even blankets, will help a little bit. You got a water bed? Awesome place to crash.
You may think that hunkering down in the center of your plastic-wrapped house, surrounded by buckets of water, is not the ideal way to spend the rest of your life, and you'd be right. But there is a saving grace - called "half-life". Radioactive atoms can release radiation at any moment, but on average, half of them will "go off" within a set amount of time. And because a lot of radioactive fallout elements have a short half-life, it is estimated that the danger of fallout is reduced about 90% within 3 days, and well over 99% within 3 weeks. So, even after only three days of hunkering down, the risks from fallout are considerably lower.
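Here's a minimal sketch of the half-life arithmetic, using iodine-131 (half-life of roughly 8 days) as a single illustrative isotope. Real fallout is a mix dominated by much shorter-lived isotopes, which is why the overall hazard drops even faster than this one curve suggests.

```python
HALF_LIFE_DAYS = 8.0  # approximate half-life of iodine-131

def fraction_remaining(days, half_life=HALF_LIFE_DAYS):
    """Fraction of a radioactive isotope still present after `days`."""
    return 0.5 ** (days / half_life)

for days in (1, 3, 8, 21):
    print(f"day {days:>2}: {fraction_remaining(days):.1%} of the iodine-131 remains")
```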

The other thing you'll want to do is pray for rain. Rain is really effective at pulling any remaining fall-out out of the air (so it will be harder to breathe it in), and also does a decent job at washing the radioactive dust off your roof, off the sidewalk, and either into the sewer, or down into the ground a bit. And fallout in the ground is a lot less dangerous than fallout on the ground (imagine taking an x-ray with even a quarter-inch of soil between you and the camera). At that point, the major concern would be from radioactive elements (like iodine-131) that get absorbed from the ground into crops that you (or your cows) eat.

Hope that helps you sleep better tonight....