Hey there blogfriends, I'm super excited because I'm going to have a first-author paper coming out in a few days - about the racial distribution of trees and pavement across the US, exploring a few reasons that may explain it, like segregation (yes) and poverty (no). It looks like there's going to be some press on it, so keep an eye out.
And my next first-author paper is getting really close to submission - so it's probably six months to a year from publication. That one's about the influence of living in more segregated cities on the probability of experiencing racial discrimination. That one's pretty interesting - lots of studies within one particular city or another have found that experiences of racial discrimination tend to be less common among Blacks who live in predominantly Black neighborhoods, and more common among Blacks who live in predominantly White neighborhoods. As far as I can tell, ours is the first to look at the degree to which the overall segregated character of the city (and her suburbs) affects reporting of racial discrimination experiences. We're seeing pretty dramatic results: more segregation means more experiences of racial discrimination, for Blacks, Hispanics, Whites and Asians alike.
But what's got me stymied at the moment is where to go after my most recent first-author paper - showing that gay men are more likely to be in excellent health than straight men... I'd love to get another paper on TBLG health out there, relatively soon, but it's challenging, because I have to do the work on my own dime and my own time. So here are some ideas, and I'd love to hear your thoughts on what would be most helpful (helpful in any sense - informing policy, improving science, satisfying curiosity - whatever greases your gears).
ONE: Improving Identification of Same-Sex Couples in Large Probability Datasets
I know. Boring title. But here's why this has been floating my boat lately. When I was working on gay men in excellent health, I looked at the biggest dataset I could lay my hands on, the BRFSS. There were a fair number of same-sex married couples, even before same-sex marriage was legal anywhere in the US, which struck me as odd. Another thing that was odd is that their demographics (how old they were, how many kids they had, whether they served in the military, etc.) were a lot like heterosexually married people. I figured that what was most likely happening was that a small number of heterosexually-married people were accidentally mis-coded - and ended up being counted as same-sex couples. So, I threw them out of the analysis.
BRFSS is especially vulnerable to this kind of error, but the problem is ubiquitous in any of the large probability samples that get used for research on same-sex couples - and rarely acknowledged.
So what this project would be about is systematically going through the major datasets and trying to estimate how many of the same-sex couples identified are really same-sex couples, and how many are mis-coded heterosexually-coupled people.
The main reason that it's important to do this project is that there are a lot of publications out there claiming that same-sex married couples are "just like" heterosexually-married couples. That may be a comforting message, and there's probably something to it, but a likely explanation that is almost never discussed is that a lot of those same-sex married couples are in fact heterosexuals. If we want an accurate picture, we need actual same-sex couples.
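To get a feel for the scale of the problem, here's a back-of-the-envelope sketch. Every rate below is a made-up illustrative number, not an estimate from BRFSS or any other dataset:

```python
# Illustrative only: all rates below are assumptions, not BRFSS estimates.
n_respondents = 400_000   # roughly one year of BRFSS
p_married = 0.55          # share of respondents who are married
p_true_same_sex = 0.005   # true share of married couples that are same-sex
p_miscode = 0.002         # chance a heterosexual couple's sexes get mis-coded

married = n_respondents * p_married
true_same_sex = married * p_true_same_sex
false_same_sex = married * (1 - p_true_same_sex) * p_miscode

print(f"true same-sex couples:  {true_same_sex:.0f}")
print(f"mis-coded heterosexual: {false_same_sex:.0f}")
print(f"share of the 'same-sex' group that is mis-coded: "
      f"{false_same_sex / (true_same_sex + false_same_sex):.0%}")
```

With these made-up numbers, a mis-coding rate of just a fifth of a percent leaves well over a quarter of the apparent "same-sex married" group actually being mis-coded heterosexual couples - simply because heterosexual couples outnumber same-sex couples so heavily.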
TWO: BLG health in relation to voting on marriage restrictions
OK, so my thesis (never was able to get it published) was about the occurrence of suicide in relation to heteronormativity - the more heteronormative an area is, the higher the suicide rate there - especially for young men. I measured heteronormativity in three ways: the legal status of employment discrimination; how people voted on restricting marriage; how many same-sex couples the Census counted in an area.
Given that nobody seems to care about employment discrimination any more these days, I figure that I should focus on the voting thing. The way I see it, how people in an area vote on restricting marriage to "one man and one woman" is a pretty good heteronormativity thermometer. There are some complications in that the wording is different from State to State, and the change in public attitudes is so rapid that a 60% endorsement rate today probably corresponds to an 80% endorsement rate in 2004. But assuming I can figure out a way to handle that, the other part is finding a dataset that has good BLG health measures in it.
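One simple way to handle the shifting-attitudes problem would be to assume a uniform linear drift in public support and re-express every vote in a common baseline year. This is purely a sketch: the 2.2-points-per-year figure is an assumption, back-solved from my own guess that 60% today corresponds to about 80% in 2004.

```python
# Sketch: put marriage-restriction votes from different years on a common
# scale by assuming endorsement drifted down linearly over time.
# SHIFT_PER_YEAR is an illustrative assumption, not an estimate.
SHIFT_PER_YEAR = 2.2   # percentage points of endorsement lost per year
BASE_YEAR = 2004

def adjusted_endorsement(pct, vote_year):
    """Re-express an endorsement percentage as its BASE_YEAR equivalent."""
    return pct + SHIFT_PER_YEAR * (vote_year - BASE_YEAR)

print(adjusted_endorsement(60, 2013))  # ~80 in 2004 terms
```

A linear drift is almost certainly too crude - attitude change accelerated in some periods - but it gives a single knob to sensitivity-test, which beats ignoring the problem.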
For my thesis, I used the overall suicide rate, and I didn't particularly care whether the people who died of self-inflicted injuries were "gay" or not. In fact, I suspect that the highest suicide risk associated with being gay or bisexual is before one declares openly to anyone else, and even before having sex, so it would be kind of silly to try to figure out who's who after they're dead. But I think that's one of the reasons I had trouble getting anyone interested in publishing it - it seems like people want to know how BLG people are affected by homophobia. Well, I'm interested in how heterosexuals are affected also. I very much doubt that it's a zero-sum game where heterosexuals gain some advantage while BLG people pay the price. I suspect it's much more likely that heterosexuals, too, are harmed by heteronormativity. And since there are a lot more of them, it should be even easier to pin that down. But I digress.
So, I need a dataset that A) is a probability (random) sample of the US, B) has a large sample size (ideally in the tens of millions, but I'll have to settle for less), C) identifies who is gay, lesbian, bisexual, and heterosexual, D) has a high degree of spatial resolution so I can figure out what the local homophobia "temperature" is, E) has decent temporal resolution so I can figure out when people were sampled relative to important dates, and F) has decent measures of health in it.
There are some datasets that come close to fitting the bill, but it's a challenge.
THREE: Transgender health from large population datasets
There's only one publication out there about transgender health based on a probability sample - from the Massachusetts BRFSS. But there's the potential to do so much more. There are seven States that have asked about transgender identity on BRFSS. I'd love to collect the data from all seven, compare the basic demographics of transgender-identified people across the different question wordings & hypothesize about which questions work best. And then get into the health outcomes, much like the Massachusetts study did, but with much more data. I suspect that all of the question wordings are going to have a significant problem much like the one with same-sex married couples in large population datasets - that is, even a very small number of errors in the coding of cisgender people is going to be a major headache. There's really only one way to handle that that I can think of - call them back to verify it - but I really can't see that happening anytime soon.
FOUR: The Real Blood Donors of Gaytown, USA
There are just so many things wrong with banning gay blood donors. It made sense in 1985 (and frankly, it would have made even more sense earlier). But it doesn't make sense now, and everyone knows it. Including lots of gay men who donate blood anyway, and increasing numbers of young straight people who won't donate because they don't feel right about the discrimination. I'd love to be part of qualitative research on gay men who give blood. Why do they do it? How does it make them feel? What 'rules' about donating have they made for themselves to decide when they should and should not donate?
There are a lot of interesting policy angles to wrangle through on this issue, but I think getting to know these guys would be really interesting - and informative in coming up with better deferral guidelines.
FIVE: Wage Gap and Death
Strangely enough, there are only a handful of studies out there measuring how sexism affects health at a population level. Most of them use some sort of complicated mash of different ideas into an "index", and I hate indices - you never know what's really going on in there. So I took a simpler approach, just looking at the wage gap between men and women. It varies a lot - there are some parts of the country where women make almost as much as men, and some parts where men make about twice as much as women. What I expected to see was that women's mortality would be higher in areas where men make more. But I saw something completely different: where men make more relative to women, men live longer, but women's mortality is unrelated to the wage gap. I basically put this project on ice because I can't figure out a narrative that makes sense. But I could go back to it if y'all have fresh ideas.
So let me know, what do you think I should work on? And if you're feeling especially generous, for only $62,000, you get to decide.
I'm Bill. These are my observations on queer health, and other things I care about for one reason or another. Tuna was my adorable dog, a companion of 16 years.
Sunday, April 28, 2013
Friday, December 23, 2011
Research Worth Reading (4) - trans health in Massachusetts
Gunner Scott, Stewart Landers and pals have served up a very interesting paper in January's AJPH - the first time anyone anywhere has published anything peer-reviewed on a population-representative sample of transgender people.
In Massachusetts, the Behavioral Risk Factor Surveillance System (BRFSS) has asked the adults it interviews "Do you consider yourself to be transgender?", along with a whole lot of demographic and health-related questions.
Many studies in the past have sought out a transgender population to try to say something about the health of the group, but this is the first one to rely on a "random" sample, meaning calling people up at random; and that's probably the best way to be sure that you've got a study population that is fairly representative (at least of people with phones).
In addition to addressing trans health from a population perspective for the first time, this study is also the first to report simple basic demographics of the transgender population in the US, including the most basic one - how many transgender people are there?
The answer, in this study, is about 1 in 200 in Massachusetts, about 1 in 110 in a similar study from Vermont, and 1 in 170 in Boston. It is likely (for reasons I've discussed before) that these are overestimates, meaning that the true proportion is probably somewhat lower than that - but how much lower? That's hard to guess; it depends on how many nontrans people answered the wrong way because they were distracted or misunderstood the question. The only way to figure that out is to call back the people who said they were trans and ask them again.
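The arithmetic of the overestimate is easy to sketch, though. The false-positive rate below is purely an assumption for illustration - nobody has measured it:

```python
# Illustrative only: the false-positive rate is an assumption, not something
# the Massachusetts study measured.
observed = 1 / 200   # observed trans prevalence
fp = 0.002           # assumed share of cis respondents answering "yes" in error

# observed = p_true + (1 - p_true) * fp  =>  solve for p_true
p_true = (observed - fp) / (1 - fp)
print(f"implied true prevalence: 1 in {1 / p_true:.0f}")
```

So an assumed 0.2% error rate among cis respondents would shrink "1 in 200" to roughly 1 in 333 - which is why even tiny coding errors matter so much when the group you're measuring is small.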
The study is very interesting in that it validates some things trans health activists have known for years, but there isn't strong evidence to support all the health disparities that have been identified from "convenience" samples. Members of the trans population in this Massachusetts study were less likely to be employed, and more likely to be living in poverty than the nontrans population. The study also documented that 36% of the trans population were smokers, compared to 17% of the nontrans population. But markers of access to health care were not particularly different. The trans population was quite a bit less likely to have health insurance (86%) than the nontrans population (94%), but this did not translate into not having a regular health care provider or not seeing a doctor because they couldn't afford it, and the trans population was actually more likely to have had a checkup in the last 12 months (85%) than the nontrans population (75%).
Mental health measures did show some substantive differences: 70% of the trans population reported usually or always getting needed emotional support, but this was quite a bit lower than the 90% of nontrans people who got their emotional needs met; and 14% of the trans population reported being dissatisfied with their life, but only 6% of the nontrans population did.
The authors were very thoughtful about ways that these results might be misleading - for instance, trans people are probably less likely to be stably housed and have a telephone, so these figures may well present a rosier picture than a fully representative sample of trans people would. And also, not knowing how many cisgender (nontransgender) folks inadvertently classified themselves as trans, it is hard to know the degree to which true differences between the trans and cis populations are diluted by these inaccurately coded folks. Another possible source of bias might be people who have transitioned but no longer consider themselves to be transgender, although I suspect this is pretty unlikely to be a substantial part of the population, because the question in Massachusetts made it clear that they meant "experience(d) a different gender identity from their sex at birth. For example, a person born into a male body, but who feels female or lives as a woman". But excluding people who no longer consider themselves to be trans would, probably, make the differences seem larger than they actually are. A similar bias would arise from trans people not feeling comfortable describing themselves as trans to a stranger on the phone.
Thanks Gunner & Stewart!
Friday, August 26, 2011
Thoughts on the Behavioral Risk Factor SURVEILLANCE System
The Behavioral Risk Factor Surveillance System (BRFSS) is a telephone survey conducted every year. It's been growing and growing, and it gives a lot of people (including me) a data woodie every April when the annual data dump comes out.
Nominally, the survey is about "behavioral health", things like smoking & drinking, seatbelt use, exercise, diet, getting your cholesterol checked and a mammogram done. It has become a cornerstone of our data surveillance infrastructure - used to track progress against the Healthy People goals, and to reiterate endlessly repetitive health disparity analyses.
In 2010, 429,630 people responded to the survey. That's almost the population of Wyoming. Sure, Wyoming is the least populated State, but do we really need to call that many people every year to look at trends in how often people smoke, use seat belts, and eat five fruits & veggies a day? Imagine talking to each and every resident of Kansas City, Missouri in one year, asking them some 75 questions about their personal behaviors. That's the scale of this thing.
Since 1988, there have been almost 5 million interviews - about the population of South Carolina, or the combined population of the 7 least populous States: Wyoming, Alaska, North & South Dakota, Delaware, Vermont & Montana.

And it has been growing at a rate of about 8-9% a year, which means that it has been doubling in size about every eight to nine years.
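For the arithmetically inclined, the doubling time follows directly from the growth rate, so you can plug in your own numbers:

```python
import math

def doubling_time(growth_rate):
    """Years for something growing at `growth_rate` per year to double."""
    return math.log(2) / math.log(1 + growth_rate)

for g in (0.08, 0.09):
    print(f"{g:.0%} annual growth -> doubling in {doubling_time(g):.1f} years")
```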
One of the ways that BRFSS is complicated is that it tries to have about the same number of responses from each State. Think Senate vs. House of Representatives. So whereas the response burden in New York, Illinois and California is a relatively manageable 1 in 2,000 or so residents getting called in any one year, in New Hampshire and Hawai'i, about 1 in 200 people have to answer this survey every year. And in Vermont, it's as high as 1 in 92 people! That means you Vermonters probably know several people who get surveyed this year, and given the survey growth rate, it will be almost impossible not to get interviewed at some point in your life.
The record goes to the Virgin Islands, though, where about 1 in 35 people get surveyed every year. We're going to know every detail about every resident of the Territory before long!
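A rough check on those Vermont odds - assuming a constant 1-in-92 annual rate over 50 adult years, both simplifications since the real rate keeps climbing:

```python
# Chance a Vermonter gets interviewed at least once, assuming a constant
# 1-in-92 annual sampling rate over 50 adult years (both simplifications).
p_year = 1 / 92
years = 50
p_at_least_once = 1 - (1 - p_year) ** years
print(f"{p_at_least_once:.0%}")  # prints 42%
```

So even frozen at today's rate it's better than 2-in-5 odds over a lifetime, and with the survey doubling in size every several years, the real odds are much higher.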
This raises a couple of issues for me. There's the inevitable risk of some hacker breaking into the State Health Department and snagging detailed information on tens of thousands of State residents linked to their phone numbers. Although that's a scary idea, it doesn't get me too exercised, because there is probably very little value to that information - it is hard to imagine who would want to know about your dietary habits, or even drug use or sexual behavior.
Another issue is just the level of surveillance, or monitoring of the population. Gathering information from a small number of people to keep tabs on trends in the population as a whole makes sense, but it seems to me that BRFSS is getting out of control, moving towards a degree of surveillance that is quite intrusive on a high proportion of the population. I mean, at this rate, it might make more sense to just mass mail the survey to every State resident every five years or something like that.
But the biggest problem I have with BRFSS is only tangentially related to its size. It's the fact that it asks really boring questions. How that's related to its size is that by becoming the largest health survey in the country by an order of magnitude, BRFSS is where lots of people will look for answers to what is causing our public health problems. It's the centrality of "behavioral health" that I've got a gripe with.
Let's take obesity as an example.
BRFSS can demonstrate the growth in obesity rates over time in very minute detail. But it can't tell us much of anything about why any individual or group is getting heavier. Partly that's because it's a prevalence survey, so there is no way to track individuals over time (talk about invasive surveillance). There is no way to know if an overweight person became overweight recently, or even if they have lost a lot of weight recently. But mostly it's because the questions are boring.
You would think that we'd have learned by now that asking people about their behaviors doesn't tell us much about behavior change. And even when interventions do manage to change behavior, we have learned that such efforts are incredibly difficult, time-consuming and often barely effective, especially when they are administered at an individual level.
I won't pretend to know what's causing the growth in obesity rates, but I can assure you that the answer won't come from asking 2 million more people about their exercise habits and vegetable intake. If the answer lay there, we'd have licked the problem a long time ago.
One thing I guarantee that asking 2 million more people about their behaviors will accomplish is cementing in the minds of most public health researchers and practitioners that the answers to our public health issues lie in personal behaviors - and the corollary to that is that any health problems you have are because you have failed yourself. Is that the message we really want to send?