Best practices for media use of local trends

My work as an urban transportation reporter for a couple of websites involves close scrutiny of the ACS commuting data on bicycling, public transit, etc. Because this behavior has been changing fast in Portland and other cities, I often use several consecutive one-year estimates to draw rough conclusions, despite differences that are within the margins of error. For example:

bikeportland.org/.../census-portland-biking-stalls-for-fifth-year-while-other-cities-climb-94248

And here's a comment that, understandably, takes me to task for doing so:
bikeportland.org/.../census-portland-biking-stalls-for-fifth-year-while-other-cities-climb-94248

Here's what I wrote in response:


Yes, I agree with both of you that five-year rolling estimates would be more accurate and precise than these year-on-year estimates, and I'm glad that at Alex's urging I've been including more caveats about margins of error when I write about these numbers. In this case, unfortunately, the five-year rolling data just isn't very meaningful: since the ACS ramped up its data collection, we've only got two cycles of it (soon three), 2006-2010 and 2007-2011.

In lieu of that, what I'm doing is tracking one-year estimates over time and focusing only on the ones that show what seems to be a clear trend year after year. The lack of precision here is also the reason the chart above has wavy lines rather than bars or sharp angles, for what it's worth.

Please, take this data with a fat grain of salt. But year-on-year ACS figures are widely used by media organizations, governments, advocacy groups, etc. -- the city actually issued a press release yesterday about these same numbers, though this report wasn't based on theirs. I disagree that widely used and respected data should be off-limits for reporting when the reporting is careful about its claims. In my mind, it'd be worse for our city to use the increasingly remote possibility that this is a four-time statistical anomaly to disregard a situation that's staring it in the face.


I'm not trained as an analyst; I'm just an English major who did enough social science in college to have a basic understanding of statistics, and then poked around Factfinder 1 until he figured out what it offered. I do try to take care, in my reporting, to limit myself to covering trends that seem consistent over time or in multiple geographic areas, to prominently include caveats, etc. But am I off base to be even working with this data? How, if at all, could I strengthen my practices?
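
For concreteness, my understanding of the check the Census Bureau recommends when comparing two estimates is to convert each published margin of error (which ACS reports at the 90 percent confidence level) to a standard error and then test the difference against their combined error. A rough Python sketch, with hypothetical numbers standing in for real ACS figures:

    import math

    Z_90 = 1.645  # ACS margins of error are published at the 90% confidence level

    def is_significant(est1, moe1, est2, moe2):
        """True if two ACS estimates differ at the 90% confidence level."""
        se1 = moe1 / Z_90                      # convert each MOE back to a standard error
        se2 = moe2 / Z_90
        se_diff = math.sqrt(se1**2 + se2**2)   # standard error of the difference
        return abs(est1 - est2) / se_diff > Z_90

    # Hypothetical example: a bike-commute share of 6.0% (MOE +/-0.8) one year
    # and 5.1% (MOE +/-0.7) the next.
    print(is_significant(6.0, 0.8, 5.1, 0.7))  # False -- within the noise

By that test, a single swing like the one in the example is still within the noise, which is part of why I try to wait for several consecutive years pointing the same way before calling something a trend.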
  • Hello again, I mistyped in the above response; I meant to say Standard Error instead of Margin of Error when calculating the Coefficient of Variation (CV). The CV is:

    The ratio of the standard error (square root of the variance) to the value being estimated, usually expressed in terms of a percentage (also known as the relative standard deviation). The lower the CV, the higher the relative reliability of the estimate.
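
    Since ACS publishes its MOEs at the 90 percent confidence level, you can back the standard error out of a published margin of error by dividing by 1.645, then take the ratio. A quick sketch with made-up numbers:

    def cv_percent(estimate, moe):
        """CV as a percentage, assuming a 90%-level ACS margin of error."""
        se = moe / 1.645   # recover the standard error from the published MOE
        return 100 * se / estimate

    print(cv_percent(6.0, 0.8))  # ~8.1% -- the lower the CV, the more reliable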