Hi, all. We often use coefficients of variation (CV = standard error ÷ estimate) as a rough guideline for statistical reliability: CV < 0.15 is pretty trustworthy; CV > 0.30 is not to be trusted. But the CV value depends on how the estimate is expressed. Here's an example:
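(The original example didn't survive, so here's a hypothetical reconstruction of the kind of asymmetry being described, using the 3.5% vs. 96.5% proportions mentioned later in the thread and an assumed standard error of 1.0 percentage points. The SE of p and of 1-p is the same, so the CV depends entirely on which way the proportion is expressed.)

```python
# Hypothetical numbers: a proportion of 3.5% with an assumed standard
# error of 1.0 percentage points. SE is identical for p and 1 - p.
se = 0.010
p = 0.035

cv_p = se / p               # CV when the estimate is expressed as 3.5%
cv_complement = se / (1 - p)  # CV when the same fact is expressed as 96.5%

print(f"CV of p     = {cv_p:.3f}")         # 0.286: borderline unreliable
print(f"CV of 1 - p = {cv_complement:.3f}")  # 0.010: looks extremely reliable
```

Same survey, same standard error, but one framing fails the CV < 0.15 rule of thumb while the other sails past it.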
This is an extreme example, but it does get across the general idea. I'm curious how others have dealt with this, and whether there's a better way to give our audiences a quick and simple way to assess reliability.
Thanks for any perspective you all can offer.
I don't have a direct answer, but I'd posit that the source of the problem is that the guidelines for ACS MOEs almost universally assume normal distributions, which small counts and proportions rarely follow. This is the same problem that leads to negative lower bounds on values that could never be negative (e.g., by subtracting a large MOE from a small count).

In the case of proportions, analysts frequently use logit models, converting proportions or probabilities to odds and taking their logs, which conveniently yields roughly normal distributions that never extend beyond p=0 or p=1. (I may be getting the details wrong here, but I'm quite sure the basic principle is right!) I suspect there'd be a way to handle MOEs for proportions in a similar fashion, in which case the variability on the log of the odds would be identical for both p and 1-p (3.5% and 96.5% in your example). Maybe you could come up with a different rule of thumb for reliability based on this, though I'm not sure how I'd compute the MOEs on the logs of the odds from ACS MOEs!
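(One standard way to get from an ACS MOE on a proportion to an approximate MOE on the log-odds is the delta method: since d/dp logit(p) = 1/(p(1-p)), the SE of the logit is roughly SE(p)/(p(1-p)), and the MOE scales the same way because ACS MOEs are just 1.645 × SE. A sketch, with hypothetical numbers, not an official ACS procedure:)

```python
import math

def logit_moe(p, moe_p):
    """Approximate MOE of the log-odds via the delta method:
    d/dp logit(p) = 1 / (p * (1 - p)), so SE(logit) ~ SE(p) / (p * (1 - p)).
    The same scaling applies to the MOE, since ACS MOE = 1.645 * SE."""
    return moe_p / (p * (1 - p))

# Hypothetical example: p = 3.5% with a 1.6-point MOE at the ACS 90% level.
p, moe_p = 0.035, 0.016

print(f"logit(p)          = {math.log(p / (1 - p)):.3f}")
print(f"MOE on logit(p)   = {logit_moe(p, moe_p):.3f}")
# Symmetry check: p and 1 - p give the same MOE on the log-odds scale,
# which is exactly the property missing from the plain CV.
print(f"MOE on logit(1-p) = {logit_moe(1 - p, moe_p):.3f}")
```

Because p(1-p) is symmetric, 3.5% and 96.5% produce the identical log-odds MOE, so a reliability rule of thumb built on this scale wouldn't depend on which way the proportion is expressed.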
Anyway, I don't think a standard CV is a valid measure for proportions, given the hard lower *and upper* limits on their distributions. A CV makes more sense for distributions with a zero minimum and no upper bound.