Calculating MOEs from derived ACS estimates

Over the years, we have heard from many people interested in an easy-to-use tool for calculating margins of error from derived ACS data (e.g., data combined across categories or geographies). Several organizations have developed basic applications that might be useful. Here are the links:

If you are using a different application in your organization, feel free to post it here.
  • There are at least two other ACS statistical calculators. One is from an agency of the DC government. I have a copy of the spreadsheet but have been unable to track down a URL. The other is the "American Community Survey (ACS) Statistical Analyzer" from the University of South Florida's National Center for Transit Research:

    Here is a PDF of a webinar from 2011 explaining how to use this calculator:

  • I have a question. I asked the Census Bureau how to calculate margin of error when aggregating groups.

    The Compass for Understanding the ACS: What Researchers Need to Know says, on page A-14, to sum the squares of the MOEs for each subpopulation; the square root of that sum is then the MOE of the aggregation of the subgroups.

    However, I tried that with this table for New York State: B17024, AGE BY RATIO OF INCOME TO POVERTY LEVEL IN THE PAST 12 MONTHS. Universe: Population for whom poverty status is determined. 2009-2011 American Community Survey 3-Year Estimates.

    According to the table, the MOE for children under 6 is 4,562. But if I do the calculation using the MOEs for each income subgroup, the MOE I come up with is 13,002, which is more than twice as large as the stated MOE for total children under 6.

    I contacted the Census Bureau and they replied: "The method in the Compass Products is an approximation method for calculating MOE. The formula provided (taking the square root of the sum of the squares) does not incorporate the covariance between each term. This can lead to the approximation diverging from the true MOE, which you saw when you created your approximate MOE."

    The person I talked to also indicated that there are no covariances available for the public to calculate actual MOEs.

    So I was wondering whether the websites above use the Compass formulas, and how they might account for the covariances?
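For anyone who wants to try it, the Compass approximation Gene describes can be sketched in a few lines of Python (a minimal illustration, not an official Census Bureau tool). As the Bureau's reply notes, it ignores covariance between the component estimates, which is exactly why it can diverge from the published MOE:

```python
import math

def approx_agg_moe(moes):
    """Compass approximation: the MOE of a sum of estimates is the
    square root of the sum of the squared component MOEs.
    This ignores covariance between components, so it can diverge
    badly from the published MOE (as seen with table B17024)."""
    return math.sqrt(sum(m ** 2 for m in moes))

# Example with made-up MOEs for three subgroups:
print(approx_agg_moe([300, 400, 120]))
```

The same formula works for differences of estimates as well; only the estimates are subtracted, while the MOEs are still combined by sum of squares.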
  • Gene, I cannot speak for all of the sites above, but the Cornell site uses the Compass formulas. None of the sites have access to the covariances, so the covariance cannot be taken into account. The technical documentation of the ACS multi-year products has some examples of these calculations and shows that aggregating indeed does not always come close to the value in the table. MOEs that come with zero-count estimates can further complicate the situation, as they inflate the aggregated MOE. I wanted to check whether the latest technical documentation was changed on that point, but I guess that has to wait until the documentation is back online.
  • This is not a very cheery addition, but I have posted a Census Bureau presentation that was shared with me and that discusses the formula for calculating the CV; note slide 25:

    "We have found that the approximation formula seriously breaks down when aggregating more than four estimates. So we suggest you aggregate the fewest number of estimates as possible." Other options they recommend are to calculate the estimates using the Public Use Microdata Sample (PUMS) or to request a special tabulation (fee-based; certain criteria apply).
  • Jan and Kathy's comments match what we were told. In a presentation on ACS basics, I talk about MOEs and give some tips on how you might be able to reduce the number of estimates you need to sum. See slides 13-27 of the presentation.

    When we generate estimates from the PUMS file, we calculate the MOE using the replicate weights in SAS. But this doesn't work for small levels of geography.
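For readers without SAS, here is a minimal Python sketch of the same replicate-weight calculation. It assumes the standard ACS successive-difference-replication formula with 80 replicate weights, as described in the PUMS accuracy documentation; the function names are my own:

```python
import math

def replicate_se(full_estimate, replicate_estimates):
    """Successive-difference-replication standard error used with the
    80 ACS replicate weights: SE = sqrt((4/80) * sum((X_r - X)^2)),
    where X is the full-sample estimate and X_r the r-th replicate."""
    k = len(replicate_estimates)  # 80 for ACS PUMS
    return math.sqrt((4.0 / k) * sum((xr - full_estimate) ** 2
                                     for xr in replicate_estimates))

def replicate_moe(full_estimate, replicate_estimates, z=1.645):
    """90 percent margin of error, the level ACS tables publish."""
    return z * replicate_se(full_estimate, replicate_estimates)
```

Each replicate estimate is produced by re-tabulating the statistic with one of the 80 replicate weight columns (PWGTP1-PWGTP80) in place of the full-sample weight.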
  • I use the PUMS data with replicate weights for the MOE calculations.
  • Yes, I think those were the suggestions of the Census Bureau person: 1) use the PUMS, or 2) request a special tabulation (fee-based).

    The only other alternative I could come up with is this: when I have some groups I want to aggregate, find another table that presents that aggregate group for a reasonably similar variable, and see what the MOE is for that table.

    So, for example, if I want to aggregate children under 5 below 185% of poverty (presented in American FactFinder as several age groups), look at the tables for children under 5 below 100% of poverty, which is already aggregated by the ACS into one group and has an MOE published with the table. The MOEs won't be the same, but they might be similar.
  • When I read Nancy Gemignani's presentation (slides 13-27), I noticed that slide 15 recommends a "Relative MOE" below 33% as good, with anything above that to be used with caution.

    The Compass handbooks say to use CVs > 15% with caution. Since the CV is the ratio of the SE to the estimate, and the MOE is 1.645 times the SE, the Compass recommendation would imply that anything over about a 25% relative MOE (i.e., 15% × 1.645) should be interpreted with caution.

    Recognizing that context affects the interpretation, what rules of thumb are others using?

    stan drezek
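Stan's conversion between the two rules of thumb is just a constant multiplier, which a short sketch makes explicit (the function name is illustrative, not from any of the tools above):

```python
def cv_to_relative_moe(cv, z=1.645):
    """Relative MOE implied by a CV: since MOE = z * SE and
    CV = SE / estimate, it follows that RMOE = z * CV."""
    return z * cv

# The Compass 15% CV caution threshold corresponds to roughly
# a 25% relative MOE:
print(cv_to_relative_moe(0.15))  # 0.24675
```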
  • We currently follow the ">15% CV, use with caution" standard recommended by the Compass Guides in our office, but I have been doing a little research to explore the idea of using tiered reliability categories, especially for working with small-area estimates (tracts, ZIP codes). So far I have found three sources that provide tiered categories, but they are all a little different.

    1. State of Washington, Office of Financial Management (I think this is the State Data Center?)
    Good: CV < 15%
    Fair: CV 15%-30%
    Use with caution: CV > 30%

    2. ESRI
    High Reliability: CV < 12%
    Medium Reliability: CV 12%-40%
    Low Reliability: CV > 40% (considered very unreliable)

    3. Missouri State Data Center
    Most Reliable: CV less than 9.1% (RMOE less than 15%)
    Middle Category: CV between 9.1% and 21.2% (RMOE between 15% and 35%)
    Least Reliable: CV 21.2% or greater (RMOE greater than 35%)

    I am interested in hearing others' responses to Stan's question, and also in learning whether anyone else uses tiered reliability guidelines and what your breakdowns are.
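As an illustration of how one of these tiered guidelines might be applied in practice, here is a small sketch using the Missouri State Data Center cutoffs quoted above (the function name and structure are my own):

```python
def classify_cv(cv_pct):
    """Tiered reliability label using the Missouri State Data Center
    cutoffs (CV given in percent): <9.1 Most Reliable,
    9.1-21.2 Middle Category, 21.2+ Least Reliable."""
    if cv_pct < 9.1:
        return "Most Reliable"
    elif cv_pct < 21.2:
        return "Middle Category"
    else:
        return "Least Reliable"

print(classify_cv(5.0))   # Most Reliable
print(classify_cv(15.0))  # Middle Category
print(classify_cv(30.0))  # Least Reliable
```

Swapping in the Washington or ESRI cutoffs is just a matter of changing the two thresholds.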
  • We don't use a tiered CV guideline (very good idea, though), but we do use another (debatable) CV rule of thumb: if CV > 20%, you have a very unstable estimate. This is from an APDU webinar, "User Perspectives on ACS 5-Year Data – May 11, 2011" (scroll down the page to that title to view it).
  • Where in the webinar is this? There are two presentations, and I didn't see the rule of thumb you mentioned. Did I miss something?
  • Gene, I reviewed both presentations and you are correct: the CV rule of thumb I mentioned came from my notes from that webinar, where I recorded it as cited by one of the presenters. Sorry for the confusion.
  • Hello, thank you for all of these resources. After some deliberation, our organization decided to use the following guidelines when publishing the estimates:

    Individual estimate CVs:

    Excellent: 0-10%
    Good: 10-30%
    Fair: 30-60%
    Poor: >60%


    Ashenfelter, Kathleen, "Study Series: Data Reliability Indicator Cases on the Coefficient of Variation: Results from the Second Round of Testing," Statistical Research Division, U.S. Census Bureau, April 2010.

    For groups of estimates (such as all of the census tracts in a county):

    High: >80% of geographies' CVs
    Medium: 60-80% of geographies' CVs


    Michael Starsinic et al., "Quality Rating Classification of 5-Year ACS MYEs by Population Size Groupings and in Comparison with Census 2000 Long Form Estimates," Decennial Statistical Studies Division, U.S. Census Bureau, May 2011.
  • I just read this string of discussion. The topic shifted from "how to calculate the MOE for aggregated data" to "what CV standard to use." The last post was almost a year ago, and I did not see a solution to the MOE problem.

    Has anybody found a solution for calculating the correct MOE for aggregated data? In addition, for aggregated data, is it true that if the calculated MOE/SE is incorrect, then the CVs are also incorrectly calculated? Thanks.
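On the second question: since the CV is derived directly from the MOE (SE = MOE / 1.645, CV = SE / estimate, per the formulas quoted earlier in the thread), any error in an approximated MOE carries straight through to the CV. A small sketch makes the point; the estimate of 1,000,000 is hypothetical, since the actual B17024 estimate is not quoted in this thread:

```python
def cv_from_moe(estimate, moe, z=1.645):
    """CV (in percent) recovered from a published 90% MOE:
    SE = MOE / 1.645, CV = SE / estimate. If the MOE is only an
    approximation, this CV inherits the same error, since it is
    derived directly from it."""
    return 100.0 * (moe / z) / estimate

# With a hypothetical estimate of 1,000,000, the published MOE
# (4,562) and the Compass-approximated MOE (13,002) imply quite
# different CVs, and hence different reliability ratings:
print(cv_from_moe(1_000_000, 4562))
print(cv_from_moe(1_000_000, 13002))
```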