Proposed changes to the ACS Summary File format

The ACS Office at the Census Bureau is currently testing a new format for the ACS Summary File, which is a comma-delimited text file that contains all the Detailed Tables for the ACS.  

Information about the proposed updates to the ACS Summary File is described on the Census Bureau's website.

We are starting this new Discussion Thread so that ACS data users can post any comments or questions about the proposed changes. ACS Summary File users are also encouraged to participate in the webinar on this topic scheduled for this afternoon.

  • Sure. I merely added the second underscore as a possible mitigation for the confusion problem Bernie mentioned; that form would still be usable in SAS programs with only minor modification.

  • The proposed new naming convention (e.g., B01001_001E) is consistent with the Census API, which my organization makes great use of. We use the summary file a lot as well, and the first step we do with the summary file is convert the field names into the API format, so that we're using one naming convention across our work.  I think the new naming convention is a welcome change.
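
    A minimal sketch of that conversion step in Python, assuming old-style names like B01001e1 / B01001m1 (the function name is just illustrative):

      import re

      def sf_to_api(name):
          """Convert an old summary-file style name (e.g. B01001e1)
          to the Census API style (e.g. B01001_001E)."""
          m = re.fullmatch(r"([A-Z]\d{5}[A-Z]?)([em])(\d+)", name)
          if not m:
              raise ValueError(f"unrecognized name: {name}")
          table, kind, num = m.groups()
          return f"{table}_{int(num):03d}{kind.upper()}"

      print(sf_to_api("B01001e1"))   # -> B01001_001E
      print(sf_to_api("B01001m49"))  # -> B01001_049M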

  • This seems like something we could adapt to fairly readily. 

    I'd like to make a plea for structured metadata published in something other than a variety of XLSX files. There are things application builders need to know that are perhaps taken for granted in data analysis use cases (a rough sketch of what this could look like follows these lists):

    • table name
    • table universe
    • column name
    • data type (int/float, or possibly count/median/etc)
    • parent/child relationships between columns (e.g. these children should sum to this parent)
    • geographies which are categorically excluded from a given table (basically Appendix B from this page on Data Suppression)
    • the character encoding used for text (only applies to geoheaders and metadata, but it's important)

    and some things which would be really nice to have

    • whether a table is new or changed since the last release
    • clearer articulation of data suppressed on a per-geography level, currently just represented by blank values
    • which ACS question(s) are the source of the data for a given table
    • something which helps map when a table universe is a proper subset of another table, like table/column (I know not all universes are so straightforward)
    • A better explanation of the prefix part of geoheaders, specifically the "M4/M5" geographic variant used for CBSAs and CSAs, which map to specific delineation vintages, but not in a way which is made clear to data users.
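
    As a rough sketch of what one machine-readable entry could look like (purely illustrative; the field names are invented, shown as a Python dict for concreteness):

      table_metadata = {
          "table": "B01001",
          "universe": "Total population",
          "columns": [
              {"name": "B01001_001E", "type": "int", "parent": None},
              {"name": "B01001_002E", "type": "int", "parent": "B01001_001E"},
          ],
          "excluded_geographies": [],  # summary levels suppressed for this table, if any
          "text_encoding": "utf-8",    # applies to geoheaders and metadata
      }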

    Sorry if this is just hijacking the thread...

  • We use the API, as well, but we mostly use the Summary Files, and keeping the same nomenclature year to year makes the most sense to me.  That said, I think the table-based format for all tables at all geo levels will be a big improvement for us, since we currently process all of the summary files to create the data that we input into Social Explorer.  The 255-character limit was not helpful.  I assume the Geofiles will be the same and will be linked to the tables using a LOGRECNO as they are now.

  • I think it is hijacking the thread, since the changes to the format of the data files won't affect the metadata files, but I like a lot of these ideas, and I think they'd be very much worth discussing in a separate thread.

  • That is a good thing to know.  Since I haven't really been using the Census API yet, I didn't catch this.  After using the Decennial and ACS data for so many years, I find it very odd that the Census API developers would add a character to the end of a variable name that would prevent it from being used in range calculations.

  • I'm sure that having one scheme would make it easier for Census Bureau staff, and for users who might need to join API and summary file data.

    At the same time, I use the API and the summary files very differently, and I personally don't need the naming conventions to be consistent. The API is great when I just need a few tables for a single set of geographies, but not if I need many tables for many kinds of geographies (which is more often the case). I and many others have so much code that depends on the existing naming convention--specifically the ability to refer to ranges of variables by numbers, which will be much harder with the proposed framework. This is true for users of SAS, R, Stata, and probably other programs. I realize that we users can always convert the API-style names back to existing summary file-style names (B01001_001E --> B01001e1), but that extra work for users (which would be quite difficult for novices) seems to undermine one of the reasons for this change. 
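
    For reference, that back-conversion is scriptable; a minimal sketch in Python (the function name is just illustrative):

      import re

      def api_to_sf(name):
          """Convert an API-style name (e.g. B01001_001E) back to the
          old summary-file style (e.g. B01001e1)."""
          m = re.fullmatch(r"([A-Z]\d{5}[A-Z]?)_(\d{3})([EM])", name)
          if not m:
              raise ValueError(f"unrecognized name: {name}")
          table, num, kind = m.groups()
          return f"{table}{kind.lower()}{int(num)}"

      print(api_to_sf("B01001_001E"))  # -> B01001e1

    Once the trailing letter is gone, numbered range references (e.g. B01001e1-B01001e49 in SAS) work again; the point is that this step should not be necessary in the first place.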

    I'd love any information that would assuage my concerns, though--have you found advantages to using the API's naming convention rather than the summary file's naming convention?

  • I am conflicted about some of the proposed changes. We import the data into our SQL database, and we normally import only 4 areas (US, TX, NM, AR); the proposed changes would require us to process an extraordinarily large number of records that we do not use. If there are over 500K geo entries (approx. 280K non-Tract/Block Group), that would mean we would process an estimated 300M records to retrieve about 60M records (*see calculation comment below). For those who use the entire set this is not an issue, but for those of us who use 4 areas or fewer it does have an impact. Having the state-level files is a great service that you provide, and I absolutely understand the painstaking process of generating all the files, but it seems to me that that process should not fall on each of the data users who do not use the entire set.

    What I do like is the addition of column headers to the files and the single GEO file. As for the GEO file, it would be nice to have the Land/Water area and LSAD code columns added. I also noticed in the example files that most of the columns in the geo file are no longer zero-padded: for example, summary levels are shown as 10, 50, 150 rather than 010, 050, 150, and the same goes for all area identifiers; for example, counties are shown as 1, 3, 5 as opposed to 001, 003, 005.
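
    If the padding really is dropped, restoring it on import is a one-liner per column; a sketch with pandas (column names are guesses based on the current geo file layout):

      import pandas as pd

      geo = pd.read_csv("geo_file.csv", dtype=str)    # keep codes as text, not numbers
      geo["SUMLEVEL"] = geo["SUMLEVEL"].str.zfill(3)  # 50 -> 050
      geo["COUNTY"] = geo["COUNTY"].str.zfill(3)      # 1  -> 001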

    If you decide to move forward with a single geo file, is there any reason why you would not have the LOGRECNO run across all states rather than resetting for every state? That way the LOGRECNO could be the unique identifier for joining geo files with data files, as opposed to using the GEOID, which is a variable-length alphanumeric value of up to 19 characters. For us, joining on LOGRECNO is a much more efficient way to join tables.
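
    To illustrate the difference, a sketch of the two joins in pandas (the file names are hypothetical):

      import pandas as pd

      geo = pd.read_csv("geo.csv", dtype=str)
      data = pd.read_csv("b01001.csv", dtype=str)

      # Today LOGRECNO resets per state, so a nationwide join needs a compound key:
      merged = data.merge(geo, on=["STUSAB", "LOGRECNO"])

      # With a LOGRECNO that ran across all states, one short key would suffice:
      # merged = data.merge(geo, on="LOGRECNO")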

    For those who use databases, the new file structure adds another layer of complexity, because some of the data files now contain more than 1,100 columns; in SQL Server the natural (non-sparse) column limit is 1,024, and I believe (though I'm not sure) Oracle has a limit of 1,000 columns per table. Just putting that out there for those who do import the data into a database.
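
    One workaround is to split a wide file into key-plus-chunk pieces before loading; a sketch (file and key names are hypothetical):

      import pandas as pd

      df = pd.read_csv("wide_table.csv", dtype=str)  # e.g. a 1,100-plus-column data file
      key = ["STUSAB", "LOGRECNO"]
      value_cols = [c for c in df.columns if c not in key]

      # Write pieces of at most 1,000 value columns, each carrying the join key,
      # so every piece stays under SQL Server's 1,024-column ceiling.
      chunk = 1000
      for i in range(0, len(value_cols), chunk):
          part = df[key + value_cols[i:i + chunk]]
          part.to_csv(f"wide_table_part{i // chunk + 1}.csv", index=False)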

    As far as the column names, as others have mentioned, I would prefer:
    B01001_e001, B01001_m001, B01001_e002, B01001_m002 … or
    eB01001_001, mB01001_001, eB01001_002, mB01001_002 …

    *Record Calculation Estimate: Since each file varies in its number of records, I took the number of non-Tract/Block Group areas as the most common row count and multiplied it by the roughly 1,100 tables being produced (approx. 280,000 × 1,100 ≈ 300M).

  • We likewise process the data files in SQL Server, and the 1,024-column limit would be an issue for us as well. Currently, it looks like at least the following tables would be impacted:

    B24114
    B24115
    B24116
    B24121
    B24122
    B24123
    B24124
    B24125
    B24126