Adding export.socrata function #187

Conversation

tomschenkjr (Contributor)

This pull request introduces a stable version of an export.socrata() function as outlined in #126. It allows users to download the contents of a data portal to a local directory. The function downloads CSVs (compressed) as well as PDFs, Word, Excel, and PowerPoint documents, GeoJSON, Shapefiles, plain-text documents, and other file types (uncompressed). It will not download HTML pages. As part of the process, the function also copies the data.json file to act as an index for the other downloaded files.

I've proposed the version as 1.8.0.
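Based on the description above, a minimal usage sketch. The portal URL is just an example; the exact signature and output layout are as documented in this branch:

```r
# Sketch of the intended usage, based on this PR's description.
# Assumes RSocrata is installed from this branch.
library("RSocrata")

# Download every exportable data set from a portal: compressed CSVs,
# uncompressed attachments (PDF, Word, Excel, etc.), plus a copy of
# the portal's data.json to serve as an index of what was downloaded.
export.socrata("https://data.norfolk.gov")
```

Note this call is network-bound and can take a long time even for a small portal (see the testing notes below).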

Testing portal export

To test this function, I used the City of Norfolk, VA portal to export all of its data sets. Looking at their data.json file, I counted 32 data sets that were neither HTML pages nor missing a downloadable file. Executing export.socrata("https://data.norfolk.gov") resulted in 32 downloaded files plus the copy of the data.json file, so the expected number of files matches the actual number of downloaded files.

Testing non-CSV documents

All of the testing against Norfolk resulted in compressed CSV files; however, I also needed to test the ability to download non-CSV files. Kansas City, Missouri's data portal has an unusually large number of non-CSV data sets, such as PDFs, Word documents, and Excel documents.

I tested the function against their data portal. It downloaded PDFs, Word documents, Excel files, and other non-CSV files along with the CSV files.

However, I did encounter frequent network timeouts after approximately 80 items were downloaded. I believe this is a network limitation and not an issue with the function itself. While this may not be a bug, it may be a practical limit on exporting an entire portal from Socrata in one pass.
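A simple retry wrapper can mitigate transient timeouts like these. This is only a sketch and not part of the PR; `retry_download` and its arguments are hypothetical names:

```r
# Hypothetical helper (not part of this PR): retry a flaky download
# a few times, pausing between attempts, before giving up.
retry_download <- function(download_fn, attempts = 3, wait_seconds = 5) {
  for (i in seq_len(attempts)) {
    # Capture an error as a condition object instead of aborting
    result <- tryCatch(download_fn(), error = function(e) e)
    if (!inherits(result, "error")) {
      return(result)  # success
    }
    if (i < attempts) {
      Sys.sleep(wait_seconds)  # back off before the next attempt
    }
  }
  stop("download failed after ", attempts, " attempts: ",
       conditionMessage(result))
}
```

Wrapping each per-file download in something like this would let a long portal export survive intermittent network failures rather than stopping mid-run.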

Unit Testing

I have not written a unit test. Any realistic test would take too much time and bandwidth for a typical unit-testing suite: the smallest portal download, Norfolk, took over 30 minutes to complete.

In general, a recommended method for testing is to choose a reasonably small portal and do the following:

  1. Export all files from the portal.
  2. When finished, open the data.json file and count all of the entries with the following exceptions:
    * distribution/mediaType is blank
    * distribution/mediaType is text/html
    * distribution/downloadURL is blank
  3. Compare the count of downloaded files (excluding the data.json file) against the count from step (2).

Ideally, the portal being used to test contains CSV files as well as non-CSV files.
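The counting rule in step (2) can be sketched in R. This assumes data.json has already been parsed into a list with a `dataset` element (e.g., via `jsonlite::fromJSON(..., simplifyVector = FALSE)`), and for simplicity only inspects each entry's first distribution; the sample records below are illustrative, not real portal data:

```r
# Count the data.json entries that should produce a downloaded file:
# skip entries whose distribution mediaType is blank or text/html,
# or whose downloadURL is blank.
count_downloadable <- function(catalog) {
  is_downloadable <- function(entry) {
    dist  <- entry$distribution[[1]]  # simplification: first distribution only
    media <- dist$mediaType
    url   <- dist$downloadURL
    !is.null(media) && nzchar(media) && media != "text/html" &&
      !is.null(url) && nzchar(url)
  }
  sum(vapply(catalog$dataset, is_downloadable, logical(1)))
}

# Illustrative records mimicking a parsed data.json:
catalog <- list(dataset = list(
  list(distribution = list(list(mediaType = "text/csv",
                                downloadURL = "https://example.gov/a.csv"))),
  list(distribution = list(list(mediaType = "text/html",
                                downloadURL = "https://example.gov/page"))),
  list(distribution = list(list(mediaType = "application/pdf",
                                downloadURL = "")))
))
count_downloadable(catalog)  # only the CSV entry qualifies, so 1
```

The result should equal the number of files on disk, not counting the saved data.json.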

geneorama and others added 5 commits December 4, 2017 13:45
Save data.json to file system
------------------------------
A copy of the data.json file at the beginning of the download process is
saved alongside the actual downloaded data. Since `export.socrata()` uses
data.json as the index to download data, this will allow users to
cross-reference the downloaded data with other metadata associated with it
available through [Project Open Data](https://project-open-data.cio.gov).

Handle non-data file
---------------------
Socrata lists non-data files, such as Socrata Stories--HTML websites that
contain text but no machine-readable data--in the data.json file. This
causes errors when trying to download those sites because they do not have
a "distribution URL". While it's arguable that these "sites" should not be
included in the first place, the script now simply skips those files.

Since a copy of the data.json file is downloaded (see above), users will
have transparency into which URLs were not downloaded.
Socrata supports external links which direct to web pages (e.g., HTML).
These would cause an error when `export.socrata()` attempted to download
them. This fix will simply skip those files and proceed to the next file.
  * Ignores HTML files (e.g., Socrata Pages)
  * Ignores occasions when there isn't any data
  * Will download (uncompressed) PDF, Word, Excel, PowerPoint, and plain-text attachments.
Rebased branch onto the most recent `dev` branch and regenerated documentation.

Merge branch 'dev' into issue126

# Conflicts:
#	DESCRIPTION
#	R/RSocrata.R
@tomschenkjr tomschenkjr added this to the 1.8.0 milestone Jan 5, 2020
* Removed user-defined option for file output (not available yet)
* Clarified documentation where `export.socrata()` files will be located.
* Fixed incorrect date in `DESCRIPTION` file.
* Iterating build number.