Extended test concept? #116

Closed
bluenote10 opened this issue Jan 11, 2020 · 2 comments

@bluenote10
Collaborator

Before looking into #114 I wanted to make sure that I cannot easily introduce regressions with upcoming contributions. I initially thought that the files test/fixtures/issueXXX.geojson were regression tests for the corresponding issues, but it looks like they are not tested at all. Then I found the files in featureTypes, which looked like generic test cases at first, but if I understand correctly they are only supposed to operate against the same clipping polygon.

What would you think of introducing an extended concept for quickly adding generic test cases without having to touch code? A test case could look like:

{
  "type": "FeatureCollection",
  "features": [
  {
    "type": "Feature",
    "properties": {},
    "geometry": {
      "type": "Polygon",
      "coordinates": [[ ... ]]
    }
  },
  {
    "type": "Feature",
    "properties": {},
    "geometry": {
      "type": "Polygon",
      "coordinates": [[ ... ]]
    }
  },
  {
    "type": "Feature",
    "properties": {
      "operation": "union"
    },
    "geometry": {
      "type": "MultiPolygon",
      "coordinates": [[[ ... ]]]
    }
  }]
}

With the convention:

  • The first two features are the subject and clipping polygons.
  • All remaining features are expected result polygons, with feature.properties.operation encoding which operation each one is the expected result of.

The test cases would be auto-discovered so that no code has to be added when adding tests. The workflow for adding a test would be to generate the file according to the above pattern, leaving the result coordinates empty. The initial values of the result could be obtained by running the test and grabbing the expected/actual output. Alternatively, the test runner could write the results back to the files (while still reporting a failure)?

This only leaves the question of how to assess the results. Personally I'm using a Node.js CLI + Python solution, which I could contribute, but it is fairly hacky. Visualizing GeoJSON should be possible online as in the demo, but it also imposes a limitation on the test cases: test cases are not necessarily in WGS84 latitude/longitude. Cases that are specific to other value ranges would actually be invalid GeoJSON, and standard GeoJSON visualization would fail. Is this reason enough not to go for GeoJSON and to favor our own simplified JSON format?

It would be interesting to get your opinion on that. If it makes sense to you, I would prepare a PR.

@rowanwins
Collaborator

Hey @bluenote10

Yeah, the test suite has evolved a bit over time. Your proposal sounds reasonable.

I'd suggest perhaps approaching it like this:

  • a new file called issues.test.js
  • use some filename-matching module (eg this one is popular) to look for any files named fixtures/issue*
  • Your proposal of having feature 1 and feature 2 as inputs and the result in feature 3 sounds good. In terms of writing the outputs to file vs reading them, you can use a Node environment variable like
        if (process.env.REGEN) write.sync(directories.out + filename, results);
        t.deepEquals(results, load.sync(directories.out + filename), name);

then running the command like REGEN=true npm run test:issues

The existing test demo site can visualise coords that don't fit in the normal geographic range, as it uses L.CRS.Simple

Does that make sense?

@bluenote10
Collaborator Author

Was added in #117
