These integration tests verify the correctness and consistency of maplibre-gl-js and maplibre-gl-native rendering.
Tests are contained in a directory tree, generally organized by style specification property: `background-color`, `line-width`, etc., with a second level of directories below that for individual tests. For example, the test for specifying a literal `circle-radius` value lives in `test/integration/render/tests/circle-radius/literal/`.
Within a leaf directory is a `style.json` file (e.g. `circle-radius/literal/style.json`), which contains the minimal style needed for the given test case. The style can specify the map size, center, bearing, and pitch, as well as additional test metadata (e.g. output image dimensions).
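For illustration, a minimal `style.json` might look roughly like the sketch below. It is modeled loosely on existing tests; the exact option names under `metadata.test` (e.g. `width`, `height`) are assumptions here and should be checked against real tests in the repo:

```json
{
  "version": 8,
  "metadata": {
    "test": {
      "width": 64,
      "height": 64
    }
  },
  "center": [0, 0],
  "zoom": 0,
  "sources": {},
  "layers": [
    {
      "id": "background",
      "type": "background",
      "paint": {
        "background-color": "red"
      }
    }
  ]
}
```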
The expected output for a given test case is in `expected.png`, e.g. `circle-radius/literal/expected.png`. There can be multiple files with the `expected` prefix, since the output can vary slightly from platform to platform.
Supporting files -- glyphs, sprites, and tiles -- live in their own respective subdirectories of the `test/integration/assets` directory. The test harness sets up the environment such that requests for these resources are directed to the correct location. For example, images in the `assets/tiles` subdirectory can be referenced using `"local://tiles/{z}-{x}-{y}.satellite.png"` in the `style.json` file.
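As a sketch, a raster source in a test style could reference those tiles like this (the source name `satellite` and the `tileSize` value are illustrative, not taken from an existing test):

```json
{
  "sources": {
    "satellite": {
      "type": "raster",
      "tiles": ["local://tiles/{z}-{x}-{y}.satellite.png"],
      "tileSize": 256
    }
  }
}
```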
The contents of vector tile fixtures can be read using the `vt2geojson` tool (see below).
All tests are run for every PR. If you're not sure yet whether the tests are good, you can use a Draft PR to indicate that the work is still in progress. Each job, or group of tests, will create an artifact if any of its tests fail. The artifacts can be found at the bottom of the job's summary.
Download the appropriate artifact as a zip file, open it, and view the `results.html` file it contains.
The "Actual" image of a failed test can be saved and used as the new "Expected" image.
To run the render tests:
npm run test-render
To run the integration tests (except the render tests):
npm run test-integration
This includes the browser tests.
To run the build tests:
npm run test-build
To run a subset of tests, you can use Vitest filters, e.g.
npm run test-integration -- browser
Additionally, it may be helpful to use a visual frontend (Vitest UI). Note that since the render tests do not use Vitest, they will still have to be run from the command line. The UI can be started by replacing `run` with `--ui` in `package.json`:
- "test-unit": "vitest run --config vitest.config.unit.ts",
+ "test-unit": "vitest --ui --config vitest.config.unit.ts",
To run a subset of tests or an individual test, you can pass a specific subdirectory to the `test-render` script. For example, to run all the tests for a given property, e.g. `circle-radius`:
npm run test-render -- circle-radius
...
* passed circle-radius/antimeridian
* passed circle-radius/default
* passed circle-radius/function
* passed circle-radius/literal
* passed circle-radius/property-function
* passed circle-radius/zoom-and-property-function
6 passed (100.0%)
Results at: ./test/integration/render-tests/index.html
Done in 2.71s.
Or to run a single test:
npm run test-render -- circle-radius/literal
...
* passed circle-radius/literal
1 passed (100.0%)
Results at: ./test/integration/render-tests/index.html
Done in 2.32s.
Render tests are executed in a browser, and by default console messages are hidden. If you need to see them, pass the `--debug` parameter:
npm run test-render -- raster-masking/overlapping-zoom --debug
During a render test run, the test harness uses puppeteer to drive a real browser and create an `actual.png` image from the given `style.json`, and then uses pixelmatch to compare that image to `expected.png`, generating a `diff.png` that highlights any mismatching pixels in red.
By default, render tests generate reports in the `./test/integration/render/` directory:
npm run test-render
...
1211 passed (99.8%)
2 failed (0.2%)
Results logged to './test/integration/render/results.html'
...you can view the results graphically by opening the `results.html` file generated by the harness:
open ./test/integration/render/results.html
If you want to skip the report, pass the `--skip-report` parameter:
npm run test-render -- circle-radius/literal --skip-report
Note that CI runs the render tests. If they fail, the `report.html` is uploaded as an artifact. This file can be downloaded and opened in the browser; by right-clicking and saving the "Actual" image, the actual CI render test result can be stored as the expected image.
To update the expected images manually, you can use the following commands. On Linux:
xvfb-run -a UPDATE=true npm run test-render
On Mac:
UPDATE=true npm run test-render
Or on Windows with PowerShell:
$env:UPDATE=$true; npm run test-render
In `test/integration/browser/browser.test.ts`, a web server is automatically started to expose static assets from the integration folder. A similar server can be started manually with `npm run start`.
We currently run each test in a new tab. Alternatively, we might gain some speed by clearing the WebGL context instead and running everything in one tab:
delete map.painter.context.gl;
The output for each test is a true/false value indicating whether the expected and actual output are deeply equal. To get more useful test output, we can use:
generateDiffLog(fixture.expected, actual);
Query tests can be run in the browser. The server that serves the test page and test fixtures starts when you run:
npm run start
A filter can be specified by using the `filter` query param in the URL. For example, adding `?filter=circle-pitch` to the end of the URL will only run the tests that contain `circle-pitch` in the name.
The terminal window can get very noisy with both the build and test servers running in the same session, so the server uses platform notifications to signal when the build has finished. If this behaviour is annoying, it can be disabled by setting the following environment variable:
DISABLE_BUILD_NOTIFICATIONS=true
Note: Expected results are always generated with the js implementation. This is merely for consistency and does not imply that in the event of a rendering discrepancy, the js implementation is always correct.
To add a new render test:

- Create a new directory `test/integration/render/tests/<property-name>/<new-test-name>`
- Create a new `style.json` file within that directory, specifying the map to load. Feel free to copy & modify one of the existing `style.json` files from the `render/tests` subdirectories. In this file, you can add additional information to describe the test and expected outcomes using the `description` metadata field.
- Generate an `expected.png` image from the given style by running the new test with the `UPDATE` flag enabled: `UPDATE=1 npm run test-render <property-name>/<new-test-name>`. The test will appear to fail, but you'll now see a new `expected.png` in the test directory.
- Manually inspect `expected.png` to verify it looks as expected, and optionally run the test again without the update flag (`npm run test-render <property-name>/<new-test-name>`) to watch it pass (enjoy that dopamine kick!)
- Commit the new `style.json` and `expected.png` 🚀
You can update the expected results of the query tests by running them with the `UPDATE` flag enabled, for example on Linux:
UPDATE=true npm run test-query
Check carefully that all changes are intended.
Read the contents of an entire vector tile
npx vt2geojson -z 14 -y 8803 -x 5374 test/integration/assets/tiles/14-8803-5374.mvt
Read the contents of a particular layer in a vector tile
npx vt2geojson --layer poi_label -z 14 -y 8803 -x 5374 test/integration/assets/tiles/14-8803-5374.mvt