
Draco compression of large mesh (350MB glb, 19.3 million faces) causes crash #521

Open · JulieWinchester opened this issue Feb 14, 2020 · 4 comments


@JulieWinchester

The attached mesh is a 350 megabyte GLB with 19.3 million faces and 9.6 million vertices.

Large mesh: https://drive.google.com/open?id=1-_R_TvorTrp_g_tVbi1wKGGuBmbxSlv3

gltf-pipeline has no difficulty converting this GLB to a glTF with separate textures (example 1), but trying to use gltf-pipeline to Draco-compress the GLB causes a crash (example 2). This seems to be due to a compilation issue with Google's draco3d NPM package, which I've documented previously.

Example 1: Working

$ gltf-pipeline -i mesh.glb -o mesh.gltf -s
Total: 2190.989ms

Example 2: Not Working

$ gltf-pipeline -i mesh.glb -o mesh-draco.glb -d
Cannot enlarge memory arrays. Either (1) compile with  -s TOTAL_MEMORY=X  with X higher than the current value 1811939328, (2) compile with  -s ALLOW_MEMORY_GROWTH=1  which allows increasing the size at runtime but prevents some optimizations, (3) set Module.TOTAL_MEMORY to a higher value before the program runs, or (4) if you want malloc to return NULL (0) instead of this abort, compile with  -s ABORTING_MALLOC=0 
Cannot enlarge memory arrays. Either (1) compile with  -s TOTAL_MEMORY=X  with X higher than the current value 1811939328, (2) compile with  -s ALLOW_MEMORY_GROWTH=1  which allows increasing the size at runtime but prevents some optimizations, (3) set Module.TOTAL_MEMORY to a higher value before the program runs, or (4) if you want malloc to return NULL (0) instead of this abort, compile with  -s ABORTING_MALLOC=0 
abort("Cannot enlarge memory arrays. Either (1) compile with  -s TOTAL_MEMORY=X  with X higher than the current value 1811939328, (2) compile with  -s ALLOW_MEMORY_GROWTH=1  which allows increasing the size at runtime but prevents some optimizations, (3) set Module.TOTAL_MEMORY to a higher value before the program runs, or (4) if you want malloc to return NULL (0) instead of this abort, compile with  -s ABORTING_MALLOC=0 "). Build with -s ASSERTIONS=1 for more info.

Perhaps this is out of scope for gltf-pipeline, since it is likely a draco3d issue. If so, are there any alternative tools for Draco-compressing a GLB/glTF that could handle a mesh this large? The mesh does not seem so massive that producing a Draco-compressed version should be impossible.

@lilleyse

lilleyse commented Mar 2, 2020

@JulieWinchester I see that as well... one idea would be for gltf-pipeline to split up large primitives before encoding them, but I'm not sure when we'd try that.
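
To make the splitting idea concrete, here is a minimal NumPy sketch of chunking a large indexed triangle mesh so each piece stays under a vertex budget before encoding; the function name, budget, and array layout are illustrative assumptions rather than gltf-pipeline internals.

import numpy as np

def split_primitive(positions, indices, max_vertices=8_000_000):
    # positions: (N, 3) float32 array of vertex positions
    # indices: flat uint32 array of triangle corner indices (length 3 * T)
    # Returns a list of (chunk_positions, chunk_indices) pairs, each of which
    # references at most max_vertices vertices and could be encoded separately.
    tris = indices.reshape(-1, 3)
    # In the worst case every corner in a chunk is a distinct vertex, so this
    # triangle count per chunk can never exceed the vertex budget.
    tris_per_chunk = max(1, max_vertices // 3)
    chunks = []
    for start in range(0, len(tris), tris_per_chunk):
        chunk = tris[start:start + tris_per_chunk]
        # Remap the global vertex indices used by this chunk to a compact
        # 0..K-1 range and keep only the positions the chunk references.
        used, remapped = np.unique(chunk, return_inverse=True)
        chunks.append((positions[used],
                       remapped.reshape(-1, 3).astype(np.uint32)))
    return chunks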

I'm not aware of other tools that do Draco encoding. It looks like tinygltf may support it in the future: syoyo/tinygltf#207. That GitHub issue links to Compressonator, which may have support too, but I haven't confirmed it.

@ryanfb

ryanfb commented Apr 9, 2020

In case it's useful to anyone else, I just ran a binary search to find the exact spot where Draco compression fails with gltf-pipeline on a simplified version of the original mesh attached to this issue.

  • 8204187 vertices, 16425202 faces - works
  • 8204188 vertices, 16425203 faces - fails

GLBs for both here: https://www.dropbox.com/s/wk3loxmuqyhsch2/gltf-pipeline-issue-521.zip?dl=0

Test run on Ubuntu 19.10, gltf-pipeline 2.1.9, node 10.15.2. Confirmed on macOS 10.14.6, gltf-pipeline 2.1.9, node 13.12.0.

@ryanfb

ryanfb commented Apr 10, 2020

As a further test of the approximate upper limit on mesh size, I ran a binary search generating and compressing fractal-landscape meshes with trimesh2's mesh_make command:

  • 8398404 vertices, 16785218 faces - works
  • 8404201 vertices, 16796808 faces - fails

GLBs here: https://www.dropbox.com/s/jq158g0y5wa1aoe/gltf-pipeline-issue-521-frac.zip?dl=0

So it seems like meshes around 8M vertices / 16M faces can be safely compressed, but anything around 9M vertices / 17M faces or larger is likely to fail.
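
The bisection can be driven by a small script along these lines; make_glb is a placeholder for generating a fractal-landscape mesh of a given size with mesh_make and converting it to GLB, and a non-zero exit status from gltf-pipeline is treated as a failed encode.

import subprocess

def draco_compresses(glb_path):
    # Treat a non-zero exit status from `gltf-pipeline -d` as a failed encode.
    result = subprocess.run(
        ["gltf-pipeline", "-i", glb_path, "-o", "out-draco.glb", "-d"],
        capture_output=True,
    )
    return result.returncode == 0

def bisect_limit(make_glb, lo, hi):
    # make_glb(n) must write a GLB with roughly n vertices and return its path;
    # lo is a known-good vertex count and hi a known-bad one.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if draco_compresses(make_glb(mid)):
            lo = mid
        else:
            hi = mid
    return lo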

Another related issue to watch is assimp's proposed support for Draco encoding: assimp/assimp#2195

Of course, there's also the question of what the upper bound will be for JS decoders reading large Draco-compressed glTF files produced outside of gltf-pipeline.

@ryanfb

ryanfb commented May 21, 2020

As another note for anyone needing Draco compression of arbitrarily large meshes: the Blender glTF/GLB import/export add-on applies Draco compression through a natively compiled version of the library, and as a result does not hit these memory limits. So it's possible to use a Blender Python script (Python docs here) that imports a mesh and exports a Draco-compressed GLB: blender_compress_mesh.py
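
A rough sketch of that approach (run with blender --background --python script.py; the exporter option names follow the glTF add-on's Python API as I understand it and may differ between Blender versions):

import bpy

# Start from an empty scene so only the imported mesh gets exported.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the source mesh.
bpy.ops.import_scene.gltf(filepath="mesh.glb")

# Export as GLB with Draco compression enabled in the glTF add-on.
# Option names may vary between Blender / glTF add-on versions.
bpy.ops.export_scene.gltf(
    filepath="mesh-draco.glb",
    export_format="GLB",
    export_draco_mesh_compression_enable=True,
    export_draco_mesh_compression_level=6,
)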

However, a further wrinkle is that even with what should be exactly the same Draco compression settings, meshes compressed through Blender are significantly larger than those produced by gltf-pipeline. For example, the uncompressed working GLB from this comment is 283 MB; the Blender script above compresses it to 241 MB, while gltf-pipeline compresses it to 14 MB.

The final comment on this Blender issue notes: "gltf-pipeline seems to actually reduce the attribute count, instead of just compressing the data, yielding in smaller file sizes."

It might be worth investigating this discrepancy and opening another issue if it's not the intended behavior.
