Draco compression of large mesh (350MB glb, 19.3 million faces) causes crash #521
Comments
@JuliaWinchester I see that as well... one idea would be for gltf-pipeline to split up large primitives before encoding them, but I'm not sure when we'd try that. I'm not aware of other tools that do Draco encoding. It looks like tinygltf may support it in the future: syoyo/tinygltf#207. That GitHub issue links to the Compressonator, which may have support too, but I haven't confirmed.
In case it's useful to anyone else, I just ran a binary search to find the exact spot where Draco compression fails with gltf-pipeline on a simplified version of the original mesh attached to this issue.
GLBs for both are here: https://www.dropbox.com/s/wk3loxmuqyhsch2/gltf-pipeline-issue-521.zip?dl=0 (tested on Ubuntu 19.10 with gltf-pipeline 2.1.9 and node 10.15.2; confirmed on macOS 10.14.6 with gltf-pipeline 2.1.9 and node 13.12.0).
As a further test to see where the approximate upper limit on mesh size lies, I used a binary search to generate/compress fractal landscape meshes using trimesh2's mesh-generation utilities.
GLBs here: https://www.dropbox.com/s/jq158g0y5wa1aoe/gltf-pipeline-issue-521-frac.zip?dl=0
So it seems that around 8M vertices and 16M faces can be compressed safely, but anything at roughly 9M vertices / 17M faces or more is likely to fail. Another related issue to watch is assimp's proposed support for Draco compression: assimp/assimp#2195. Of course, there's also the question of what the upper bound will be for JS decoders reading large Draco-compressed glTF files produced outside of gltf-pipeline.
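For anyone who wants to run a similar sweep, here is a minimal sketch of the bisection step. It assumes a directory of pre-generated GLBs whose file names sort from smallest to largest, and the gltf-pipeline CLI installed globally; the file and directory names are made up for illustration, not the ones used above.

```python
# Hedged sketch, not the exact procedure used above: bisect over a set of
# pre-generated GLBs (file names assumed to sort from smallest to largest)
# to find the largest mesh that gltf-pipeline can Draco-compress without crashing.
import subprocess
from pathlib import Path

def compresses_ok(glb_path: Path) -> bool:
    """Return True if `gltf-pipeline -d` exits cleanly for this file."""
    out_path = glb_path.with_name(glb_path.stem + "_draco.glb")
    result = subprocess.run(
        ["gltf-pipeline", "-i", str(glb_path), "-o", str(out_path), "-d"],
        capture_output=True,
    )
    return result.returncode == 0

# Hypothetical directory of fractal-landscape GLBs, smallest to largest.
candidates = sorted(Path("meshes").glob("frac_*.glb"))

lo, hi = 0, len(candidates) - 1
largest_ok = None
while lo <= hi:
    mid = (lo + hi) // 2
    if compresses_ok(candidates[mid]):
        largest_ok = candidates[mid]
        lo = mid + 1   # this size compresses; try something bigger
    else:
        hi = mid - 1   # this size crashes; try something smaller

print("Largest mesh that compressed successfully:", largest_ok)
```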
As another note for anyone needing Draco compression of arbitrarily large meshes: the Blender glTF/GLB import/export add-on applies Draco compression through a compiled version of the library, and as a result does not hit these memory limits. So it's possible to use a Blender Python script (Python docs here) that imports a mesh and exports a Draco-compressed GLB (see the sketch at the end of this comment).
However, a further wrinkle is that even with what should be the exact same Draco compression settings, meshes compressed this way are significantly larger than those produced by gltf-pipeline. The final comment on this Blender issue notes: "gltf-pipeline seems to actually reduce the attribute count, instead of just compressing the data, yielding in smaller file sizes." It might be worth investigating this discrepancy and opening another issue if it's not the intended behavior.
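For reference, here is a minimal sketch of that kind of Blender script, run headless with `blender --background --python compress.py`. It assumes a reasonably recent Blender with the bundled glTF add-on; the exporter option names may differ slightly between versions, and the file paths are placeholders.

```python
# Minimal sketch of a headless Blender script that imports a GLB and re-exports
# it with Draco compression enabled (paths are placeholders).
import bpy

# Start from an empty scene so only the imported mesh ends up in the export.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the large mesh with the bundled glTF importer.
bpy.ops.import_scene.gltf(filepath="/path/to/large_mesh.glb")

# Export as GLB with Draco compression; Blender uses a compiled Draco encoder,
# so it avoids the JS/WASM memory limits discussed above.
bpy.ops.export_scene.gltf(
    filepath="/path/to/large_mesh_draco.glb",
    export_format="GLB",
    export_draco_mesh_compression_enable=True,
    export_draco_mesh_compression_level=6,
)
```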
The attached mesh is a 350 megabyte GLB with 19.3 million faces and 9.6 million vertices.
Large mesh: https://drive.google.com/open?id=1-_R_TvorTrp_g_tVbi1wKGGuBmbxSlv3
gltf-pipeline has no difficulty converting this GLB to a glTF with separated textures (Example 1). But trying to use gltf-pipeline to Draco-compress the GLB causes a crash (Example 2). This seems to be due to a compilation issue with Google's draco3d NPM package, which I've documented previously.
Example 1: Working
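A rough sketch of the kind of invocation involved, driving the gltf-pipeline CLI from Python with placeholder file names (the exact command used may have differed):

```python
# Rough sketch (placeholder paths): convert the GLB to glTF with textures written
# out separately, via gltf-pipeline's -t/--separateTextures flag. This succeeds
# even for the large mesh attached above.
import subprocess

subprocess.run(
    ["gltf-pipeline", "-i", "large_mesh.glb", "-o", "large_mesh.gltf", "-t"],
    check=True,
)
```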
Example 2: Not Working
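A rough sketch of the failing case, again with placeholder file names and driving the CLI from Python:

```python
# Rough sketch (placeholder paths): Draco-compress the same GLB. With a mesh this
# large the underlying draco3d encoder aborts, so the process exits non-zero and
# no output file is written.
import subprocess

result = subprocess.run(
    ["gltf-pipeline", "-i", "large_mesh.glb", "-o", "large_mesh_draco.glb", "-d"],
    capture_output=True,
    text=True,
)
print(result.returncode)  # non-zero when the crash described above occurs
print(result.stderr)      # abort/stack-trace output from the encoder
```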
Perhaps this is out of scope for gltf-pipeline, since it's likely a draco3d issue. If that's the case, are there any alternative tools for Draco-compressing a GLB/glTF that could handle a mesh this large? The mesh does not seem so massive that producing a Draco-compressed version of it should be impossible.