gltfpack: Deindex meshes with abnormally large vertex accessors #885

Merged
merged 4 commits into master from gltf-shacc on May 12, 2025

Conversation

@zeux (Owner) commented May 11, 2025

If the index buffer is much smaller than vertex accessors, and vertex accessors
are shared, we end up re-unpacking the accessors repeatedly and producing huge
meshes that are inefficient for the rest of the processing pipeline.

This change detects these cases and switches to reading the individual elements
according to the index buffer, producing an unindexed mesh; the rest of the
processing will re-index it again.
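
A minimal sketch of the idea (hypothetical helper names and threshold, not gltfpack's actual code), using cgltf's per-element read functions:

```cpp
#include <vector>
#include "cgltf.h"

// Hypothetical attribute type mirroring gltfpack's Attr (4 floats per vertex).
struct Attr { float f[4]; };

// Heuristic: the index buffer is much smaller than the shared vertex
// accessors, so unpacking the full accessors would waste time and memory.
static bool shouldDeindex(const cgltf_accessor* indices, const cgltf_accessor* attr)
{
    return indices && attr && indices->count < attr->count / 4; // threshold illustrative
}

// Read only the referenced elements, in index order, producing an unindexed
// stream; the later processing pipeline re-indexes it anyway.
static void readUnindexed(const cgltf_accessor* indices, const cgltf_accessor* attr, std::vector<Attr>& out)
{
    out.resize(indices->count);
    for (cgltf_size i = 0; i < indices->count; ++i)
    {
        cgltf_size index = cgltf_accessor_read_index(indices, i);
        cgltf_accessor_read_float(attr, index, out[i].f, 4);
    }
}
```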

For some extreme cases this may result in significant improvements in parsing and
processing time; the mesh from #884 could not be converted before this change,
and with this change it takes just 0.3s to do so.

Before settling on this approach, I also tried trimming the index buffers to their
min..max range and extracting the associated subrange from the accessors; this
worked, but it only got the problematic model down to 30s of conversion time
because many indices spanned a large range. Doing an early reindexing in that case
got it down to ~3s, but that still felt too slow and too complicated, whereas this
approach solves the issue generically.
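
For reference, a sketch of that rejected alternative (illustrative code, not from the PR): compute the min..max index range, unpack only that accessor subrange, and rebase the indices. When indices span nearly the whole accessor, the subrange is almost as large as the full data, which limits the savings:

```cpp
#include <algorithm>
#include <vector>

// Compute the index range so only accessor elements [min_index, max_index]
// need unpacking; indices are then rebased by subtracting min_index.
static void indexRange(const std::vector<unsigned int>& indices, unsigned int& min_index, unsigned int& max_index)
{
    min_index = ~0u;
    max_index = 0;
    for (unsigned int i : indices)
    {
        min_index = std::min(min_index, i);
        max_index = std::max(max_index, i);
    }
}
```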

This relies on support for sparse accessors in the cgltf_accessor_read functions,
which also fixes the interaction between sparse accessors and parsing files with the
GPU instancing extension; that patch has been submitted separately as jkuhlmann/cgltf#273.
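
To illustrate what the cgltf patch guarantees (a hypothetical consistency check, not code from this PR): per-element reads now apply sparse substitutions, so they agree with cgltf_accessor_unpack_floats, which already handled sparse data. gltfpack reads instancing attributes through these per-element functions, which is how the two issues interact.

```cpp
#include <vector>
#include "cgltf.h"

// Returns true if per-element reads match a bulk unpack; before the patch,
// cgltf_accessor_read_float ignored sparse substitutions and this could fail.
static bool sparseReadsAgree(const cgltf_accessor* accessor)
{
    cgltf_size components = cgltf_num_components(accessor->type);
    std::vector<cgltf_float> unpacked(accessor->count * components);
    cgltf_accessor_unpack_floats(accessor, unpacked.data(), unpacked.size());

    for (cgltf_size i = 0; i < accessor->count; ++i)
    {
        cgltf_float element[16]; // large enough for mat4
        cgltf_accessor_read_float(accessor, i, element, components);
        for (cgltf_size c = 0; c < components; ++c)
            if (element[c] != unpacked[i * components + c])
                return false;
    }
    return true;
}
```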

Fixes #884.

zeux added 4 commits May 11, 2025 19:54
Previously, we supported sparse accessors in cgltf_accessor_unpack_floats but not
in the various read_ functions. As a result, an application using the read_
functions for specific cases would work for most glTF files but fail on files
with sparse data.

This change supports sparse indices by using a binary search to find the index;
the keys in sparse accessors are guaranteed to increase monotonically. This is
still slower than the single linear pass that unpack_floats does, but it works
correctly with acceptable performance.
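
A sketch of that lookup (not cgltf's exact code): since sparse indices are sorted, a lower-bound binary search determines in O(log n) whether element `index` is overridden by the sparse block or served from the base data.

```cpp
#include <cstddef>

// Returns the position of `index` within the sorted sparse index list, or
// (size_t)-1 if the element is not overridden and the base data applies.
static size_t findSparseOverride(const unsigned int* sparse_indices, size_t sparse_count, unsigned int index)
{
    size_t lo = 0, hi = sparse_count;
    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if (sparse_indices[mid] < index)
            lo = mid + 1;
        else
            hi = mid;
    }
    return (lo < sparse_count && sparse_indices[lo] == index) ? lo : (size_t)-1;
}
```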
The last parameter here is the number of floats read per element. It happens to
be 4, as Attr has 4 floats, but it should be specified explicitly.
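
In other words (a minimal illustration, assuming gltfpack's Attr type):

```cpp
#include "cgltf.h"

struct Attr { float f[4]; }; // gltfpack's attribute type holds 4 floats

static void readAttr(const cgltf_accessor* accessor, cgltf_size i, Attr& out)
{
    // The last argument is the number of floats read per element; it is 4
    // because Attr has 4 floats, not because of an unrelated coincidence.
    cgltf_accessor_read_float(accessor, i, out.f, 4);
}
```
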
…e fly

If the index buffer is much smaller than vertex accessors, and vertex accessors
are shared, we end up re-unpacking the accessors repeatedly and producing huge
meshes that are inefficient for the rest of the processing pipeline.

This change detects these cases and switches to reading the individual elements
according to the index buffer, producing an unindexed mesh; the rest of the
processing will re-index it again.
Point clouds use empty indices, so indices.size() would be less than the vertex
count. The code that would execute in that case is a no-op, as it would leave
sparse empty, but if the mechanism for enabling sparse mode changes in the future
this could regress, so add a condition to make the intent explicit.
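
A sketch of the explicit guard (names and threshold are hypothetical, not gltfpack's exact code): point clouds carry no index data, so an empty index list would satisfy the "indices much smaller than vertices" heuristic vacuously and must be excluded up front.

```cpp
#include <cstddef>
#include <vector>

// Deindex through the index buffer only when indices actually exist; point
// clouds have empty indices and must never take this path.
static bool useSparseIndexReads(const std::vector<unsigned int>& indices, size_t vertex_count)
{
    return !indices.empty() && indices.size() < vertex_count / 4; // threshold illustrative
}
```
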
@zeux merged commit 5cb7f8f into master on May 12, 2025
13 checks passed
@zeux deleted the gltf-shacc branch on May 12, 2025 at 19:39

Successfully merging this pull request may close these issues.

gltfpack needs too much memory/time to parse models with redundant accessors