demo: Initial hierarchical clustered simplification demo #760
Commits on Sep 3, 2024
-
demo: Add nanite.cpp along with -n (make nanite)
This is a stub that will be expanded into a demo of Nanite-like (hierarchical clustered level of detail) processing. It is needed both as an example of how to implement such a pipeline using best practices and as a testing harness; right now meshoptimizer has enough functionality to implement *a* pipeline, but for best performance some algorithms currently need to be swapped out. Long term, the goal is for this to be close to optimal while using just meshopt_ functions.
(commit 9366cd6)
-
demo: Add METIS as an optional dependency
For the initial Nanite implementation to work well, we will need METIS for graph partitioning. Eventually we will implement enough algorithms in meshoptimizer itself to not need this.
(commit 1f34f1d)
Commits on Sep 4, 2024
-
demo: Implement initial hierarchical clustering
This should be a more or less complete, if basic, merge-simplify-split pipeline for the cluster DAG build, with the exception of tracking the actual DAG data (errors and parent links). The caveat is that to merge clusters we simply partition them sequentially; this is suboptimal and is where METIS can help. We also use meshopt_buildMeshlets, which can sometimes leave unwanted gaps, but we don't have a good way to quantify these issues yet without collecting stats.
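The sequential partitioning caveat above can be sketched as follows; the group size and the function shape are illustrative assumptions rather than the demo's actual code:

```cpp
#include <cstddef>
#include <vector>

// Assumed merge width; not the demo's actual constant.
const size_t kGroupSize = 4;

// Sequential grouping: take clusters in index order, kGroupSize at a time.
// This is the suboptimal placeholder that a graph partitioner (METIS) is
// meant to replace; it ignores spatial/topological adjacency entirely.
static std::vector<std::vector<size_t> > partitionSequential(size_t cluster_count)
{
	std::vector<std::vector<size_t> > groups;

	for (size_t i = 0; i < cluster_count; i += kGroupSize)
	{
		std::vector<size_t> group;
		for (size_t j = i; j < cluster_count && j < i + kGroupSize; ++j)
			group.push_back(j);
		groups.push_back(group);
	}

	return groups;
}
```

Because adjacency is ignored, clusters from unrelated parts of the mesh can end up in one merge group, which hurts subsequent simplification; this is exactly the gap the METIS commits below address.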
(commit f1b2bc7)
-
demo: Expand cluster statistics to have some basic numbers
We can now compare things like the total simplified triangle count, assuming every cluster goes all the way down to the lowest LOD, to judge the quality of simplification, as well as the number of stuck clusters at every level. config=trace now also prints just the decisions about stuck clusters, which are the key factor for efficiency at the bottom level. It's likely that we can adjust the process to handle this more gracefully in certain cases.
(commit f54bb44)
-
demo: Extract cluster size into a tunable constant
Nanite uses 128, so for now let's stick with that; the max vertex count will need to be refined later, as will the various thresholds associated with merge/simplify.
(commit 0d2053d)
-
demo: Rework cluster processing to work with stable cluster indices
This is required to properly compute and update LOD information; additionally, it reduces the amount of data copying we have to do for indices and will make it easier to reintroduce meshlet-local indexing in the future. As part of this, simplify() now returns indices instead of a cluster, as it would not make sense to compute meshlet-local indices there.
(commit 48f18d1)
-
demo: Implement cluster LOD bounds tracking
We now compute LOD bounds for each cluster as well as the parent information that propagates through the DAG, which should make it possible to select a DAG cut based on bounds alone. This also makes it possible to compute the lowest LOD (as well as any other LOD!) from cluster data alone, without relying on the stuck_triangles diagnostics, which have been reworked to just analyze the current LOD level.
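The selection this enables can be sketched as follows; the struct layout and function names are assumptions, but the rule itself follows from the commit description: a cluster belongs to the DAG cut exactly when its own error passes the threshold and its parent's does not, so monotonic errors guarantee one pick per leaf-to-root path.

```cpp
// Assumed per-cluster LOD data: a bounding sphere plus simplification error.
struct LODBounds
{
	float center[3];
	float radius;
	float error;
};

// A cluster is part of the cut if its error is acceptable but its parent's
// is not; root clusters use an "infinite error" parent so the coarsest
// level is always selectable.
static bool inCut(float clusterError, float parentError, float threshold)
{
	return clusterError <= threshold && parentError > threshold;
}
```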
(commit 440b084)
-
demo: Implement debug LOD cut output
To make it easier to understand the results, in addition to numerical stats for the DAG, we can now output a simplified mesh by computing the error for each cluster from a viewpoint and using the hierarchical information to effectively select LODs at every level. We always do this to output a number, but can also save the .obj to stderr for subsequent external visualization.
(commit cedaf9a)
-
demo: Add DAG error validation to check monotonicity
For correct hierarchical LOD selection we need to maintain bounds monotonicity: any parent cluster needs to have an error >= that of any child cluster from any viewpoint. We don't currently get this right because the merged sphere might not cover all child spheres; for now we print an error when this happens, but this code might be removed in the future.
(commit 1059fba)
-
demo: Instead of using a precise merged bounds, merge bounds manually
This falls out of the monotonicity requirement: it is not enough to make LODBounds::error monotonic; the real requirement is that for any viewpoint, boundsError is monotonic through the DAG. To achieve that, we need to make sure that the bounding sphere of any parent cluster contains the bounding spheres of its child clusters, which may not hold if the parent sphere is computed precisely from vertex data. Fixing this, and fixing boundsError to return FLT_MAX when the viewpoint is inside the sphere, makes the DAG checks pass, so they can now use assertions instead of logs.
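A self-contained sketch of the two ideas in this commit, conservative sphere merging and an interior-safe boundsError; the containment invariant and the FLT_MAX case come from the commit description, while the exact error scaling used below is an assumption:

```cpp
#include <cfloat>
#include <cmath>

struct Sphere
{
	float x, y, z, r;
};

// Smallest sphere guaranteed to contain both input spheres. Unlike a sphere
// fit precisely to the merged vertex data, this preserves the invariant that
// a parent sphere contains all child spheres.
static Sphere mergeSpheres(const Sphere& a, const Sphere& b)
{
	float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
	float d = sqrtf(dx * dx + dy * dy + dz * dz);

	if (d + b.r <= a.r)
		return a; // b is inside a
	if (d + a.r <= b.r)
		return b; // a is inside b

	float r = (d + a.r + b.r) * 0.5f;
	float k = (r - a.r) / d; // shift the center towards b

	Sphere result = {a.x + dx * k, a.y + dy * k, a.z + dz * k, r};
	return result;
}

// Viewpoint-dependent error: scale the object-space error by the inverse
// distance to the sphere surface, returning FLT_MAX when the viewpoint is
// inside the sphere so the bound stays conservative.
static float boundsError(const Sphere& s, float error, float px, float py, float pz)
{
	float dx = px - s.x, dy = py - s.y, dz = pz - s.z;
	float d = sqrtf(dx * dx + dy * dy + dz * dz);

	return d <= s.r ? FLT_MAX : error / (d - s.r);
}
```

Note that mergeSpheres never divides by zero: when the centers coincide, one sphere necessarily contains the other, so an early return fires first.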
(commit 09c270c)
Commits on Sep 5, 2024
-
demo: Implement initial version of Metis partitioner
We use a k-way partitioning scheme and ask for ~c/4 partitions in the hope that it groups clusters reasonably well; for now we use the number of shared edges as the connection weight, although the number of shared vertices is likely a reasonable proxy that is easier to compute and more useful. Note that the rest of the pipeline was structured to deal with a stricter partitioner, so in some cases the results are better and in some they are worse. This is fine because the rest of the pipeline needs better heuristics anyway.
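The shared-edge weights can be computed with one pass over cluster triangles; a self-contained sketch (function names are illustrative, and METIS itself is not called here — the resulting pairwise weights would be laid out in the CSR xadj/adjncy/adjwgt arrays that METIS_PartGraphKway expects):

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

typedef std::pair<unsigned int, unsigned int> Edge;

// Count edges shared between pairs of clusters; each cluster is a flat
// triangle index list (3 indices per triangle). The counts become the edge
// weights of the cluster graph handed to a k-way partitioner.
static std::map<std::pair<size_t, size_t>, int> sharedEdges(const std::vector<std::vector<unsigned int> >& clusters)
{
	std::map<Edge, size_t> edge_owner; // undirected edge -> first cluster that used it
	std::map<std::pair<size_t, size_t>, int> weights;

	for (size_t c = 0; c < clusters.size(); ++c)
		for (size_t i = 0; i < clusters[c].size(); i += 3)
			for (int e = 0; e < 3; ++e)
			{
				unsigned int v0 = clusters[c][i + e];
				unsigned int v1 = clusters[c][i + (e + 1) % 3];
				Edge edge(v0 < v1 ? v0 : v1, v0 < v1 ? v1 : v0);

				std::map<Edge, size_t>::iterator it = edge_owner.find(edge);
				if (it == edge_owner.end())
					edge_owner[edge] = c; // first sighting: record the owner
				else if (it->second != c)
					// edge crosses a cluster boundary: bump the pair weight
					weights[std::make_pair(it->second < c ? it->second : c, it->second < c ? c : it->second)]++;
			}

	return weights;
}
```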
(commit 9865e40)
-
demo: Add partitioning visualization
Using the environment variable DUMP, we can now control the contents of the output .obj: -1 means "output the DAG cut", 0-n means "output the grouping at the given LOD level". When we output the cut, we also output individual clusters as separate objects for ease of debugging.
(commit a9fa0d5)
-
demo: Implement an initial version of Metis clusterizer
We might need to split this into a separate option: while it seems to work, it produces disjoint clusters and also doesn't fill the clusters very well; at a minimum this will need further tweaks.
(commit 94baf79)
-
demo: Reinsert stuck clusters into the queue
This means that clusters that end up too small to merge, or edge-locked enough that simplification is not effective, may get another chance later in the process to be merged with other clusters. This is generally beneficial for quality, although it occasionally produces worse results, and it definitely slows down processing, as the stuck triangles keep being reevaluated on every pass.
(commit d51797d)
-
demo: Tweak Metis clustering to get more uniform subdivision
Also use remapped vertices to identify adjacency for weighting; this results in better spatial clustering for disconnected components. Finally, for now triangle clustering requires METIS=2, because it has complex tradeoffs and does not appear to be universally better.
(commit d33f95e)
-
demo: Rework DAG thresholding to be more lenient
Especially when Metis triangle clustering is used, we often get much smaller clusters in the initial split, because it reaches 129 triangles and splits that into 64+65. Partitioning may then either take three clusters with sizes a little under 128, or four clusters, two of which are 64/65, and the resulting cluster will be too small to merge. Instead, we remove the merge size criterion outright when we have at least two clusters to merge, and replace it with a percentage reduction. Right now the percentage is very lenient; it would not be ideal for every single level to achieve just an 85% reduction. However, this can be controlled at a macro level, and ideally we want to prevent the processing from getting stuck if possible.
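Interpreting "85% reduction" as the simplified level keeping at most 85% of the input triangles, the reworked acceptance test might look like the predicate below; the function name, threshold value, and exact form are assumptions:

```cpp
#include <cstddef>

// Accept a merge+simplify step if we merged at least two clusters and
// simplification achieved a sufficient triangle reduction. The previous
// criterion rejected merged groups below a fixed size, which could stall
// progress on small clusters.
static bool acceptMerge(size_t cluster_count, size_t tris_before, size_t tris_after)
{
	const double kMaxKeptFraction = 0.85; // lenient threshold; assumed interpretation

	return cluster_count >= 2 && tris_after <= size_t(tris_before * kMaxKeptFraction);
}
```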
(commit 35a3dbb)
-
demo: Use meshopt_generateShadowIndexBuffer instead of remap
Remap generation currently expects count % 3 == 0, and we were just feeding it our vertices as if they were a triangle buffer. Instead we can use a shadow index buffer for edge matching, which is a little simpler anyway.
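The positional canonicalization that a shadow index buffer provides can be illustrated with a simple map over positions; meshopt_generateShadowIndexBuffer computes the same kind of mapping far more efficiently and for arbitrary vertex formats, so this sketch (assuming tightly packed xyz floats) is only conceptual:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// For each vertex, find the first vertex with an identical position and use
// its index; edges expressed in these canonical indices match across
// attribute seams (UVs, normals) where positions are duplicated.
static std::vector<unsigned int> buildShadowIndices(const std::vector<float>& positions /* xyz per vertex */)
{
	std::map<std::vector<float>, unsigned int> seen;
	std::vector<unsigned int> shadow(positions.size() / 3);

	for (size_t i = 0; i < positions.size() / 3; ++i)
	{
		std::vector<float> key(positions.begin() + i * 3, positions.begin() + i * 3 + 3);

		std::map<std::vector<float>, unsigned int>::iterator it = seen.find(key);
		if (it == seen.end())
		{
			seen[key] = (unsigned int)i;
			shadow[i] = (unsigned int)i; // first occurrence is canonical
		}
		else
			shadow[i] = it->second; // duplicate position: reuse canonical index
	}

	return shadow;
}
```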
(commit 78505ff)
-
demo: Fix visualization of single clusters
Since this is now a separate branch that happens before the merge, we should visualize it separately; using a different label helps tell these apart.
(commit e887153)
-
demo: Add introduction comments to nanite.cpp
In general this code needs a lot more work, but it is in a state where it can be useful for testing and development, so it's probably a reasonable place to stop for now.
(commit 9ed2fc3)
-
(commit b0fd7c7)
Commits on Sep 6, 2024
-
demo: Add the average tri/cluster counter to nanite
The "full" counter is a little less helpful for METIS, which very often produces almost-full clusters, so we add an average as well to gauge the distribution. We also now count the number of singleton clusters; these are rare, but it's important for partition quality that they stay that way.
(commit fce398f)
-
demo: Implement connectivity analysis for meshlets
Since whether a meshlet is connected is an important criterion for how well it will do in a hierarchical clusterization process, we now compute the number of connected components in the meshlet demo and count the meshlets with more than one. Notably, for now we do this analysis using indices; this means connectivity is analyzed on the original, non-positional topology, so on some geometry that looks visually connected we will naturally get many disconnected meshlets.
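The per-meshlet component count can be computed with a small union-find over triangles, merging triangles that share a vertex index; a self-contained sketch of that analysis (names are illustrative), which inherits the non-positional caveat above because it operates on raw indices:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Union-find root lookup with path halving.
static size_t findRoot(std::vector<size_t>& parent, size_t x)
{
	while (parent[x] != x)
	{
		parent[x] = parent[parent[x]];
		x = parent[x];
	}
	return x;
}

// Count connected components among the meshlet's triangles; two triangles
// are connected when they share a vertex index. Seams with duplicated
// positions still count as disconnections, matching the caveat above.
static size_t countComponents(const std::vector<unsigned int>& indices)
{
	size_t triangle_count = indices.size() / 3;
	std::vector<size_t> parent(triangle_count);
	for (size_t i = 0; i < triangle_count; ++i)
		parent[i] = i;

	std::map<unsigned int, size_t> vertex_owner; // vertex index -> first triangle using it

	for (size_t t = 0; t < triangle_count; ++t)
		for (int e = 0; e < 3; ++e)
		{
			unsigned int v = indices[t * 3 + e];

			std::map<unsigned int, size_t>::iterator it = vertex_owner.find(v);
			if (it == vertex_owner.end())
				vertex_owner[v] = t;
			else
				parent[findRoot(parent, t)] = findRoot(parent, it->second);
		}

	size_t components = 0;
	for (size_t i = 0; i < triangle_count; ++i)
		components += (findRoot(parent, i) == i);
	return components;
}
```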
(commit bd12308)