Hello world! My name is Eugene, I’m a backend developer at Bimeister. Today I would like to continue the story about our 3D engine Spatium. In this article, we will talk about another optimization algorithm – the search for and removal of redundant vertices from 3D models.
The material may be of interest to engineers involved in 3D design and development.
Context
In computer graphics, geometry can be defined in various ways – a list of faces, a corner table, vertex-based or winged-edge representations, etc. Therefore, in order to work with models of different formats in a single space, it is first necessary to bring geometry from these different representations to some unified description. Such a description is generally a set of positions (points in space with three coordinates x, y, z) and normals (vectors with three components x, y, z) mapped to these positions. If you connect three vertices with segments, you get a triangle. The GPU can either fill this triangle with a color or “stretch” a texture over it. Normals are needed to correctly calculate reflections and shadows under lighting (without light, we can’t see anything). The aggregate of such triangles forms the so-called “mesh” – a figure that defines the shape of a polyhedral object in space.
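To make the unified description concrete, here is a minimal illustrative sketch in Python (not Spatium’s actual data model): raw, non-indexed geometry as parallel arrays of positions and normals, where every three consecutive vertices form one triangle.

```python
# Illustrative sketch of the "unified" geometry description: two parallel
# arrays (positions and per-vertex normals). The geometry is raw and
# non-indexed, so every 3 consecutive vertices form one triangle.
positions = [
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),  # triangle 0
]
normals = [
    (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
]

def triangle_count(positions):
    """Raw (non-indexed) geometry: every 3 vertices are one triangle."""
    return len(positions) // 3
```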
The more complex and detailed the shape of the figure, the more vertices and triangles it contains. An additional increase in the number of triangles can be caused by tessellation or by export to a format external to the modeling environment. In all of these cases, the number of triangles in the geometry may exceed the minimum necessary and sufficient for the graphics card to render it correctly. Memory is then spent storing information of no practical use: the shape of the geometry is not affected by the extra triangles formed by redundant vertices. Naturally, we want to remove these redundant vertices in order to reduce the memory consumed by storing, transferring, and processing the geometry. For brevity, we’ll call this optimization “geometry filtering”. Note the fundamental difference from another optimization in which vertices are also removed – simplification. Simplification changes the shape of the geometry; filtering does not.
So, by “redundant” we mean vertices whose removal from the geometry does not cause it to lose its shape.
In addition, there are a few limitations:

Geometry for us consists of two arrays: an array of vertex coordinates (positions) and an array of vertex normals. Therefore, we work not just with coordinates, but with coordinate-normal pairs. If several vertices share the same position but have different normals, we treat them as different vertices.

The vertices have a transformation matrix. We will be interested in the scale component of this matrix.

Polygons can have holes.

As a result of the optimization, the shape of the original geometry should be preserved.
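Constraint 1 can be expressed as a vertex key built from both the position and the normal. A hypothetical Python helper (the `digits` rounding parameter is an assumption on my part, not something from the article):

```python
def vertex_key(position, normal, digits=6):
    """Two vertices are the same only if BOTH position and normal match
    (constraint 1). Rounding keeps floating-point noise from splitting
    logically identical vertices into distinct keys."""
    return (tuple(round(c, digits) for c in position),
            tuple(round(c, digits) for c in normal))
```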
Our optimization will consist of several stages. Let’s take a closer look at each of them.
Step 1 – Finding and Deleting Degenerate Triangles
Degenerate triangles are triangles in which:

Two or more vertices coincide.

All three vertices are located on the same line.
Why look for them? Degenerate triangles do not define any shape, but they still consume memory. That’s why we’ll start cleaning up the geometry with them.
Let’s remember about constraints 1 and 2. Let’s decide for ourselves what acceptable positioning accuracy we will work with: to do this, we determine the order of the characteristic distances (mm, cm, m, km). If your subject area is factories with dimensions of hundreds of meters, then drawing triangles with micron accuracy is clearly redundant (a difference of 7–9 orders of magnitude).
Therefore, we round coordinates to the chosen precision (taking scale into account), check each triangle for signs of degeneracy, and remove the degenerate ones.
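Both degeneracy checks collapse into one test: a triangle with (near-)zero area either has coincident vertices or collinear ones. A minimal Python sketch, assuming coordinates have already been rounded to the chosen precision:

```python
def is_degenerate(a, b, c, eps=1e-9):
    """A triangle is degenerate if its area is (near) zero. Zero area covers
    both cases from the article: coincident vertices and collinear vertices."""
    # Edge vectors from vertex a
    ab = tuple(q - p for p, q in zip(a, b))
    ac = tuple(q - p for p, q in zip(a, c))
    # |ab x ac| equals twice the triangle's area
    cx = ab[1] * ac[2] - ab[2] * ac[1]
    cy = ab[2] * ac[0] - ab[0] * ac[2]
    cz = ab[0] * ac[1] - ab[1] * ac[0]
    return (cx * cx + cy * cy + cz * cz) ** 0.5 < eps
```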
Step 2 – Getting the Geometry Topology
You’ll need geometry topology to find different areas, such as polygon boundaries, groups of adjacent triangles, and so on.
Construct triangles from position-normal pairs. Then, for each triangle:

Calculate the face normal according to the right-hand rule and orient it to match the vertex normals.

Build its edges (we key each edge by the hash of its midpoint) and store them.
After all the triangles are built, fill in the dictionaries “vertex → triangles containing this vertex” and “triangle → adjacent triangles”. They are useful for dividing candidate polygons into connectivity areas and for defining their boundaries (constraint 3).
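A sketch of these dictionaries in Python. One assumption on my part: instead of hashing edge midpoints, edges here are keyed by their unordered vertex pair, which is equivalent as long as vertex keys are exact:

```python
from collections import defaultdict

def build_topology(triangles):
    """triangles: list of 3-tuples of vertex keys.
    Returns (vertex -> triangle ids, triangle id -> adjacent triangle ids).
    Two triangles are adjacent when they share an edge."""
    vert_to_tris = defaultdict(set)
    edge_to_tris = defaultdict(set)
    for t, (a, b, c) in enumerate(triangles):
        for v in (a, b, c):
            vert_to_tris[v].add(t)
        # Unordered vertex pairs stand in for the article's midpoint hashes
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
            edge_to_tris[e].add(t)
    adjacency = defaultdict(set)
    for tris in edge_to_tris.values():
        for t in tris:
            adjacency[t] |= tris - {t}
    return vert_to_tris, adjacency
```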
Step 3 – Find Polygons with Potentially Redundant Vertices
Since the shape of the geometry cannot be changed (constraint 4), we will work with redundant vertices within the polygon. A polygon is a set of triangles lying in the same plane.
But first, you need to find a suitable polygon. How? Recall that every vertex has a normal, and for any point of a plane the normal is the same. So we need to group the vertices by normal. Next, a few conditions must be checked:

The number of points in the group must be greater than three (more than one triangle in the plane).

The number of points in the group must be a multiple of three (the geometry is in a “raw” form, there is no indexing of vertices).

All points in a group must not belong to the same line.

All points in the group must belong to the same plane – vertices can have the same normals, but lie in several parallel planes with the same orientation in space.
In addition, even if they are in the same plane, vertices with the same normals can belong to different polygons. Therefore, it is necessary to divide them by “Connectivity Areas” – triangles with adjacent edges.
As a preliminary step, you can group the triangles themselves by normal, and then group vertices by normal within each group of triangles.
Those groups of vertices that meet all of the above conditions become candidate polygons for filtering. Let’s move on.
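Conditions 1, 2, and 4 above can be sketched as follows (an illustrative Python helper; the collinearity check and the split into connectivity areas are omitted for brevity):

```python
from collections import defaultdict

def group_coplanar(positions, normals, digits=6, eps=1e-9):
    """Group vertices by (rounded) normal, then keep only groups that
    satisfy: more than 3 points, a multiple of 3 points (raw geometry),
    and all points lying in one plane (same signed distance n . p)."""
    groups = defaultdict(list)
    for p, n in zip(positions, normals):
        groups[tuple(round(c, digits) for c in n)].append(p)
    candidates = []
    for n, pts in groups.items():
        if len(pts) <= 3 or len(pts) % 3 != 0:
            continue  # conditions 1 and 2
        # condition 4: every point must have the same offset along the normal
        d0 = sum(a * b for a, b in zip(n, pts[0]))
        if all(abs(sum(a * b for a, b in zip(n, p)) - d0) < eps for p in pts):
            candidates.append((n, pts))
    return candidates
```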
Step 4 – Find and Remove Redundant Vertices
For each candidate polygon, you need to find the vertices that form the boundaries of the polygon: the outer (shell) and the inner (holes).
Vertices that are not included in the list of boundary vertices are considered redundant – they are located in the “body” of the polygon and do not carry useful information about its shape.
For the boundary vertices, we build the boundary segments and check them for collinearity. If more than two consecutive positions lie on the same line, the positions “inside” the segment are redundant – they also say nothing about the shape of the polygon.
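Both parts of this step can be sketched in Python: boundary edges are exactly the edges used by a single triangle, and collinear boundary points are found by testing each point against its neighbors along the loop (`prune_collinear` is a hypothetical helper that assumes the boundary has already been ordered into a closed loop):

```python
from collections import Counter

def boundary_edges(triangles):
    """Edges used by exactly one triangle lie on the polygon boundary
    (outer shell or hole); edges shared by two triangles are interior."""
    counts = Counter()
    for a, b, c in triangles:
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
            counts[e] += 1
    return [e for e, n in counts.items() if n == 1]

def prune_collinear(loop, eps=1e-9):
    """Drop boundary points that lie strictly inside a straight segment
    of an ordered, closed boundary loop."""
    kept = []
    m = len(loop)
    for i in range(m):
        a, b, c = loop[i - 1], loop[i], loop[(i + 1) % m]
        ab = [q - p for p, q in zip(a, b)]
        ac = [q - p for p, q in zip(a, c)]
        # Zero cross product => a, b, c are collinear => b is redundant
        cross = (ab[1] * ac[2] - ab[2] * ac[1],
                 ab[2] * ac[0] - ab[0] * ac[2],
                 ab[0] * ac[1] - ab[1] * ac[0])
        if sum(x * x for x in cross) ** 0.5 >= eps:
            kept.append(b)
    return kept
```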
Step 5 – Retriangulation of Cleared Polygons
Since the graphics card works with triangles, after removing the vertices, the polygon must be turned back into a set of triangles, i.e. triangulated.
Since we are dealing with polygons that may have holes, we need to take this into account. The right algorithm for us is constrained Delaunay triangulation: the boundary edges are taken into account as constraints and will not be lost.
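A full constrained Delaunay triangulation with holes is beyond a short sketch, so here is a simplified ear-clipping triangulation for a hole-free polygon, just to illustrate turning a cleaned boundary (projected to 2D, counter-clockwise) back into triangles; a production implementation would use a proper CDT library:

```python
def ear_clip(polygon):
    """Triangulate a simple CCW polygon given as 2D points (no holes).
    NOTE: a stand-in for the constrained Delaunay triangulation the
    article actually uses, which also handles holes."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def inside(p, a, b, c):
        return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                and cross(c, a, p) >= 0)

    verts = list(range(len(polygon)))
    tris = []
    while len(verts) > 3:
        for i in range(len(verts)):
            prev, cur, nxt = (verts[i - 1], verts[i],
                              verts[(i + 1) % len(verts)])
            a, b, c = polygon[prev], polygon[cur], polygon[nxt]
            if cross(a, b, c) <= 0:
                continue  # reflex corner, not an ear
            if any(inside(polygon[v], a, b, c)
                   for v in verts if v not in (prev, cur, nxt)):
                continue  # another vertex lies inside the candidate ear
            tris.append((prev, cur, nxt))
            verts.pop(i)  # clip the ear
            break
        else:
            break  # degenerate input; bail out
    if len(verts) == 3:
        tris.append(tuple(verts))
    return tris
```

A simple polygon with n vertices always yields n − 2 triangles, which gives an easy sanity check for the result.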
Hurrah! The polygon is cleared, triangulated, and ready to go to the graphics card. The only thing left to do is to remove all occurrences of the found redundant vertices from the original geometry, and then add new data from the triangulation to it. That’s it, we can save the mesh and use it instead of the original one.
Conclusion
Removing redundant vertices while preserving the shape is an algorithm that can be useful if your system is working with a lot of unoptimized geometries that can’t be simplified.
We have conducted studies of its effectiveness on CAD-designed models from our domain. According to the results, for such models this optimization can be very costly in terms of computation and time (especially for large geometries of several million vertices), while the final effect is quite small – less than 0.5% of all vertices turn out to be redundant. Typically, this is the center of a circle (Figure 5, top).
It is possible that for models obtained by other methods (scanning, artistic modeling, multi-stage conversion, etc.) the effect will be more pronounced, but we did not conduct such studies.
Therefore, the decision whether to use this algorithm should be made individually for each project, based on a preliminary assessment of the amount of “empty” data in the models of a particular subject area.
———
Acknowledgment
The editorial team at TechBurst Magazine acknowledges the invaluable contribution of Евгений Котоврасов, the author of the original article that forms the foundation of this publication. All images are sourced from the original article.