Optimizing Massive Unity Scenes
The Problem
    At VIMTrek, we process a lot of scenes into Unity3D. Some scenes are small, with only a few hundred unique objects and meshes. Others used to make me cringe just thinking about the potential draw calls and poly counts.
    We had several problems when it came to optimizing our scenes. The scenes didn't really exist in Unity until they were built (on loading a file), which meant we had to preprocess the data somehow, as optimizing on load would be unacceptable. To make matters worse, we had to maintain some integrity with respect to the original meshes. In short, you should still be able to click on any portion of a mesh and reference data about the original object.
    Our problem in a nutshell: high draw call and batch counts, high poly counts, the need for individual object references, and scenes averaging 50k meshes and objects.
The Solution
    Our need to minimize draw call and batch counts led me to think about how we treat our data internally. We needed a revolutionary change to how we treat Revit data in our pipeline. I decided to bring things down to the basics: one mesh surface per mesh (one material reference). This would initially increase our overall number of meshes, as we would no longer have submesh data (meshes with multiple materials), but it would drastically reduce our number of batches.
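The split can be sketched as follows. This is a minimal illustration, not VIMTrek's actual pipeline code; the `Mesh` and `SingleMaterialMesh` structures are hypothetical stand-ins for whatever the preprocessor uses. The key detail is that each output mesh keeps only the vertices its own triangles reference:

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    vertices: list    # [(x, y, z), ...]
    submeshes: dict   # material name -> list of triangles (vertex-index triples)

@dataclass
class SingleMaterialMesh:
    material: str
    vertices: list
    triangles: list

def split_by_material(mesh):
    """Split a multi-material mesh into one mesh per material,
    keeping only the vertices each submesh actually uses."""
    result = []
    for material, triangles in mesh.submeshes.items():
        remap = {}  # old vertex index -> new vertex index
        new_verts, new_tris = [], []
        for tri in triangles:
            new_tri = []
            for old in tri:
                if old not in remap:
                    remap[old] = len(new_verts)
                    new_verts.append(mesh.vertices[old])
                new_tri.append(remap[old])
            new_tris.append(tuple(new_tri))
        result.append(SingleMaterialMesh(material, new_verts, new_tris))
    return result
```

Each output mesh now carries exactly one material reference, which is what lets the later batching steps work.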
    This was a good start, but it didn't really hit the mark. We still had the problem of the sheer volume of meshes. Once they were all separated by material, the next solution was easy enough to see: combine meshes that share materials. This would reduce the overall number of renderers active in Unity, thereby reducing draw calls and batches simultaneously.
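A sketch of that merge step, assuming the single-material meshes are represented as simple `(material, vertices, triangles)` tuples (again hypothetical names, not the production code). The only subtlety is offsetting triangle indices by the running vertex count:

```python
def combine_by_material(meshes):
    """Merge all meshes that share a material into one vertex/triangle
    buffer per material. Triangle indices are offset by the vertex
    count already accumulated for that material."""
    combined = {}  # material -> (vertices, triangles)
    for material, vertices, triangles in meshes:
        verts, tris = combined.setdefault(material, ([], []))
        offset = len(verts)
        verts.extend(vertices)
        tris.extend(tuple(i + offset for i in tri) for tri in triangles)
    return combined
```

One combined buffer per material means one renderer per material, which is where the draw call and batch savings come from.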
    Combining this approach with rigorous polygon optimization (getting rid of lots and lots of unnecessary vertices using a bucket threshold approach) produced really good results across the board. We saw framerate increases of about 500%, as well as staggering drops in draw calls and batches. There was only one last thing to deal with: using these meshes throughout our pipeline.
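One common way to implement a bucket-threshold vertex weld (a sketch of the general technique, not necessarily the exact variant used at VIMTrek) is to snap each vertex position into a grid cell sized by the threshold; vertices landing in the same cell collapse into one, and triangles that degenerate as a result are dropped:

```python
def weld_vertices(vertices, triangles, threshold=0.001):
    """Bucket-threshold welding: quantize each position to a grid cell
    of size `threshold`; vertices in the same cell merge into one."""
    cell_of = {}  # grid cell -> new vertex index
    remap = []    # old vertex index -> new vertex index
    welded = []
    for x, y, z in vertices:
        cell = (round(x / threshold), round(y / threshold), round(z / threshold))
        if cell not in cell_of:
            cell_of[cell] = len(welded)
            welded.append((x, y, z))
        remap.append(cell_of[cell])
    new_tris = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:  # drop degenerate triangles
            new_tris.append((a, b, c))
    return welded, new_tris
```

The grid lookup makes the weld roughly linear in vertex count, which matters at 50k meshes per scene.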
    At the time I produced these new optimizations, we were still using Unity to lightmap our scenes and delivering the content as AssetBundles. Unity/Enlighten just simply refused to work with the combined datasets. At VIMTrek, we now have an internal lightmapping/UV mapping application. I developed these applications to solve the Unity lightmapping issue. We still have realtime lighting, but that's another story...
    Using UV layers to store additional data on the meshes (the original object references), I was able to combine meshes by material (and by region, for culling purposes) while still maintaining a connection to the original data. The combination of all these optimization approaches gave us the ability to maintain extremely large scenes in Unity without the typical overhead involved.
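The idea can be sketched like this: while combining meshes, write each source object's ID into a per-vertex UV channel, so a picked vertex can be traced back to its original object. The function names and the choice of packing the ID into the UV's first component are illustrative assumptions, not the actual VIMTrek implementation:

```python
def encode_object_ids(object_mesh_pairs):
    """Combine meshes while writing each source object's ID into a
    per-vertex UV channel (uv = (id, 0)), preserving the link back
    to the original object after the geometry is merged."""
    vertices, triangles, uv_ids = [], [], []
    for object_id, (verts, tris) in object_mesh_pairs:
        offset = len(vertices)
        vertices.extend(verts)
        uv_ids.extend((float(object_id), 0.0) for _ in verts)  # one UV per vertex
        triangles.extend(tuple(i + offset for i in tri) for tri in tris)
    return vertices, triangles, uv_ids

def lookup_object(uv_ids, vertex_index):
    """Recover the original object ID from a picked vertex."""
    return int(uv_ids[vertex_index][0])
```

In Unity terms, this corresponds to stashing the IDs in a spare UV set on the combined mesh and reading the channel back on raycast hits.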

© Copyright 2016 Jordan Stevens