VR is a resource-hungry beast. So are characters. Doing them both on mobile means that you have to start planning right away for optimizing your mobile VR character art. You can’t get away with waiting until the last minute. Your assets need to be created with optimizations built-in ahead of time!
A big challenge we faced on our most recent mobile VR title, VRSE Batman, was creating multiple enemies for Batman to fight. We all know characters are expensive to begin with: they have high polycounts, multiple high-resolution textures, and complicated shaders. On top of that, they are skinned! That significantly decreases performance on mobile VR.
Optimizing your mobile VR character art becomes a huge priority. But if you’ve never done a mobile project before, much less mobile VR, you’re probably not prepared for what’s coming! Where to begin?
Start with Unity’s mobile optimization docs
This was a Unity project, so the first thing I’d suggest to anyone trying to create art for mobile is to read Unity’s own recommendations on optimizing. That should give you a good basic understanding of how to start, and the principles carry over regardless of the game engine you’re using. (Unreal offers similar documentation for its engine.)
For example, none of the objects in our game used specular or reflectivity in any fashion. That’s a huge limitation if you encounter it after the fact. But by planning ahead, it’s easier to create an art style that works despite that drawback. Our cel-shaded style meant that specular wasn’t even necessary!
Keep in mind that in VR, everything gets rendered twice, once for each eye. This means you’ll have to optimize twice as much! Plan accordingly: optimization should be a key element in your dev schedule.
Modularity goes a long way
Our project had a tight timeline, and we knew we had to create a large variety of thugs for Batman to fight. We chose to create our characters modularly to maximize the number of thug varieties in the game. But we knew that we wouldn’t have a lot of flexibility when it came to creating unique geometry or textures; we’d need to make the most of existing assets with a lot of re-use.
Mix and match textures
Our first step was standardizing UVs. This made it easier to swap textures between characters. It also made it easier to paint textures!
Every character class got one shared diffuse texture and one shared normal map. We authored at 4096 square, but at runtime these were crunched down as low as 256 square for LODs. We fit every character piece into those textures by dividing each one into sections for different ‘outfits’.
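To put numbers on why that crunching matters, here’s a rough sketch in Python (the 4-bytes-per-pixel figure assumes uncompressed RGBA; GPU-compressed formats like ETC2 or ASTC cost far less, but the scaling is the same):

```python
# Rough memory cost of a square texture at the sizes mentioned above.
# Assumes 4 bytes per pixel (uncompressed RGBA); GPU compression shrinks this.
def texture_bytes(size, bytes_per_pixel=4):
    return size * size * bytes_per_pixel

for size in (4096, 1024, 256):
    print(f"{size} square: {texture_bytes(size) / (1024 * 1024):g} MB")
# 4096 square: 64 MB
# 1024 square: 4 MB
# 256 square: 0.25 MB
```

Dropping from 4096 to 256 square is a 256x memory reduction per texture, which is why aggressive LOD sizes are worth the trouble on mobile.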
For example, each character ‘style’ (biker, military, clown, etc.) would occupy one quarter of the 0-to-1 UV space. Essentially, there were four base texture templates, all on a single texture. Character variety could be quickly created by simply moving UVs from one quadrant to another.
We could quickly swap green biker pants for black combat fatigues by moving UVs. The result is a character that uses the same mesh, but looks completely different. Once we mixed and matched between all the different combinations, we could create a huge variety of characters!
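That quadrant swap boils down to a fixed UV offset. Here’s a minimal sketch in Python (the function and the quadrant indexing are ours for illustration, not from the project’s actual tools):

```python
# Each 'outfit' occupies one quadrant of 0-1 UV space, indexed by
# (column, row): (0, 0) bottom-left, (1, 0) bottom-right,
# (0, 1) top-left, (1, 1) top-right. Each quadrant is 0.5 UV units wide.
def move_uv_to_quadrant(u, v, src, dst):
    """Shift a single UV coordinate from quadrant `src` to quadrant `dst`."""
    su, sv = src
    du, dv = dst
    return (u - su * 0.5 + du * 0.5, v - sv * 0.5 + dv * 0.5)

# e.g. biker pants (bottom-left quadrant) -> combat fatigues (top-right):
print(move_uv_to_quadrant(0.25, 0.375, (0, 0), (1, 1)))  # (0.75, 0.875)
```

In practice you’d apply this offset to every UV in the relevant shell, either in your DCC tool or in an import script.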
Obviously lots of planning has to go into the color choices and materials for these textures, to make sure they don’t look repetitive. We focused most of our detail on the torsos and heads, leaving the rest of the body a little more generic. Most players won’t notice!
Mix and match geometry
One of the disadvantages of making assets for a mobile VR game is that the poly count for everything has to be VERY low, which means object silhouettes have to be simplified.
Since we can’t create an individual mesh for every individual character, we relied on object re-use to create distinctive silhouettes. Basically, the same way we re-used textures, we re-used geometry as well!
For example, we looked for ways to repurpose existing geometry instead of modeling something new. An armor chunk starts off on the knee as a knee pad, moves to the arm as an arm bracer, or is stacked on top of itself multiple times to create a piece of layered armor.
These simple changes give us very different character silhouettes. If we combine these mesh changes with the texture changes from above, we end up with wildly different characters!
Again, the key here is creating a shape that doesn’t stand out too much. Simpler shapes work better, because the player won’t spot them being re-used everywhere.
Combine geometry before putting it in-engine!
Last step: if we had left our characters divided into multiple pieces, each piece would have been a separate draw call at run-time. Since we needed to keep draw calls to a minimum, we manually combined characters into the combinations we wanted, then copied skin weights in Maya from a base mesh. There was usually very little cleanup needed. Lastly, we exported the finished combined meshes as individual FBXs.
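In Maya, that combine-and-copy-weights step can be scripted. The sketch below is an illustration under our assumptions rather than the project’s actual pipeline code: it only runs inside Maya, the mesh, skin cluster, and joint names are hypothetical, and the association flags are a reasonable starting point, not definitive settings.

```python
# Maya-only sketch: combine outfit pieces into one mesh (one draw call at
# runtime), bind it to the skeleton, and copy weights from a skinned base mesh.
import maya.cmds as cmds

def combine_and_skin(pieces, base_skin_cluster, joints, name):
    # Merge the separate pieces into a single mesh, discarding history.
    combined = cmds.polyUnite(pieces, name=name, constructionHistory=False)[0]
    # Bind the combined mesh to the same joints as the base mesh.
    new_skin = cmds.skinCluster(joints, combined, toSelectedBones=True)[0]
    # Transfer weights; closest-point association usually leaves little cleanup.
    cmds.copySkinWeights(sourceSkin=base_skin_cluster,
                         destinationSkin=new_skin,
                         surfaceAssociation='closestPoint',
                         influenceAssociation='closestJoint',
                         noMirror=True)
    return combined

# Hypothetical usage:
# thug = combine_and_skin(['torso_biker', 'pants_combat', 'armor_knee'],
#                         'baseThug_skinCluster', ['root_jnt'], 'thug_variant_01')
```

Each combined result would then be exported as its own FBX, as described above.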
The disk space needed for all the unique meshes stretched our mobile storage footprint a bit, but it was the only way to get acceptable performance at run time.
What about LODs?
Character LODs were definitely important, and we needed to create them manually. We got halfway there using Maya’s awful Mesh Reduce tool; the rest was reduced by hand. Our highest in-game LOD was 3000 polys, and the lowest was around 500.
Optimizing mobile VR character art
Character art and mobile VR are a tricky combo, but not an impossible one! Because both are resource-intensive, planning is key. If you start with modular character concepts and a visual style tailored to the limitations of VR, you won’t get bottlenecked by clunky character models later in development.