Thanks to texture compression, you can explore beautiful, huge, high-resolution open worlds with the best performance your hardware can deliver. On the development side of things, though, there's a lot to do before this beauty gets nicely delivered to the many screens available today. With the variety of platforms a game can run on, developers have to pick the best techniques for compressing their textures; often, that's what gets the game running smoothly in the first place. To learn more about this intricate technical subject, we interviewed Stephanie Hurlburt, who has worked for both Oculus and Unity Technologies and now runs a company specializing in texture compression for games and other applications.
IndieWatch: Can you talk about your company?
Stephanie Hurlburt: I run Binomial with my friend Rich Geldreich, and we’ve been in business for about a year now. We make Basis, which is a texture compressor. Textures - which can be any kind of image data - take up a lot of data in apps. In many games, we see them take up over seventy percent of the game’s data easily.
For high-end titles on PC or console, this means that better compression can get you better-looking assets - you’ll be able to fit in higher quality data for less space! It also allows for things like large streamed open worlds.
For many titles on platforms like mobile, better texture compression might be what makes you able to ship your game at all, and a lot of our customers tell us their download times are directly correlated with sales. Getting that download time and storage size down is crucial!
IndieWatch: What about the companies you've worked for? Can you give us an overview of your role at each of them?
Stephanie Hurlburt: At Oculus I worked on Medium, the sculpting tool for VR. I helped develop various capabilities in their custom engine, and helped launch the project for the Oculus Connect conference.
At Unity I worked on various low-level features of the engine. I helped refactor their multithreading system and worked on many optimization efforts related to low-level graphics. I developed their animated splash screen, which runs while the game loads in the background, and made it work on all the platforms needed (no small task). I represented graphics on the UX team and organized UX research around graphics features in the engine.
And before that, I worked at Downstream, where I did the creative installation work I describe later in this interview.
At Binomial, we occasionally take on contracts in addition to working on Basis. Our biggest project we can talk about publicly was helping with the launch of Intel’s Project Alloy headset. We’ve also done consulting work for various VR startups.
IndieWatch: Can you give an overview of the different techniques available for texture compression today? How can the regular indie game developer benefit from them?
Stephanie Hurlburt: First, it helps to have a basic idea of how games process texture data. GPUs help us render images and process graphics, and are structured very differently from CPUs. All texture data must end up in a format the GPU can read efficiently. I typically refer to these as GPU formats; some of the general names you'll hear are BC/DXT (desktop, consoles), BC7 (higher-end desktop/consoles), ETC (Android, modern iOS), PVRTC (older iOS), and ASTC (high-end mobile). These GPU formats are fantastic for the parallel architectures of GPUs, but they trade size for that speed: the data is highly repetitive in ways that aren't necessary for CPU storage, which makes the files unacceptably large on disk and over the wire, and we know we can shrink them.
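To make GPU-format sizes concrete, here's a quick back-of-the-envelope sketch (editor's illustration, not from the interview): BC1/DXT1 packs each 4x4 pixel block into 8 bytes and BC7 into 16 bytes, versus 4 bytes per pixel for uncompressed RGBA8.

```python
# Rough sizes for a 2048x2048 texture in a few formats.
# RGBA8 is uncompressed; BC1 uses 8 bytes per 4x4 block,
# BC7 uses 16 bytes per 4x4 block. (ASTC block sizes vary; omitted.)

def texture_bytes(width, height, bytes_per_block, block_dim=4):
    """Total bytes for a block-compressed texture, rounding up partial blocks."""
    blocks_x = (width + block_dim - 1) // block_dim
    blocks_y = (height + block_dim - 1) // block_dim
    return blocks_x * blocks_y * bytes_per_block

W = H = 2048
rgba8 = W * H * 4               # 4 bytes per pixel, no blocks
bc1 = texture_bytes(W, H, 8)    # 0.5 bytes per pixel
bc7 = texture_bytes(W, H, 16)   # 1 byte per pixel

print(f"RGBA8: {rgba8 // 1024} KiB")  # 16384 KiB
print(f"BC1:   {bc1 // 1024} KiB")    # 2048 KiB
print(f"BC7:   {bc7 // 1024} KiB")    # 4096 KiB
```

Even "compressed" GPU formats like BC7 are still a fixed byte budget per block, which is why a 4 GB texture set stays large on disk without a further compression pass.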
So how do we deal with this big texture data? The most common method today is to simply losslessly compress this GPU data. If you’ve ever made a .zip archive, you know what lossless compression is like-- shrink it, decompress the archive when you need the data, no data’s been lost in the process. Developers decompress this data when they need to send it to the GPU.
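As a minimal sketch of that common pipeline, the snippet below losslessly packs some stand-in "GPU block" data with zlib, which plays the role of whatever lossless codec an engine actually ships, and verifies the roundtrip is bit-exact:

```python
import zlib

# Stand-in for GPU-format block data: in a real engine this would be
# BC1/ETC blocks straight from an offline encoder. Repeating bytes
# mimic the redundancy such data tends to have.
gpu_blocks = bytes(range(64)) * 1024          # 64 KiB of "texture" data

packed = zlib.compress(gpu_blocks, level=9)   # offline, at build time
restored = zlib.decompress(packed)            # at load time, before GPU upload

assert restored == gpu_blocks                 # lossless: bit-exact roundtrip
print(f"{len(gpu_blocks)} -> {len(packed)} bytes")
```

The engine then hands `restored` straight to the GPU; no re-encoding step is needed because the stored data was already in a GPU format.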
The trouble with the above method is that lossless compression typically only shrinks textures to 50-75% of their original size, while a format like JPEG gets them closer to 25%. This is a huge difference. So what some developers do is store the image as a JPEG, and when they're ready to use it, decompress that and recompress it to a GPU format using a fast encoder.
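To see what that gap means at shipping scale, here's a rough calculation with made-up numbers (the 20 GiB payload is hypothetical, and 60% / 25% are just illustrative points in the ranges above):

```python
# Hypothetical texture payload of a large title, in GiB.
texture_data_gib = 20.0

lossless = texture_data_gib * 0.60    # lossless codec: ~50-75% of original
jpeg_like = texture_data_gib * 0.25   # JPEG-style storage: ~25% of original

print(f"lossless-compressed: {lossless:.1f} GiB")   # 12.0 GiB
print(f"JPEG-style storage:  {jpeg_like:.1f} GiB")  # 5.0 GiB
```

Several gigabytes of download is exactly the kind of difference that shows up in the sales-versus-download-time correlation mentioned earlier.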
What we do with Basis is smartly trade off quality for size (the users can decide how much quality they’re willing to give up). You can then choose to use your normal lossless compression pipeline-- in this case, we prepare the data so it can be better compressed by lossless codecs-- or you can turn it into a .basis file. When you’re ready to render textures stored as .basis, instead of a lossless decompression step you use our transcoder to turn the file into the GPU format of your choice.
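The runtime flow described here can be sketched as a toy; every name below is hypothetical and merely stands in for Basis's real encoder and transcoder, but the shape is the point: encode once into an intermediate representation, then transcode on demand to whichever GPU format the device supports, with no full lossless-decompression step.

```python
# Toy model of a universal-texture pipeline. All names are invented;
# the real .basis format and transcoder are far more sophisticated.

def encode_universal(pixels):
    """Offline step: encode once into a single intermediate representation."""
    return {"kind": "universal", "payload": bytes(pixels)}

def transcode(universal, target):
    """Runtime step: turn the intermediate data into the GPU format
    this device actually supports (e.g. BC1 on desktop, ETC1 on Android)."""
    assert universal["kind"] == "universal"
    payload = universal["payload"]
    if target == "BC1":
        return b"BC1:" + payload    # placeholder for real block output
    if target == "ETC1":
        return b"ETC1:" + payload
    raise ValueError(f"unsupported target {target!r}")

stored = encode_universal([10, 20, 30, 40])   # one file shipped to everyone
desktop_tex = transcode(stored, "BC1")
mobile_tex = transcode(stored, "ETC1")
```

The practical win is that you ship one asset instead of one per GPU format, and the transcode is cheap enough to do at load time.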
Indie developers can benefit in lots of ways. If you use Basis to generate GPU format data directly, it can fit into a lot of existing engine pipelines-- just use Basis to compress your textures, then you get free size improvements! Awesome! If you have the ability to rework your engine a bit, integrating .basis files into your pipeline usually improves size reduction as well as runtime performance (transcoding files is typically faster than lossless decompression).
IndieWatch: What should we expect from the future of VR/AR with the texture compression techniques you are working on?
Stephanie Hurlburt: Texture compression has really, really exciting impacts on VR and AR! In fact, it was a huge motivation for launching Basis.
In VR/AR, performance is tight. On mobile platforms, you have all the usual performance struggles mobile developers are very used to, while now also needing to hit framerate so your users don’t get nauseous. That is no small hurdle to cross. Most mobile VR developers totally understand why improving their texture compression can be crucial.
For higher-end platforms like desktop or console, hitting framerate and keeping download size down are hard enough. On top of that, users can now get really close to textures, and they expect a whole new level of quality for the game to feel realistic and truly immersive. And if a game relies on streaming texture data to help solve these issues, that texture data had better be compact. Some games have shipped with low-poly styles partly to sidestep this, but we can do better than that. We can have high-quality VR experiences. Let's do it.
Basis can also have a huge impact on web-based technologies. The .basis format can be totally universal-- one format that outputs any GPU Format you need. This can improve performance in web apps drastically, because they no longer need to store every GPU format the user could need.
In terms of engine integration, if you can send GPU format data to your engine of choice, Basis can be used to improve performance. We’re also happy to help people integrate Basis into their engine workflow, especially if you use an engine with source access. It’s usually not much trouble for us to check out your pipeline and help out, just let us know! Our e-mail’s firstname.lastname@example.org.
IndieWatch: What is your background? How did you end up in this business and what do you recommend for beginners interested in doing the same?
Stephanie Hurlburt: I have a degree in Computer Science. I originally got a job programming creative installations in the advertising business, and as a result of that work I ended up needing to do a lot of lower-level C++ and graphics development, mostly modifying frameworks like Cinder and openFrameworks. The hardware we were building for was just too unique to use any of the game engines available at that time.
I really loved the low-level code, much more so than design work and interfacing with end users. I liked building tools for other developers. So after that, I started to apply to jobs in the game engine business and used the low-level coding I’d done so far as evidence I could do the work.
If someone wants to do the kind of work I do today - low-level C++, game engines, graphics - I’d say you should definitely study C++, and learn about a graphics API like OpenGL or Vulkan. Lots of great tutorials online for these topics. I’d say you should start building projects as soon as you can - pick a niche you think you’d like, and get good at that and learn the concepts you need to complete that project. Don’t worry too much if you don’t have a degree-- sample projects and showing you can do this work matter a lot.
I’d also recommend getting to know people! The low-level/engine coding community on Twitter is surprisingly fantastic, and there are a lot of Slack chats about this as well. I actually made a list of people willing to mentor others in my field: https://stephaniehurlburt.com/blog/2016/11/14/list-of-engineers-willing-to-mentor-you. Don’t hesitate to reach out to anyone there - having one-on-one mentorship is surprisingly valuable.