Metal - Rendering Pipeline

12/20/2021


I have been very fortunate over the past few years to work not only with talented engineers and programmers, but, most importantly, with people generous and kind enough to support my desire and eagerness to learn more about how what I do on a computer is actually generated and rendered in the form of pixels.

This endeavor started many years ago with the goal of understanding a bit more about how PBR materials work, so I could do my work better and more accurately.

During my time at Naughty Dog, I worked with Yibing Yang and was amazed by how she implemented the cloth and hair shaders on Uncharted 4. I had close to zero idea of what she was doing, but I remember being intrigued by it.

At Sony Santa Monica, I had the pleasure of working with Florian Strauss, who was responsible for developing the entire Material System and Editor. I learned a ton from a software development and user experience point of view. Unfortunately, my technical knowledge at that time was very superficial, so I couldn't really understand the shading models or the rendering features we had available. So I focused on what was exposed, and on the logic behind composing those materials.

During God of War 2018, I worked with Rupert Renard on designing and developing the Dynamic Material system for characters, which created runtime blood, dirt, water, poison, and snow effects across all of the characters. I was responsible for the technical side of the characters, like shared UVs and material properties, but on the coding side I had very little involvement; I contributed at a more high-level/front-end capacity, defining specs and pushing the feature forward.

In my last year there, I had the pleasure of working super closely with Doug McNabb, who was in charge of rebuilding the Material Editor. At that point, I was starting to study Unreal and play with shading networks, constructing materials using nodes and very simple logic. Doug was always super kind in explaining the logic behind simple functions, and he even tried going a bit deeper on certain features like retro-reflectivity, but it was still too abstract. I could only follow the very surface of those things.

I ended up moving gigs and joined Striking Distance Studios at its very start as Character Director. I was in charge of building and developing the entire character pipeline, and I ended up helping shape our current Material Editor as well.

A few months after my start, Jorge Jimenez joined the studio as the head of Creative Engineering. I remember reading Jorge's Activision papers many, many years ago, and I couldn't have been more excited to get to meet and work with him. Most of today's real-time skin and eye shading is based on his studies and developments.

Jorge has been incredibly kind to me for the past two years. We have a super tight relationship at work and we are pushing things forward. I can't say much, but I definitely need to thank him for all the help and the time he has spent explaining how things actually work. I was finally able to start understanding the behind-the-scenes of CG rendering, which led me into this new journey here.

I feel like I'm writing an essay here, but if you've read this far, great.

My relationship with programming started a few years ago with Python. I took some very basic Udemy classes on Python for Maya - https://www.udemy.com/course/python-for-maya/

That gave me a basic introduction to scripting, and I've been hooked ever since. I try to write little snippets in Maya as much as I can, and I was able to write tiny batch actions for Unreal as well. Very simple stuff, but definitely fun and productive.

If you saw the project before this one - I started building a full facial rig from scratch, and I knew I could use ARKit to drive the rig. There are many live link apps already out there, but when I started looking into how to implement my own facial rig in my own app, I was fully stuck. I was able to download Git projects and Apple sample code projects and replace the geo, but I couldn't do absolutely anything else.

So I decided to learn the language (Swift). I started with an Apple app called Playgrounds and did all of the exercises there, more than 200. It takes you from very simple functions to writing very simple algorithms. After I finished all the exercises, I had a good grasp of the very basics of the language and very, very basic logic, but I had no idea how the frameworks were structured or how to implement them. So I started following online tutorials, such as https://www.youtube.com/c/realityschool . I learned a lot there too, but it still felt too advanced, so while searching for more zero-to-hero type courses, I stumbled upon Tito Petri's website - https://hotmart.com/product/en/aprenda-programar-com-tito-petri - and joined his school. I'm going through the entire AR path and have built around 8 example apps already. That was exactly what I was looking for. Totally worth mentioning too: Tito was my first 3D instructor and kick-started my CG journey back in 2004-2005. We later became friends and even had a small studio together for a few years, circa 2009-2010.

Well, obviously I couldn't stop there. During one of his classes, we built a simple material, and the parameters were already there, exposed: Roughness, Metallic, Diffuse Color… I was intrigued to understand what was defining them, and learned that it was all written in the Metal shading language. So, searching for more on it, I found https://www.raywenderlich.com and decided to take their Metal - Render Pipeline class.

The course takes you from a simple, clean, empty text file through writing all of the code and building blocks to get the CPU talking to the GPU, all using the Metal API. We write everything from scratch, and it touches on many conceptual ideas as well as some math. We create buffers, discuss how triangles are created, how OBJ files are structured, and so on. All of the boilerplate code is somewhat hard to follow, but the vertex and fragment shaders, the parts that we truly control, are definitely easier and more interesting.
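
To make that split concrete, here is a minimal sketch of a vertex/fragment pair in Metal Shading Language, in the spirit of what a course like this builds; the struct and function names are my own illustrative choices, not the course's actual code, and it assumes a vertex descriptor has been set up on the pipeline:

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical vertex layout, fed from a buffer described by a vertex descriptor.
struct VertexIn {
    float4 position [[attribute(0)]];
};

struct VertexOut {
    float4 position [[position]]; // clip-space position consumed by the rasterizer
};

// Runs once per vertex. A real pipeline would multiply by a
// model-view-projection matrix here; this just passes the position through.
vertex VertexOut vertex_main(const VertexIn in [[stage_in]]) {
    VertexOut out;
    out.position = in.position;
    return out;
}

// Runs once per covered pixel and returns its final color.
fragment float4 fragment_main() {
    return float4(1.0, 0.0, 0.0, 1.0); // solid red
}
```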

Obviously, when discussing fragment shaders with Jorge, I got introduced to https://www.shadertoy.com. I had seen this project before but honestly couldn't comprehend what was going on. In a nutshell, imagine rendering a big quad (two triangles) the size of the screen. The fragment shader is what controls how the pixels of the geometry will be interpreted/rendered, and Shadertoy basically works within that canvas. So all the cool stuff we see there is analytical rendering, using ray marching and other mathematical techniques to get those cool and beautiful effects.
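
As a rough illustration of that idea, here is a tiny Shadertoy-style effect sketched in Metal Shading Language rather than Shadertoy's GLSL; the Uniforms struct is a hypothetical stand-in for Shadertoy's built-in iResolution/iTime. Every pixel's color is computed analytically from its coordinate alone, with no 3D geometry involved:

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical uniforms; Shadertoy exposes these as iResolution and iTime.
struct Uniforms {
    float2 resolution;
    float  time;
};

// Runs for every pixel of the full-screen quad.
fragment float4 canvas_main(float4 pos [[position]],
                            constant Uniforms &u [[buffer(0)]]) {
    // Center the coordinates and correct for aspect ratio.
    float2 uv = (pos.xy - 0.5 * u.resolution) / u.resolution.y;

    float d = length(uv);                         // distance from screen center
    float radius = 0.25 + 0.05 * sin(u.time);     // pulse the disk over time
    float disk = 1.0 - smoothstep(radius, radius + 0.005, d); // analytic circle

    float3 color = mix(float3(0.1, 0.1, 0.2),     // background
                       float3(1.0, 0.8, 0.2),     // disk
                       disk);
    return float4(color, 1.0);
}
```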

And here I am, starting my journey to finally understand what truly happens behind the scenes when rendering 3D objects. I have finally reached the point where I can write or import 3D objects (using the Model I/O framework) and have built my first simple Phong shader. This is truly a dream come true, and it feels like a big milestone.
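
For reference, here is a minimal sketch of the classic Phong model as a Metal fragment shader; this is not my actual shader, and it assumes the vertex stage passes down world-space positions and normals, with a single point light and the camera position bound as buffers:

```metal
#include <metal_stdlib>
using namespace metal;

// Assumed interpolated inputs from the vertex stage (names are illustrative).
struct FragmentIn {
    float4 position [[position]];
    float3 worldNormal;
    float3 worldPosition;
};

// Classic Phong: ambient + diffuse + specular from one point light.
fragment float4 phong_main(FragmentIn in [[stage_in]],
                           constant float3 &lightPosition [[buffer(0)]],
                           constant float3 &cameraPosition [[buffer(1)]]) {
    float3 baseColor = float3(0.8, 0.2, 0.2);
    float3 N = normalize(in.worldNormal);
    float3 L = normalize(lightPosition - in.worldPosition);
    float3 V = normalize(cameraPosition - in.worldPosition);
    float3 R = reflect(-L, N);                    // light direction mirrored about N

    float NdotL = saturate(dot(N, L));
    float3 ambient = 0.1 * baseColor;             // keeps unlit areas from going black
    float3 diffuse = NdotL * baseColor;           // soft falloff with surface angle
    float specular = NdotL > 0.0                  // highlight only on the lit side
        ? pow(saturate(dot(R, V)), 32.0)          // 32 = shininess exponent
        : 0.0;

    return float4(ambient + diffuse + specular, 1.0);
}
```

The three terms map directly to what you see on screen: the ambient term is a constant floor, the dot(N, L) term gives the soft diffuse falloff, and the pow term produces the small bright highlight.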

I will share my journey along the way here, so come back later to see more progress.

My next goal is to implement shadows and study the many different ways of doing them. The cool thing about studying these core concepts is that there are so many examples already out there, so I just need to search for them. Everything I'm currently learning has been done a trillion times before, which gives me the confidence I need to keep pushing forward until I'm ready to explore the unknown :)
