Driving Art Features in Video Game Development
Some things come naturally to all of us.
Driving initiatives and pushing a vision forward is something I’ve always done in my life, through example, passion, communication and clarity.
I’ve been fortunate enough to have helped drive small and large features at AAA studios, on games such as God of War and The Callisto Protocol.
First and foremost, having a solid idea and goal in mind is crucial. The goal will always be a moving target, adapting to production schedules, project needs, unsolvable problems, new questions, and so on, but I need to reiterate how important it is for the driver, and the strike team associated with the effort, to understand the mission.
I use the word mission very deliberately here, because a full description will not always be given or composed right away.
I can put my last two big projects into perspective. On God of War (2018), Rupert Renard (engineer) and I, with support from other teams, wanted to build a Dynamic Material system that would allow snow, poison, blood, dust, etc. to be built and manipulated at runtime, cheap enough that not only could all the characters have it, but cinematics and combat could also leverage the same system.
With that goal in mind, we sketched different ideas and solutions. What we ended up building involved a lot of compromises, but those were easy to identify and choose since we knew what we were striving for. We wanted to focus on realtime gameplay feedback, scalability, and moving pixels, rather than high fidelity and really bespoke visuals.
When we started designing the Gore System for The Callisto Protocol, Kelvin Chu, Jorge Jimenez, and I followed the same structure, but with different goals. We wanted to build a very robust system that could also be leveraged by gameplay and cinematics, but we also wanted to raise the bar and fidelity across the board.
We started by looking at competitors to see what they were doing, and it later became clear which features we wanted to pursue and which ones had the best ROI for our case.
We had to make strong decisions right away. How do we cut the characters? How do we handle the extra draw calls? How do we handle the rigs? How do we activate/branch anim states? How do we scale it across all characters? So on and so forth.
At that point, development of the feature started, and I was fully driving it. By that I mean I was holding the vision and handling all the communication across the different disciplines. We rapidly identified one engineer to own the feature, Jon Diego Martines, and we started developing the entire system. The vision didn’t only cover the visual part of the feature, how I wanted it to look and feel, but also how I wanted it to behave and connect with different departments. The experience from GOW really allowed me to bring ideas to the table when it came to designing a system that empowers animators, artists, and designers to use very simple flags or tags to expand their repertoire of possibilities.
Working with Fabio Silva and the VFX team, and the Spain Rendering and Art teams, we were able to create dynamic blood across all enemies: activating region masks based on capsules mapped to universal UVs, a neighboring system for better transitions, cool dynamic effects like hanging limbs and spawning chunks, mutations and regrowth of limbs, and many more. We also added a really cool scalable, promotion-based Wound system that not only changed based on the angle of the hit, but also generated different visuals based on material properties.
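To give a flavor of the capsule-to-region-mask idea, here is a tiny, hypothetical Python sketch. None of these names or numbers come from the actual engine; it just illustrates the concept of each capsule covering a body region that maps to a fixed area of the shared, universal UV layout.

```python
import math

def closest_point_on_segment(p, a, b):
    """Project point p onto segment ab and clamp to the segment."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    return [a[i] + t * ab[i] for i in range(3)]

def activate_region(hit_pos, capsules, active_masks):
    """Activate the region mask of every capsule the hit falls inside."""
    for cap in capsules:
        q = closest_point_on_segment(hit_pos, cap["p0"], cap["p1"])
        if math.dist(hit_pos, q) <= cap["radius"]:
            active_masks.add(cap["mask_id"])  # would drive a mask channel in the shader
    return active_masks

# Illustrative capsules only: a torso and a right forearm.
capsules = [
    {"p0": [0, 0, 0], "p1": [0, 1, 0], "radius": 0.3, "mask_id": "torso"},
    {"p0": [0.5, 1, 0], "p1": [1.2, 1, 0], "radius": 0.15, "mask_id": "forearm_r"},
]
# A hit near the forearm capsule activates only that UV mask region.
hit = activate_region([0.9, 1.05, 0], capsules, set())
```

The neighboring system mentioned above would then blend between adjacent regions; that part is not sketched here.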
With that being said, we also cut a lot of features and simplified many others during production.
My production partner Eddy was also a key contributor to managing our time and efforts on this endeavor, so thanks for putting up with me :)
The main point I want to make here is this: make sure whoever is driving the feature has a solid reason why the feature is being created and designed, and a clear picture of its use. Adapting and playing it by ear is also essential. Feature creep is real, and people will always ask: but can we do that? Does the feature handle this case? These are the key moments where experience and a good sense of MVP/ROI come into play, and that is the difficult part.
Being a champion for the feature throughout, aligning the team, and keeping everyone motivated is also crucial, but those are more the global responsibilities of leaders IMO. I strongly believe that holding a vision and pushing things through is tough, but probably the most important thing.
Having an idea is great, but executing that idea is what makes it real, I tend to believe.
GDC - 2023 - The Character Rendering Art of The Callisto Protocol
When I joined Striking Distance Studios at its inception, one of the goals was to push the photorealism of characters in video games further.
When I heard Jorge Jimenez was also joining, I was completely sure we would be able to achieve it.
So we partnered for 3.5 years, alongside many other artists, engineers, producers, and we did it.
This presentation summarizes the mindset, the attitude, and the care and love we all shared around this goal of ours.
It was a true pleasure to share the stage with Jorge Jimenez and Martin Contel and tell a bit of what we went through and what we did to get the results we got.
You can watch the entire presentation for free here
https://www.gdcvault.com/play/1029339/The-Character-Rendering-Art-of
Metal - Rendering Pipeline
12/20/2021
Rendering Pipeline
I have been very fortunate in the past few years to work with Engineers and Programmers who are not only talented but, most importantly, generous and kind enough to support my desire and eagerness to learn more about how what I do on a computer is actually generated and rendered, in the form of pixels.
This endeavor started many years ago with the goal of understanding a bit more about how PBR materials work, so I could do my work better and more accurately.
During my time at Naughty Dog, I worked with Yibing Yang and was amazed by how she implemented the cloth and hair shaders on Uncharted 4. I had close to zero idea of what she was doing, but I remember being intrigued by it.
At Sony Santa Monica, I had the pleasure of working with Florian Strauss, who was responsible for developing the entire Material System and Editor. I learned a ton from a software development and user experience point of view. Unfortunately, my technical knowledge at that time was very superficial, so I couldn’t really understand the shading models or the rendering features we had available. So I focused on what was exposed, and on the logic behind composing those materials.
During God of War 2018, I worked with Rupert Renard on designing and developing the Dynamic Material system on characters, which created runtime blood, dirt, water, poison, and snow effects across all of the characters. I was responsible for the technical part behind the characters, like shared UVs and material properties, but on the coding side I had very little involvement; I contributed at a more high-level/front-end layer, defining specs and pushing the feature forward.
In my last year there, I had the pleasure of working super closely with Doug McNabb, who was in charge of rebuilding the Material Editor. At that point, I was starting to study Unreal and play with shading networks, constructing materials using nodes and very simple logic. Doug was always super kind in explaining the logic behind simple functions, and he even tried going a bit deeper on certain features like retro-reflectivity, but it was still too abstract. I could follow only the very surface of those things.
I ended up moving gigs and started as Character Director at Striking Distance Studios, at the studio’s very start. I was in charge of building and developing the entire character pipeline, and I ended up helping shape our current Material Editor as well.
A few months after my start, Jorge Jimenez joined the studio as the head of Creative Engineering. I remember reading Jorge’s Activision papers many, many years ago, and I couldn’t be more excited to get to meet and work with him. Most of today’s realtime skin shaders and eye renderings are based upon his studies and developments.
Jorge has been incredibly kind to me for the past two years. We have a super tight relationship at work and we are pushing things. I can’t say much, but I definitely need to thank him for all the help and the time spent explaining how things actually work. I was finally able to start understanding the behind-the-scenes of CG rendering, which led me into this new journey here.
I feel like I’m writing an essay here, but if you are still reading at this point, great.
My relationship with programming started a few years ago with Python. I took some very basic classes on Udemy on Python for Maya - https://www.udemy.com/course/python-for-maya/
That gave me a basic introduction to scripting, and I’ve been hooked ever since. I try to write little snippets in Maya as much as I can, and I was able to write tiny batch actions for Unreal as well. Very simple stuff, but definitely fun and productive.
If you saw the project before this one - I started building a full facial rig from scratch, and I knew I could use ARKit to drive the rig. There are many live link apps already out there, but when I started looking into how to implement my own facial rig in my own app, I was fully stuck. I was able to download Git projects and Apple sample code projects and replace the geo, but I couldn’t do anything else.
So I decided to learn the language (Swift). I started with an Apple app called Playgrounds and did all of the exercises there, more than 200. It takes you from very simple functions to writing very simple algorithms. After I finished all the exercises, I had a good grasp of the very basic language and very, very basic logic, but I had no idea how the frameworks were structured or how to implement them. So I started following online tutorials, such as https://www.youtube.com/c/realityschool . I learned a lot there too, but it still felt too advanced, so while searching for more zero-to-hero type courses, I stumbled upon Tito Petri’s website - https://hotmart.com/product/en/aprenda-programar-com-tito-petri - and joined his school. I’m going through the entire AR path and have built around 8 example apps already. That was exactly what I was looking for. Totally worth mentioning too: Tito was my first 3D instructor and kick-started my CG journey back in 2004-2005. We later became friends and even had a small studio together for a few years, circa 2009-2010.
Well, obviously I couldn’t stop there. During one of his classes, we built a simple material, and the parameters were already there, exposed: Roughness, Metallic, Diffuse Color… I was intrigued to understand what was defining them, and learned that it was all written in the Metal shading language. So, searching more into it, I found https://www.raywenderlich.com and decided to take their Metal Render Pipeline class.
The course takes you from a simple, clean, empty text file through writing all of the code and building blocks needed to get the CPU talking to the GPU, all using the Metal API. We write everything from scratch, and it touches on many conceptual ideas as well as some math. We create buffers, discuss how triangles are created, how OBJ files are structured, and so on. All of the boilerplate code is somewhat hard to follow, but the vertex and fragment shaders, the parts that we truly control, are definitely easier and more interesting.
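The OBJ structure mentioned above is simple enough to show in a toy sketch. This is plain Python rather than the Model I/O framework the course uses, and it handles only `v` (vertex position) and `f` (face index) lines; it just illustrates how the format hangs together.

```python
def parse_obj(text):
    """Toy OBJ reader: 'v' lines are positions, 'f' lines are 1-based indices."""
    vertices, triangles = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # keep only the position index of each "v/vt/vn" token, converted to 0-based
            idx = [int(tok.split("/")[0]) - 1 for tok in parts[1:]]
            triangles.append(tuple(idx[:3]))
    return vertices, triangles

obj = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
verts, tris = parse_obj(obj)  # one triangle over three vertices
```

In the real pipeline, those vertex positions are exactly what get copied into a GPU buffer for the vertex shader to consume.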
Naturally, when discussing fragment shaders with Jorge, I got introduced to https://www.shadertoy.com. I had seen this project before but honestly couldn’t comprehend what was going on. In a nutshell, imagine rendering a big quad (two triangles) the size of the screen. The fragment shader is what controls how the pixels of the geometry will be interpreted/rendered, and Shadertoy basically works within that canvas. So all the cool stuff we see there is analytical rendering, using ray marching and other mathematical techniques to achieve those cool and beautiful effects.
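The full-screen-quad idea can be mimicked in a few lines of Python: one function evaluated once per pixel, deciding that pixel’s color from its UV coordinate alone. This is just the concept (a grayscale circle via a 2D distance check), not GLSL or Metal code.

```python
def fragment(uv):
    """uv in [0,1]^2 -> grayscale value: white inside a centered circle."""
    dx, dy = uv[0] - 0.5, uv[1] - 0.5
    dist = (dx * dx + dy * dy) ** 0.5
    return 1.0 if dist < 0.25 else 0.0

def render(width, height):
    """Evaluate the 'fragment shader' at every pixel center, like a GPU would."""
    return [[fragment(((x + 0.5) / width, (y + 0.5) / height))
             for x in range(width)] for y in range(height)]

image = render(8, 8)  # center pixels land inside the circle, corners outside
```

Swap the distance check for a ray march through a signed distance field and you have the skeleton of most Shadertoy scenes.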
And here I am, starting my journey to finally understand what truly happens behind the scenes when rendering 3D objects. I have finally reached the point where I can write or import 3D objects (using the Model I/O framework) and build my first simple Phong shader. This has truly been a dream come true, and it feels like a big milestone.
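For anyone curious, the classic Phong model behind that shader is just ambient + diffuse + specular. Here is the math in plain Python (the same computation a fragment shader runs per pixel); the coefficient values are illustrative, not from my actual shader.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

def reflect(l, n):
    """Reflect the (surface-to-light) direction l about the normal n."""
    d = dot(l, n)
    return tuple(2 * d * n[i] - l[i] for i in range(3))

def phong(normal, to_light, to_eye, shininess=32.0,
          ambient=0.1, kd=0.7, ks=0.2):
    """Classic Phong intensity: ambient + diffuse + specular."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = kd * max(0.0, dot(n, l))
    r = reflect(l, n)
    specular = ks * max(0.0, dot(r, v)) ** shininess
    return ambient + diffuse + specular

# Light straight along the normal, eye on the same axis: full contribution.
intensity = phong((0, 0, 1), (0, 0, 1), (0, 0, 1))
```

In the actual shader this runs per fragment with interpolated normals, which is what makes the surface look smooth.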
I will share my journey along the way here, so come back later to see more progress.
The next goal is to implement shadows and study the many different ways of doing them. The cool thing about studying these core concepts is that there are so many examples already out there; I just need to search for them. Everything I’m currently learning has been done a trillion times before, which gives me the confidence I need to keep pushing forward until I’m ready to explore the unknown :)
Arduino - UV Lightbox - Rabbit hole
This was a really fun project.
I wanted to build a UV Lightbox to cure my Resin 3D Printed pieces… (to be honest, it was more of an excuse to dive deep into Arduinos, mechanical pieces and so on).
It all started with getting a basic Arduino kit and playing with the tutorials.
Then I modelled my prototype in Maya, to scale. I should have used Fusion or another CAD program, but after a few trials, I was able to correct my mistakes in the Maya model.
In order to build my lightbox, I decided to buy a very simple FDM printer, which was a kit I could put together. That part itself was also super fun because it gave me more insight on how the machine operated as well.
Then I got into the mechanical aspect of the turntable, designed a few different bearings, and ended up using a microwave turntable system (kind of).
The coding on the Arduino was all C++… I only had a very basic understanding of Python, but I was able to pull it off. I only needed to build a UI screen with a few options, select the timer, let the motor and UVs do their thing, and then turn it all off. It took me a couple of days to get the code running correctly, but it worked.
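The control flow I needed was really that simple, and can be sketched in a few lines. This is Python rather than the Arduino C++ I actually wrote, with the hardware stubbed out as booleans, so it just shows the logic: start the motor and UV LEDs, count down the timer, shut everything off.

```python
import time

class Lightbox:
    """Conceptual lightbox controller; names and structure are illustrative."""

    def __init__(self):
        self.motor_on = False
        self.uv_on = False

    def start(self):
        self.motor_on = True
        self.uv_on = True

    def stop(self):
        self.motor_on = False
        self.uv_on = False

    def cure(self, minutes, tick=lambda: time.sleep(60)):
        # tick is injectable so the loop can be simulated without waiting;
        # on the Arduino this was just a delay inside loop()
        self.start()
        elapsed = 0
        while elapsed < minutes:
            tick()
            elapsed += 1
        self.stop()

box = Lightbox()
box.cure(2, tick=lambda: None)  # simulate a 2-minute cure instantly
```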
Then I had to get into the soldering/electronics. That part is very foreign to me, so I just did the very basics and crossed my fingers.
After a month or so into this project, my lightbox was completely built and fully working.
I then 3D printed some objects and put the machine to use.
After an hour, the motor burned out… I was probably pushing too much current through it…
It was a super fun project to explore many different subjects I was not familiar with, and learn more about the makers scene as well.