Tuesday, January 28, 2014

KeyShot and improving workflow

Spent a total of 10 hours today ZBrushing, Maya'ing, KeyShotting, Nuking and Photoshopping to improve my pipeline and increase the speed at which I turn this creative thought process into a quantifiable output. As before, I avoided using tutorials except when I couldn't find what I needed. Tomorrow I'll start reading up again and use that knowledge to improve my pipeline further. I really like running into issues and solving them by simply using the program's logic and a little bit of brain matter. It makes learning new stuff much more fun than taking everything from tutorials.

Below are some screens of today's results.

ZBrush 4R6 viewport screengrab
KeyShot 4 render
NukeX 8.0 composite
After a little bit of Photoshop

Kind regards,
Pim

ps. I made this with a friend.


Monday, January 27, 2014

Global Game Jam 2014 using Oculus Rift

First things first. Here is the GGJ page for our game, Schizophrenia. At the bottom of that page are two links: one to the source files and one to the executable. If you prefer a more immersive experience, you can log into Facebook using this and copy your number when the game boots.

I applied for the Global Game Jam 2014 only a week before it was actually held because I wasn't sure I could combine it with moving to Ghent for my internship and finishing course work. But I managed and participated in this incredible event. My team consisted of six people: two programmers and four artists. We registered as 'Fat Kids are Hard to Kidnap' and thus our team sprang into existence.

At around 4 pm on Friday we were entirely set up and ready for GGJ14 to commence, and so were the hosts: around that time the presentation started in which (among other things) this year's theme was revealed. Its vagueness only supported the idea we had before it was announced.

"We don't see things as they are, we see them as we are."

So with this knowledge we individually started concepting ideas, writing down single-word brain farts on individual post-its. After that we combined all of the post-its and grouped the ones that were similar. Quickly the word schizophrenia was tossed around and we decided to build our game around the experiences a schizophrenic has: a multitude of totally different and unrecognizable impulses for the player to deal with, in an environment that symbolized losing oneself in an area devoid of structure save for the environment itself... and since we didn't want to creep too far into the 'experience' side of games, we stayed relatively close to the horror cliché of an abandoned subway system. All this combined into an Oculus Rift experience made it a little daunting for some people to proceed into the dark.

Below are some images from the game.


Oculus Rift gameplay

Unity Editor screengrab

Unity Editor screengrab

Unity Editor screengrab

Unity Editor screengrab

During the next few days I have nothing on my plate except sleeping and doing other stuff to survive, so I'm planning to work on the game for another day or two to get rid of the bugs and some ugly textures.

Kind regards,
Pim

Thursday, January 23, 2014

Nuke Compositing Setup

During the past two days I spent my time making many mistakes while rendering a 100-frame clip using Arnold as the render engine. I created a shader network for a single Read node (one image input or one sequence input) that doesn't need much editing apart from altering its exported channel layers. So far the network takes apart nearly all renderable passes from Arnold's output and provides many different ways of editing them. Examples are grading, exposure and saturation for the grey-scaled RGBA beauty render, (in)direct diffuse/specular, normal maps, reflection, refraction, point position, subsurface scattering and deep/mid/shallow scattering. I'm still looking into how to add Z depth, as Arnold hadn't given me an image for this. Below is an image.

Complete network
Beauty, Diffuse, Normal and Object ID
Specular, Reflection and Refraction
Point Position and Subsurface Scattering (the rest of it going up)
As you can see from the previous images, I didn't bother to connect all the different outputs to the final Write node, as many contained information I didn't think would change the final image at all. For example, there was an almost invisible bit of information in all the scatter layers combined, so I decided not to connect those. Their final contribution would have been negligible and they would have increased render time. As it stands, it took 15 minutes to render out 100 frames on my laptop at 1920x1080 resolution. That's 9 seconds per frame. Not bad at all. Below is the 4-second render this shader network managed to put out after some small tweaks.




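Incidentally, the pass-splitting at the front of this network is also easy to script. Below is a minimal sketch in Nuke's Python API that builds one Shuffle-plus-Grade branch per layer found in a Read node; the node name 'Read1' and the single Grade per branch are my own simplification, not the exact network above.

```python
import nuke

# Assumes a Read node called 'Read1' already points at the multi-layer EXR.
read = nuke.toNode('Read1')

for layer in nuke.layers(read):
    # One Shuffle per AOV layer, so each pass gets its own editing branch.
    shuffle = nuke.nodes.Shuffle(inputs=[read], label=layer)
    shuffle['in'].setValue(layer)
    # A Grade per branch stands in for the per-pass grading/exposure/
    # saturation editing described above.
    nuke.nodes.Grade(inputs=[shuffle], label=layer + '_grade')
```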
I totally forgot to turn off the Arnold license check, so I'll have to render it all again. However, I won't be continuing with this until the end of Sunday, as tomorrow I'll be joining my team for GGJ 2014. We're called 'Fat Kids are Hard to Kidnap'.

Kind regards,
Pim

Tuesday, January 21, 2014

Solid Angle Arnold AOVs & Object IDs

After yesterday I was left with a taste for more, so I kept figuring out what this render engine is actually capable of outputting, as I really want to get some animated batches into Nuke for editing. I am intentionally not following any start-to-finish tutorials, since I want to figure out what can go wrong and what other hidden things are tucked away in corners I wouldn't check otherwise. I feel I'd circumvent problems that could help me understand why certain renders look better than others.

I rebuilt the shaders for the Red Skull model to have a light subsurface scatter. This turned out to clamp the brightness of the rim shader, but really, that's okay, since it was a little too present in the render I had made previously. I also created an ObjectID input for the shaders, since I want to be able to edit the values of the background and foreground individually. It's really quite simple to do, since Solid Angle provides aiUtility nodes that already have the attribute in a drop-down menu. All you have to do is turn on ObjectID in the AOVs tab of your render settings and drag the aiUtility node to its corresponding input on the rim shader's shading engine. Image below.

Small shader with ObjectID.
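For reference, the same hookup can be scripted in a few lines of Maya Python. This is only a sketch of how I understand it: the shading engine name 'rimShaderSG' is hypothetical, and mtoa's AOV helper may differ between MtoA versions.

```python
import maya.cmds as cmds
from mtoa.aov import AOVInterface

# Register the ObjectID AOV (the script equivalent of ticking it in the
# AOVs tab of the render settings). The aovType argument may vary per build.
AOVInterface().addAOV('ObjectID', aovType='rgb')

# An aiUtility node; its color-mode dropdown has the object-ID option
# mentioned above (set it in the Attribute Editor, or via setAttr once you
# know the enum index in your MtoA version).
utility = cmds.shadingNode('aiUtility', asShader=True)

# MtoA exposes per-AOV inputs on shading engines under aiCustomAOVs;
# connecting the utility here is the script version of the drag-and-drop.
sg = 'rimShaderSG'  # hypothetical name for the rim shader's shading engine
cmds.setAttr(sg + '.aiCustomAOVs[0].aovName', 'ObjectID', type='string')
cmds.connectAttr(utility + '.outColor', sg + '.aiCustomAOVs[0].aovInput')
```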

After that I spent a few hours figuring out how to render a batch of AOVs to a single EXR file, but couldn't find the option. It turns out it isn't in the render settings for the Arnold renderer, but is exposed when you select the driver running your custom AOVs. A simple checkbox under the 'Advanced Output' header called 'Merge AOVs' turns it on and off. I wish this setting were in the render settings; it would have saved some time.

Merge AOVs.
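Since the checkbox is easy to miss, the script equivalent is a one-liner (assuming MtoA's default driver node name, defaultArnoldDriver):

```python
import maya.cmds as cmds

# mergeAOVs is the 'Merge AOVs' checkbox under Advanced Output on the EXR
# driver; enabling it writes all AOVs into one multi-layer EXR.
cmds.setAttr('defaultArnoldDriver.mergeAOVs', 1)
```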

While writing this blog post I've been rendering a 2k image with around 18 different AOVs, which I will now have the pleasure of editing in Nuke. For tomorrow I'm planning to create a small animation in the same scene to see how to use sequences in Nuke with this many channels. Hopefully my trusty laptop will run all of this; otherwise I will have to turn to the renderfarm at my college. I'm not sure I have enough HDD space to render this many channels (even if I don't need them all) for an animation of a few seconds. The .exr output I have right now is already 720MB... I might have enough space for one second. Below is a picture of ObjectID working in Nuke.

ObjectID working
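As a sanity check on that space worry, the arithmetic is quick. A sketch assuming 25 fps and a flat 720MB per frame (real sizes will vary from frame to frame):

```python
# Back-of-envelope disk estimate for a multi-AOV EXR sequence.
mb_per_frame = 720
fps = 25
gb_per_second = mb_per_frame * fps / 1024.0
print('%.1f GB per second of animation' % gb_per_second)  # ~17.6 GB
```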

Kind regards,
Pim

ps. If I get a good result I'll post it together with tomorrow's update... if I make such an update... No promises!

Monday, January 20, 2014

Solid Angle Maya to Arnold v0.25.2

I kind of rediscovered Solid Angle's Arnold renderer yesterday and have been playing around with it to get some good results that I can export to Nuke for compositing. However, I've been having such a blast that I spent more time arranging shaders than actually getting something ready for compositing.

To build some more fundamental knowledge of the program I decided to create some simple shaders and, from those, a scene. Unfortunately none of the shaders from the Solid Angle support site worked, so I figured out most shaders myself with some help from the support files. Next to this, I figured it would be fun to create a bulb shader to apply to some cubes that I threw around the scene using drag, gravity and newton field simulations. I can't even express how much fun that was, though the gravity field simulation didn't always work well with more than 50 objects. Below is an image of the scene I quickly assembled. From left to right, top to bottom, the shaders are: thin plastic - car paint - ceramic - chrome - clay - glass - bulb - metallic - gold - wood - skin - matte plastic

Shaders rendered in Arnold
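For the curious, that field setup only takes a handful of commands. Below is a rough Maya Python sketch using legacy rigid bodies; the counts, magnitudes and placement ranges are my own illustration, not the original scene's.

```python
import random
import maya.cmds as cmds

# Create the three fields that toss the cubes around.
gravity = cmds.gravity(magnitude=9.8)[0]
newton = cmds.newton(magnitude=20.0)[0]
drag = cmds.drag(magnitude=0.5)[0]

for _ in range(50):
    # Scatter a cube and make it an active rigid body so fields affect it.
    cube = cmds.polyCube()[0]
    cmds.move(random.uniform(-10, 10), random.uniform(2, 12),
              random.uniform(-10, 10), cube)
    rigid = cmds.rigidBody(cube, active=True)
    for field in (gravity, newton, drag):
        cmds.connectDynamic(rigid, fields=field)
```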

When I finished this I found out I couldn't use the batch render option box with the Arnold renderer, which I found odd, since I could circumvent the problem by skipping the option box and rendering the images directly. I still haven't figured it out entirely, but I guess it has to do with how Arnold deals with outputting an image.

After this I created a rim shader that responds to the facing of the camera. Below is an image of what it looks like. What it comes down to is a SamplerInfo node sending its facing ratio to a Clamp node's InputR (or G or B; whatever you want, as long as you keep using it). The Clamp node then sends its OutputR to both the vCoord of the Ramp node and the Blender input of the BlendColors node. This way every part of the mesh that faces the camera directly gets multiplied by 1 and receives the top color of the Ramp node (v=1). The parts of the mesh that face away from the camera get multiplied by 0 and receive the bottom color of the Ramp node (v=0). Then the Output of the BlendColors node connects to the Color input of the aiStandard shader. Voilà.

Rim shader
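The same network takes only a few lines of Maya Python to build. A minimal sketch, assuming MtoA is loaded for the aiStandard shader; the post doesn't say what feeds the BlendColors color inputs, so I've assumed the ramp drives color1 and left color2 at its default:

```python
import maya.cmds as cmds

# Build the rim shader network described above. Node names are generated.
info = cmds.shadingNode('samplerInfo', asUtility=True)
clamp = cmds.shadingNode('clamp', asUtility=True)
ramp = cmds.shadingNode('ramp', asTexture=True)
blend = cmds.shadingNode('blendColors', asUtility=True)
shader = cmds.shadingNode('aiStandard', asShader=True)

cmds.setAttr(clamp + '.maxR', 1.0)  # keep the facing ratio in [0, 1]

# The facing ratio drives both the ramp lookup and the blend factor.
cmds.connectAttr(info + '.facingRatio', clamp + '.inputR')
cmds.connectAttr(clamp + '.outputR', ramp + '.vCoord')
cmds.connectAttr(clamp + '.outputR', blend + '.blender')

# Camera-facing areas (ratio ~1) pick up the top of the ramp (v=1),
# grazing areas (ratio ~0) the bottom (v=0), as described above.
cmds.connectAttr(ramp + '.outColor', blend + '.color1')
cmds.connectAttr(blend + '.output', shader + '.color')
```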

I decided to use this rim shader on my Red Skull model because it has some nice definition in the normal map. I played with light settings until I found something I was happy with and rendered it at 4k resolution, just because I had never done that before. I went for a double three-light setup, as I wasn't able to light the head the way I wanted with just three lights. So now it has two back lights, two key lights and two fill lights, all varying in color, saturation, intensity and exposure.

Light setup. The small planes are the lights for the head.

Arnold rim light render.

This was fun. Tomorrow: renderfarm and compositing.

Kind regards,
Pim

Sunday, January 19, 2014

Datagate trailer

Back again with more about Datagate. Apparently we have a trailer; I didn't know! But since it isn't on my YouTube channel I can't embed it, so here's a link and some more in-game screenshots.







Kind regards,
Pim

Friday, January 17, 2014

Animation and Datagate

These updates are becoming nearly monthly and I'm pretty okay with that. I feel that, art dump or not, the work posted here should still be presentable and not taken too lightly. It's still a representation of how seriously I take what I'm trying to achieve.

During the past three weeks I've spent most of my time working on a 15-second animation that is supported by two different 100-frame attitudes, and during the same time I created a rig for a character provided by my teacher, Colin Morrison, using video tutorials provided by Perry Leijten. The difficulty in this rigging assignment was creating several dynamic rigs for the multiple separate meshes that needed to be influenced. Below are two screenshots of the rig.

Back of the character. Both guns, the head and the sword are dynamic.

Front of the character.


As for the animation, I tried to delve a little further into rendering and compositing. I decided to stick to my rendering habit of using Mental Ray and not render out too many passes for each frame, as I was using the renderfarm at my school. It still took 18 seconds per frame, amounting to 1h30m for the 390-frame animation. Rendering more passes would have increased render time by 500%. After this I imported the .exr sequence into Nuke PLE, where I learned that this version gives artifacts in the shape of random blocks, both in the viewport and in the renders.

Nuke Personal Learning Edition viewport artifacts

So after finding out about this, I decided to just install the trial license for Nuke 8.0, since I had already used the Nuke 7.0 license a little over a month ago during my application for an internship at Grid-VFX. The result was, obviously, much better. After a little fun with some of the nodes I exported a .mov with the h.264 codec and uploaded it. If you prefer Vimeo, here is the link to that one.



And then there is Datagate, a course project created in 14 days by a team of 12 people. It's difficult to explain exactly what it is, but it comes close to a path-traced real-time investigation game that forces the player to come to a solution outside the game, since it doesn't offer a win or loss condition inside it. The website for it can be found here. I'll post the download link on my blog if we decide to make it universally available... but I won't leave you with nothing, so see below for two in-game real-time images showing reflections and refractions.




Kind regards,
Pim