Thursday, December 4, 2014

Specialisation Progress Pt. 3

To usher in this blog post, here's some news in French explaining how last week Astérix apparently overtook Interstellar and the new Hunger Games film in French cinemas. I feel like there's a whole lotta snowballing going on with this movie. Pretty awesome! I saw it a week ago Wednesday and it was pretty sweet. Also, Wiplala, a movie I never posted about, premiered too, and although it's a children's movie spoken completely in Dutch... it's actually pretty cool.

So specialisation-wise, here we go again. I'm pretty much on schedule with it so far, though the last few weeks have been a little busy if I'm honest. As the deadline was getting closer it kind of closed in a lot more than I had anticipated. Due to a fairly large misunderstanding between me, my supervisor and the terms of my old curriculum, I now have to complete this project on December 19th instead of January 9th... which is kind of a bummer! On the other hand, it did push me to take the next step instead of dwelling some more on my renders, so that's good I think!

Like previous blog posts I'll structure this one into chapters so it's easier to read. So let's see where I left off... ah yes. Simulations.


A bunch of, admittedly, very ugly renders of very ugly simulations. That's the first thing I changed. I upped the ante and scaled my RealFlow scenes by a factor of 20 because I wasn't getting the right resolution. This gave me much better results at the cost of 7 to 8 hours of calculating the simulation.

Vorticity of the fluid.

Velocity of the fluid.

I then attempted to render these simulations in separate VRay scenes but quickly found that VRay actually didn't render this stuff very fast. The renders themselves also weren't very impressive quality-wise, as a lot of the refractions just seemed to be internal reflections; something I still don't completely understand, but alas. Because of this I decided to switch to Mental Ray, as I had been getting decent results rendering the limited number of BiFrost fluid simulations I had previously created. This turned out to be a good decision, as it allowed me to cut render times in half at double the resolution I was rendering at in VRay. A test result below:

The problem with this was that it took the color for refractions and reflections from Lambert shaders I had set up really quickly. I didn't think that was an accurate representation of what refractions would look like using the actual VRay shaders, so I transferred my VRay materials to Mental Ray. Arguably they could have looked much better than this if I had spent more time on them, but really all I needed was the base color and maybe some surface definition.

The tests using these shaders turned out to be much better than I had expected. Another one in the pocket and something less to worry about. Some results below.

Simulation 1 
Simulation 2
Simulation 3

I am least satisfied with the first one, as it almost seems like I had forgotten to turn up the liquid's viscosity. Or maybe it's the gravity daemon I didn't turn up high enough in RealFlow. I'll be returning to this simulation at the end, but only if I have enough time to finish this short. That's including the documentation I have to write about workflow and, partly, pipeline. Fortunately, because I am writing all these blogs I'll have a lot to look back and reflect on. Hurrah!

Shading revisited

I was really unhappy with how my shading looked on the larger surfaces. Very undefined and really not fitting the style I wanted to go for. During one of several bi-weekly meetings with my supervisor we both agreed my maquette looked like something out of a clown horror movie and I decided to mash up all the shaders. Below are some images I stuck together to show the difference in both lighting conditions and shaders.

I also made the windmills turn in the opposite direction because it didn't feel natural to me. Might have been the lighting or the way I've set up the camera paths. I felt the windmills should turn against the camera movement.

Final Pre-vis

Then, finally, before I could let myself start rendering I created playblasts of each shot and composed a short out of them. Similar to my previous blog post I simply stitched the clips together using the same Sony Vegas template I used before, although this time I thought it would be nice to make it more alive. This way it was also easier to understand what exactly was going on. It also helped guesstimate the rhythm of the clips so it wouldn't feel rushed.

The applause at the end is a joke, plz...


After this step I spent some time setting up all my render scenes and render layers. For this I used several common setups, like disabling the primary visibility for reflection meshes and adding a holdout material to a duplicate of those meshes, but with primary visibility enabled. This is a really neat trick and it turned out to render my stuff even faster. Next up was setting up some render layers in both Mental Ray and VRay. Fortunately I didn't need that many, as I only wanted a shadow layer and a beauty layer.

I set up my renders in such a way that they would first output a beauty layer, so I could start creating simple pre-comps. I was then planning to recreate my beauty pass, steal the alpha just in case, and keep the whole comp script the same for every shot. This didn't work perfectly, but I'm sure I shaved a lot of time off by creating these comps before my renders were finished. Below are the results of some quick and dirty pre-comps.

Then, when the rendering was underway, I made a render sheet so I could calculate drive space and start new renders in a timely fashion, without losing much time to bodily needs like sleeping, eating and visiting the toilet. That was last week and I feel like I'm still recovering from the 150+ hours of managing all my renders on two machines. I'm glad that stuff is over. Lesson learned. Next time I'm going to write a macro that simply batches through all that stuff. Or use a render manager like Thinkbox's Deadline. Below is an image of my total render time. This excludes many of the tests and the creation of irradiance and light caches.
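For illustration, the arithmetic behind such a render sheet only takes a few lines of Python. Every figure below (shot names, frame counts, sizes, minutes per frame) is a made-up placeholder, not an actual number from this project:

```python
# Back-of-the-envelope render-sheet math: estimate plate-drive space and
# wall-clock time for a batch of renders. All figures are hypothetical
# placeholders; shots are (name, frame_count, pass_count).

def estimate(shots, mb_per_frame, minutes_per_frame, machines=2):
    total_frames = sum(frames * passes for _, frames, passes in shots)
    space_gb = total_frames * mb_per_frame / 1024.0
    hours = total_frames * minutes_per_frame / 60.0 / machines
    return space_gb, hours

shots = [("shot01", 240, 7), ("shot02", 180, 7)]
space, hours = estimate(shots, mb_per_frame=12, minutes_per_frame=2)
print(f"{space:.1f} GB of plates, ~{hours:.1f} hours on 2 machines")
```

Even a rough sheet like this makes it obvious in advance when a drive will fill up mid-batch.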

To summarize, I've rendered out the following passes.

Mental Ray
Alpha - Beauty - Diffuse - Reflection - Refraction - Shadow - Specular

VRay
Alpha - Beauty - Diffuse - GI - Lighting - Normals - ObjectID - Reflections - Refractions - Shadows - Specular - Velocity - zDepth
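As a side note on rebuilding the beauty from these passes: for many renderers the beauty is, approximately, the sum of its component passes. This is only a sketch of that additive idea with made-up pixel values; exactly which passes sum back to the beauty depends on the renderer and shader setup:

```python
# Additive beauty rebuild with made-up per-pixel values. A real comp does
# this per pixel (e.g. with plus/merge operations in a compositor); the
# component list here is an assumption, not a universal rule.

def rebuild_beauty(passes):
    components = ("diffuse", "reflection", "refraction", "specular", "gi")
    return sum(passes[name] for name in components)

pixel = {"diffuse": 0.42, "reflection": 0.10, "refraction": 0.05,
         "specular": 0.08, "gi": 0.12}
beauty = rebuild_beauty(pixel)
print(beauty)  # should closely match the rendered beauty pass value
```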

After some thought I decided I'd render everything to a drive which, from that point on, would act as my plate drive. It's only 500 GB, but for the current purpose that's more than enough. It's also extremely handy that it's a 2.5" external drive, because this way I am not limited to my workstation. If, for a moment, I don't feel like working at that location I simply move to my laptop. It's absolutely fabulous.


Immediately after the first batch of passes was ready I spent some time setting up base comps for all the shots. For some this just required gathering all the render layers and merging them together to mimic the beauty. For the first and last shot it required me to export the original shot footage to a sequence plate at 24.97fps as opposed to 29.97fps. At first this gave some issues, though with a short expression, in combination with forcing it to calculate in integers, I was able to cut out frames without seeing an actual difference in the footage.
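For the curious, the integer frame-dropping idea can be sketched like this. This is not the exact expression I used; the rates below are the standard NTSC pair (29.97 down to 23.976), chosen for the example because their ratio reduces to a clean 5/4:

```python
# Dropping frames with integer math: NTSC rates are rational
# (29.97 = 30000/1001, 23.976 = 24000/1001), so the ratio reduces to 5/4
# and pure integer arithmetic avoids float rounding at frame boundaries.

def remap(out_frame, src=30000, dst=24000):
    return out_frame * src // dst

# Every 5th source frame gets skipped at this ratio:
print([remap(f) for f in range(8)])  # [0, 1, 2, 3, 5, 6, 7, 8]
```

Because every output frame maps to exactly one source frame, no blending occurs and the footage stays crisp.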

Base comp to mimic the beauty. Another one of this is on the right for the fluid render passes.

A pre-comp to throw away all unnecessary information. At this point I am not using 32-bit float values for the image sequence I am exporting, since that would take too much time to write/read and I want to fly through this part of comping. For now the 16-bit .png format keeps the size small and maintains nearly all RGBA information. For the final image output I will go back and re-render all of this to 32-bit .tiff files so I am not throwing away information that would otherwise increase the quality of this short.

This is an image from the presentation I gave yesterday, to indicate what I've done in comp so far and what I am still planning on adding. During this presentation my supervisor and I came to the conclusion that the floor is reflective and that the maquette needed a reflection. I had not thought about that while rendering, so I needed to fake one. I also added another global discoloration overlay to integrate the CG better into the scene, and I used my normal pass to increase the lighting/shadow effects from both windows.

Something else entirely: GenArts is doing god's work with their plugins. I'm having a great time using them, and the only downside is that they rely heavily on GPU acceleration, which is something Nuke's dedicated nodes also do. When both are present in the same comp and both are allowed to use the GPU (if available), a mismatch occurs and the Sapphire plugins stop working. Not one of them. All of them. I have yet to find a suitable solution to this issue. I've kind of circumvented the problem by recreating the effect using regular blur nodes. The difference will probably not be noticeable when the camera is causing motion blur; however, I will need to find a proper solution, because I am now missing camera lens information like lens-shaped artifacts, bokeh, lens noise and highlights. Below is a comparison.

And after all of this I built something similar to the pre-comp above, only this one is for the final touches, like grain, re-formatting, camera shake and re-framing. I felt I didn't have enough processing speed left to make the final judgement, and I definitely didn't want to do it in proxy mode, as I would lose a lot of information right there. For this step I simply render the "final" sequence out to my plate drive and then import it again. This also uses 16-bit .png files, but I'll be sure to upgrade to 32-bit .tiff files once I feel the shot is ready to go.

2.35:1 anamorphic format

I'm considering reformatting the footage to a 2.35:1 anamorphic format, since that will probably feel more cinematic than a short in 1.77:1 widescreen, though I don't want to lose the 'home-video' vibe that's going on right now. It's something I can decide on later, so I'll postpone that decision for now. For the moment I just went ahead and did it anamorphic. Below is the result.


I've been live-streaming on and off because of the amount of bandwidth it generates... or better yet, consumes. The videos are only available on Twitch for a number of days; however, it gives you the option to export your recordings in 2-hour chunks to YouTube if you link your account. So really what I'm saying is: all my work can be found on YouTube. This will also be incredibly useful when I have to write my documentation. Weeee!

That's it again. Lots and lots of text. Hope I didn't bore you. Thanks for reading,

Tuesday, November 4, 2014

Specialisation Progress Pt. 2

Almost exactly two months ago I started this phase called my 'Specialization' and nearly a month ago I posted a fairly elaborate blog post in which I explained what I had done up to that moment. This time is no different.

It's funny how I keep making these posts when I'm in Ghent and not at home. Clearly my productivity spikes every time I move to this city. I should be here more often, I would make great progress! Anyways, without further ado, below a wall of Facebook-esque proportions.

A tiny stab at my previous blog post. This is what I ended up with after some tweaking inside Arnold and Maya. I was quite happy with the result of the render even though there was still a LOT of noise left in it. I figured I'd up the ante for the AA multiplier render setting and leave the rest of the camera multipliers as they were. While doing this I also downgraded the number of passes for all the lights and set up my scene using various references and, ultimately, a number of AOVs (render layers). Unfortunately my render times exploded and I found no way to decrease them to an acceptable amount.

Switching from Arnold to VRay:
I discussed the problem with my supervisor and he suggested I forget about Arnold and start using VRay for the bulk of the rendering. This way he could also help me out and actually practically supervise me. I thought that was a pretty good idea. Another thing we discussed was cutting the 20-second shot into several parts, so that it would a) be a more interesting combination of shots that may or may not actually tell a short story, b) add difficulty by forcing me to set up a coherent folder structure that allows for quick iteration, c) increase the workload, since according to my supervisor I am racing through it, and d) provide a compositing challenge by requiring different ways of image merging.

This is the original 20-second shot. In this case I cut it down to about 14 seconds because the track wasn't perfect yet. It yanked the scale model down for a short moment and I didn't want to show that in my presentation. Normally a track would be perfect after shooting something like this and making notes regarding focal length, shutter speed, etc. Unfortunately, though, my camera lens was suffering from zoom creep at the time, which slid my focal length from 28mm to 32mm without me noticing. Really tricky to figure that out once you're in PFTrack.
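To illustrate why a few millimetres of zoom creep wrecks a track, here is the field-of-view difference between 28mm and 32mm under a simple thin-lens model. The 36mm sensor width is an assumption (full-frame) purely for the example:

```python
import math

# Horizontal field of view for a thin-lens model. The 36 mm sensor width
# is an assumed full-frame value, only here to show the size of the error
# a tracker has to absorb when the lens creeps from 28 mm to 32 mm.

def hfov_degrees(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(f"28mm: {hfov_degrees(28):.1f} degrees")  # roughly 65 degrees
print(f"32mm: {hfov_degrees(32):.1f} degrees")  # roughly 59 degrees
```

A solver that assumes a fixed focal length has to explain those missing degrees of view with camera motion, which is exactly the kind of drift that yanks a model around.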

After a few iterations of cutting up the shot, playing around with camera angles and mixing up the timing of the different shots, I figured I didn't want to spend more time on that. Getting my angles right and my cameras to look interesting was a lot of fun, but there was also a lot still left to do. I wanted everything to be as complete as possible, so I also rewrote the script, drew a whole new storyboard with notes regarding VFX and sound, and created an animatic; something I didn't need to do before, as there was only a single shot.


This is the third version of the animatic. The previous two are viewable on my YouTube channel as well, though I've cluttered them with so many annotations it's probably not a lot of fun watching those. I know you can turn those off, but still; they're just less OK versions of the one above.

So after getting all these cameras in order and the timing okay, I figured it was time to jump into VRay. I followed a few tutorials and shredded the support files so casually made available by Chaos Group. I ran into a few issues here and there with the VRay frame buffer and its color management, but eventually found my way around most things and managed to get a result similar to the Arnold render all the way at the top. Transferring the shaders gave a few issues, as some nodes I had used in the Arnold shaders were not available in VRay, though that was solved fairly quickly too. Not a lot of setbacks in this transfer from Arnold to VRay, as they're quite similar in the end. I do LOVE VRay's light cache though, and I wish Arnold had it. It's shaving off so much of my render time. Anyways, without more talk-a-doo, here are some images.

Simple Physical Sun and Physical Camera setup.

Same as the above including a directional light that provides for the shadows.

Same as the above but with preliminary shaders. I wasn't happy with the pale discolorations. Took some time before I figured out how to get rid of those but managed eventually.

Discolorations gotten rid of, and a very simple reflection shader to calculate render times in VRay. The light artifact on the floor plane is negligible, as I will be turning off its primary visibility in the final renders. It will only serve as a shadow catcher.

The soil seems very reflective in this, though for some reason it's actually its VERY light subsurface scattering attribute that's causing the highlights. I have since turned that off and it looks much better.

Similar setup to the second image, but this time with a Physical Sky; hence the blue reflections.

Crazy render featuring buggy color management in the VRay frame buffer. I was stuck on this for quite some time without actually knowing what was going on. Silly VRay.

HDRi lighting:
After testing this camera and lighting setup for about a day, I wanted to insert the HDR image I had prepared for my Arnold lighting scene into this VRay scene. For some reason I had to pump the exposure settings for this map through the roof and I still don't know why, but the results are pretty good.

Added my HDR image with an exposure of 1.0

Exposure: 3.0

Exposure: 10.0

Exposure: 14.0

I also changed the location of the light a little bit to see whether the shadows and highlights would eventually complement the model, as opposed to making everything look pale and bland. I also changed the hue of my HDR image's emission to a more blue light.

At this point I figured out why my chrome ball wasn't working. I also changed the location of the Sunlight emitter as I felt the shadows weren't working in favor of the model.

Another change of location for the Sunlight emitter. I quite like its placement here; it also resembles the bulk of the light that comes into the actual shot footage.

Added ambient occlusion at the cost of a few seconds of render time. It only adds a few seconds to the total render time per frame, and for that it adds so much to the scene. I was really surprised this was the thing that kept my render feeling flat and boring. I like it much better now.

Camera settings.

My current render settings. They're fairly straightforward, as VRay needs very little input to output an excellent render. The majority of the render's personality comes from the HDR image I've put into the Skydome, but since that has such an incredibly large parameter window I'm not going to attach it to this blog post. It's seriously so large I had to screenshot it ten times to capture all the settings.

Fluid Simulation in BiFrost:
Before I really did any of the stuff above, I had been playing around in BiFrost and getting some nice results using simple boxes and low particle counts. At that point I felt my consumer-grade workstation would provide enough power to run a few stable simulations in Maya 2015 using BiFrost. I was wrong though. I spent about ten frustratingly slow, unproductive and inefficient days on BiFrost before finally giving up on it. Either my workstation or the software was not ready for this kind of use, while both really ought to be.

A quick and dirty representation in BiFrost that actually came really close to what I wanted it to look like. At this point there is only a single liquid emitter, with many parameters left at their default settings. However, already at this point (frame 30) it took around a minute to simulate each frame; much longer than it should take, really.

7m26s to incompletely render a particle system using Mental Ray.

This brings me to another problem I couldn't figure out, but maybe someone reading this knows the solution. Once I rounded up my simulation I didn't want to keep it in cache, so I wrote cache files to my drive. However, once I did this I wasn't able to mesh the particle system anymore, so I was forced to render it in Mental Ray. The other way around, when I did start meshing the particle system before writing it to disk, I ended up with a shortage of RAM, as doing both operations simultaneously is incredibly heavy. I still can't find the solution to this.

Add reflection, refraction, ambient occlusion, a shadow catcher and a scene that needs to be visible in the semi-opaque water and it goes well over 20 minutes per frame. Unfortunately I don't have that kind of processing power, nor do I have the RAM and drive space necessary to render out files the size of what this was putting out.

At this point I committed to all the aforementioned steps, excluding BiFrost and Maya 2015. I spent the (autumn) holiday 'rebooting' this project into a state that would actually be interesting to see and more challenging to complete. Doing more shoots for clean plates and preparing a proper folder structure was part of the challenge. Eventually I figured it out in eight days and am currently a week behind schedule, though that should not be an issue, as I've not accounted for the Christmas holiday in my working schedule. This should give me plenty of time to get back on track.

Fluid Simulation in RealFlow:
So after the BiFrost fiasco and all the additional work, I had given myself a little more time to prepare for what is actually required for simulations: emitters, colliders and environment factors. RealFlow seemed the easier and more straightforward way of working, so I followed a tutorial called 'Introduction to RealFlow' by Digital Tutors. Thus far the results have been pretty good and I'm glad I've switched to this program.

This is the result of some quick and dirty fluid simulation in RealFlow. I've spent half a day getting this result and made enough mistakes to do it right next time. I'm glad meshing, exporting and stitching the Alembic caches turned out to be so easy, though I'll be getting to that later.

As you can see from the viewport in the bottom right part of the image, I only used 5 nodes to complete the layout for this scene. It managed to calculate the path of all particles along the sculpted river within 10 minutes and was much more accurate than BiFrost. As long as the mesh is triangulated, I have yet to get any funky results in RealFlow.

The meshing is also fairly easy, though it does require some iteration. Once you have the right settings, though, it's a walk in the park. In some cases it may need a tweak of about 0.001 to the polygon size or the filtering, but usually these settings can be kept the same across all scenes.

This simulation is a little more demanding, as it was difficult to have the river flow through each bridge. What I've done to solve it is simply add more emitters that I manually activated once the particles from the previous one reached its location. It's a tricky thing to get right, but from the camera angle I'm using it's completely obstructed, so I'm getting away with it. :)

Meshing of the simulation above. Though the fluid is way too thick, there's an easy way to fix this: reducing the viscosity of the fluid. By default this is set to 3.0, which resembles substances like mud and gooey fluids. Water, in this case, has a far lower viscosity. For future reference, I'm using a viscosity of 1.0 in all my later simulations.

This brings us to exporting. RealFlow has a great exporting system which basically exports everything as you simulate it. The best thing to do, once you're satisfied with your simulation, is to reset your scene and go into Export Central. Check everything you want, including naming your files and export locations. After that, just hit simulate and it starts writing everything to your disk. It's absolutely fantastic.

RealFlow, for a reason I read somewhere on the internet but probably still don't fully understand, doesn't create a single Alembic cache but creates one for each frame. This tool, conveniently situated in the RealFlow shelf, stitches those together into a single file so you can use it in Maya or any other application that supports pipeline caches. Again, it's really awesome.
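The first thing such a stitching tool has to do can be sketched in Python: collect the per-frame caches in numeric rather than lexicographic order. The file names here are hypothetical, and the actual Alembic concatenation is left to the shelf tool:

```python
import re

# Collect per-frame Alembic caches in numeric frame order before stitching.
# File names are hypothetical; a plain lexicographic sort would wrongly
# place frame 10 before frame 2, hence the regex-based numeric key.

def order_caches(filenames):
    def frame_number(name):
        match = re.search(r"(\d+)\.abc$", name)
        return int(match.group(1)) if match else -1
    return sorted(filenames, key=frame_number)

files = ["mesh_10.abc", "mesh_2.abc", "mesh_1.abc"]
print(order_caches(files))  # ['mesh_1.abc', 'mesh_2.abc', 'mesh_10.abc']
```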

So far this is the first result of a render, and I quite like it. It's not really water, nor is it really gooey, though I think due to its scale it may really fit the scale model. I'll be testing this at the end of the day. The great thing right now is that this cache only takes 11 seconds to render entirely at 1280x720, and as long as each frame stays below 1m30s for the water, I am a happy camper.

At the request of a friend of mine I've been streaming some of the work I've been doing in Maya and RealFlow on Twitch. I'll probably do this every day for the time being, as it's also motivating me to work a little harder and a little more efficiently now that at least one person is looking over my shoulder. However, and this is a warning: I do have obnoxious music playing while working, so you might want to just mute the audio if you ever plan on watching it.

That's it again. A lot this time. Hope you enjoyed reading all of it.

ps. Asterix is almost here! Weeeee

Friday, October 10, 2014

Specialisation Progress & Official Asterix Trailer!

A month ago I started the second-to-last course, which is actually a sort of phase called 'Specialisation'. I've mentioned this phase in a post before, because I wanted to address some of the ideas I had for it. Unfortunately most of these ideas were really not executable: firstly, they were incredibly over-scoped for someone still needing to learn everything; and secondly, they really didn't coincide with the intended learning goals. It's less about making a product and more about learning the bumpy road towards it; at least that's how I understood it.

So that's where I started studying kind of everything related to VFX workflow. I bought books and subscriptions to bring structure to my learning process. I've set up a heavy-duty roster so I can easily see what 'classes' I am taking on which day and at which time, including which chapters I should read and which I can skip... Writing and reading this aloud, it sounds a little silly, but it's definitely helping me progress a lot faster than I anticipated. So far it's a pretty solid ride... and I guess now it's time to show some stuff.

Unfortunately I can only show the stuff I made this week, because everything from the past 4 weeks is currently on 2 drives, both of which are inaccessible at this time. One is being used to render out a 400-frame sequence for a presentation next Wednesday and the other is at home - I am in Ghent. So here goes.

Swolla at around 1650. This is what I want to make into a scale model.

Modeling Progress:

I started out with a design of Zwolle, a Dutch city in Overijssel. I used the old inner city, called Swolla, as an example and drew a rough illustration of where the rivers, roads, walls, etc. would be. Stylized, of course, because I want to create a scale model; in Dutch that's called a "maquette". In the image above I've started creating polygonal meshes to resemble the design.

Trying to get a feel for the maquette. At this point I wasn't sure why it didn't work as a scale model yet. The extruded part didn't make it feel more convincing, but I figured it had something to do with shaders and a sense of scale; something that's completely missing here. I did figure out what colors I wanted everything to have.

For this ugly image I used a random city generator MEL script I found on Creative Crash, but I wasn't happy with it at all. It started looking like an industrial area, or a large city with highways running through it and monster rivers flowing on each side. It totally broke the feeling of an old city. I did make the bridges a little prettier.

So I created meshes around and overlapping the roads, effectively creating a much cozier environment. I also took some open-source buildings from 3D Warehouse, created in Google SketchUp, and cleaned them. I had hoped this would be quick, but it only turned out to be very dirty: normals pointing in all directions, unclosed meshes and locked UV sets. However, it's really good stuff nonetheless.

Another shot from the point where I figured I had added enough detail. At this point I used a small script to randomly select faces on the buildings, make chimneys out of them and scale them accordingly.
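The face-picking half of such a script, sketched outside Maya: grab a random subset of (hypothetical) roof face indices and give each chimney a random scale. In Maya the resulting indices would feed into the selection and extrude commands:

```python
import random

# Pick a random subset of hypothetical roof face indices and assign each
# chimney a random scale. A fixed seed keeps the layout reproducible; the
# indices and scale range are illustrative, not from the actual script.

def pick_chimney_faces(face_count, fraction=0.1, seed=42):
    rng = random.Random(seed)
    count = max(1, int(face_count * fraction))
    picked = rng.sample(range(face_count), count)
    return [(face, round(rng.uniform(0.8, 1.4), 2)) for face in picked]

for face, scale in pick_chimney_faces(50, fraction=0.08):
    print(f"face {face}: chimney scale {scale}")
```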

I also did some really easy rigging. I mean.. really easy.

Shading Progress:

Camera (AA): 3     ...     Back lighting: 0.25     ...     SSS: 0.1     ...     Light passes: 3
For this I took a small selection of the maquette so I could cut down on render times and quickly iterate between attributes and values. I was trying to get a papery / cardboard kind of feel for the models. After some research this was actually fairly easy to achieve, as it's mostly back lighting and some subsurface scattering. Next to that, I was trying to cut down on render time by reducing the multipliers in the Arnold render settings, but also the light passes, for softer shadows.

Camera (AA): 4     ...     Back lighting: 0.75     ...     SSS: 0.25     ...     Light passes: 2
In this image I cut render times down by 50 percent and managed to increase the subsurface scattering. The back lighting in this image is increased by a factor of three, which in turn makes sure all the surfaces are lighter and have bright spots where the distance to the next side is smaller. For the time being I was happy with this result and figured that maybe a displacement map or bump map would make it more convincing, but as you can see from the images, they made it worse. Although this is just a 'funny' stab at how it looked at one point, I really couldn't get those to look right.

Terrain shader set up, with the bump map still applied to the rest of the objects. I quickly realized that even the smallest crack in anything resulted in the material definition being completely off from what I wanted it to be.

At this point most of the shaders were done and I called it quits. Here I tried out the focal distance option in the Arnold camera attributes and attached it to a measure-distance node, which in turn was attached to two locators, both of which were constrained to the camera setup. This way I can move the camera and focal point around and my focal distance will automatically shift. I'm thinking about using NURBS curves and animating locators along those curves to avoid having to change things manually.
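The math behind that rig is just a distance measurement: the measure-distance node returns the Euclidean distance between the two locators, which then drives the focal distance attribute. A minimal sketch, with made-up positions:

```python
import math

# The measure-distance node boils down to this: Euclidean distance between
# the camera locator and the focus locator, fed into the focal distance.

def focal_distance(camera_pos, focus_pos):
    return math.dist(camera_pos, focus_pos)

print(focal_distance((0.0, 5.0, 10.0), (3.0, 1.0, 10.0)))  # 5.0
```

Because the value updates whenever either locator moves, the focus follows the camera for free.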

Rendering progress:

I'm trying to become accustomed to separating my scenes into several files, where I eventually use the render scene to reference all the ones holding shader, polygonal and other sorts of information. I feel this works best and keeps the working environment clean.

Testing shadow catcher in my render layers according to the support files provided by Arnold.

Testing the shadow catcher in combination with a clean plate I created of the living room floor. I managed to get the reflections right for what I wanted and prepared the scene for HDR image lighting. The right ball in this case is a chrome one to check the reflections. This simply follows another part of the Arnold support file.

HDR image reflection in the chrome ball and a change of coloration in the entire scene. The reflections are accurate and the lighting is almost identical to the recording circumstances. I set up another lighting scene to help with the appropriate discolorations due to spill light caused by blinds and a construction light. This should neutralize all the incorrect lighting.

In the above four images I was trying out render settings. As in the shading images above, I removed render passes for lights and tested the difference in duration for different kinds of lights (directional versus area lights, etc.). I found that, against my expectations, the skydome is actually a really fast and accurate way of lighting. I stuck with that and added lights to keep the image interesting while optimizing for rendering at the same time.

0m42s - 0m50s

1m21s - 1m32s

1m49s - 1m59s

I didn't want render times for this little piece to exceed 2 minutes, so I used the setup from the renders of the hand and played a little bit with the lighting and the HDR image's emission attributes. It turns out that if I render this scene and try to avoid really heavy shadows, I can set many of the lighting passes to 1 or 2 iterations. The anti-aliasing fixes most of the ugly stuff.

EDIT: apparently Google is awesome and automatically made a .gif of the previous images!
Awesome .gif

Where I am currently:

As you can see, I decided not to keep either of the two previously shown shapes and collections of scale models. I didn't understand at first why my maquette wasn't feeling like one, and I think the reason is that maquettes nearly always just cut off the terrain. They don't take all the other stuff into account, because it's really just not interesting enough to see. So that's what I did: Boolean-ed my way through hundreds of meshes, repaired all of them, UV'd them again and textured the cut-up parts.

The lighting in this image is not final yet, but has to do for now because next week I'm going to dive into simulations. Fluid, smoke and fire. According to my roster (hehe) I have three weeks to learn and apply.

This is also the model that I am currently rendering for a presentation next Wednesday. The windmills (5 in total) are rigged and animated, so it's not a completely static scene. Some things are still missing and I'm currently in the process of adding some final touches; in this case that's guard towers, benches and some fences.

I am also quietly thinking about redoing some of the footage I've shot. I talked about it with an ex-colleague and we kind of came to the conclusion that it might be much more interesting to shoot for action in the digital scene instead of just moving the camera around for 20 seconds. Still just food for thought.

Yes! Finally a trailer that is actually official. You'll see it also says so in the title. It's 2 minutes long and has no subtitles unfortunately, but really, you'll get what's going on without knowing a word of French.

That was it again. Hope you enjoyed that wall of text,