Research Impact Retrospective: MLAA from 2009 to 2017

I was honored to co-present the first HPG research impact retrospective with Alexander Reshetov. In this talk we covered the creation of MLAA, as well as the impact it had (and still has) on real-time antialiasing, including temporal antialiasing and the recent 4K rendering reconstruction techniques. Check out my part of the presentation below to see how antialiasing has evolved over the last decade!

Powerpoint [35.8 MB]

Next Generation Post Processing in Call of Duty: Advanced Warfare

Proud and super thrilled to announce that the slides for our talk “Next Generation Post Processing in Call of Duty: Advanced Warfare”, part of the SIGGRAPH 2014 Advances in Real-Time Rendering in Games course, are finally online. Alternatively, you can download them via the link below.

The temporal stability, filter quality and accuracy of post effects are, in my opinion, among the most striking differences between games and film. Call of Duty: Advanced Warfare’s art direction aimed for photorealism, and generally speaking, post effects are a very sought-after feature for achieving natural-looking, photorealistic images. This talk describes the post-processing techniques developed for this game, which aim to narrow the gap between film and game post-FX quality and to help build a more cinematic experience. This is, as you can imagine, a real challenge given our very limited time budget (16.6 ms for a 60 fps game, which is orders of magnitude less than what is typically available in film).

In particular, the talk describes how scatter-as-you-gather approaches can be leveraged to approximate ground-truth algorithms, including the challenges we had to overcome to make them work in a robust and accurate way. Typical gather-based depth of field and motion blur algorithms only deal with color information, while our approaches also explicitly consider transparency. The core idea is based on the observation that ground-truth motion blur and depth of field algorithms (like stochastic rasterization) can be summarized as:

  • Extending color information according to changes in time (motion blur) and lens position (depth of field).
  • Creating an alpha mask that allows the reconstruction of accurate growing and shrinking gradients on the object silhouettes.

This explicit handling of transparency allows for more realistic depth of field focusing effects, and for more convincing and natural-looking motion blur.
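
To make this more concrete, here is a minimal HLSL sketch of the scatter-as-you-gather weighting, using motion blur as the example. This is not the shipped implementation: the texture names, tap count and coverage heuristic are illustrative assumptions, and a production version would additionally separate foreground and background layers.

```hlsl
// Hedged sketch of gather-based motion blur with explicit coverage (alpha).
// Assumption: velocityTex stores screen-space velocity in UV units per frame.
Texture2D colorTex        : register(t0);
Texture2D velocityTex     : register(t1);
SamplerState pointSampler : register(s0);

float4 GatherMotionBlur(float2 uv, float2 texelSize)
{
    const int kTaps = 8; // taps along the blur direction (illustrative)
    float2 centerVel = velocityTex.Sample(pointSampler, uv).xy;

    float4 sum = 0.0; // rgb: weighted color, a: accumulated coverage

    [unroll]
    for (int i = 0; i < kTaps; ++i)
    {
        float t = (i + 0.5) / kTaps - 0.5; // spread taps over [-0.5, 0.5)
        float2 sampleUV = uv + centerVel * t;

        // "Scatter as you gather": a neighbor contributes only if its *own*
        // motion would have smeared it over this pixel.
        float2 sampleVel = velocityTex.Sample(pointSampler, sampleUV).xy;
        float reachPx = 0.5 * length(sampleVel / texelSize);   // half blur length, in pixels
        float distPx  = length((sampleUV - uv) / texelSize);   // distance back to this pixel
        float w = saturate(1.0 - distPx / max(reachPx, 1e-4)); // soft coverage falloff

        float3 c = colorTex.Sample(pointSampler, sampleUV).rgb;
        sum += float4(c * w, w);
    }

    float4 result;
    result.rgb = sum.rgb / max(sum.a, 1e-4); // normalized blurred color
    result.a   = sum.a / kTaps;              // average coverage = alpha mask
    return result;
}
```

The accumulated alpha plays the role of the mask from the second bullet above: it is what lets the blurred silhouette be composited over a sharp background with proper growing and shrinking gradients.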

In the slides you can also find our approaches to SSS and bloom, and as a bonus, our take on shadows. I don’t want to spoil the slides, but for SSS we use separable subsurface scattering; for bloom, a pyramidal filter hierarchy that improves temporal stability and robustness; and for shadow mapping, an 8-tap filter with a special per-pixel noise, a.k.a. “Interleaved Gradient Noise”, which, together with a spiral-like sampling pattern, increases temporal stability (like dithered approaches) while still generating a rich number of penumbra steps (like random approaches).
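
The Interleaved Gradient Noise function itself is short enough to quote here; the constants are the ones given in the slides, while the spiral shadow filter around it is a hedged sketch (the 8-tap count matches the talk, but the radius, spiral constants and texture setup are my own illustrative choices):

```hlsl
// Per-pixel noise used to rotate a fixed sampling pattern, giving dither-like
// temporal stability with random-like penumbra richness.
float InterleavedGradientNoise(float2 pixelPos)
{
    const float3 magic = float3(0.06711056, 0.00583715, 52.9829189);
    return frac(magic.z * frac(dot(pixelPos, magic.xy)));
}

Texture2D<float> shadowMap           : register(t0);
SamplerComparisonState shadowSampler : register(s0);

float FilterShadow(float3 shadowUVZ, float2 pixelPos, float2 shadowTexelSize)
{
    const float goldenAngle = 2.39996; // radians; spreads taps evenly (illustrative)
    const float radius = 3.0;          // filter radius in texels (illustrative)
    float phase = 6.28318 * InterleavedGradientNoise(pixelPos);

    float shadow = 0.0;
    [unroll]
    for (int i = 0; i < 8; ++i)
    {
        float r = radius * sqrt((i + 0.5) / 8.0); // spiral with even area coverage
        float a = i * goldenAngle + phase;        // rotated per pixel by the noise
        float2 offset = r * float2(cos(a), sin(a)) * shadowTexelSize;
        shadow += shadowMap.SampleCmpLevelZero(shadowSampler,
                                               shadowUVZ.xy + offset, shadowUVZ.z);
    }
    return shadow / 8.0;
}
```

Because the tap pattern is fixed and only its rotation varies per pixel, neighboring pixels dither coherently, which is what keeps the result temporally stable while still producing many distinct penumbra steps.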

During the actual talk at SIGGRAPH I didn’t have time to cover everything, but as promised, every single detail is in the online slides. Note that there are many hidden slides, and a bunch of notes as well; you might miss them if you view the deck in slide show mode.

Hope you like them!

Powerpoint [407.8 MB]

Starry Eyed: Where Game Graphics Go Next

I was fortunate enough to appear in Develop’s September issue!

Check out the online version of the article here.

Next Generation Character Rendering

As promised, here are the GDC 2013 slides of our Next-Generation Character Rendering talk (see the link below).

Beware of the big download; the slides are full of high-definition images and movies. Hope you like them!

We will be presenting it again at GDC China 2013, so if you are nearby, we invite you to come by!

Powerpoint [323.6 MB]

Next Generation Life

[Images: Lauren 2, Lauren 3, Lauren 4, Jonas 1]

Our talk at GDC 2013, Next-Generation Character Rendering, is a few hours away. In it, we will present what is, for us, the culmination of many years of work on photorealistic characters.

We will show how every detail is the secret to achieving realism. For us, the challenge goes beyond entertaining; it’s about creating a medium for better expressing emotions, and for reaching the feelings of the players.

We believe this technology brings current-generation characters into next-generation life, at 180 fps on a GeForce GTX 680.

The team behind this technology consists of Javier Von Der Pahlen (Director of R&D), Etienne Danvoye (Technical Director), Bernardo Antoniazzi (Technical Art Director), Zbyněk Kysela (Modeler and Texture Artist), Mike Eheler (Programming & Support) and me (Real-Time Graphics R&D).

Here is a teaser of the slides:
Next-Generation-Character-Rendering-Teaser.pptx

The YouTube account of Activision R&D:
https://www.youtube.com/user/ActivisionRnD

The movie:

And some additional images:

[Comparison images (shader on/off): Jonas 2, Lauren 5, Lauren 6]

We will show it running live on our two-year-old laptop. Hope to see you at our talk!

Edit:

  • First of all, I’d like to credit the Institute for Creative Technologies for the amazing performance capture provided for the animation (http://ict.usc.edu/prototypes/digital-ira/). Their new capture technology enables photoreal facial animation performances together with extremely detailed skin features. The full team behind the capture, led by Paul Debevec, is the following: Oleg Alexander, Graham Fyffe, Jay Busch, Ryosuke Ichikari, Abhijeet Ghosh, Andrew Jones, Paul Graham, Svetlana Akim, Xueming Yu, Koki Nagano, Borom Tunwattanapong, Valerie Dauphin, Ari Shapiro and Kathleen Haase.
  • The Lauren head scan (the woman) was obtained from Infinite-Realities.
  • Second, numerous sources have asked whether this is related to the Nvidia FaceWorks demo presented at GTC. I’d like to clarify that we both use the same performance capture from the ICT, but the animation and rendering engine are completely different. In other words: same source data, different engine.
  • Finally, I’d like to clarify that the technology we presented runs, at its highest quality preset, at 93/74 fps at 720p/1080p respectively, on a GeForce GTX 560 Ti (a two-year-old mid-range GPU).
  • Thanks to all the people who showed interest in our research; the slides will be available online pretty soon!

Open Your Eyes

After a silent period, I’m proud to present the advances we’re making in eye shading at Activision Blizzard!

This Wednesday I’ll be talking about it in the Advances in Real-Time Rendering course (at SIGGRAPH 2012); I invite you to come by for all the gory details!

The R&D team behind this shader consists of Javier Von Der Pahlen (Technology Director R&D and Photographer), Etienne Danvoye (Technical Director) and me (Real-Time Graphics Researcher). Zbyněk Kysela (Modeler and Texture Artist), Suren Manvelyan (Photographer) and Bernardo Antoniazzi (Technical Art Director) also participated.

Performance-wise, rendering the eyes takes 2–3% of the whole demo’s frame time. This is work in progress, but here are some shots showcasing our shader:

[Comparison shots (shader on/off): Eye Shader, Crying, Crying]
Powerpoint [212.7 MB]

SMAA 1x Featured on ARMA 2: Operation Arrowhead and Take on Helicopters

SMAA 1x is now natively integrated into the latest ARMA 2: Operation Arrowhead beta, and into the 1.05 update of Take On Helicopters!

The Day Has Come

At this important moment of my life, the day has come to end my skin research in order to take a new professional direction.

These last months I’ve learned a very important lesson: efforts towards rendering ultra-realistic skin are futile if they are not coupled with HDR, high-quality bloom, depth of field, film grain, tone mapping, ultra-high-quality models, parametrization maps, high-quality shadow maps and a high-quality antialiasing solution. If you fail at any of them, the illusion of looking at a real human will be broken. This is especially true in close-ups at 1080p, which is where the real skin rendering challenge lies.

As some subtleties (like the film grain) are lost in the online version, I encourage you to download the original Blu-ray quality version below to better appreciate the details and effects rendered (but be aware that you will need a powerful computer to play it). Please note that everything is rendered in real time; in fact, you can also download a precompiled version of the demo (see below), which plays the shot sequence of the movie from beginning to end. The whole demo runs between 80 and 160 fps, with an average of 112.5 fps on my GeForce GTX 580, but it can also run on weaker configurations using more modest settings.

The main idea behind the new separable SSS approach is that you can get very similar results to the full 12-pass approach ([Eon07]) with just a regular two-pass setup. It can be done in screen space and is really, really fast (you may want to see this related post). I hope to write something about this in the future; in the meantime, the source code of the whole demo is readily available on GitHub.
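
For readers who don’t want to dig through the demo sources yet, here is a rough HLSL sketch of what one of the two blur passes can look like; the GitHub code is the authoritative version, and the kernel layout, depth-rejection heuristic and names below are illustrative assumptions. The same function runs twice, with dir = (1, 0) and then dir = (0, 1).

```hlsl
// Hedged sketch of one pass of a separable screen-space SSS blur.
// Assumptions: kernel[i].rgb are per-channel diffusion weights (summing to 1
// together with the center sample), kernel[i].a is the 1D sample offset, and
// depthTex stores linear depth.
Texture2D colorTex         : register(t0);
Texture2D depthTex         : register(t1);
SamplerState linearSampler : register(s0);

static const int kNumSamples = 11; // illustrative
float4 kernel[kNumSamples];        // precomputed by the application
float sssWidth;                    // world-space width of the scattering

float4 SeparableSSSBlur(float2 uv, float2 dir)
{
    float4 center = colorTex.Sample(linearSampler, uv);
    float depth = depthTex.Sample(linearSampler, uv).r;

    // Shrink the kernel with distance so its footprint stays constant in world space.
    float2 step = sssWidth * dir / depth;

    float3 blurred = center.rgb * kernel[0].rgb;
    [unroll]
    for (int i = 1; i < kNumSamples; ++i)
    {
        float2 sampleUV = uv + kernel[i].a * step;
        float3 c = colorTex.Sample(linearSampler, sampleUV).rgb;

        // Fall back to the center color across depth discontinuities
        // (illustrative rejection; the real code is more careful).
        float sampleDepth = depthTex.Sample(linearSampler, sampleUV).r;
        float s = saturate(12.5 * abs(sampleDepth - depth) / sssWidth);
        blurred += kernel[i].rgb * lerp(c, center.rgb, s);
    }
    return float4(blurred, center.a);
}
```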

For the demo I’ve used SMAA T2x, which does a very good job at dealing with shader aliasing while avoiding pre-resolve tone mapping. The demo shows the average/minimum/maximum frame rate after running the intro, which hopefully will make it useful for benchmarking GPUs.

I think there is still a lot of work to do; probably the most important piece is rendering realistic facial hair. It would be a dream come true if my skin research helped to improve the rendering of humans in games; I truly believe that more realistic characters will inevitably lead to deeper storytelling and more emotionally-driven games.

Links:

  • Precompiled demo [47.7 MB]: Download Torrent — Hit space to skip the intro, and go to the interactive part
  • Source code: GitHub
  • Blu-ray quality movie [693 MB]: Vimeo Mirror Download Torrent — On Vimeo, search for “Download this video”

The 3D head scan used for this demo was obtained from Infinite-Realities. Special thanks to them!

SMAA v2.7 Released and EUROGRAPHICS Presentation

The full source code of SMAA has finally been released, including SMAA S2x and 4x!
https://www.iryoku.com/smaa/#downloads

Check out the subpixel features section of the movie to see the new modes in action (or download the precompiled binary).

We are also happy to officially announce that all the nuts and bolts of the technique will be presented at EUROGRAPHICS 2012, in Cagliari (Italy). The paper is now much easier to follow, and has updated content. If you tried to read the technical report and failed to deeply understand all the details, we definitely encourage you to give the EUROGRAPHICS version a try, as it does a much better job at explaining the core concepts and implementation details.

We’d like to thank the whole community that is supporting SMAA, and give special thanks to Andrej Dudenhefner, whose hard work made InjectSMAA possible, allowing SMAA 1x to be used in a lot of already published games. But bear in mind that SMAA 1x is just one third of the whole technique!

To close this post, I would like to revisit some of the design decisions and key ideas of our technique, which were not completely understood in the past. I clarified them in a Beyond3D thread, but I would like to give them another round here.

The main design decision behind SMAA is to be extremely conservative with the image. SMAA tries to analyze the possible patterns that can happen for a certain pixel, and favors no filtering when the filtering decision is not clear. As our paper shows, this is extremely important for accurately recovering subpixel features in the S2x, T2x and 4x modes. Additionally, super/multisampling will cover these cases, yielding smooth results even when the MLAA component is not filtering the pixel.

Furthermore, the MLAA component is enhanced, not only by improving the search accuracy without introducing dithering or performance penalties, but also by introducing a stronger edge detection and extending the types of patterns detected, as shown in this example from Battlefield 3 (to easily compare the images, we recommend opening them in separate browser tabs and keeping them at their original resolution):

[Comparison images: SMAA, MLAA, FXAA]

Finally, SMAA is not devised as a full MSAA replacement. Instead, the core idea behind it is to take the strengths of MSAA, temporal SSAA and MLAA, and combine them into a very fast and robust technique, where each component backs up the limitations of the others, delivering great image quality in demanding real-time scenarios.

SMAA T2x Source Code Released

We’re thrilled to announce the release of the SMAA T2x source code!

We joined forces with Tiago Sousa from Crytek to deliver a very mature temporal antialiasing solution. It has been integrated into CryEngine 3; check out the SMAA demo movie.

The goal of SMAA is to more accurately match the results of supersampling, by faithfully representing subpixel features, and by solving other common problems of filter-based antialiasing. This reduces the flickering seen in complex scenes.

On the other hand, we made a really big effort to simplify the usage of our shader; the source code is now reduced to a single SMAA.h header and two textures, with very detailed comments and instructions. We hope this will ease integration into game engines. The feedback so far has been very positive, with a Java/Ruby programmer with no graphics experience integrating SMAA into Oblivion (using OBGE) in just a few hours.
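
As a rough illustration of what an integration looks like, here is an outline of the three SMAA passes wired around the header. The macro and entry-point names are quoted from memory and may differ between versions, so the detailed instructions at the top of SMAA.h remain the authoritative reference; the header also provides matching vertex-shader helpers that compute the offset interpolants used below.

```hlsl
// Hedged integration outline for SMAA 1x on D3D10+ (a 1280x720 target assumed).
#define SMAA_RT_METRICS float4(1.0 / 1280.0, 1.0 / 720.0, 1280.0, 720.0)
#define SMAA_HLSL_4 1
#define SMAA_PRESET_HIGH 1
#include "SMAA.h"

Texture2D colorTex  : register(t0);
Texture2D edgesTex  : register(t1); // output of pass 1
Texture2D blendTex  : register(t2); // output of pass 2
Texture2D areaTex   : register(t3); // precomputed texture shipped with SMAA
Texture2D searchTex : register(t4); // precomputed texture shipped with SMAA

// Pass 1: edge detection, rendered into edgesTex.
float2 EdgeDetectionPS(float4 pos : SV_Position, float2 uv : TEXCOORD0,
                       float4 offset[3] : TEXCOORD1) : SV_Target
{
    return SMAALumaEdgeDetectionPS(uv, offset, colorTex);
}

// Pass 2: blending weight calculation, rendered into blendTex.
float4 BlendingWeightsPS(float4 pos : SV_Position, float2 uv : TEXCOORD0,
                         float2 pixcoord : TEXCOORD1,
                         float4 offset[3] : TEXCOORD2) : SV_Target
{
    return SMAABlendingWeightCalculationPS(uv, pixcoord, offset, edgesTex,
                                           areaTex, searchTex, 0.0);
}

// Pass 3: neighborhood blending, producing the final antialiased image.
float4 NeighborhoodBlendingPS(float4 pos : SV_Position, float2 uv : TEXCOORD0,
                              float4 offset : TEXCOORD1) : SV_Target
{
    return SMAANeighborhoodBlendingPS(uv, offset, colorTex, blendTex);
}
```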
