Exploring nVidia’s Dynamic Super Resolution

Aliasing — the appearance of jagged edges in games — has been a conundrum for hardware manufacturers practically since the dawn of 3-D gaming. Both AMD and nVidia have churned out several proprietary technologies that aim to reduce aliasing, but the real difficulty is producing a clean image at a low cost to performance. Traditional multisample antialiasing (MSAA) can dent performance pretty hard. And worst of all, it doesn’t even work that well.

There are basically three problems with aliasing:

  1. Jagged lines on the edges of objects
  2. Jagged lines in transparent textures, like fences and trees
  3. A shimmer effect when the game is in motion

MSAA solves (1). It doesn’t do so well with (2) and (3). For (2), nVidia and AMD let you enable transparency antialiasing, which can be set to either multisampling or supersampling. Multisampling adds a negligible performance hit on top of MSAA, but its visual impact is also less impressive. Supersampling looks great but dings performance pretty hard. It also does a great job with (3), much more so than multisampling. The only downsides, aside from the steep performance cost, are that it softens the image a bit and has some compatibility issues with games.
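To see why supersampling the alpha test helps so much with fences and foliage, consider a toy sketch in Python. A hypothetical alpha-tested texture either passes or fails per sample; ordinary rendering tests it once per pixel, while transparency supersampling averages several jittered tests across the pixel footprint, turning hard on/off edges into smooth fractional coverage. This is just an illustration of the principle, not how the driver actually implements it:

```python
def alpha_test(u, v):
    """Toy alpha-tested 'fence' texture: opaque only inside thin
    diagonal stripes, transparent everywhere else."""
    return ((u + v) % 0.2) < 0.02

def pixel_coverage(x, y, samples=1):
    """Shade one pixel by averaging samples x samples jittered alpha
    tests across its footprint. samples=1 is a plain alpha test
    (hard, jagged edges); samples=4 mimics 4x transparency
    supersampling (smooth fractional coverage)."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            u = (x + (i + 0.5) / samples) * 0.01
            v = (y + (j + 0.5) / samples) * 0.01
            hits += alpha_test(u, v)
    return hits / samples**2

# With samples=1 every pixel is fully opaque or fully transparent;
# with samples=4 edge pixels land on in-between values like 0.25,
# which is what smooths the fence out.
```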

In recent years, MLAA (from AMD), FXAA (from nVidia), and SMAA have gained popularity as post-processing antialiasing options that apply antialiasing to the entire image at a minimal performance cost. SMAA generally works the best of the three, and in most games it’s my go-to option. I’m also a fan of nVidia’s TXAA, because it beats SMAA at reducing shimmer. The downside is that it softens in-game textures, an effect that is barely noticeable at 2xTXAA but very noticeable at 4xTXAA.
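These post-process techniques all share a core trick: find high-contrast edges using per-pixel luminance, then blend those pixels with their neighbors. Here’s a simplified Python/NumPy illustration of that idea. To be clear, this is my own toy sketch, not the actual FXAA or SMAA algorithm; the real ones do far smarter edge classification and directional blending:

```python
import numpy as np

def toy_postprocess_aa(img, threshold=0.1):
    """Detect high-contrast edges from per-pixel luminance, then blend
    those pixels toward a local average. A toy illustration of the
    post-process AA idea only."""
    # img: HxWx3 float array in [0, 1]. Luminance uses Rec. 709 weights.
    luma = img @ np.array([0.2126, 0.7152, 0.0722])

    # Contrast against the 4-neighborhood (borders padded by replication).
    p = np.pad(luma, 1, mode="edge")
    contrast = np.maximum.reduce([
        np.abs(p[1:-1, 1:-1] - p[:-2, 1:-1]),   # neighbor above
        np.abs(p[1:-1, 1:-1] - p[2:, 1:-1]),    # neighbor below
        np.abs(p[1:-1, 1:-1] - p[1:-1, :-2]),   # neighbor left
        np.abs(p[1:-1, 1:-1] - p[1:-1, 2:]),    # neighbor right
    ])
    edge = contrast > threshold

    # Blend flagged pixels 50/50 with a 3x3 box average of the image.
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = luma.shape
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    out = img.copy()
    out[edge] = 0.5 * img[edge] + 0.5 * blurred[edge]
    return out
```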

Brute Force II: The Brute Forcening

So, we have all these imperfect answers to aliasing that try to minimize the performance cost. SMAA is in my view the best overall solution, but it can’t match the brute-force image quality of MSAA + transparency supersampling. nVidia, though, has introduced another kind of brute-force approach: Dynamic Super Resolution, or DSR. DSR doesn’t have compatibility issues because it works at the driver level. It essentially tricks the game into thinking your monitor supports a higher resolution; the game renders at that higher resolution, and the driver downscales the image to your monitor’s native resolution.
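Conceptually, the downscale step is just a filtered average: many rendered pixels collapse into each physical pixel. A bare-bones NumPy stand-in looks like the sketch below. It assumes an integer scale per axis (as with 4x DSR, which doubles each dimension), and nVidia’s real filter is more sophisticated than a plain box average:

```python
import numpy as np

def downscale_supersampled(frame, scale):
    """Average each scale x scale block of the oversized frame into
    one output pixel. A crude stand-in for DSR's filtered downscale;
    only handles integer per-axis scales."""
    h, w, c = frame.shape
    assert h % scale == 0 and w % scale == 0
    return (frame.reshape(h // scale, scale, w // scale, scale, c)
                 .mean(axis=(1, 3)))

# e.g. a 1080p monitor at 4x DSR: the game renders 3840x2160, and
# each 2x2 block averages down to one 1920x1080 pixel.
```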

The upside is that this produces a remarkably clean, shimmer-free image. If you run a 1080p monitor, you can set DSR to 4x, and the game will render at 4K resolution. I have a 1440p monitor, so I played around with a few of nVidia’s options:

1.5x = 3135 x 1764
2x = 3620 x 2036 (this is just shy of 4K)
3x = 4434 x 2494
4x = 5120 x 2880
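These factors multiply the total pixel count, not each dimension, so width and height each scale by the square root of the factor. A quick Python check reproduces the table above, give or take how the driver rounds:

```python
import math

def dsr_resolution(native_w, native_h, factor):
    """DSR factors scale total pixel count, so each dimension grows
    by the square root of the factor."""
    s = math.sqrt(factor)
    return round(native_w * s), round(native_h * s)

# Reproduce the 2560x1440 table (the driver may round a pixel or
# two differently):
for f in (1.5, 2, 3, 4):
    w, h = dsr_resolution(2560, 1440, f)
    print(f"{f}x = {w} x {h}")  # 1.5x = 3135 x 1764, 2x = 3620 x 2036, ...
```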

I didn’t even bother trying 4x DSR; that’s such a crazy-high resolution that I don’t see how my graphics cards could handle it in anything but the oldest games.

And that’s the downside. The performance cost is essentially the same as actually playing at that resolution, or even slightly higher due to the downscaling step. Now, if you want to take super-smooth screenshots, you can always bust out your 4K monitor, set DSR to 4x, and render the game in 8K. It’ll be unplayable, of course, but PC Gamer does exactly that to produce some nice downscaled screenshots. Oh, and that’s another thing: I can’t show you screenshots of DSR in action, because a capture just comes out as the ultra-high-res image rather than the downscaled result you see on screen.

I tried DSR in four games: Crysis, Crysis Warhead, Alien: Isolation, and Shadow of Mordor.

Crysis and Crysis Warhead

[Screenshot]
Nice pic, right? It’s not DSR; it’s transparency supersampling. Also, I took this in-game picture in 2008.

I started with Crysis because, as an older game, I knew it wouldn’t tax my cards too much. But given the game’s reputation as a card-crusher, which lasted for several years after its 2007 release, I also knew it would still give the hardware a bit of a challenge.

2x looked fantastic, but was barely playable because of frequent hitching. I suspected my graphics cards, which have only 2GB of VRAM each, were the bottleneck; generally, ultra-resolution gamers should go for GPUs with more VRAM. In Warhead, though, 2x was quite smooth. I didn’t use a frame rate counter (Crysis and Warhead have one built into the console), because I wanted a more subjective assessment of the experience; high frame rates mean nothing if the game is hitching or micro-stuttering. Warhead’s engine optimizations kept it nice and playable with very little hitching, though admittedly I didn’t play through any bombastic scripted sequences. At 3620 x 2036, jaggies and shimmering alike are pretty much gone. I managed comparable image quality at 3135 x 1764 (1.5x DSR) with 2xMSAA, and I was able to add 4x transparency supersampling on top and still maintain smooth frame rates without any hint of the hitching I’d had at 2x DSR.

Shadow of Mordor

Mordor doesn’t have any built-in antialiasing options. Still, I can’t help but wonder whether the game applies SMAA by default at the higher visual settings, because there’s very little aliasing in the game to begin with. I tried 1.5x DSR; it was unplayable due to a huge frame rate hit, which just goes to show that, as a brute-force approach, DSR shouldn’t be used unless you have plenty of GPU headroom.

Alien: Isolation

This was an odd outlier: both 1.5x and 2x DSR carried the expected performance hit (1.5x was smooth with intermittent slowdown in effects-heavy scenes; 2x was unplayable), but the image quality didn’t improve much in return. I tried adding FXAA, and that did produce a very clean image. But the game offers SMAA, and frankly it works well enough that the choppier performance with DSR simply isn’t worth it. I should add that SMAA + DSR produced a really washed-out, jagged image in this game.

DSR vs. Transparency SuperSampling

So here’s the question: what’s the point of DSR? Transparency supersampling has been around for years. It works differently from DSR, but it’s a similar brute-force approach. Does DSR provide a significantly better image at less of a performance cost? In a word, no. It does look a bit sharper than 4xMSAA with 4x transparency supersampling, but the lower-res image with TSS actually runs a fair bit better, and in Crysis I could add TSS to 1.5x DSR (with 2xMSAA) for the ultimate in uber-clean imaging. And unless you turn DSR’s “smoothness” setting down from its default 33% (I set mine to 15%), you’ll have a hard time seeing much difference at all.
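A note on that smoothness setting: nVidia has described DSR’s downscale filter as a 13-tap Gaussian, with the slider controlling how aggressively it blurs. The exact slider-to-filter mapping isn’t public, so the sketch below is only my guess at a plausible reading, where the percentage sets the Gaussian’s width (the 0.03 scale constant is an assumption, not a documented value):

```python
import numpy as np

def gaussian_kernel(sigma, radius=6):
    """Normalized 1-D Gaussian; radius=6 gives 13 taps."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def apply_smoothness(luma_rows, smoothness_pct):
    """Toy model of the DSR smoothness slider: map the percentage to
    a Gaussian sigma (assumed mapping: 33% -> sigma of about 1.0),
    then blur each row before downscaling. Lower percentages mean a
    sharper, narrower filter."""
    sigma = max(0.1, smoothness_pct * 0.03)
    k = gaussian_kernel(sigma)
    return np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, luma_rows)
```

Under this reading, dropping the slider from 33% to 15% roughly halves the blur width, which matches why the image looks noticeably crisper at the lower setting.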

If a game has SMAA, the modest visual enhancement DSR provides is definitely not worth the steep performance cost. SMAA remains by far the best overall balance between image quality and performance, so DSR is strictly for older games that don’t support shader-based antialiasing. Most gamers won’t have the hardware to use DSR in modern games anyway, and personally I’d stick with a lower-res image plus transparency supersampling for a similarly clean picture at a lower performance cost.
