AMD FidelityFX Super Resolution: the Digital Foundry interview
Everyone wants extra performance from their PC hardware, right? And that's what technologies like AMD's FidelityFX Super Resolution (FSR), Nvidia's DLSS and Intel's upcoming XeSS are all about - essentially allowing the GPU to render at a lower resolution, then either upscaling or reconstructing to the native output of the display. In a sense, this is new territory for PC, where native resolution rendering was for a long time considered the only way forward. However, in console land, 'smart upscaling' isn't new, really coming to the fore with the launch of PS4 Pro in 2016. PC has embraced similar techniques, but has also spawned its own blend - and AMD's FSR in particular stands apart. So when Team Red asked us if we were interested in talking FSR, we jumped at the chance. AMD's approach is different, both in terms of technology and philosophy.
To set the scene, what we've traditionally defined as 'smart upscalers' have all had one thing in common: the use of prior frames as a reference for improving the quality of the next one to render. All of the effort the GPU has spent in generating a previous image, working in combination with motion vectors that inform the game of where those pixels will end up in the future, allows for extra detail to be injected into a freshly rendered frame. The two key technologies to use this initially were checkerboard rendering and temporal super-sampling. These techniques are what made PS4 Pro's 4.2TF GPU capable of producing a pretty convincing 4K output, and they've been used on all consoles by this point. Similar techniques are now found in a range of engines supported on PC too - and with an added component from machine learning, that's how DLSS 2.x works as well, and it's also how we expect Intel's XeSS to play out.
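To make that idea concrete, here's a minimal, purely illustrative sketch of the reprojection-and-blend step at the heart of these temporal techniques. It's single-channel and CPU-side, with made-up types, so it reflects the concept rather than any shipping engine's implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative only: a single-channel frame stored as a flat, row-major array.
struct Frame {
    int width = 0, height = 0;
    std::vector<float> pixels;
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return pixels[static_cast<std::size_t>(y) * width + x];
    }
};

// Per-pixel motion in pixel units: how far each pixel has moved since the previous frame.
struct MotionVectors {
    std::vector<float> dx, dy;  // same row-major layout as Frame
};

// The core of temporal accumulation: fetch the previous frame at the position the
// motion vector points back to, then blend that history with the freshly rendered
// pixel. Real implementations add sub-pixel jitter, filtered history sampling and
// neighbourhood clamping to reject stale or disoccluded history.
Frame temporalAccumulate(const Frame& current, const Frame& history,
                         const MotionVectors& mv, float historyWeight = 0.9f) {
    Frame out = current;
    for (int y = 0; y < current.height; ++y) {
        for (int x = 0; x < current.width; ++x) {
            const std::size_t i = static_cast<std::size_t>(y) * current.width + x;
            const int px = static_cast<int>(std::lround(x - mv.dx[i]));  // previous position
            const int py = static_cast<int>(std::lround(y - mv.dy[i]));
            out.pixels[i] = historyWeight * history.at(px, py) +
                            (1.0f - historyWeight) * current.pixels[i];
        }
    }
    return out;
}
```

The extra detail comes from the fact that successive frames are rendered with slightly different sub-pixel offsets, so the accumulated history effectively holds more samples per pixel than any single frame does on its own.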
When AMD revealed that FSR would be a spatial-only upscaling solution, we were sceptical: rather than drawing on data from previously generated frames to increase detail, it has no visibility of anything beyond the current frame, which inevitably leads to temporal discontinuities. AMD was effectively pursuing techniques the industry at large had basically left behind - FSR isn't a reconstruction technology like TSSAA or checkerboarding, but an upscaling one. However, there is more to it than upscaling alone: it is augmented with the ability to detect and refine edges and to mitigate the aliasing artefacts that would otherwise result - and that's what makes it intriguing.
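FSR's actual upscaling pass (EASU, or Edge Adaptive Spatial Upsampling) is far more sophisticated than anything we could reproduce here, but as a rough, purely illustrative sketch of what 'spatial and edge-aware' means - working from nothing but the current frame, and pulling interpolation weights away from hard edges so they don't smear - a toy single-channel version might look like this:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy single-channel image; FSR itself works on full-colour, anti-aliased,
// tone-mapped frames and uses a far more elaborate kernel than this.
struct Image {
    int width = 0, height = 0;
    std::vector<float> pixels;  // row-major
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return pixels[static_cast<std::size_t>(y) * width + x];
    }
};

// Pull an interpolation fraction towards 0 or 1 as the gradient across that axis
// grows, so we stop blending across hard edges and keep them crisp.
static float sharpenFraction(float f, float gradient, float strength) {
    const float t = std::clamp(gradient * strength, 0.0f, 1.0f);
    const float snapped = (f < 0.5f) ? 0.0f : 1.0f;
    return f + (snapped - f) * t;
}

// Edge-aware spatial upscale using only the current frame: plain bilinear filtering
// in flat regions, progressively nearest-neighbour-like across strong edges.
Image upscaleEdgeAware(const Image& src, int dstW, int dstH, float strength = 2.0f) {
    Image dst{dstW, dstH, std::vector<float>(static_cast<std::size_t>(dstW) * dstH)};
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            const float sx = (x + 0.5f) * src.width / dstW - 0.5f;
            const float sy = (y + 0.5f) * src.height / dstH - 0.5f;
            const int x0 = static_cast<int>(std::floor(sx));
            const int y0 = static_cast<int>(std::floor(sy));
            float fx = sx - x0, fy = sy - y0;

            const float a = src.at(x0, y0),     b = src.at(x0 + 1, y0);
            const float c = src.at(x0, y0 + 1), d = src.at(x0 + 1, y0 + 1);

            // Local gradients across each axis of the 2x2 footprint.
            const float gx = 0.5f * (std::fabs(b - a) + std::fabs(d - c));
            const float gy = 0.5f * (std::fabs(c - a) + std::fabs(d - b));
            fx = sharpenFraction(fx, gx, strength);
            fy = sharpenFraction(fy, gy, strength);

            const float top = a + (b - a) * fx;
            const float bot = c + (d - c) * fx;
            dst.pixels[static_cast<std::size_t>(y) * dstW + x] = top + (bot - top) * fy;
        }
    }
    return dst;
}
```

Even this toy version hints at why FSR wants an anti-aliased input: feed it hard, jagged edges and the 'edges' it preserves are the stair-steps themselves.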
Then there is the philosophical angle too. Nvidia DLSS is closed source - a black box for developers, if you like - and only capable of running on ML-equipped cards. Intel XeSS aims to be more open, but certainly to begin with there's no source access. AMD's FSR is the direct opposite: the source code is fully open and, as it's a fully software-driven technique, it should run just fine on any modern GPU, whether it's sitting in a PC or a console. This has led to a wide range of support arriving very quickly, with unexpected 'homebrew' support added to various emulators, and even implementation in brand-new console games, such as Myst and (we suspect!) Arkane Studios' Deathloop.
All of which leads into this interview with Nick Thibieroz, Director of Game Engineering at AMD. We wanted to know more about why AMD chose to go against the grain with a spatial approach to upscaling, how FSR fits into a FidelityFX ecosystem that already includes another beneficial technology for upscaling (CAS), why it hasn't mirrored the approach of Nvidia and Intel in embracing machine learning, and what best practices there are for developers in getting the best out of the technology. Also, what of the future? FSR is in a 1.0 state and improvements are surely in the pipeline, so what should we expect?
Digital Foundry: Can you give us an initial overview on AMD's objectives for FSR and how you settled specifically on expanding on your existing work with spatial upscaling?
Nick Thibieroz: The goals we set ourselves for our first upscaling solution were certainly ambitious, and we tried to include as many benefits into the same solution as we could. I would say the three major features we identified as key were: cross-platform support, a great quality/performance balance, and being open source.
Being cross-platform allows as many players as possible to benefit from the technology, without being restricted to a given generation of graphics cards, GPU vendor or specific hardware platform. We felt this goal was especially important given the current climate around GPU availability.
Of course, upscaling quality was a major focus, yet this had to be tempered against the cost of executing our solution on a GPU. Obviously, those goals are kind of counter to one another in many ways, as higher quality will typically demand expending more processing effort. Clearly, we had to go beyond the current state-of-the-art in spatial upscaling if we were to enable the technology on the broadest set of GPUs possible.
Open sourcing the code was also something I personally pushed for, as we know this is what game developers want to see. Not only does open source encourage adoption in a wider variety of contexts, it makes integration easier, allows others to learn, and unlocks the potential for the technique to be expanded or adapted to fit a developer's specific needs. After all, the game development industry has always been about sharing knowledge! This sort of openness is really in AMD's DNA - just look at the GPUOpen initiative - and we feel it's important for us to play our part.
As the process evolved though, it became clear we had something special. So, we set ourselves another objective, which was to achieve wide and rapid adoption. Happily, it turns out that the three key objectives of FSR 1.0 - cross-platform, a great quality/performance ratio and open source - really help us achieve this goal too, which is great for gamers everywhere as it gets the technology into their hands that little bit faster! That said, we knew we had to go further, and this meant including a few extras in the design which would help speed up adoption. Specifically, we provided support for engines and features that many developers are already using. This is why FSR 1.0 is available on both Unreal Engine and Unity and supports arbitrary scaling (a requirement to enable techniques like Dynamic Resolution Scaling).
Considering these ambitious objectives and the breakthrough we made with our spatial upscaling algorithm, settling on a pure spatial upscaling solution for FidelityFX Super Resolution 1.0 was the natural choice.
Digital Foundry: I think it's fair to say that the general direction of travel everywhere else in the industry for upscaling has been towards temporal super-sampling - whether via software or via machine learning. It's a journey that started to really gain traction with PS4 Pro back in 2016 and is effectively the standard in triple-A game development. I'm curious why AMD ruled it out?
Nick Thibieroz: We did not rule anything out! FSR 1.0 is the result of extensive research at AMD, with multiple groups exploring different solutions using a variety of underlying upscaling technologies. Given the goals we had set out, we chose to release FSR 1.0 as we knew it would appeal to a large number of developers and gamers who want to be able to enjoy high-quality gaming at faster frame rates on multiple platforms, without being limited by proprietary hardware.
So, while I appreciate that the choice of a spatial upscaler surprised many, I think the results speak for themselves in terms of developer reception and adoption. In fact, it's been impressive to see the various ways FSR has been leveraged by professionals and enthusiasts alike so far!
If you focus on just one facet of upscaling - let's say image quality - then sure, I think it's fair to say some upscaling techniques out there may provide better results (although in some cases "pixel peeping" on still images may be needed to make this claim). I think if you narrow the evaluation of upscalers to just a single criterion then your conclusion will be incomplete. FSR was designed to tick many boxes, as we've discussed, and it's the combination of great features that makes up the full package. Think of it like buying a new car: I don't think anyone would solely base their purchase on how good the car looks. A smart buyer is going to consider how fast it goes, what options it provides, how smooth the driving experience is, and whether they can afford it in the first place.
Digital Foundry: The buzz surrounding the announcement of FSR was that it would be an open alternative to DLSS - that an accelerant of some kind was needed with the arrival of hardware-accelerated ray tracing and its high compute requirement. RT is essentially tied to the DX12 Ultimate feature set fully supported by AMD RDNA 2, which includes machine learning - so was there a specific reason not to tap into machine learning in particular?
Nick Thibieroz: I would say that we should be cautious about thinking of "Machine Learning" as a sort of magic wand which solves all problems. Of course, if it's done right, ML can be a very powerful tool, but it's not the only way to solve problems. In a nutshell, Machine Learning works by using a fairly brute force approach to discover the relationship between some inputs and some outputs. The process works by feeding data into an ML framework, with the goal of producing a model which encapsulates the relationship between the set of inputs and the desired outputs in the best way possible. We then take the relationship that has been discovered and apply it to inputs the model has never seen before. In some cases, ML is a no-brainer, it is the best and possibly only way to go. However, it is often possible to implement similar functionality via conventional algorithms, rather than relying on ML to discover the relationship for you.
There are also trade-offs that you're going to need to make to leverage ML, which mean it might not tick some of the other - really important - boxes for a solution. Using ML in a real-time context might mean that we lose portability, performance, and - if not done right - even some quality.
If we're being objective about ML and upscaling algorithms, I think the first iteration of NVIDIA DLSS is a good illustration of what I'm talking about here. The mere presence of ML in a solution does not imply you are going to get great results. ML clearly shows promise, and AMD is heavily investing in ML R&D on a number of fronts, but just because an algorithm uses ML does not mean it's the overall best solution given a set of goals.
Digital Foundry: CAS still sits within the FidelityFX feature set - are there scenarios where AMD would recommend it above FSR, or is FSR effectively a replacement technology?
Nick Thibieroz: Contrast Adaptive Sharpening (CAS) is a post-process image sharpening technology that restores detail in an image, typically in cases where Temporal Anti-Aliasing (TAA) has been applied. While CAS supports an optional upscale function, CAS by itself is not able to restore enough detail to upscaled content to deliver a close to native resolution gaming experience.
Given this, we would recommend that developers who use the CAS upscaling option should replace it with FSR for much improved upscaled image quality. Games that don't use upscaling, or that have FSR turned off in their game options, may choose to use CAS sharpening on their natively rendered image.
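As a rough illustration of what 'contrast adaptive' means in practice - sharpening that backs off where the local neighbourhood already spans a wide range, so strong edges don't ring - a heavily simplified single-channel version might look like the sketch below. The shipping CAS kernel and its weighting are more involved than this:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy single-channel image helper; real CAS operates on colour frames and uses a
// more carefully derived weighting than this sketch.
struct Image {
    int width = 0, height = 0;
    std::vector<float> pixels;  // row-major, values in [0,1]
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return pixels[static_cast<std::size_t>(y) * width + x];
    }
};

// Contrast-adaptive sharpening, simplified: a cross-shaped unsharp mask whose
// strength backs off where the neighbourhood already spans a wide range, which is
// what keeps already-contrasty edges from over-sharpening and ringing.
Image sharpenAdaptive(const Image& src, float sharpness = 0.5f) {
    Image dst = src;
    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            const float c = src.at(x, y);
            const float n = src.at(x, y - 1), s = src.at(x, y + 1);
            const float w = src.at(x - 1, y), e = src.at(x + 1, y);

            const float mn = std::min({c, n, s, w, e});
            const float mx = std::max({c, n, s, w, e});
            const float contrast = mx - mn;                      // 0 = flat, 1 = full range
            const float amount = sharpness * (1.0f - contrast);  // adapt to local contrast

            // Unsharp mask: push the centre away from the neighbour average.
            const float blur = 0.25f * (n + s + w + e);
            const float sharpened = c + amount * (c - blur);
            dst.pixels[static_cast<std::size_t>(y) * src.width + x] =
                std::clamp(sharpened, 0.0f, 1.0f);
        }
    }
    return dst;
}
```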
Digital Foundry: In broad strokes, is there a series of general 'best practice' guidelines for developers in implementing FSR for best results?
Nick Thibieroz: FidelityFX Super Resolution can be found on GPUOpen, which includes integration steps and information on where to include it in the graphics pipeline. At a high level, we recommend that FSR be implemented in perceptual space (which typically means after tone mapping), and that a MIP bias be used on qualifying textures to get an image closer to what native resolution rendering would look like. Some form of anti-aliasing is also required for FSR to produce a high-quality result, otherwise hard edges will be detected and rendered as such.
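On the MIP bias point, the adjustment usually quoted for FSR integrations is log2(render width / display width) - a negative value that pushes texture sampling towards the sharper MIP levels that native-resolution rendering would have used. Here's a quick sketch of what that works out to for FSR 1.0's published scale factors (the per-texture plumbing is engine-specific and the helper below is purely illustrative):

```cpp
#include <cmath>
#include <cstdio>

// Texture LOD adjustment commonly recommended for upscalers: bias sampling so
// textures are fetched at the detail level native rendering would have used.
// Negative values select sharper MIP levels.
float mipBiasForUpscale(int renderWidth, int displayWidth) {
    return std::log2(static_cast<float>(renderWidth) / static_cast<float>(displayWidth));
}

int main() {
    // FSR 1.0 quality presets and their per-axis scale factors.
    const struct { const char* name; float scale; } presets[] = {
        {"Ultra Quality", 1.3f}, {"Quality", 1.5f},
        {"Balanced", 1.7f}, {"Performance", 2.0f},
    };
    const int displayWidth = 3840;  // a 4K output as the example
    for (const auto& p : presets) {
        const int renderWidth = static_cast<int>(displayWidth / p.scale + 0.5f);
        std::printf("%-13s render %4d px wide, MIP bias %+.2f\n",
                    p.name, renderWidth, mipBiasForUpscale(renderWidth, displayWidth));
    }
    return 0;
}
```

For the Quality preset at a 4K output, for example, that's a render width of 2560 pixels and a bias of roughly -0.58.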
Digital Foundry: It's recommended to apply FSR to an anti-aliased image, but most games have various forms of anti-aliasing to choose from. Of course, results may vary on a game-by-game basis, but are there any general recommendations you can provide to users on which AA solution works best with FSR?
Nick Thibieroz: FSR works well with most forms of AA we've tried: MSAA, TAA, FXAA etc. However, the quality of the underlying AA implementation is critical to the final upscaling quality produced by FSR. For example, AA techniques that don't include additional sample information (either via straight multisampling or leveraging previous jittered frames), or do so in a suboptimal manner, will show limitations in FSR upscaling quality - e.g. on thin features.
The best recommendation I can give for a quality FSR implementation is to ensure the underlying AA technique is also of high quality, as temporally stable as possible, and able to depict thin features in a reasonable manner.
Digital Foundry: With so many games and game engines employing temporal super-sampling, does FSR have any applications as an additive to that technique - or is CAS a better fit for games that use temporal super-sampling?
Nick Thibieroz: There have been some experiments in mixing temporal techniques with FSR, and some of them are even in shipped titles. Since FSR is open source, developers are free to experiment with mixing the two if they feel the results are worth it. This is one of the great benefits of the open-source approach!
At the end of the day, if a game already supports a quality, performant upscaling implementation that all gamers can benefit from regardless of the hardware they play on, then FSR may not be needed at all - and that's OK.
Digital Foundry: On high frequency detail, we found that AMD's current GPU scaler engine preserves more in-surface detail than FSR from the same base resolution - but FSR handles edge artifacting considerably more adeptly. Could your work with FSR roll back into your designs for your next generation GPU hardware scaler?
Nick Thibieroz: A correct implementation of FSR presents many advantages compared to just applying the GPU scaler at the back-end. First of all, for the best results, FSR should be applied at a specific point in the frame so that it plays well with post-process operations and the color space is right. Secondly, FSR offers a built-in sharpening pass which helps recover and preserve some of the high-frequency detail, even in polygon interiors. Thirdly, we also offer guidance as to how integrations of FSR should leverage MIP bias adjustments, which also helps to get the best results possible. Given all of that, I would normally expect to see a superior visual result from a successful FSR integration compared to simply using a pure GPU scaler at the end of the frame. Regarding further applications of FSR to different contexts, I'd say we're evaluating all options, but nothing we're ready to talk about at this stage.
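Piecing the guidance together, the frame ordering an FSR-style integration tends to aim for looks roughly like the sketch below - the step names are placeholders rather than any real engine API, and individual games will differ in the details:

```cpp
#include <cstdio>

// A rough ordering for a frame with a post-tone-map spatial upscale, based on the
// guidance discussed above. Step names are placeholders, not a real engine API.
int main() {
    const char* steps[] = {
        "render the scene at the lower, render resolution",
        "resolve anti-aliasing (e.g. TAA) at render resolution",
        "tone map, so the image is in perceptual space",
        "spatial upscale (EASU) to display resolution",
        "sharpening pass (RCAS) to recover high-frequency detail",
        "grain, other noisy effects and the UI at display resolution",
    };
    for (int i = 0; i < 6; ++i) {
        std::printf("%d. %s\n", i + 1, steps[i]);
    }
    return 0;
}
```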
Digital Foundry: FSR is in its 1.0 state, but can you share any information on where research is heading for the next iteration? Right now, the FSR algorithms are based on a 2D input from the game engine - but could FSR stand to benefit from access to depth information, or motion information?
Nick Thibieroz: We plan to continue developing and enhancing FSR over time, delivering on our goal of giving the best possible experiences to gamers. With multiple research efforts happening in parallel, several technological directions are being considered. I look forward to speaking with Digital Foundry again once we're closer to announcing something.
Digital Foundry: FSR's greatest strength is that it's software-based and effectively runs on any GPU - or at least any GPU capable of running the host game. Is this a philosophy you're completely wedded to - or would you consider leveraging DX12 Ultimate features? This may reduce compatibility but it would still be open in the sense that the technology would not be proprietary or run on a single brand of GPU.
Nick Thibieroz: DX12 Ultimate features include DXR 1.1, Variable Rate Shading, Mesh Shaders and Sampler Feedback. At this point in time, I don't believe any of those technologies are particularly relevant to upscaling algorithms (VRS could maybe have applications though). If you're asking more generally about the concept of leveraging bleeding-edge GPU technologies to produce super resolution algorithms, then all I'd say right now is that our upscaling research spans many directions! Since FSR 1.0 was developed for a broad set of users and platforms, there's now a solution in place for virtually everyone, and this allows us to direct some of our focus on solutions possibly leveraging more advanced GPU functionalities or performance levels.