Ray Tracing is the latest graphics technique for achieving a dramatically more realistic visual experience. It is most commonly referenced in PC gaming, but the Xbox Series X and PlayStation 5 also have hardware capable of delivering such visuals. This article looks at how Ray Tracing differs from traditional methods and why it's important for the future of graphics.
What is Ray Tracing?
Ray Tracing is a technique that makes light in rendered graphics behave as it does in real life. It works by simulating actual rays of light, using an algorithm to trace the path a beam of light would take in the physical world. Using this technique, game designers and visual effects artists can make virtual rays of light appear to bounce off objects, cast realistic shadows and create lifelike reflections.
First conceptualised in 1969, Ray Tracing has been used for years to simulate realistic lighting and shadows in the film industry, an industry that could afford to wait for a render to finish. Gaming, on the other hand, has had to wait for the computing power required to produce such a render in real time.
“A game needs to run 60 frames per second or 120 frames per second, so it needs to compute each frame in 16 milliseconds,” says Tony Tamasi, vice president of technical marketing at graphics card developer Nvidia.
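That frame budget is simple arithmetic: the time available per frame is one second divided by the target frame rate. A quick sketch (function name is illustrative):

```python
# Frame-time budget implied by a target frame rate: budget_ms = 1000 / fps.
def frame_budget_ms(fps: int) -> float:
    """Milliseconds available to compute one frame at the given frame rate."""
    return 1000.0 / fps

print(round(frame_budget_ms(60), 2))   # 60 fps leaves about 16.67 ms per frame
print(round(frame_budget_ms(120), 2))  # 120 fps halves that to about 8.33 ms
```

At 120 fps the budget is closer to 8 ms than 16, which is why real-time Ray Tracing is such a demanding target.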
To understand how Ray Tracing works, we need to take a step back and look at how games previously rendered light, and what was required to achieve the Ray Tracing experience.
Games without Ray Tracing rely on static lighting: developers place light sources within an environment that emit light evenly across any given view. Virtual models such as NPCs and objects carry no information about any other model, so any dynamic light behaviour must be calculated by the GPU during the rendering process, drastically increasing the computational requirements. Surface textures can reflect light to mimic shininess, but only light emitted from a static source. Take the comparison of reflections in GTA V below:
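The static approach can be caricatured as a baked lightmap: brightness is computed once, ahead of time, from a fixed light position, so the render loop only does a cheap table lookup. This is a toy sketch with made-up names and a simple falloff formula, not any real engine's pipeline:

```python
# Toy "baked" lighting: precompute brightness from a static light once,
# so rendering is just a lookup -- no per-frame light simulation.
LIGHT = (5.0, 5.0)  # static light position placed by the developer

def bake_lightmap(width, height):
    """Precompute a brightness grid using a simple distance falloff."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            d2 = (x - LIGHT[0]) ** 2 + (y - LIGHT[1]) ** 2
            row.append(min(1.0, 1.0 / (1.0 + 0.05 * d2)))
        grid.append(row)
    return grid

lightmap = bake_lightmap(10, 10)
# At render time the engine just reads the stored value:
print(lightmap[5][5])  # brightest point sits directly under the light
```

The lookup is fast, but the light can never move or react to the scene, which is exactly the limitation Ray Tracing removes.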
Ray Tracing in games attempts to emulate the way light behaves in the real world. It traces the path of simulated light by tracking millions of virtual photons. The brighter the light, the more virtual photons the GPU must calculate, and the more surfaces the light will reflect and refract off.
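The core operation is firing a ray and asking what it hits first. A minimal sketch, assuming a single sphere and a ray with a normalised direction (this is the textbook ray-sphere intersection, not any particular engine's code):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is assumed normalised, so a == 1
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# Trace one ray straight down the z-axis at a sphere 5 units away.
hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # 4.0 -- the ray meets the sphere's near surface
```

A real ray tracer repeats this test for every pixel against every object, then spawns new rays at each hit for shadows, reflections and refractions, which is where the enormous cost comes from.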
It's also possible to utilise Ray Tracing in sound design, as Mark Cerny suggests, particularly if you're looking for a faster, cleaner solution than more traditional methods provide. If you treat sound waves as much smaller rays, you can model them much the same way as light, drawing them from the source to the listener and judging where they interact with objects in the environment. The difficulty is that sound wavelengths are much larger, reaching ten metres or more, whilst the wavelength of light is measured in nanometres, so modelling sound as rays will inevitably cause inaccuracies that must be dealt with.
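One simple way to see the "sound as rays" idea is the classic image-source trick: a reflection off a flat wall behaves like a straight ray from a mirrored copy of the source, so the echo's delay is just path length divided by the speed of sound. This is a simplified illustration of the principle, not Cerny's actual implementation:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air, approximately

def echo_delay(source, listener, wall_x):
    """Delay (seconds) of a single reflection off a flat wall at x = wall_x,
    modelled as a straight ray from the mirror image of the source."""
    mirrored = (2 * wall_x - source[0], source[1])
    path = math.dist(mirrored, listener)
    return path / SPEED_OF_SOUND

direct = math.dist((0, 0), (10, 0)) / SPEED_OF_SOUND
bounce = echo_delay((0, 0), (10, 0), wall_x=20.0)
print(round(direct * 1000, 1), round(bounce * 1000, 1))  # 29.2 87.5 (ms)
```

The ray model gives the echo's arrival time cheaply; what it ignores is diffraction, which matters precisely because audible wavelengths are comparable in size to the obstacles in a scene.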
It is likely that you first heard of Ray Tracing (despite having seen it in films for years) when Nvidia started touting the Ray Tracing capabilities of its RTX 20 series graphics cards. Nvidia made a tremendous amount of noise about how its RT cores would enable the next generation of GPUs to bring incredible real-time Ray Tracing to video games for the first time. Unfortunately, it came with an extremely high price tag. It was, however, an incredible technical achievement, allowing modern gaming PCs to do in real time what took those Hollywood studios several orders of magnitude longer.
Lighting The Way Forward
All of this isn't great news for Nvidia, at least in the short term, but it is great news for Ray Tracing enthusiasts. Broader hardware support means that doing the work to build Ray Tracing tech into games will look much more appealing to developers, because there will be an audience able to appreciate the results. And even for Nvidia, as Ray Tracing becomes more ubiquitous, so should sales of its RTX hardware, especially if the company is able to bring prices down to accelerate mainstream adoption.
It's also good news for gamers at large. Ray Tracing may not be making huge waves in a practical sense now, in large part because current support feels a bit rushed or tacked on, but as we see games built from the jump with Ray Tracing support in mind, the final products will start looking a lot more impressive. In countless demos, first from Nvidia and now from CryEngine and Unity (game engines that recently incorporated Ray Tracing tools), we've seen the potential of Ray Tracing and, properly implemented, it's as stunning as the marketing would have you believe.
The takeaway is that Ray Tracing is more HDR than 3D. It's not a gimmicky, flash-in-the-pan tech that will fail to gain a foothold and exit the conversation in under a year. It really is an important part of the future of games, of ensuring that the next generation of games looks closer to reality than ever before, and being able to deliver it in real time really is a stunning innovation. It's an inevitability, and the main question around Ray Tracing is less 'if' than 'when'.
Deep Learning Super Sampling (DLSS) Explained
PC gaming has always been about customisation, allowing you to change your game's settings and push performance to the edge of your hardware's capability. Nvidia's deep learning super sampling (DLSS) is a graphics rendering technique that boosts fps without sacrificing visual quality.
DLSS works by rendering the game at a lower resolution to reduce the stress placed on your GPU, before rebuilding the image up to your chosen native resolution through artificial intelligence (AI). There are only two catches to the feature: it isn't supported by every game, and, unlike AMD's new FidelityFX Super Resolution (FSR) and other forms of anti-aliasing, it's a hardware solution that requires an RTX graphics card in order to run.
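The pipeline can be caricatured in a few lines: render fewer pixels, then reconstruct the target resolution with an upscaler. In this sketch a nearest-neighbour upscale stands in for the neural network, which is of course the part that makes real DLSS look good; all the function names here are illustrative:

```python
def render_low_res(width, height):
    """Stand-in renderer: produces a cheap low-resolution frame."""
    return [[(x + y) % 256 for x in range(width)] for y in range(height)]

def upscale(frame, factor):
    """Nearest-neighbour upscale -- a crude stand-in for the AI model."""
    return [[frame[y // factor][x // factor]
             for x in range(len(frame[0]) * factor)]
            for y in range(len(frame) * factor)]

low = render_low_res(960, 540)  # render a quarter of the final pixel count...
out = upscale(low, 2)           # ...then reconstruct 1920x1080 for output
print(len(out[0]), len(out))    # 1920 1080
```

The GPU saving comes entirely from the first step; the quality of the second step is what separates DLSS from a simple stretch like this one.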
While Nvidia Reflex goes a long way to give you the competitive edge in FPS games, DLSS is debatably the most innovative feature Nvidia has released in recent years. It was originally created to make ray tracing more accessible by reducing the performance hit that comes with it, but as of DLSS 2.0 it has crafted its own identity, appearing in games independently of ray tracing.
The "deep learning" part is Nvidia's secret sauce. Using the power of machine learning, Nvidia can train AI models on high-resolution images. The anti-aliasing method can then use the AI model to fill in the missing information. This is important, as SSAA usually requires you to render the higher-resolution image locally. Nvidia does that expensive work offline, away from your computer, providing the benefits of supersampling without the local computing overhead.
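For contrast, classic SSAA really does render the extra pixels and then averages them back down, which is exactly where its cost comes from. A minimal box-filter sketch (the renderer here is a dummy function for illustration):

```python
def supersample(render, width, height, factor=2):
    """Render at factor-times resolution, then average each block down (SSAA)."""
    big = render(width * factor, height * factor)  # the expensive part
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            block = [big[y * factor + j][x * factor + i]
                     for j in range(factor) for i in range(factor)]
            row.append(sum(block) / len(block))  # box filter over the block
        out.append(row)
    return out

# Dummy scene whose pixel value is just its x coordinate.
scene = lambda w, h: [[float(x) for x in range(w)] for _ in range(h)]
frame = supersample(scene, 4, 4)
print(frame[0])  # each output pixel averages a 2x2 block of rendered pixels
```

Every high-resolution pixel is genuinely rendered here; DLSS's trick is inferring that detail instead of paying for it.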
What does DLSS actually do?
DLSS is the result of an exhaustive process of teaching Nvidia's AI algorithm to generate better-looking games. After rendering the game at a lower resolution, DLSS infers information from its knowledge base of super-resolution image training to generate an image that still looks like it was running at a higher resolution. The idea is to make games rendered at 1440p look like they're running at 4K, or games at 1080p look like 1440p. DLSS 2.0 offers 4x resolution, allowing you to render games at 1080p while outputting them at 4K.
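The "4x" figure is just pixel counting: 4K has four times the pixels of 1080p, so rendering at 1080p and outputting 4K means the GPU shades only a quarter of the final pixels. A quick check, using the standard 16:9 resolutions:

```python
# Standard 16:9 resolutions by their common names.
RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

def pixel_ratio(render_at, output_at):
    """How many output pixels exist per pixel actually rendered."""
    rw, rh = RESOLUTIONS[render_at]
    ow, oh = RESOLUTIONS[output_at]
    return (ow * oh) / (rw * rh)

print(pixel_ratio("1080p", "4K"))   # 4.0 -- DLSS 2.0's headline scaling
print(pixel_ratio("1440p", "4K"))   # 2.25
```

The 1440p-to-4K case shows why that mode is a lighter lift for the upscaler: it only has to invent 2.25 pixels for every one rendered.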
More traditional super-resolution techniques can introduce artifacts and bugs into the final picture, but DLSS is designed to work around those errors to generate an even better-looking image. It's still being optimized, and Nvidia claims that DLSS will continue to improve over the months and years to come, but in the right circumstances it can deliver substantial performance uplifts without affecting the look and feel of a game.