Graphics Cards

NVIDIA GeForce RTX 2060 - Review

NVIDIA's bet with its new Turing architecture graphics cards has been clear: real-time ray tracing. NVIDIA has raised the bar in the gaming market and has everyone talking about this technology. The new architecture introduces RT Cores and Tensor Cores, each powering a new technology.

The RT Cores were created for ray tracing, limiting the performance hit it causes. DLSS, in turn, relies on the Tensor Cores and therefore on artificial intelligence: Deep Learning Super Sampling essentially serves to recover most of the lost performance.

The most basic model among these Turing graphics cards is the RTX 2060. Of those designed for ray tracing it is the simplest, sitting just above the GTX 16 Series, which lacks RT Cores and Tensor Cores. It is aimed mainly at 1080p, although in some titles it can reach 1440p.

RTX 2060 Founders Edition 03

Unboxing

The RTX 2060 Founders Edition arrives in the company's typical vertically opening box. It has a rather striking design, as if it were the cover of a graphic novel, with the RTX logo displayed in large type alongside the company's signature bright green.

Inside, as expected, we find the NVIDIA graphics card, protected by a generous amount of polystyrene. Graphics cards and similarly delicate components should be well protected to avoid damage.

This RTX 2060 comes in a tight-fitting plastic cover that uses far less plastic than the old oversized bags. Alongside the graphics card there is an 'accessories' box, which basically contains a quick-start guide and a user manual, nothing of real interest.

Features

NVIDIA has left behind the noisy and inefficient blower-type coolers, launching the RTX 20 Series with a proper heatsink topped by two 90mm fans. This achieves two things: better temperatures and more headroom for overclocking.

One of the main characteristics of this graphics card is its highly compact design. We are talking about a card that is 240mm long, 130mm high and 40mm thick, occupying only two PCIe slots.

Regarding video outputs, it has two DisplayPort 1.4 ports, one HDMI 2.0b output, one DVI-D output and one USB Type-C port. The Type-C port supports VirtualLink, allowing virtual reality headsets to connect through a single cable. The downside is that no compatible headset exists yet, although the port can still be used as plain USB.

This graphics card uses a PCIe 3.0 x16 interface to connect to the motherboard. The RTX 2060 logo on the top edge is lit in green by LEDs. Note that this card completely lacks NVLink, so it does not support multi-GPU configurations.

RTX 2060 Founders Edition 15

Power for this RTX 2060 comes in at the rear through a single 8-pin PCIe connector. That connector supplies up to 150W while the PCIe slot supplies up to 75W. With a TDP of 160W, the card has a good margin for overclocking.
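The power-budget arithmetic above can be sketched in a few lines; the wattage figures are those quoted in this review.

```python
# Illustration of the RTX 2060's power headroom: one 8-pin PCIe
# connector (150W) plus the PCIe slot (75W) versus the card's 160W TDP.
PCIE_8PIN_W = 150   # max power from the 8-pin connector
PCIE_SLOT_W = 75    # max power from the PCIe x16 slot
TDP_W = 160         # rated TDP of the RTX 2060

available = PCIE_8PIN_W + PCIE_SLOT_W   # 225W total deliverable
headroom = available - TDP_W            # margin left for overclocking

print(f"Available: {available}W, TDP: {TDP_W}W, headroom: {headroom}W")
```

That 65W of slack is what gives this card its overclocking margin.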

Finally, it is worth highlighting the aluminum backplate, which improves cooling slightly and prevents the PCB from bending.

RTX 2060 Founders Edition 14

Specifications

  • Silicon: TU106
  • CUDA Cores: 1,920
  • Tensor Cores: 240
  • RT Cores: 30
  • Base frequency: 1,365MHz
  • Boost frequency: 1,680MHz
  • Memory: 6GB GDDR6 @ 14Gbps
  • Bandwidth: 336GB/s (192-bit bus)
  • Power: 1x 8-pin PCIe
  • Video outputs
    • 2x DisplayPort 1.4 (8K@60Hz)
    • 1x HDMI 2.0b (4K@60Hz)
    • DVI-D
    • USB Type-C
  • TDP: 160W

NVIDIA HDR

NVIDIA HDR technology is a qualitative rather than a quantitative improvement in the gaming experience. Higher framerates and higher resolutions are known quantities: higher resolution allows for more detail, while higher framerates mean smoother gameplay and video. Variable refresh rate technology soon followed, solving the V-Sync input-lag dilemma, though it too took time to get to where it is now.

For gaming displays, HDR was substantially different from adding graphical detail or smoothing gameplay and playback, because it meant a new dimension: more possible colors, brighter whites and darker blacks in games. Because HDR required support across the entire graphics chain, as well as a high-quality HDR monitor and suitable content to take full advantage of, it was more difficult to show off. Alongside the other aspects of high-end gaming graphics, and pending further development of VR, this was the future of GPUs.

NVIDIA G-SYNC HDR

But now NVIDIA is changing course, going back to the fundamental way computer graphics are modeled in games today. In the most realistic rendering processes, light can be emulated as rays emitted from their respective sources, but calculating even a subset of those rays and their interactions (reflection, refraction, etc.) in a bounded space is so intensive that real-time rendering was impossible. So instead, to get the performance needed for real-time rendering, rasterization essentially reduces 3D objects to 2D representations to simplify the calculations, significantly faking the behavior of light.

Using real-time ray-traced effects in games may require sacrificing some or all of three parameters: high resolution, high framerates and HDR. HDR is limited by game support more than anything else, but the first two have minimum standards when it comes to modern high-end PC gaming. Anything below 1080p is simply unacceptable, and anything below 30fps, or more realistically 45 to 60fps, hurts gameplay. Variable refresh rates can mitigate the latter, and frame drops are temporary, but low resolution is forever.

RTX 2060 Founders Edition 10

NVIDIA Ray Tracing

NVIDIA's big vision for real-time hybrid ray-traced graphics meant significant architectural investments in its future GPUs. The very nature of the operations required for ray tracing means they are not particularly well suited to traditional SIMT execution, and while this does not prevent ray tracing through traditional GPU compute, it does make it relatively inefficient. As a result, many of Turing's architectural changes have been devoted to solving the ray tracing problem, some of them exclusively so.

For ray tracing, Turing introduces two new types of hardware units that were not present in its Pascal predecessor: RT Cores and Tensor Cores. The former is pretty much what it says on the tin, with the RT Cores accelerating the ray tracing process and the new algorithms involved in it. The Tensor Cores, meanwhile, are not technically part of the ray tracing process itself, but play a key role in making ray-traced rendering feasible, along with powering other features introduced with the GeForce RTX series.

RT Cores Bringing NVIDIA Ray Tracing to Life

The RT Cores are NVIDIA's biggest innovation, yet they are also the piece of the puzzle NVIDIA likes to talk about the least. The company is not revealing the inner workings of these cores or how they have made such complex calculations so 'easy'.

nvidia ray tracing

The RT Cores can essentially be thought of as fixed-function blocks specifically designed to speed up Bounding Volume Hierarchy (BVH) searches. A BVH is a tree-like structure used to store polygon information for ray tracing, and it is used here because it is an innately efficient means of testing ray intersections. Specifically, by continually subdividing a scene into smaller and smaller bounding boxes, it is possible to identify the polygons a ray intersects in just a fraction of the time it would take to test all the polygons.

NVIDIA's RT Cores implement a highly optimized version of this process. That is precisely where NVIDIA's secret sauce lies (in particular, how NVIDIA determined the best BVH variation for hardware acceleration), but in the end the RT Cores are very specifically designed to accelerate this process. The end product is a pair of distinct hardware blocks that constantly iterate through bounding-box and polygon checks, testing intersections at a rate of billions of rays per second and many times that number in individual tests.
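To make the BVH idea concrete, here is a minimal software sketch, not NVIDIA's actual (undisclosed) implementation: a ray is tested against bounding boxes first, and only the triangles inside boxes the ray actually hits become candidates for exact intersection tests.

```python
# Toy BVH traversal: prune whole boxes the ray misses, collect the
# triangles inside boxes it hits. The RT Cores accelerate this kind of
# box/triangle checking in hardware.

def ray_hits_box(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray intersect the axis-aligned bounding box?"""
    t_near, t_far = -float("inf"), float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0

class BVHNode:
    def __init__(self, box_min, box_max, children=(), triangles=()):
        self.box_min, self.box_max = box_min, box_max
        self.children = children      # inner node: smaller boxes inside
        self.triangles = triangles    # leaf node: actual geometry

def traverse(node, origin, inv_dir, hits):
    if not ray_hits_box(origin, inv_dir, node.box_min, node.box_max):
        return  # prune: skip every triangle inside this box
    if node.triangles:
        hits.extend(node.triangles)   # candidates for exact tests
    for child in node.children:
        traverse(child, origin, inv_dir, hits)
```

The pruning in `traverse` is what makes the structure efficient: a ray that misses a box never touches the polygons inside it.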

nvidia ray tracing

NVIDIA claims that the most powerful Turing parts, based on the TU102 GPU, can handle more than 10 billion ray intersections per second (10 GigaRays/second), ten times what Pascal can do following the same process on its shaders.

Raw power for real-time light ray tracing

NVIDIA hasn't said how large an individual RT Core is, but they are believed to be quite big. Turing implements only one RT Core per SM, which means that even the hulking TU102 GPU that powers the RTX 2080 Ti has only 72 of these units for ray tracing. Also, since the RT Cores are part of the SM, they are closely tied to the SMs in terms of performance and core count. As NVIDIA scales Turing down to smaller GPUs using fewer SMs, the number of RT Cores and the resulting ray tracing performance drop with it as well, so the ratio of RT Cores to SM resources always stays the same (even though chip designs may differ elsewhere).
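The per-SM ratios described here (one RT Core and, as noted later, eight Tensor Cores per SM) make the core counts a simple multiplication; the SM counts below are the full TU102's 72 and the 30 implied by this review's RTX 2060 specs.

```python
# Turing couples RT Cores and Tensor Cores to the SM, so both scale
# directly with the SM count of each chip configuration.
RT_CORES_PER_SM = 1
TENSOR_CORES_PER_SM = 8

for name, sms in (("TU102 (full)", 72), ("RTX 2060", 30)):
    print(f"{name}: {sms * RT_CORES_PER_SM} RT Cores, "
          f"{sms * TENSOR_CORES_PER_SM} Tensor Cores")
```

For the RTX 2060 this reproduces the 30 RT Cores and 240 Tensor Cores from the specifications list.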

nvidia ray tracing

Along with developing a means of testing ray intersections more efficiently, the other part of NVIDIA's formula for ray tracing success is minimizing the amount of work required. NVIDIA's RT Cores are comparatively fast, but even so, ray intersection tests are still moderately expensive. For this, NVIDIA has turned to its Tensor Cores to carry them the rest of the way, allowing a moderate number of rays to remain sufficient for high-quality images.

Ray tracing would normally need to cast many rays from each and every pixel on the screen. This is necessary because a large number of rays per pixel is required to produce the "clean" look of a fully rendered image. Testing too few rays, conversely, produces a "noisy" image with significant discontinuity between pixels, because not enough rays have been cast to resolve the finer details. But since NVIDIA cannot test that many rays in real time, it does its best and fakes the rest, using neural networks to clean up an image and make it appear more detailed than it actually is.
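The noise-versus-ray-count tradeoff can be illustrated with a toy Monte Carlo experiment (not actual rendering code): each ray is a random sample of the pixel's true brightness, and the estimate only converges as the sample count grows.

```python
# Toy illustration of why few rays per pixel give a "noisy" image:
# averaging more random samples brings the estimate closer to the truth.
import random

TRUE_VALUE = 0.5  # the "correct" brightness of the pixel

def estimate_pixel(num_rays):
    # each ray returns a noisy sample around the true value
    samples = [TRUE_VALUE + random.uniform(-0.4, 0.4)
               for _ in range(num_rays)]
    return sum(samples) / len(samples)

random.seed(42)
for rays in (1, 4, 64, 1024):
    error = abs(estimate_pixel(rays) - TRUE_VALUE)
    print(f"{rays:5d} rays per pixel -> error {error:.4f}")
```

The error shrinks as rays are added, which is exactly the budget real-time ray tracing cannot afford, hence the denoising described next.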

nvidia ray tracing

NVIDIA Tensor Cores to improve performance

For this, NVIDIA relies on its Tensor Cores. First introduced in the Volta architecture, which was exclusive to NVIDIA's server products, these can be thought of as CUDA cores on steroids. Fundamentally, each is a much larger collection of ALUs within a single core, with much of their flexibility removed: instead of a highly flexible CUDA core, you get a massive matrix-multiplication machine that is incredibly optimized to process thousands of values at once (in what is called a tensor operation). Turing's Tensor Cores, in turn, double down on what Volta started by supporting newer, lower-precision modes than the original, which in certain cases can offer even better performance while retaining sufficient precision.
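The primitive a Tensor Core accelerates is a fused matrix multiply-accumulate, D = A×B + C. This pure-Python sketch only shows the operation itself; the hardware performs many such products per clock at low precision.

```python
# The tensor operation at the heart of a Tensor Core: multiply two
# matrices and add an accumulator matrix, all in one step.

def matmul_accumulate(A, B, C):
    """Compute D = A @ B + C for small matrices given as lists of rows."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [
        [sum(A[i][k] * B[k][j] for k in range(inner)) + C[i][j]
         for j in range(cols)]
        for i in range(rows)
    ]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[0.5, 0.5], [0.5, 0.5]]
print(matmul_accumulate(A, B, C))  # → [[19.5, 22.5], [43.5, 50.5]]
```

Neural-network inference is dominated by exactly this kind of operation, which is why the next section's denoising networks map so well onto these cores.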

nvidia ray tracing

As for how this applies to ray tracing, the strength of the Tensor Cores is that tensor operations map extremely well to neural-network inference. This means NVIDIA can use the cores to run neural networks that perform additional rendering tasks. In this case, a neural-network denoising filter cleans up the ray-traced image in a fraction of the time (and with a fraction of the resources) it would take to test the required number of rays.

Artificial Intelligence to improve graphic quality

The denoising filter itself is essentially an image-cleanup filter on steroids, and it can (usually) produce an image of similar quality to brute-force ray tracing by algorithmically guessing what details should be present among the noise. However, getting it to work well means it needs to be trained, so it is not a generic solution. Rather, developers need to take part in the process, training a neural network on fully rendered, high-quality images of their game.

In total there are 8 Tensor Cores in each SM, so, like the RT Cores, they are tightly coupled to NVIDIA's individual processor blocks, and Tensor performance likewise drops on GPUs with fewer SMs. This way NVIDIA always has the same ratio of Tensor Cores to RT Cores to clean up what the RT Cores produce.

nvidia

Deep Learning Super Sampling (DLSS)

The Tensor Cores are not fixed-function hardware in the traditional sense: they are quite rigid in their abilities, but they are programmable. For its part, NVIDIA wants to see how many different fields and tasks its extensive neural-network and AI hardware can be applied to.

Games, of course, do not fall under the umbrella of traditional neural-network tasks, as these networks lean toward consuming and analyzing images rather than creating them. However, in addition to denoising the output of the RT Cores, NVIDIA's other great gaming use case for its Tensor Cores is what it calls Deep Learning Super Sampling (DLSS).

DLSS follows the same principle as denoising, processing an image to clean it up, but instead of removing noise it restores detail: specifically, it approximates the image-quality benefits of anti-aliasing. It is an indirect way of rendering at a higher resolution without the high cost of actually doing the work. When all goes well, according to NVIDIA, the result is an image comparable to an anti-aliased one without the high cost.

nvidia ray tracing dlss

New filter, more powerful

How this works is partly up to developers, because they decide how much work they want to do with normal rendering versus DLSS upscaling. In the standard mode, DLSS renders at a lower input sample count and then infers a result which, at the target resolution, is of similar quality to a Temporal Anti-Aliasing (TAA) result. There is also a 2X DLSS mode, in which the input is rendered at the final target resolution and then combined with a larger DLSS network. You could argue that TAA is not a very high bar to set, but NVIDIA has set out to address some of TAA's traditional shortcomings with DLSS, notably blurriness.
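A back-of-envelope calculation shows why the standard mode saves so much work: shading cost scales roughly with pixel count, so rendering at a lower internal resolution and inferring the target resolution shades far fewer pixels. The 1440p internal resolution for a 4K target below is an illustrative assumption, not a documented DLSS ratio.

```python
# Pixel-count arithmetic behind DLSS-style upscaling.
target = (3840, 2160)    # 4K output resolution
internal = (2560, 1440)  # hypothetical lower internal render resolution

target_px = target[0] * target[1]
internal_px = internal[0] * internal[1]
print(f"Shaded pixels: {internal_px:,} of {target_px:,} "
      f"({100 * internal_px / target_px:.0f}% of the 4K workload)")
```

Under this assumption, only about 44% of the 4K pixels are actually shaded; the Tensor Cores infer the rest.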

Keep in mind that DLSS has to be trained for each game; it is not a universal solution. This is done in order to apply a unique neural net appropriate to the game in question. In this case, the networks are trained on 64x SSAA images, giving them a very high-quality baseline to work from.

Of NVIDIA's two major use cases for the Tensor Cores, DLSS is by far the easiest to implement. Developers only have to do the basic work of adding NVIDIA NGX API calls to a game (essentially adding DLSS as a post-processing pass) and NVIDIA takes care of the rest, including the neural-network training. So DLSS support will be out of the gate very quickly, while meaningful use of ray tracing will take much longer.

nvidia-dlss

Benchmark

Conclusion

Turing architecture graphics cards are overpriced, that is undeniable. Leaving that aside, the NVIDIA RTX 2060 Founders Edition is a great market solution. It lets us enjoy ray tracing, with graphical limitations, it is true, but it lets us do it. It is ideal for 1440p resolutions, while for 4K we would need a more powerful model.

The most interesting aspect of these new graphics cards is that they abandon the blower cooler in favor of a dual-fan system that significantly lowers GPU temperatures. In addition, it gives us a good margin for overclocking, allowing the final performance to be improved.

As for ray tracing, it is still a minority technology in games, although it is the future, and triple-A games are clearly moving toward it. Cyberpunk 2077 is set to be the first game to fully exploit the capabilities of this technology. Today's games implement only some aspects of it, since implementing them all would be too demanding.

The NVIDIA GeForce RTX 2060 receives the HardwareSfera Gold Medal.

Robert Sole

Director of Contents and Writing of this same website, technician in renewable energy generation systems and low voltage electrical technician. I work in front of a PC, in my free time I am in front of a PC and when I leave the house I am glued to the screen of my smartphone. Every morning when I wake up I walk across the Stargate to make some coffee and start watching YouTube videos. I once saw a dragon ... or was it a Dragonite?
