Review: ASUS Republic of Gamers Strix RTX 2080 Ti OC graphics card

We reviewed the ASUS Republic of Gamers Strix RTX 2080 Ti OC graphics card, an impressive gaming solution with support for ray tracing and DLSS.
NVIDIA's new generation of Turing graphics cards was developed to take gaming to a new level, and the NVIDIA GeForce RTX 2080 Ti sits at the top of the range. The GPU is excellent on its own, but ASUS has given it a twist. Here we have the ASUS Republic of Gamers Strix RTX 2080 Ti OC, a true monster of a graphics card, fitted with a powerful heatsink that keeps its temperature well under control.
Features of the ASUS Republic of Gamers Strix RTX 2080 Ti OC
The TU102 silicon in the NVIDIA RTX 2080 Ti features a total of 4352 CUDA Cores, 544 Tensor Cores, and 68 RT Cores. This ASUS model runs at a base frequency of 1350 MHz and boosts to 1650 MHz in Gaming mode. An OC mode keeps the same base frequency but raises the boost clock to 1665 MHz. The GPU is paired with 11 GB of GDDR6 memory running at 14 Gbps effective.
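As a rough sanity check on those figures, peak single-precision throughput can be estimated from the CUDA core count and the boost clock, counting two floating-point operations per core per clock for a fused multiply-add. The short sketch below (our own back-of-the-envelope calculation, not an ASUS or NVIDIA figure) applies that textbook formula to the clocks quoted above; the result is a theoretical peak, not a measured benchmark.

```python
# Theoretical FP32 throughput estimate: CUDA cores * clock * 2 FLOPs (one FMA) per clock.
CUDA_CORES = 4352
BOOST_CLOCK_GAMING_HZ = 1650e6   # Gaming mode boost clock
BOOST_CLOCK_OC_HZ = 1665e6       # OC mode boost clock

def peak_tflops(cores: int, clock_hz: float) -> float:
    """Peak FP32 TFLOPS assuming one fused multiply-add (2 FLOPs) per core per clock."""
    return cores * clock_hz * 2 / 1e12

print(f"Gaming mode: {peak_tflops(CUDA_CORES, BOOST_CLOCK_GAMING_HZ):.1f} TFLOPS")  # ~14.4
print(f"OC mode:     {peak_tflops(CUDA_CORES, BOOST_CLOCK_OC_HZ):.1f} TFLOPS")      # ~14.5
```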
Regarding connectivity, this graphics card has two HDMI 2.0b connectors, two DisplayPort 1.4 connectors (both HDCP 2.2 compliant), and a USB Type-C port. The card also includes an NVLink connector, and for power it uses two 8-pin PCIe connectors.
The heatsink also incorporates the Aura Sync RGB lighting system, which lets you synchronize the lighting of all the ASUS components in your system. A Dual BIOS has also been implemented so the card can be restored to its factory parameters if an overclock or a BIOS update goes wrong.
ASUS Republic of Gamers Strix RTX 2080 Ti OC cooling system
ASUS has fitted this graphics card with a large heatsink to keep the GPU properly cooled. The cooler is 2.7 PCIe slots thick, and its surface area has been increased by 20% compared with the Pascal-series cooler, which improves dissipation and leaves more headroom for overclocking.
Cooling is handled by a triple axial-fan system. The fans have been improved with a reduced outer rim that allows longer blades, which increases air pressure.
Thanks to the large heatsink, the card also features 0dB technology: the fans stay completely stopped until the graphics card reaches a temperature of 55 °C.
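To illustrate how a 0dB mode behaves, here is a minimal sketch of such a fan curve. The 55 °C cutoff comes from the paragraph above; the ramp values beyond it are hypothetical, chosen only for illustration, and do not represent ASUS's actual fan curve.

```python
# Minimal sketch of a 0dB-style fan curve: fans stay off below a threshold
# temperature, then ramp up with GPU temperature.
ZERO_DB_THRESHOLD_C = 55.0   # threshold from the review

def fan_duty_percent(gpu_temp_c: float) -> float:
    if gpu_temp_c < ZERO_DB_THRESHOLD_C:
        return 0.0  # fans fully stopped (0dB mode)
    # Hypothetical linear ramp from 30% duty at 55 C to 100% at 85 C.
    ramp = (gpu_temp_c - ZERO_DB_THRESHOLD_C) / (85.0 - ZERO_DB_THRESHOLD_C)
    return min(100.0, 30.0 + 70.0 * ramp)

for temp in (40, 55, 70, 85):
    print(f"{temp} C -> {fan_duty_percent(temp):.0f}% fan duty")
```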
The heatsink also stands out for its MaxContact technology, which gives it an improved contact surface with the GPU and therefore better thermal transfer.
Special Technologies of the ASUS Republic of Gamers Strix RTX 2080 Ti OC
ASUS has also developed a new production system for its components: Auto-Extreme Technology, which automates the manufacturing process of the graphics card. Essentially, it reduces manufacturing time by using a single-pass soldering process.
This single-pass soldering process reduces heat stress on the components and uses fewer chemicals. That in turn lowers the environmental impact, reduces the energy required for manufacturing, and results in a more reliable product.
ASUS engineers have also developed a reinforcement system that gives the chassis more rigidity: beyond the backplate, a stiffening frame has been added to prevent the graphics card from bending and suffering damage.
NVIDIA HDR technology
NVIDIA HDR technology is a qualitative rather than a quantitative improvement in the gaming experience. Higher framerates and higher resolutions are known quantities: higher resolution means more potential detail, while higher framerates mean smoother gameplay and video. Variable refresh rate technology soon followed, solving the V-Sync input lag dilemma, though it too took time to get to where it is now.
For gaming displays, HDR was substantially different from adding graphical detail or enabling smoother gameplay and playback, because it added a new dimension of "more possible colors" and "brighter whites and darker blacks" to games. Because HDR required support across the entire graphics chain, as well as a high-quality HDR monitor and content to take full advantage of it, it was harder to show off. Together with the other aspects of high-end gaming graphics, and pending further development of VR, this looked like the future of GPUs.
But now NVIDIA is changing course, going after the fundamental way computer graphics are modeled in games today. In the most realistic rendering approaches, light can be simulated as rays emitted from their respective sources, but calculating even a subset of those rays and their interactions (reflection, refraction, etc.) in a bounded space is so intensive that real-time rendering has been impossible. To get the performance needed to render in real time, rasterization instead reduces 3D objects to 2D representations to simplify the calculations, significantly falsifying the behavior of light.
Using real-time ray tracing effects in games may require sacrificing some or all of three parameters: high resolution, very high framerates, and HDR. HDR is limited by game support more than anything else, but the first two have minimum standards for modern high-end PC gaming. Anything below 1080p is unacceptable, and anything below 30 fps, or more realistically 45 to 60 fps, hurts gameplay. Variable refresh rates can mitigate the latter, and frame drops are temporary, but low resolution is forever.
NVIDIA Ray Tracing Technology
NVIDIA's big vision for real-time hybrid ray tracing graphics meant significant architectural investments in its future GPUs. The operations required for ray tracing are not particularly well suited to traditional SIMT execution; that does not prevent ray tracing via compute on a traditional GPU, but it does make it relatively inefficient. As a result, many of the architectural changes in Turing have been devoted to solving the ray tracing problem, some of them exclusively.
For ray tracing, Turing introduces two new types of hardware units that were not present in its Pascal predecessor: RT Cores and Tensor Cores. The former are pretty much what the name says on the packaging: the RT Cores speed up the ray tracing process and the new algorithms involved in it. The Tensor Cores, meanwhile, are not technically related to the ray tracing process itself, but they play a key role in making ray-traced rendering feasible, as well as powering other features introduced with the GeForce RTX series.
RT Cores are NVIDIA's biggest innovation here, but they are also the piece of the puzzle NVIDIA likes to talk about the least. The company has not revealed the fundamentals of how these cores work, or how they manage to make such complex calculations look so 'easy'.
RT Cores can essentially be thought of as a fixed-function block designed specifically to accelerate Bounding Volume Hierarchy (BVH) searches. A BVH is a tree-like structure used to store polygon information for ray tracing, and it is used here because it is an inherently efficient way of testing ray intersections. By continually subdividing a scene into smaller and smaller bounding boxes, the polygons a ray intersects can be identified in a fraction of the time it would take to test every polygon.
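To make the idea concrete, the sketch below shows the classic "slab" test of a ray against an axis-aligned bounding box, which is the basic operation a BVH traversal repeats at every node before descending into its children. It is a generic textbook illustration, not NVIDIA's hardware implementation.

```python
# Ray vs. axis-aligned bounding box (AABB) "slab" test: the basic check a BVH
# traversal performs at every node of the tree.
def ray_hits_aabb(origin, inv_dir, box_min, box_max) -> bool:
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))   # latest entry across the three slabs
        t_far = min(t_far, max(t1, t2))     # earliest exit across the three slabs
    return t_near <= t_far

# Example: a ray along +X starting near the origin against a unit box at x = 5..6.
origin = (0.0, 0.5, 0.5)
direction = (1.0, 0.0, 0.0)
inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
print(ray_hits_aabb(origin, inv_dir, (5.0, 0.0, 0.0), (6.0, 1.0, 1.0)))  # True
```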
NVIDIA's RT Cores implement a hyper-optimized version of this process. Exactly how is NVIDIA's secret sauce (in particular, how NVIDIA determined the best variation of BVH for hardware acceleration), but in the end the RT Cores are very specifically designed to speed this process up. The end product is a pair of distinct hardware blocks that constantly iterate through bounding-box and polygon checks, testing intersections at a rate of billions of rays per second and many times that number of individual tests.
NVIDIA claims that the most powerful Turing parts, based on the TU102 GPU, can handle more than 10 billion ray intersections per second (10 GigaRays/second), roughly ten times what Pascal can do running the same process on its shaders.
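To put that 10 GigaRays/second figure in perspective, a quick back-of-the-envelope calculation (ours, not NVIDIA's) shows how few rays per pixel that budget leaves at common resolutions and framerates, assuming the entire budget were available for shading, which in practice it is not.

```python
# Back-of-the-envelope ray budget: rays per pixel per frame at a given
# resolution and framerate, assuming all 10 GigaRays/s go to pixel shading.
RAYS_PER_SECOND = 10e9

def rays_per_pixel(width: int, height: int, fps: int) -> float:
    return RAYS_PER_SECOND / (width * height * fps)

print(f"1080p @ 60 fps: {rays_per_pixel(1920, 1080, 60):.0f} rays/pixel")  # ~80
print(f"1440p @ 60 fps: {rays_per_pixel(2560, 1440, 60):.0f} rays/pixel")  # ~45
print(f"2160p @ 60 fps: {rays_per_pixel(3840, 2160, 60):.0f} rays/pixel")  # ~20
```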
NVIDIA hasn't said how big an individual RT Core is, but they are believed to be quite large. Turing implements only one RT Core per SM, which means that even the hulking TU102 GPU at the heart of the RTX 2080 Ti tops out at 72 RT Cores (68 of them enabled on this card). And since the RT Cores are part of the SM, they are closely tied to SM count and performance: as NVIDIA scales Turing down to smaller GPUs with fewer SMs, the number of RT Cores and the resulting ray tracing performance drop with it, so the ratio of RT Cores to SM resources always stays the same (even though chip designs could, in principle, do otherwise).
Along with developing a means to test ray intersections more efficiently, the other part of NVIDIA's formula for ray tracing success is to minimize the amount of work required. Its RT Cores are comparatively fast, but ray intersection tests are still moderately expensive, so NVIDIA has turned to its Tensor Cores to carry things the rest of the way, allowing a moderate number of rays to remain sufficient for high-quality images.
Ray tracing would normally require many rays to be cast from each and every pixel on the display, because a large number of rays per pixel is needed to produce the "clean" look of a fully rendered image. Conversely, if too few rays are tested, you end up with a "noisy" image with significant discontinuity between pixels, because not enough rays were cast to resolve the finer details. Since NVIDIA can't test that many rays in real time, it does the best it can and fakes the rest, using neural networks to clean up the image and make it look more detailed than it really is.
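The "noisy image" problem is really just Monte Carlo variance: with only a few samples per pixel, the estimate of a pixel's brightness jitters around the true value, and the jitter shrinks as more samples are added. The toy sketch below (not renderer code, and with an arbitrary made-up "true" brightness) makes that convergence visible.

```python
import random

# Toy illustration of Monte Carlo noise: estimate a pixel's brightness by
# averaging random samples of its incoming light. Few samples -> noisy
# estimate; many samples -> the estimate converges toward the true value.
random.seed(42)
TRUE_BRIGHTNESS = 0.6  # hypothetical ground-truth value for this pixel

def estimate_pixel(samples: int) -> float:
    # Each sample is a noisy observation whose expected value is the true brightness.
    return sum(random.uniform(0.0, 2.0 * TRUE_BRIGHTNESS) for _ in range(samples)) / samples

for n in (1, 4, 64, 4096):
    print(f"{n:>4} samples/pixel -> estimate {estimate_pixel(n):.3f} (true value {TRUE_BRIGHTNESS})")
```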
This is where NVIDIA relies on Tensor Cores. These cores were first introduced in the Volta architecture, which was exclusive to NVIDIA's server products, and can be thought of as a CUDA core on steroids. They are fundamentally a much larger collection of ALUs within a single core, with much of their flexibility removed: instead of a highly flexible CUDA core, you get a huge matrix-multiplication machine that is incredibly optimized to process thousands of values at once (in what is called a tensor operation). Turing's Tensor Cores, in turn, double down on what Volta started by supporting newer, lower-precision modes than the original, which in certain cases can offer even better performance while retaining sufficient precision.
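The fundamental operation behind a Tensor Core is a small fused matrix multiply-accumulate, D = A x B + C, on low-precision tiles, with the accumulation carried out at higher precision. The NumPy sketch below only expresses that arithmetic on the CPU to show the shape of the computation; it says nothing about how the hardware actually schedules it.

```python
import numpy as np

# The basic tensor operation: D = A @ B + C on a small tile, with A and B in
# low precision (FP16 here) and accumulation in higher precision (FP32).
# This is the math only, not how a Tensor Core executes it.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C  # FP16 inputs, FP32 accumulate
print(D.shape)  # (4, 4)
```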
As for how this applies to Ray Tracing, the strength of Tensor Cores is that tensor operations map extremely well to neural network inference. This means that NVIDIA can use the cores to run neural networks that will perform additional rendering tasks. In this case, a neural network denoising filter is used to clean up the ray-traced image in a fraction of the time (and with a fraction of the resources) it takes to test the required number of rays.
The denoising filter itself is essentially an image resizing filter on steroids, and it can (usually) produce an image of similar quality to brute-force ray tracing by algorithmically guessing what details should be present among the noise. However, getting it to work well means the network needs to be trained, so it is not a generic solution. Developers need to take part in the process, training a neural network on fully rendered, high-quality images of their game.
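As a crude stand-in for what a denoiser does conceptually, the sketch below averages each pixel with its neighbours to suppress per-pixel noise. NVIDIA's denoiser is a trained neural network, not a 3x3 box filter; this example only conveys the intuition of trading isolated noisy samples for a smoother local estimate.

```python
# Naive denoising stand-in: average each pixel with its 3x3 neighbourhood.
# A learned denoiser is far smarter about preserving detail; this is only
# meant to show the basic idea of smoothing away per-pixel noise.
def box_denoise(image):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbours = [image[j][i]
                          for j in range(max(0, y - 1), min(h, y + 2))
                          for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(neighbours) / len(neighbours)
    return out

noisy = [[0.9, 0.1, 0.8],
         [0.2, 0.7, 0.3],
         [0.8, 0.2, 0.9]]
print(box_denoise(noisy)[1][1])  # centre pixel pulled toward the local average (~0.54)
```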
In total there are 8 Tensor Cores in each SM, so like the RT Cores they are tightly coupled to NVIDIA's individual processor blocks. This also means that Tensor Core performance is reduced on GPUs with fewer SMs, so NVIDIA always has the same ratio of Tensor Cores to RT Cores to handle the rough output the RT Cores spit out.

NVIDIA Deep Learning Super Sampling (DLSS) technology
Tensor Cores are not fixed-function hardware in the traditional sense. They are fairly rigid in their abilities, but they are programmable, and NVIDIA wants to see how many different fields and tasks its extensive neural network and AI hardware can be applied to.
Games, of course, do not fall under the umbrella of traditional neural network tasks, as these networks lean toward consuming and analyzing images rather than creating them. However, in addition to denoising the output of the RT Cores, NVIDIA's other big gaming use case for its Tensor Cores is what it calls Deep Learning Super Sampling (DLSS).
DLSS follows the same principle as denoising: post-processing to clean up an image. But instead of removing noise, it is about restoring detail; specifically, approximating the image-quality benefits of anti-aliasing. It is an indirect way of rendering at a higher resolution without the high cost of actually doing so. When all goes well, according to NVIDIA, the result is an image comparable to an anti-aliased image without the high cost.
How this works is partly up to developers, because they decide how much work they want to do with normal rendering versus DLSS scaling. In the standard mode, DLSS renders at a lower input sample count and then infers a result that, at the target resolution, is of similar quality to a Temporal Anti-Aliasing (TAA) result. There is also a DLSS 2X mode, in which the input is rendered at the target's final resolution and then combined with a larger DLSS network. You could argue that TAA is not a very high bar to set, but NVIDIA has also set out to address some of TAA's traditional shortcomings with DLSS, notably blurriness.
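A simple way to see the difference between the two modes is in the number of pixels the GPU has to shade per frame. In the sketch below the standard-mode input resolution is only an assumption for illustration (the real input resolution is chosen by NVIDIA per game); DLSS 2X, by contrast, shades at the full target resolution.

```python
# Rough pixel-count comparison of per-frame shading work for a 4K target.
# The 1440p input for standard DLSS is an assumed figure for illustration only.
def pixels(width: int, height: int) -> int:
    return width * height

target_4k = pixels(3840, 2160)       # final output resolution
dlss_std_input = pixels(2560, 1440)  # assumed lower-resolution input (standard DLSS)
dlss_2x_input = target_4k            # DLSS 2X renders at the target resolution

print(f"Native 4K shaded pixels: {target_4k:,}")
print(f"Standard DLSS input:     {dlss_std_input:,} (~{dlss_std_input / target_4k:.0%} of native)")
print(f"DLSS 2X input:           {dlss_2x_input:,} (same as native, quality-focused)")
```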
Keep in mind that DLSS has to be trained for each game; it is not a universal solution. This is done so that a unique neural net appropriate for the game in question can be applied. The networks are trained using 64x SSAA images, giving them a very high-quality baseline to work from.
Of NVIDIA's two big use cases for Tensor Cores, DLSS is by far the easier to implement. Developers only have to do the basic work of adding NVIDIA's NGX API calls to a game (essentially adding DLSS as a post-processing phase) and NVIDIA takes care of the rest, including the neural network training. So DLSS support should arrive quickly, while ray tracing (and especially meaningful ray tracing) will take much longer.
ASUS Republic of Gamers Strix RTX 2080 Ti OC Benchmark
Conclusion
The ASUS Republic of Gamers Strix RTX 2080 Ti OC is an excellent graphics card, and given its performance we can find little to fault. Perhaps the biggest caveat is that we have not been able to test its full potential, since only one game currently supports ray tracing and DLSS and we do not have it. Beyond that, this card offers unique, improved features compared with previous generations, and the technical improvements implemented go a long way toward delivering better performance.