So, it's been about a month since the launch of the RTX 4090, and now it's time for the world's second most powerful gaming graphics card to hit the market. Enter the RTX 4080, the sole version bearing this name, equipped with the AD103 GPU and 16GB of GDDR6X VRAM. The 12GB version has been scrapped.
There's a saying in the West about the 'elephant in the room': the thing that can't be ignored when discussing an issue. In the case of the RTX 4080, that's the $1,199 price tag Nvidia has slapped on this product. Compared to the $699 launch price of the RTX 2080 and RTX 3080, that's a hike of roughly 70%, a significant sum at a time when not everyone is willing to shell out that much for a single PC component, no matter how powerful it may be. In Vietnam, as usual, Nvidia doesn't sell the RTX 4080 Founders Edition, while models from partner OEMs hover around 40 million Vietnamese Dong.
So, does the RTX 4080's performance justify that price tag? Hold that thought, because we'll definitely get to it.
At the heart of the RTX 4080 lies the AD103 GPU with 9,728 CUDA cores, about 40% fewer than the 16,384 cores of the RTX 4090 launched in mid-October. The AD103 is, however, clocked almost identically to the RTX 4090's AD102: the RTX 4080 Founders Edition runs at a 2.21 GHz base clock and boosts to 2.51 GHz during gaming, versus 2.23 and 2.53 GHz on the 4090. Custom cards from OEMs like MSI or Asus may clock higher still.
The AD103 die also carries the latest technologies Nvidia has developed for the Ada Lovelace architecture: 3rd-generation Ray Tracing cores, 4th-generation Tensor cores, and a new-generation Optical Flow Accelerator (OFA) processing cluster, powerful enough to support the Frame Generation feature integrated into the DLSS 3 image-enhancement technology. Feeding the GPU is 16GB of GDDR6X graphics memory on a 256-bit bus, good for 716.8 GB/s of bandwidth when loading graphics texture data.
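That bandwidth figure is easy to verify: it's simply the per-pin data rate multiplied by the bus width. A minimal check, assuming the RTX 4080's GDDR6X runs at its rated 22.4 Gbps per pin:

```python
# Back-of-the-envelope check of the RTX 4080's memory bandwidth.
# Assumes the rated GDDR6X effective data rate of 22.4 Gbps per pin.
data_rate_gbps = 22.4  # effective data rate per pin, in gigabits per second
bus_width_bits = 256   # width of the memory bus, in bits

bandwidth_gb_per_s = data_rate_gbps * bus_width_bits / 8  # bits -> bytes
print(f"{bandwidth_gb_per_s:.1f} GB/s")  # -> 716.8 GB/s
```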
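Nvidia hasn't published the internals of the Frame Generation pipeline that the next few paragraphs describe, but conceptually it boils down to a loop like the hypothetical sketch below. Every function here is an invented stand-in for illustration, not Nvidia's actual API:

```python
# Conceptual sketch of the DLSS 3 Frame Generation loop described below.
# Everything here is a hypothetical stand-in, not Nvidia's real interface;
# the "frames" are plain strings so the control flow can actually run.

def upscale_on_tensor_cores(raw, motion_vectors):
    return f"upscaled({raw})"       # stands in for DLSS 2.x-style upscaling

def estimate_optical_flow(prev, curr):
    return f"flow({prev}->{curr})"  # stands in for the OFA's motion analysis

def generate_intermediate_frame(prev, curr, flow):
    return f"generated({flow})"     # brand-new frame, with no CPU draw calls

def render_loop(frames_to_render=3):
    previous = None
    for i in range(frames_to_render):
        raw = f"frame{i}"  # rendered by the GPU's SMs at internal resolution
        current = upscale_on_tensor_cores(raw, motion_vectors=None)
        if previous is not None:
            flow = estimate_optical_flow(previous, current)
            # The generated frame is displayed between two rendered frames:
            print(generate_intermediate_frame(previous, current, flow))
        print(current)
        previous = current

render_loop()
```

The point of the sketch is the structure: one generated frame slots in between every pair of rendered frames, which is how the displayed frame rate can roughly double.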
Nvidia combines the experience accumulated over the previous two DLSS generations with the new-generation Optical Flow Accelerator on the Ada Lovelace GPU. A frame rendered at native resolution is sent simultaneously to both processing clusters, the Tensor cores and the OFA. The Tensor cores handle image upscaling much like DLSS 2.3 did, then pass the processed frame to the OFA, which analyzes motion vector data to generate an entirely new frame from the available information, discerning which pixels represent objects, which are shadows, and where light falls.

In other words, for every two frames rendered by the GPU's SMs and upscaled by the Tensor cores, a new intermediate frame is created and displayed on screen. Most notably, everything is produced by a single acceleration cluster, without burdening the CPU to draw polygonal frames before the GPU overlays objects and computes graphical effects to form a complete image. That's why DLSS 3 can alleviate CPU bottlenecks and handle games where the raw processing power of the hardware falls short of 60 FPS.

The surprise this year is that the exterior design of the RTX 4080 is hardly different from the RTX 4090's. It's still a 3-slot PCIe design with a massive cooling system for all the components soldered onto the graphics card's PCB. This contrasts sharply with the Ampere era, where the Founders Edition RTX 3090 and RTX 3080 had different designs, one a bulky 3-slot card and the other a much sleeker 2-slot, alike only in the layout of the cooling system: one fan drawing air in and one pushing it out, on opposite sides of the card.

One could tentatively guess that Nvidia didn't want to design separate cooling systems for its two most powerful cards, to save on production costs. By sharing the RTX 4090's heatsink, the Founders Edition is suddenly no longer notorious for being 'hot and noisy', especially around the GDDR6X memory chips, which ran genuinely hot on earlier FE models such as the RTX 3080 and 3080 Ti.

On the flip side, the RTX 4080's cooling system is genuinely overkill. During gaming tests I monitored the card's telemetry and found that even at 4K resolution with graphics quality maxed out and DLSS enabled for higher frame rates, the GPU temperature stayed between 55 and 65 degrees Celsius, very cool overall. At lower resolutions the fans may not even spin, because the card's BIOS keeps them off while the GPU is below 50 degrees Celsius.

That excess, ironically, puts physical strain on the gaming PC, especially on the motherboard's PCIe slot. Both the weight and size of the RTX 4080 are essentially identical to the RTX 4090's, and at around 2kg the card is heavy. Nvidia still provides two screw holes for mounting a support bracket, preventing the card from sagging and distributing its weight across the motherboard rather than resting it entirely on the PCIe slot.

Another impressive point is the RTX 4080's performance relative to its roughly 320W power draw. I tested it in a newly built system with a mid-range CPU, a Core i5-13600K instead of a 13900K, and games at 4K resolution never dropped below 60 FPS.
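For a rough sense of why this configuration works, here's a simple power-budget sketch. Only the roughly 320W GPU figure comes from the measurements above; the CPU and rest-of-system draws are ballpark assumptions for illustration:

```python
# Rough power budget for the test system described above.
# Only the GPU figure (~320W) comes from the review; the CPU and
# "everything else" draws are ballpark assumptions for illustration.
gpu_w   = 320  # RTX 4080 under a 4K gaming load (from the review)
cpu_w   = 180  # Core i5-13600K gaming draw, assumed worst case
other_w = 75   # motherboard, RAM, drives, fans (assumption)
psu_w   = 750  # rated capacity of the power supply

total_w = gpu_w + cpu_w + other_w
print(f"Estimated load: {total_w} W, {total_w / psu_w:.0%} of a {psu_w} W PSU")
# -> Estimated load: 575 W, 77% of a 750 W PSU
```

Keeping roughly a quarter of the PSU's capacity in reserve is a common rule of thumb for absorbing transient spikes.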
As for gaming at 2K resolution, performance is outstanding: in Red Dead Redemption 2, a resource-intensive game, the frame rate fluctuated between 120 and 140 FPS, making 2K 144Hz or 165Hz gaming monitors a much more reasonable pairing. The entire system ran off a 750W power supply, which was sufficient for the two most power-hungry components in the machine, according to consumption data measured with HWInfo64.

And just like the RTX 4090, the RTX 4080 comes with a 12VHPWR adapter, a 16-pin connector fed by three 8-pin plugs (150W each) from a standard ATX 2.0 power supply. If you have an ATX 3.0 power supply, a single cable can deliver up to 450W to the graphics card, keeping the system much neater and its cables easier to hide than the bulky adapter bundled with the FE version. By now it's also fair to conclude that the global PC hardware community overreacted to the cases of 12VHPWR cables overheating, melting, or shorting out. At roughly 50 reported cases out of about 120 thousand RTX 4090 cards sold, around 0.04%, the likelihood of that worst-case scenario is very low.

Another complaint when evaluating the RTX 4090 was its connectivity. For a $1,600 graphics card of that power, Nvidia should have included DisplayPort 2.0 ports, ready for future 8K 120Hz monitors and able to feed the highest-end TVs on the market with a single cable.

But that's the RTX 4090's problem, not the 4080's. DisplayPort 1.4a still provides enough bandwidth for 4K 144Hz, or 8K 60Hz with standard 12-bit HDR color. That also happens to be the RTX 4080's performance range, so you needn't worry about owning a powerful card with inadequate outputs. With the RTX 4080, you can confidently game at 4K 120Hz, with the support of DLSS 3 in supported titles.

All of that brings us back to the debate from the beginning of the article: with this level of performance, does the $1,199 price tag make the RTX 4080 decent value for money? My guess is that Nvidia set the price at this level for two reasons.

Firstly, Nvidia is confident it holds two very powerful products. It anticipated that AMD, for all the potential of the RDNA 3 architecture, would have nothing competitive with the RTX 4080 and 4090, and the price reflects that confidence. They were half right: AMD is preparing to launch its most powerful graphics card yet in December, the RX 7900 XTX, which it dares not compare to the RTX 4090, instead positioning the $999 card as a direct competitor to the RTX 4080.

Secondly, Nvidia may be applying a strategy similar to Apple's with its top-tier products: sell fewer high-end cards, but at a higher profit margin. It is betting that gamers are still willing to spend thousands of dollars on a card, just as they were during the GPU shortage of the past two years. And next year, when lower-tier products like the RTX 4070 Ti or 4060 arrive, those will be the affordable options for gamers, competing directly with AMD's products, or even Intel's.

Whether the RTX 4080's $1,199 price tag is reasonable is something gamers will answer by deciding whether or not to spend their money on Nvidia's new product.
But none of that denies one fact: the RTX 4080 is very powerful, capable of 4K gaming even when paired with a Core i5 rather than a Core i9, all wrapped in a heavy, bulky but very cool-running shell, with power consumption not significantly different from the previous generation.

Will the RTX 4080 start to look unreasonable once the RX 7900 XTX launches, given that it costs 200 USD more than AMD's card? That answer will have to wait until December 13th, two weeks away, when products built on AMD's RDNA 3 architecture hit the market.