Yuzu Devs Tear Apart NVIDIA’s GeForce RTX 4060 Ti GPU, Call It A Downgrade For Emulation Due To Cut-Down Memory Config

By: Hassan Mujtaba
Source: https://wccftech.com/yuzu-devs-tear-apart-nvidia-geforce-rtx-4060-ti-gpu-call-it-a-downgrade-for-emulation/

In a recent progress report, the developers behind the Yuzu emulator criticized NVIDIA for seriously downgrading the GeForce RTX 4060 Ti GPU.

NVIDIA GeForce RTX 4060 Ti Called A “Serious Downgrade” By Yuzu Emulator Dev Team

The NVIDIA GeForce RTX 4060 Ti launched last month to mixed reviews. Some reviewers, including us, found it to be a decent offering for its price thanks to a more future-proof feature set: DLSS 3, faster ray-tracing capabilities, extended support for streamers and content creators, and a powerful AI engine in an efficient design. Others referred to it as bad value when compared to older offerings.

The same case is being made by the Yuzu developers, who have found the NVIDIA GeForce RTX 4060 Ti to be a serious downgrade for emulation compared to the older GeForce RTX 3060 Ti. They point out that the memory configuration NVIDIA went with is sub-par and doesn't allow for a good experience when running emulated games.

The developers note that when using the NVIDIA GeForce RTX 4060 Ti for Switch emulation, you will get slower performance than on the RTX 3060 Ti due to its narrower 128-bit bus. That means you either have to stick with older Ampere cards or move to a high-end GeForce RTX 40 series GPU, which seems to be what NVIDIA was going after with the positioning of its RTX 4090 and RTX 4080. The RTX 4090 is seen as the best offering within the NVIDIA RTX 40 stack, while the rest of the cards have been a miss so far, mainly due to price hikes.
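The bandwidth gap behind that claim can be roughed out with quick arithmetic. The bus widths come from the article; the per-pin data rates (14 Gbps GDDR6 on the RTX 3060 Ti, 18 Gbps on the RTX 4060 Ti) are the publicly listed specs for these cards:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: bus pins x per-pin rate / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

# RTX 3060 Ti: 256-bit bus, 14 Gbps GDDR6
print(peak_bandwidth_gb_s(256, 14))  # 448.0 GB/s
# RTX 4060 Ti: 128-bit bus, 18 Gbps GDDR6
print(peak_bandwidth_gb_s(128, 18))  # 288.0 GB/s
```

Even with the newer card's faster memory chips, halving the bus width leaves the RTX 4060 Ti with roughly 64% of its predecessor's raw bandwidth, which is what the Ada cache is meant to paper over.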

Now, just like AMD’s Infinity Cache, NVIDIA has equipped its Ada GPUs with a much larger L2 cache. A larger cache has been shown to deliver much better performance, but it is also very easy to fill the cache pool when running a resolution scaler. Even a 2x upscale eats through vast pools of cache, leaving you with the 128-bit bus interface, which is a downgrade versus the previous Ti offering. Following is the full statement from the Yuzu dev team:

Now, on to the disappointing news: the RTX 4060 Ti.

We don’t understand what kind of decisions NVIDIA took when deciding the Ada Lovelace GeForce product stack, but it has been nothing but mistakes. The RTX 4060 Ti 8GB with only a 128-bit wide memory bus and GDDR6 VRAM is a serious downgrade for emulation when compared to its predecessor, the 256-bit wide equipped RTX 3060 Ti. You will be getting slower performance in Switch emulation if you get the newer product. We have no choice but to advise users to stick to Ampere products if possible, or aim higher in the product stack if you have to get a 4000 series card for some reason (DLSS3 or AV1 encoding), which is clearly what NVIDIA is aiming for.

The argument in favour of Ada is the increased cache size, which RDNA2 confirmed in the past helps with performance substantially, but it also has a silent warning no review mentions: if you saturate the cache, you’re left with the performance of a 128-bit wide card, and it’s very easy to saturate the cache when using the resolution scaler — just 2X is enough to tank performance.

Spending 400 USD on a card that has terrible performance outside of 1X scaling is, in our opinion, a terrible investment, and should be avoided entirely. We hope the 16GB version at least comes equipped with GDDR6X VRAM, which would increase the available bandwidth and provide an actual improvement in performance for this kind of workload.

via Yuzu
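The cache-saturation point in the statement can also be sketched with back-of-the-envelope numbers. This is an illustration only: the 32 MB L2 figure for the RTX 4060 Ti and the RGBA8 buffer format are our assumptions, not from the Yuzu statement. A 2x scale of a 1080p docked-mode target renders at 3840x2160, and a single color buffer at that resolution already approaches the L2 capacity:

```python
# Assumed for illustration: 2x scale of a 1080p target -> 3840x2160,
# 4 bytes per pixel (RGBA8), and a 32 MB L2 cache on the RTX 4060 Ti.
width, height = 3840, 2160
bytes_per_pixel = 4
color_buffer_mib = width * height * bytes_per_pixel / 2**20
print(round(color_buffer_mib, 1))  # ~31.6 MiB for one color target alone
# Add depth/stencil and intermediate render targets and the frame's working
# set exceeds L2, spilling that traffic onto the 128-bit memory bus.
```

This is consistent with the developers' warning: the moment the scaled frame no longer fits in cache, performance falls back to what the narrow bus can sustain.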

That’s a serious blow to gamers who were looking forward to the NVIDIA GeForce RTX 4060 Ti and wanted to play some emulated titles. NVIDIA has its reasons for the 4060 Ti's current memory configuration, which might make sense for 1080p gaming, but a 128-bit bus on a 400 USD card is certainly lacking. We didn't find it to be a major issue in the majority of games we tested at 1080p, but as memory demands keep going up for unoptimized PC ports, we can see it becoming one.