GeForce 5000 Final Specs Leaked




Last month, the leaker kopite7kimi, known for his accurate predictions, revealed on X (formerly Twitter) the likely final specifications for Nvidia's next-gen GeForce RTX 5080 and 5090 GPUs. More recently, Wccftech claims to have picked up details on the RTX 5070.

 

Source

 

5090 - 32GB

5080 - 16GB

5070 - 12GB

 

Nvidia is continuing the trend of gimping their GPUs on VRAM to force buyers to pay more if they want to avoid VRAM issues. 12GB is not enough for modern gaming.


A 12GB 5070 would be acceptable if it were priced right. The 70 series is midrange. Before the 70 series cost more than a car payment, people bought them with the expectation that they weren't going to get the latest features at the highest resolutions. The last gen of reasonable pricing was Pascal: a 1070 for $380.

 

The biggest oof I see here is the 5080. Massively cut down from the 5090 without, most likely, being massively cheaper. They're setting themselves up to offer us a 5080 Super/Ti down the road to fill the large gap between the two at a more palatable, albeit still high, price.

 

I was considering upgrading my main rig this year but then I bought a Grom and have been outside having fun instead of worrying about how Nvidia is going to give it to me with no lube. The ole 3090 will be fine for another year or so. 


Dumb configs for sure. Watch the Super / Ti refreshes of each come with 3GB modules for an "upgrade" to 24GB for the 5080 Ti / Super and 18GB for the 5070 Ti / Super. 
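The arithmetic behind that guess is just bus width times module density. A minimal sketch in Python, assuming the rumoured 256-bit and 192-bit buses hold and that 3GB (24Gbit) modules show up as expected:

```python
def vram_capacity_gb(bus_width_bits: int, module_density_gb: int) -> int:
    # Each GDDR module sits on a 32-bit channel, so capacity = channels * density.
    channels = bus_width_bits // 32
    return channels * module_density_gb

print(vram_capacity_gb(256, 2))  # 256-bit bus, 2GB modules -> 16 (rumoured 5080)
print(vram_capacity_gb(256, 3))  # same bus, 3GB modules -> 24 (a hypothetical 5080 Ti/Super)
print(vram_capacity_gb(192, 2))  # 192-bit bus, 2GB modules -> 12 (rumoured 5070)
print(vram_capacity_gb(192, 3))  # same bus, 3GB modules -> 18 (a hypothetical 5070 Ti/Super)
```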

 

All in all, very unimpressive. The halo keeps getting better while everything else languishes.

 

EDIT: And it's not just the VRAM. The 5080 is literally half of a 5090.

Edited by Sir Beregond

2 hours ago, Fluxmaven said:


 

70 series was never mid and that’s the Nvidia effect. 

 

90 was halo / flagship

80 was high / enthusiast

60 was always mid

50 was budget

40 was low

 

The 70 series, dating back to Fermi (470, 570), was always for that segment between mid and high. A sweet spot for perf and IQ. It was always an upsell from the mid range: spend the extra $50 or $100 and get the bigger chip, wider bus, more VRAM, etc. Going from 70 to 80 was typically harder to justify and there was less value ($/perf).

 

A 12GB card in 2025 is not the sweet spot. It's the minimum needed for the features they're advertising, like RT at resolutions like 1440p.

 

Agree with everything you’re saying, except I believe the 3070 at $500 (theoretical) MSRP was as close as we got in a long time.

 

So tired of these games from Nvidia. 

 

Hopefully after the data center clients are saturated with a metric tonne of Hopper and Blackwell GPUs, they pull back on spend and the market returns to normal.

 

I'm still ready to "side grade" my 7900 XTX to Blackwell, but I ain't going below 16GB of VRAM. Still happy with what I have, but more and more games will be built around RT and I feel it's now time to jump back to Nvidia. If a 4070 Ti is matching or beating a 7900 XTX in RT-heavy loads, it's safe to say a 5070 / Ti will be near or above 4080 Super RT perf.

 

A 5080 would be perfect but like everyone knows… it looks terrible on paper. AGAIN…. 

 

Let's see how this all shakes out, but it's not looking good for enthusiasts or budget-minded gamers.

Edited by Slaughtahouse

It seems like Nvidia is taking the Rockstar approach and selling the same product twice. Just like how Rockstar always does this "will they, won't they" thing with PC releases, which causes a lot of people to buy the game twice, Nvidia wants people to have a reason to upgrade twice in one gen if possible.

 

Even Apple doesn't get away with this kind of thing anymore. Going from 70 series cards with 8GB to 70 series cards with 12GB over the course of 5 generations is pure anti-consumer BS.



I am interested in the 5090, really only for VR, as I still get ~30fps in some titles with my BSB. Hoping the extra VRAM/bandwidth will help out (I have definitely hit the 24GB cap, causing games to crash in VR). Don't think I'll wait for a non-reference card this time; the Strix 4090 I have is kind of a pointless 'upgrade' over the reference 4090.

Edited by Alex

9 hours ago, Slaughtahouse said:

 


 

I think it gets a little muddier than that looking back.

 

You indicate a "was" in each number line item, but let's take a look at different eras and how chip/spec/branding played a part.

 

Speaking strictly about traditional branding, prior to really the 30-series, "8" was always the "flagship" or "halo" number. Whether we are talking the 6800 Ultra, 8800 GTX, or GTX 580, 8 was the number in the product name you looked for to indicate usually the highest end Nvidia part for that gen. Some gens you got refreshes that used "9"; the FX 5900 series and the 7900 series come to mind. But until some dual-GPU cards made use of the "9" branding, that was about it, and 8 would have been the number to look for.

 

During this time, the chip and spec for any "8" branded card in a given gen was the biggest and most flagship of whatever they had at the time. Now, until the 200-series you didn't really have "7" branding. Instead you had, for example, the 8600-series and 8800-series cards that had multiple card types indicated by their suffix like GTS, GT, GTX, Ultra, etc. In any case, they usually all used the same chip but with various levels of segmentation / cutting down of the chip / specs, until you went down a number in the series, like from 8 to 6, and now you were on a smaller die. Once you get to the 200-series, you see the general formation of what we have these days in terms of product naming, albeit with some differences due to die-shrink re-releases, and competition from AMD at the time necessitating, most notably, a refresh of the 260. The funniest thing is that even the GTX 260 uses the same big die chip as the bigger 270, 275, 280, 285. It was just cut down so much.

 

Then you get Fermi with the 400 and 500-series. As expected, the 80 is the flagship / halo / enthusiast card. The 70 emerges as a slight cut-down of what the 80 is. And the 60 is definitely the mid-range card, as it and the 50 move to the middle chips of the Fermi gen. It's pretty easy to follow just based on chip segmentation. The GTX 480, 470, and later 580 and 570 use the big-die GF100 / GF110. The 60-class cards use the GF104 / GF114 die, while the 50-class and below use GF106 / GF116, with maybe some smaller chips for the low low end.

 

Then something funny happens once the 600-series Kepler generation releases: the GTX 680. It's still billed as the "80" (and, most notably, priced the same), a number which up to this point in Nvidia's entire history, since FX at least, had been associated with the highest-end cards with the highest-end specs. Sure, it's faster than the GTX 580, but it's odd. It uses a much smaller die than the GTX 580, and GK104 is in fact the successor die to the GTX 560-class GF114 cards. With this move, the "80" card had effectively become the mid-range card. They still had a successor die to the GF110-equipped GTX 580, but that would wait. Nvidia would release the first ever $1000 consumer card, the GTX Titan, with it. They then continued to sandbag the hell out of the chip with the GTX 780, and finally gave you the full die with the 780 Ti / Titan Black. But notice these are now $699 and $999 products vs the old $499 MSRP of both the GTX 480 and GTX 580. The GK104 chip persisted as a rebranded card: the 770, which was just a 680.

 

With that, Nvidia found a new model for Maxwell. The GTX 980 released for $550, but was equipped with GM204. Notice the trend with the "4" in the chip name? It's the mid-range chip again. You wouldn't actually get the flagship chip until the later releases of the 980 Ti and Titan X. What was the 70-class at this point? The cut-down x04 die of the 80-class card. This would be the same thing for the 10-series. At all these points, Nvidia wants you to still think of "80" as the flagship it once was, but in actuality it has been nothing more than what was previously branded (and priced) as mid-range on Fermi and below.

 

Turing comes around then and the 20-series raises prices across the stack, and yet the RTX 2070 drops to the TU106 die! We see somewhat of a correction with the release of the Super refresh, but yet again...the flagship is reserved for the 2080 Ti / Titan RTX, while the 2080 / S, 2070 Super are on TU104...the middle die. The mid-range of the generation. 2060 Super and below are on TU106.

 

We see the same trend. Each gen has 3 main dies that can be broken down following these general principles:

 

x00 / x02

x04

x06

 

And then sometimes you got some x07, x08 tiny dies for the really low end. You know... el cheapo. But for the mainstream of gaming products, it's generally these three, and so it is very easy to see which are actually the high-end, mid-range, and low-end products, and it's funny how much the numbers such as "80", "70", "60", etc. stop mattering. In no way could a GTX 680 be considered high end for Kepler. Yet it's called "80".

 

The 30-series comes around and now you have 90. But what really is 90? Well, the 2080 Ti was $1200. The 3090 came in and, while billed as a "Titan" in marketing, very clearly was not (at the time of release at least), because it actually performed worse than the Titan RTX at certain professional workloads that Titan cards were used for with their special drivers, which the 3090 did not have. It was very clearly a rebrand and re-pricing of the "80 Ti" cards. Something else interesting happens in the 30-series. Using a cheaper, but most definitely inferior, node compared to TSMC 7nm, the base RTX 3080 card is suddenly back on GA102 silicon, which means the 70-class card is further separated from the 80-class card than it had been previously.

 

With the 40-series, we go back to the 4090 occupying the space that Titans and 80 Ti cards had held since the 700-series: the big-die product, in this case AD102. The 4080 is noticeably cut way down on specs vs the 4090 and gets its own die, AD103. AD104 is then used for the 70-class cards. And finally you get AD106 for the 60-class cards.

 

Where am I going with all of this?

 

If one recognizes the fact that the 80-class (non-Ti) cards for all generations since Kepler, except for the 700-series and 30-series, have all been the "middle" die, and the 70-class card was either a cut-down of that or even a die tier lower, it becomes clear what the truth is. We're conditioned to still think "80" and "70" mean high-end, but is that reality? Nope.

 

Once branding and marketing are put aside, take a look at the 5080 vs the 5090. The 5080 is very literally HALF of what a 5090 is, and the 5070 will be even less than that. Make no mistake: the 80 and 70 are most definitely the "mid-range" of today and have been ever since the 600-series, with the exceptions noted above. The only difference is that now Nvidia wants you to pay $600-$800 or more and $1000-$1200 or more for literal mid-range products.
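To put the "half of a 5090" point in concrete terms, here's a quick sketch using the rumoured figures from the leaks this thread is about; every number should be treated as unconfirmed:

```python
# Rumoured (unconfirmed) Blackwell specs as leaked at the time of this thread.
rumoured = {
    "RTX 5090": {"cuda_cores": 21760, "bus_bits": 512, "vram_gb": 32},
    "RTX 5080": {"cuda_cores": 10752, "bus_bits": 256, "vram_gb": 16},
    "RTX 5070": {"cuda_cores": 6400,  "bus_bits": 192, "vram_gb": 12},
}

flagship = rumoured["RTX 5090"]
for card, spec in rumoured.items():
    share = {k: f"{spec[k] / flagship[k]:.0%}" for k in spec}
    print(card, share)
# If the leaks hold, the 5080 lands at roughly half the 5090 on every axis (~49% of
# the cores, 50% of the bus width, 50% of the VRAM), and the 5070 at around a third.
```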

Edited by Sir Beregond

7 hours ago, Sir Beregond said:

 


 

Even with all that in mind, Nvidia wasn't skimping on VRAM until ray tracing became a thing. It was sort of forgivable for the first RT gen because the cards were actually doing something new, and the 2000 series felt more like a 1000 series refresh with RT added on. At the time, the VRAM situation was generally fine as it was anyway.

 

The difference now is that only the two most expensive cards, at a price range that is only for die-hard enthusiasts, have enough VRAM that users won't be forced to upgrade again soon just to keep up with VRAM requirements.

 

I bet it also has to do with DLSS. If GPUs had enough VRAM, there would probably be a good number of people who would delay upgrading and just switch to a lower DLSS mode for a while. DLSS is a great way to mitigate a lack of GPU power, but running low on VRAM is much more detrimental to the overall experience.
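To illustrate why a lower DLSS mode buys back shader performance but not VRAM, here's a rough sketch using the commonly cited DLSS per-axis scale factors (approximate, and they vary by title and DLSS version):

```python
# Commonly cited DLSS per-axis scale factors; exact ratios vary by game and version.
DLSS_SCALE = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50, "Ultra Performance": 0.333}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple:
    # Resolution the GPU actually renders before DLSS upscales to the output size.
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in DLSS_SCALE:
    print(mode, internal_resolution(3840, 2160, mode))
# Dropping from Quality to Performance at 4K roughly halves the pixels you shade,
# but the textures and geometry resident in VRAM barely shrink at all.
```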

 

If the 3080 had 16GB or more, I bet it would still be a pretty popular card today.

 

 

Planned obsolescence, pure and simple.


Nothing more to add to the discussion above. Simply adding the latest table from Videocardz to compare all rumoured specs:

 

[Videocardz table: rumoured RTX 50 series specifications]

 

Source: https://videocardz.com/newz/rumors-suggest-nvidia-could-launch-rtx-5070-in-february-rtx-5060-series-already-in-march

 

 

If we want to quickly compare against the past few gens...

 

Ada:


https://videocardz.net/nvidia-geforce-rtx-4090

 

Ampere:


https://videocardz.net/nvidia-geforce-rtx-3090-ti

 

Turing:


https://videocardz.net/nvidia-geforce-rtx-2080ti

 

Edited by Slaughtahouse

1 hour ago, UltraMega said:

 


Yep, you are absolutely right. In fact, VRAM doubled at every segment (which is another fun way to track the Fermi to Pascal gens).

 

The GTX 680 at 2GB doubled the 1GB GTX 560. From there, the GTX 980 doubled to 4GB, and then doubled again to 8GB with the GTX 1080.

 

The 70 cards saw a similar doubling, as during these gens they were cut-downs of the 80. Most notoriously, the 970 was cut down in a very bizarre way, but it did still actually have 4GB on it.

 

Looking at the high end then, the GTX 580 standard config was 1.5GB, its successor the 780 Ti was 3GB, the 980 Ti was 6GB, and then an oddity... the 1080 Ti is 11GB.

 

Titans during these gens were often double the VRAM of the 80 Ti card as part of their segmentation strategy. With Maxwell the Titan X at 12GB was double the 980 Ti at 6GB.

 

But then Pascal Titans do something weird....we get 12GB again. And so they disable a memory channel on the 1080 Ti to make it 11GB. 
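The odd 11GB figure falls straight out of the channel math; a minimal sketch, assuming the standard one-module-per-32-bit-channel GDDR layout:

```python
# GP102 (Titan X Pascal / 1080 Ti) pairs a 384-bit bus with 1GB GDDR5X modules,
# one per 32-bit channel.
full_channels = 384 // 32         # 12 channels -> 12GB on the Titan X Pascal
cut_channels = full_channels - 1  # one channel disabled on the 1080 Ti

print(cut_channels * 32, "bit bus")  # 352-bit
print(cut_channels, "GB of VRAM")    # 11GB
```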

 

For the 20-series you are absolutely right: the 8GB is reused again for the 2070 and 2080. The 6GB 2060 is double the original 3GB 1060, but let's not forget the 1060 also had a 6GB release. The Titan RTX finally sees a doubling to 24GB. But the 2080 Ti also keeps the weird 11GB configuration of the 1080 Ti.

 

And then we all know what happened with 30, 40, and now 50 series.

 

Nvidia has absolutely skimped, and I think you are right about their DLSS and bigger-cache strategies. I don't think it will work out though.

 

Devs are starting to use these Nvidia technologies like DLSS and frame gen to get lazy about optimizing their games. You want to laugh? Go take a look at the minimum and recommended specs for Monster Hunter Wilds at even just 1080p. We're in for a bad time with game requirements vs hardware specs if this becomes the trend. The irony of frame gen is that you need a beefy card to start from a good number and make it better. So the fact that they now want you to use it to even get 1080p 60fps is a joke. Hopefully a one-off joke, but you know how these things go.
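That irony is easy to put in numbers: with an idealized 2x frame generation (real implementations only approximate this), hitting a displayed target still requires half of it rendered natively, and latency tracks the native half:

```python
def base_fps_needed(displayed_fps: float, gen_factor: float = 2.0) -> float:
    # Rendered frame rate needed to hit a displayed target with frame generation.
    return displayed_fps / gen_factor

base = base_fps_needed(60)  # ~30 real frames per second behind a "60fps" target
print(base, "rendered fps,", round(1000 / base), "ms between real frames")
# Input sampling and simulation still run at the ~30fps base rate, which is why a
# frame-generated "60fps" at 1080p feels nothing like native 60fps.
```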

Edited by Sir Beregond


There is actually one other aspect to this, which is AI. 

 

I suspect most of Nvidia's reasons for skimping on VRAM are just based on greed and planned obsolescence, but there is an argument that Nvidia is also trying to prevent the market from being flooded with cheap and effective AI-capable GPUs.

 

Most AI applications require a lot of VRAM; 12GB is sort of the minimum just to get started with basic stuff like image gen. Selling relatively cheap GPUs with AI capabilities and enough VRAM to back up said capabilities could cut into their non-gaming sales, and it could give China an easy way around some of the chip embargoes.
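As a back-of-the-envelope illustration (rough numbers, not measurements), the VRAM floor for local AI comes mostly from the weights: parameters times bytes per parameter:

```python
def weights_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    # Approximate VRAM for the model weights alone; activations and caches come on top.
    return params_billion * 1e9 * bytes_per_param / 1024**3

print(round(weights_vram_gb(7, 2.0), 1))    # ~13.0GB: a 7B-parameter model in fp16 already overflows 12GB
print(round(weights_vram_gb(7, 0.5), 1))    # ~3.3GB: the same model 4-bit quantized fits comfortably
print(round(weights_vram_gb(3.5, 2.0), 1))  # ~6.5GB: an SDXL-class image model in fp16 is in this range
```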

 

VRAM was never really an issue before AI-enabled GPUs, and it's been an issue ever since.

 

I wonder if AMD might release GPUs with less VRAM if/when they catch up to Nvidia on AI-enabled GPUs, for similar reasons.

Edited by UltraMega

Even though the trend does not surprise me from Nvidia, what did surprise me is how much they have gimped the 5080. 16GB of VRAM is OK and really is the sensible standard for gaming at this point in time; however, seeing that they have cut the SMs and CUDA cores in half compared to ONE tier above (the 5090) is frankly a real kick in the face. All designed to push you to buy the 5090, obviously, but literally halving the specs from one segment tier to the next is madness IMHO.

 

Ultimately, this is the result of Nvidia simply being completely unchecked. They get to call the shots and that is how it is. Until there is competition, this will never change. People do NOT vote with their wallets and will still just eat the cost of a 5090 = Nvidia wins.


17 hours ago, ENTERPRISE said:


While there is certainly a portion of responsibility owned by the consumer for buying Nvidia vs AMD, it ain't that cut and dry.

 

AMD has continually botched opportunities to really capture market share. RDNA2 was probably the closest, but they still have to contend with the fact that they are clearly the second-rate graphics company playing catch-up to Nvidia. Sure, you can argue how much things like RT matter... and certainly at lower to mid-range price points it's less relevant. But there is still a clear delineation of who is on top, and the market knows this.

 

AMD could have done better by doing any number of the following things:

  • Not simply slotting launch MSRPs for RDNA3 into Nvidia's structure. The $899 7900 XT just made everyone laugh and call it hot garbage until the price dropped; after that it was a fantastic card, but the damage in mind share was done.
  • Having FSR3 ready to go for RDNA3, or at least a clearer path to release. Again, a very visible "we're playing catch-up" signal to the market.
  • Dropping the high end for RDNA4 may work for the strategy they want, but it still reeks of "we're giving up" again.

 

All in all, AMD doesn't just get a pity buy because they aren't Nvidia if they haven't earned it. Now for some, they have, and that's fine. Different people want different things. For me, I think I am just done buying or worrying about upgrading graphics anytime soon. Nvidia has a monopoly, and AMD can take part of the blame for that.

Edited by Sir Beregond


 

Since I am in Sheets / Excel all the time, I thought it'd be interesting to make one to highlight the differentials between GPUs. Edit: Sheet now updated with info from various sources, including Videocardz and TechPowerUp.

 

Items italicized are rumoured and to be confirmed (TBC).
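For anyone who wants to reproduce one of the more telling differentials without a spreadsheet, here's a small sketch comparing the 80-class card's core count to each generation's flagship (shipped configurations for older gens, rumoured figures for Blackwell):

```python
# 80-class CUDA core count as a share of that generation's top consumer card.
# The Blackwell pair is rumoured; the rest are shipped configurations.
pairs = {
    "Turing    (2080 / 2080 Ti)": (2944, 4352),
    "Ampere    (3080 / 3090)":    (8704, 10496),
    "Ada       (4080 / 4090)":    (9728, 16384),
    "Blackwell (5080 / 5090)*":   (10752, 21760),  # * rumoured
}
for gen, (eighty, flagship) in pairs.items():
    print(f"{gen}: {eighty / flagship:.0%} of the flagship's CUDA cores")
# Roughly 68%, 83%, 59%, and then ~49% if the Blackwell leaks are accurate.
```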

Edited by Slaughtahouse
