Everything posted by Sir Beregond
-
Yep, I don't disagree.
-
I don't think it's a complete disaster per se. Intel is bringing some interesting things to the market with this launch, such as CUDIMM compatibility. Things like that will certainly keep the enthusiast market that likes to tweak and overclock happy. However, that's still a niche market in the grand scheme of things, and I don't think the general gaming market will regard this launch well at all.
-
Like Zen 1 was for AMD, this could be the start of something great for Intel, but it's a tough sell in the current market. AM5 is an established platform at this point with Zen 5 X3D on the way, while LGA1851 is gonna be an expensive new platform, and at least from the rumors I've heard, I'm not sure about its longevity beyond Arrow Lake. At least for gaming, Arrow Lake seems like it won't be a compelling option vs 9800X3D platforms. On the other hand, one thing that seems like a red flag to me is the fact that Intel 7 (12th, 13th, 14th gen) is a 10nm node and they are dropping all the way down to TSMC's N3B (3nm). And yet they can't seem to decisively surpass 14th gen. It's...concerning.
-
While consumers certainly own a portion of the responsibility for buying Nvidia over AMD, it ain't that cut and dried. AMD has continually botched opportunities to really capture market share. RDNA2 was probably the closest, but they still have to contend with the fact that they are clearly the second-rate graphics company playing catch-up to Nvidia. Sure, you can argue how much things like RT matter, and certainly at lower to mid-range price points it's less relevant. But there is still a clear delineation of who is on top, and the market knows it. AMD could have done better by doing any number of the following things:
- Not simply slotting RDNA3's launch MSRPs into Nvidia's pricing structure. The $899 7900 XT just made everyone laugh and call it hot garbage until the price dropped. After that it was a fantastic card, but the damage in mind share was done.
- Having FSR3 ready to go for RDNA3, or at least a clearer path to release. Again, a very visible "we're playing catch-up" signal to the market.
- Dropping the high end for RDNA4 may work for the strategy they want, but it still reeks of "we're giving up" again.
All in all, AMD doesn't just get a pity buy because they aren't Nvidia if they haven't earned it. Now for some, they have, and that's fine. Different people want different things. For me, I think I am just done buying or worrying about upgrading graphics anytime soon. Nvidia has a monopoly, and AMD can take part of the blame for that.
-
Yep, you are absolutely right. In fact, VRAM doubled at every segment (which is another fun way to track the Fermi-to-Pascal gens). The GTX 680 at 2GB doubled the 1GB GTX 560. From there, the GTX 980 doubled to 4GB, and then doubled again to 8GB with the GTX 1080. The 70 cards saw a similar doubling, as during these gens they were cut-downs of the 80. Most notoriously, the 970 was cut down in a very bizarre way, but it did still actually have 4GB on it.

Looking at the high end, the GTX 580's standard config was 1.5GB, its successor the 780 Ti was 3GB, the 980 Ti was 6GB, and then an oddity...the 1080 Ti is 11GB. Titans during these gens were often double the VRAM of the 80 Ti card as part of their segmentation strategy. With Maxwell, the Titan X at 12GB was double the 980 Ti at 6GB. But then the Pascal Titans do something weird...we get 12GB again. And so they disable a memory channel on the 1080 Ti to make it 11GB.

20-series, you are absolutely right...the 8GB is reused again for the 2070 and 2080. The 6GB 2060 is double the original 3GB 1060, but let's not forget the 1060 also had a 6GB release. The Titan RTX finally sees a doubling to 24GB. But the 2080 Ti also keeps the weird 11GB configuration of the 1080 Ti. And then we all know what happened with the 30, 40, and now 50 series.

Nvidia has absolutely skimped, and I think you are right about their DLSS and bigger-cache strategies. I don't think it will work out, though. Devs are starting to use these Nvidia technologies like DLSS and frame gen to get lazy about optimizing their games. You want to laugh? Go take a look at the minimum and recommended specs for Monster Hunter Wilds at even just 1080p. We're in for a bad time with game requirements vs hardware specs if this becomes the trend. The irony of frame gen is that you need a beefy card to start from a good number and make it better. So the fact they now want you to use it to even get 1080p 60fps is a joke. Hopefully a one-off joke, but you know how these things go.
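If you want to see the doubling at a glance, here's a quick Python sketch of the 80-class standard launch configs as I listed them above (figures as I recall them, so treat it as illustrative rather than authoritative):

```python
# 80-class (non-Ti) launch VRAM per generation, per the figures above.
vram_80_class_gb = {
    "GTX 680 (Kepler)": 2,
    "GTX 980 (Maxwell)": 4,
    "GTX 1080 (Pascal)": 8,
}

prev_gb = None
for card, gb in vram_80_class_gb.items():
    if prev_gb is not None:
        # Each gen should be exactly double the last one.
        assert gb == 2 * prev_gb, f"{card} breaks the doubling pattern"
    prev_gb = gb

print("VRAM doubled every gen from Kepler through Pascal, then stalled at Turing (2080: 8GB).")
```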
-
I think it gets a little muddier than that looking back. You indicate a "was" in each numbered line item, but let's take a look at different eras and how chip/spec/branding played a part.

Speaking strictly of traditional branding, prior to really the 30-series, "8" was always the "flagship" or "halo" number. Whether we are talking the 6800 Ultra, 8800 GTX, or GTX 580..."8" was the number in the product name you looked for to indicate, usually, the highest-end Nvidia part for that gen. Some gens got refreshes that used "9"; the FX 5900 and 7900 series come to mind. But until some dual-GPU cards made use of the "9" branding, that was about it, and "8" would have been the number to look for. During this time, the chip and spec of any "8"-branded card in a given gen were the biggest and most flagship of whatever they had at the time.

Now, until the 200-series, you didn't really have "7" branding. Instead you had, for example, the 8600-series and 8800-series cards that had multiple card types indicated by their suffix, like GTS, GT, GTX, Ultra, etc. In any case, they usually all used the same chip, but with various levels of segmentation / cutting down of the chip and specs, until you went down a number in the series, like from 8 to 6, and now you were on a smaller die.

Once you get to the 200-series, you see the general formation of what we have these days in terms of product naming, albeit with some differences due to die-shrink re-releases, and competition from AMD at the time necessitating, most notably, a refresh of the 260. The funniest thing is that even the GTX 260...uses the same big-die chip as the bigger 270, 275, 280, 285. It was just cut down that much.

Then you get Fermi with the 400 and 500-series. As expected, the 80 is the flagship / halo / enthusiast card. The 70 emerges as a slight cut-down of what the 80 is. And the 60 is definitely the mid-range card, as it and the 50 move to the middle chips of the Fermi gen. It's pretty easy to follow just based on chip segmentation: the GTX 480, 470, and later 580 and 570 use the big-die GF100 / GF110; the 60-class cards use the GF104 / GF114 die; and the 50-class and below use GF106 / GF116, with maybe some smaller chips for the low-low end.

Then something funny happens once the 600-series Kepler generation releases. Still billed as an "80" (and, most notably, priced the same), which up to this point in Nvidia's entire history since FX at least had been associated with the highest-end cards with the highest-end specs, the GTX 680 releases. Sure, it's faster than the GTX 580, but it's odd. It uses a much smaller die than the GTX 580, and indeed, equipped with GK104, it carries the successor die to the GTX 560-class GF114 cards. With this move, the "80" card had effectively become the mid-range card. They still had a successor die to the GF110-equipped GTX 580, but that would wait. Nvidia would release the first-ever $1000 consumer card, the GTX Titan, with it, then continue to sandbag the hell out of the chip with the GTX 780, and finally give you the full die with the 780 Ti / Titan Black. But notice these are now $699 and $999 products vs the old $499 MSRP of both the GTX 480 and GTX 580. The GK104 chip persisted as a rebranded card...the 770, which was just a 680.

With that, Nvidia found a new model for Maxwell. The GTX 980 released for $550, but was equipped with GM204. Notice the trend with the "4" in the chip title? It's the mid-range chip again. You wouldn't actually get the flagship chip until the later releases of the 980 Ti and Titan X.

What was the 70-class at this point? The cut-down x04 die of the 80-class card. It would be the same thing for the 10-series. At all these points, Nvidia wants you to still think of "80" as the flagship it once was, when in actuality it has been nothing more than what was previously branded (and priced) as mid-range in Fermi and below.

Turing comes around, and the 20-series raises prices across the stack, and yet the RTX 2070 drops to the TU106 die! We see somewhat of a correction with the release of the Super refresh, but yet again...the flagship die is reserved for the 2080 Ti / Titan RTX, while the 2080 / 2080 Super and 2070 Super are on TU104...the middle die. The mid-range of the generation. The 2060 Super and below are on TU106.

We see the same trend. Each gen has 3 main dies that follow these general principles:
- x00 / x02: the big die
- x04: the middle die
- x06: the small die
And then sometimes you got some x07 / x08 tiny dies for the really low end. You know...el cheapo. But for the mainstream of gaming products, it's generally these three, and so it is very easy to see what the actual high-end, mid-range, and low-end products are, and it's funny how much the numbers such as "80", "70", "60", etc. stop mattering. In no way could a GTX 680 be considered high end for Kepler. Yet it's called an "80".

The 30-series comes around, and now you have "90". But what really is 90? Well, the 2080 Ti was $1200. The 3090 came in and, while billed as a "Titan" in marketing, very clearly was not (at the time of release at least), because it actually performed worse than the Titan RTX at certain professional workloads that Titan cards were used for with their special drivers, which the 3090 did not have. It was very clearly a rebrand and re-pricing of the "80 Ti" cards. Something else interesting happens in the 30-series: using a cheaper but most definitely inferior node compared to TSMC 7nm, the base RTX 3080 card is suddenly back on GA102 silicon, which means the 70-class card is further separated from the 80-class card than it had been previously.

With the 40-series, we go back to the 4090 occupying the space that Titans and 80 Ti cards had held since the 700-series: the big-die products, in this case AD102. The 4080 is noticeably way cut down on specs vs the 4090 and gets its own die, AD103. AD104 is then used for the 70-class cards. And finally you get AD106 for the 60-class cards.

Where am I going with all of this? If one recognizes that the 80-class (non-Ti) cards for all generations since Kepler, except for the 700-series and 30-series, have all been the "middle" die, and the 70-class card was either a cut-down of that or even a die tier lower, it becomes clear what the truth is. We're conditioned to still think "80" and "70" mean high end, but is that reality? Nope. Once branding and marketing are put aside, take a look at the 5080 vs the 5090. The 5080 is very literally HALF of what a 5090 is, and the 5070 will be even less than that. Make no mistake: the 80 and 70 are most definitely the "mid-range" of today and have been ever since the 600-series, with the exceptions noted above. The only difference is that now Nvidia wants you to pay $600-$800 or more, and $1000-$1200 or more, for literal mid-range products.
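To boil all of that down, here's a little Python table of where the x80 (non-Ti) card actually sat each gen. The die names are the ones discussed above (plus GP104 for Pascal); the "big"/"middle" labels are just my shorthand for the x00/x02 vs x04 split:

```python
# Which physical die each x80 (non-Ti) card shipped on, per the history above.
x80_die = {
    "GTX 480":  ("GF100", "big"),
    "GTX 580":  ("GF110", "big"),
    "GTX 680":  ("GK104", "middle"),  # the Kepler switcheroo
    "GTX 780":  ("GK110", "big"),     # 700-series exception, heavily cut down
    "GTX 980":  ("GM204", "middle"),
    "GTX 1080": ("GP104", "middle"),
    "RTX 2080": ("TU104", "middle"),
    "RTX 3080": ("GA102", "big"),     # 30-series exception on the Samsung node
    "RTX 4080": ("AD103", "its own in-between die"),
}

for card, (die, tier) in x80_die.items():
    print(f"{card:9s} -> {die} ({tier})")
```

Two exceptions in thirteen years; everywhere else the "80" you paid flagship money for was the middle chip.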
-
Dumb configs for sure. Watch the Super / Ti refreshes of each come with 3GB modules for an "upgrade" to 24GB for the 5080 Ti / Super and 18GB for the 5070 Ti / Super. All in all, very unimpressive. The halo keeps getting better while everything else languishes. EDIT: And it's not just the VRAM. The 5080 is literally half of a 5090.
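For anyone wondering where those refresh numbers come from, it's just module math: GDDR7 hangs one module off each 32-bit channel, so swapping 2GB modules for 3GB ones on the same bus bumps capacity by 50%. A quick sketch (the bus widths here are the rumored ones, so treat them as assumptions):

```python
# capacity = (bus width / 32-bit channels) * GB per module
def vram_gb(bus_width_bits: int, module_gb: int) -> int:
    return (bus_width_bits // 32) * module_gb

print(vram_gb(256, 2), "->", vram_gb(256, 3))  # 16 -> 24 (5080-class, 256-bit)
print(vram_gb(192, 2), "->", vram_gb(192, 3))  # 12 -> 18 (5070-class, 192-bit)
```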
-
techspot Google is purging uBlock from the Chrome Web Store
Sir Beregond replied to UltraMega's topic in Software News
Mainly just want something lightweight that doesn't lag or run slowly or suck down RAM. And ad blocking would be nice. -
techspot Google is purging uBlock from the Chrome Web Store
Sir Beregond replied to UltraMega's topic in Software News
I'm looking for a new browser. Not big on Edge or Firefox to be honest, but when looking at most of the options you quickly find they are all Chromium-based. -
Noctua introduces next-gen NF-A14x25mm G2 140mm fans
Sir Beregond replied to Slaughtahouse's topic in Hardware News
Maybe in one of those wood-accented cases, but otherwise... -
I still have the game sitting in my library. I've not really progressed past the intro part of the story.
-
That's the problem. Probably sat in a hot Amazon truck.
-
This isn't the first time AMD has ditched the high end for the "sweet spot" market. This only works if they actually do it right, and, well...count on AMD to not miss an opportunity to miss an opportunity, as far as the Radeon division is concerned. They need to do better than simply letting Nvidia dictate the entire pricing structure of the whole product stack and then just "slotting in". That won't gain market share, and they still have to be careful with brand perception as the "second-rate" company if they want to grow their market share. Personally, I think not having a high-end product hurts that a bit. But...the majority of the market isn't the high end, so they really just have to do their mid-range and sweet-spot segments correctly, and it could be a good strategy for them. Especially with Nvidia's largely overpriced and under-performing (not to mention 8GB lol) blundering of the mid-range with the 40-series. I'd say don't launch RDNA4 until FSR4 is ready, and really push the price war. Not this BS $900 7900 XT garbage that the market later price-corrects. Come out of the gate swinging. If they can deliver an 8700 XT or 8800 XT (or whatever it ends up being called) that at least meets the performance of the 7900 XTX / 4080 Super, with much better RT than RDNA3, at say $500-$600, that would be a great thing for the market.
-
"Second star to the right, and straight on till morning."
Sir Beregond replied to ENTERPRISE's topic in Announcements
Unexpected news for sure. Life comes first, and I wish you good health (physical and mental). I hope you find some peace in not having to manage a site, with all the expense and time that involves, something I'm sure we all have some idea about, but probably not the full extent of. I do hope you keep in contact though! -
Ah, gotcha. I haven't played FS2020 yet, so I guess I misunderstood.
-
Well, it's in the name. It's a flight sim. Games like Ace Combat are not, and are more akin to an arcade flight shooter. I'd like to try FS2024.
-
It'll be interesting to see what they do, but I'm guessing stock it will not be 600W. The 4090 certainly does gain above 450W stock, but it really is in diminishing-returns territory for the amount of power needed to push additional performance. Hell, it doesn't lose much dropping to 350W. The 450W stock was definitely chosen because of the performance-to-power curve.
-
techspot Samsung's 280-layer QLC NAND hits mass production
Sir Beregond replied to UltraMega's topic in Hardware News
I've tended to avoid QLC in favor of TLC drives, so I'd be interested to see how these stack up. -
My experience with Curve Optimizer was that it hardly ever failed stress tests under load. It was usually in the idle part of the voltage curve where, if it was undervolted too much, it would freeze or crash. On Zen 3, anyway.
-
Usually in synthetics, the 13900K had a multi-threading advantage over AMD because of the sheer number of E-cores. Does that translate differently to other use cases?
-
No, the price is not fine. In past gens, didn't the Pro usually come in at the same MSRP as the console it was replacing? Here you are instead getting a $200 increase, and for your trouble they are taking away the disc drive. Not a good look or price, imo. Definitely a money grab, driven by the fact that Xbox is floundering and Sony probably wanting to test the waters on what PlayStation buyers will tolerate for pricing. Personally, the newest consoles I own are a PS2 and a PS3, so in the end I don't really care; I'm not buying one. Just my observations.
-
Are they going 3nm for Blackwell? It will be interesting to see how this turns out. Part of the massive gains the 4090 enjoyed over Ampere came from going from Samsung 8nm (a 10nm-class node) all the way to TSMC's custom 4N for Nvidia (a 5nm-class node). I imagine much more of Blackwell's gains will come from underlying architecture changes, in addition to the switch to GDDR7. In that sense, you guys are right that they could try to push more power again if they aren't gaining much from the move to 3nm from 4nm.
-
It's certainly possible, but I'd remind folks that pre-release rumors for the 4090 also said "600W+ card", and it released instead as a 450W card.
-
Cooler Master MasterLiquid 360 Ion Review Discussion
Sir Beregond replied to RageSet's topic in Review Discussions
Great pictures as always. I need to learn how to take better photos lol. If I could offer some feedback: with a CPU like the 14900KS, I'd be curious to see more CPU config details for these cooler tests. It would probably be helpful to indicate whether it's at stock or how you have it configured. -
Basically exactly what I did. The 48" is in the living room now and I use the 42" as my monitor.