
Sir Beregond (Reviewer)
Posts: 2,036 · Days Won: 53 · Feedback: 100%

Everything posted by Sir Beregond

  1. Being on 4K, these 4K relative and actual performance charts put some things in perspective. I wonder how the 5090 will change these numbers come next year. AMD Ryzen 7 9800X3D Review - The Best Gaming Processor (www.techpowerup.com): "The Ryzen 7 9800X3D establishes AMD as the leader in gaming performance. This Zen 5-based X3D chip is not only fast, it also comes with full support for overclocking. Besides gaming, application..."
  2. Yeah I only mentioned Aquasuite if you had one of their fan controllers or something. Otherwise I think you'll be fine with the free software or this Argus.
  3. Yeah, I think in many ways yes, in some ways no. Analog components (your I/O, etc.) stopped scaling with new nodes way back at 28nm, and maybe even before then. Cache was scaling well until around 7nm and has been leveling off since. Logic (your CPU / GPU cores, etc.), on the other hand, continues to scale with each new process shrink, and actually looks like it might start scaling even better as we continue. That seems to suggest scaling is favoring logic these days, while cache and analog transistors are already so much smaller than logic transistors that they are probably already at their physical limits, and in that sense, yeah, Moore's Law is dead.

     And then yeah, you mention the efficiency part, which is interesting, because on one hand companies are really trying to market themselves as efficient, while the actual base power draw is so much higher these days than it used to be. Top-end GPUs used to max out at 250W. Now you have a 450W (default) 4090 that is arguably one of the most efficient power-to-performance GPUs Nvidia has released to the consumer market, yet it's still drawing far more than we used to (see the perf-per-watt sketch after this list). Anyway, I'm still interested in hardware to an extent, but it's likewise hard to get excited about stuff when large market segments have stagnated and only gotten more expensive. Based on this chart from AMD, this is one of the reasons chiplets made sense: put the logic on the most advanced node they want to use, and offload the I/O components to older nodes because there is no reason to use the more advanced/expensive ones for them.
  4. Yeah...been watching that thread on OCN and there sure is the crowd that will just buy anything because it's new.
  5. You're thinking of the reticle limit, which, tl;dr, dictates the maximum physical die size. There are ways around that though, and to be honest, that hasn't been the limiting factor on monolithic chip design. The real limit is that the bigger the die, the higher the probability of defects. The bigger the die, the fewer chips per wafer, and with more potential for defects, the fewer good dies per wafer too. This becomes a big problem as we start hitting 3nm, 2nm and beyond, where TSMC functionally holds a monopoly, or at least technological dominance, in the fab market and can price wafers however they want. A TSMC 2N wafer is set to be 2x the cost of 5N and still significantly more than 3N (Source). So the real problem Intel, AMD, and Nvidia are going to hit is producing designs that minimize the potential for defects and maximize yields per wafer, so they can get the maximum profit out of each wafer, while still delivering on performance expectations (see the yield sketch after this list).

     We have not seen any consumer monolithic dies from Nvidia, AMD, or Intel anywhere close to the reticle limit. The closest was probably Turing, with TU102 hitting a die size of 754mm2, but the dies since then have dropped back to just over 600mm2 for the biggest ones, with the step-down dies being significantly smaller than that. Shifting back to Intel, the die size of the Raptor Lake 14900K is 257mm2 - again a far cry from the physical reticle limit. So in the end, I don't think the reticle limit is really a factor here currently; it's more probably down to the cost per wafer of the more advanced nodes, and the increasing complexity of architectures necessitating a rethinking of how to manufacture and package, hence chiplets and tiles.
  6. I totally agree that for a system whose primary use case is benching, overclocking, tweaking, etc., power consumption / efficiency doesn't matter at all, and in that sense I know tons of folks who fit in that camp and have had a lot of fun with 13th and 14th gen Intel. For my daily rig, which is for 4K gaming where I value quiet performance with the watercooling, I went AMD and it works fine for my use case. I still did a lot of tweaking of the RAM timings, which netted significant gains in latency, as well as tweaking both the PBO boosting behavior and all-core OCs for the chip. My motherboard has a feature to dynamically shift between an all-core OC and PBO single-core max boosts based on parameters set for load. So it's not all bad. But for max overclocking and tweaking fun? Got the bench and some Intel systems for that. The rumor mill suggests Intel is working their way back to a single core type and dropping this hybrid P + E core approach. Guess we'll see if that actually happens. Skymont E-cores are supposed to be about the IPC of Golden Cove (Alder Lake P-cores), if I recall? I wonder why Intel didn't drop in 8P + 32E considering the loss of hyper-threading. Yeah, I really hope Linux gaming continues to gain ground with the Steam Deck and other efforts. I am really reaching the point where I want to just be done with Windows altogether.
  7. From a purely enthusiast perspective, I absolutely agree that chiplet design is fundamentally "worse". Chiplets absolutely introduce latency penalties over what a monolithic design would have. Much like having to access DRAM is a bottleneck, so too is having to communicate over an interconnect to other parts of the CPU. That said... it makes total sense from a price/performance standpoint and as a way to both simplify manufacturing and lower overall cost. In many ways it's what allowed AMD to heavily price-compete with Intel in the Zen 2 era, and Intel probably saw that advantage as well vs having what... several different monolithic die designs to contend with every generation. So from that perspective... I kind of get it. Secondarily, my AMD system is fine. It works, it's performant for what I use it for, etc. I would hardly say it "sucks". Now... from an overclocking / enthusiast wanting to play with it perspective? Yeah... I have a 13900KS on the test bench for a reason, when I get around to it. Whether we as a niche group (which is not the market propping up companies like Intel and AMD) like it or not, chiplets are here to stay. My only hope is that Intel can find a way to be competitive again, because the last thing we need is a monopoly in the market.
  8. No idea, considering the 9000-series X3D chips have not released yet. That said, it shouldn't matter. That motherboard supports BIOS flashing without a CPU and memory according to their website, so I think you'll be fine either way.
  9. Not sure what kind you need, but see what type your card uses. Some, like the one linked, have springs in each screw hole, while other X-brackets are themselves functionally leaf springs. And then of course for sizing, measure the screw mounting hole distance, usually in mm.
  10. The most interesting rumor to me with this is the unlocked multiplier. Definitely interested to see what it's actually capable of when pushed, if they have figured out a way to deal with thermals on stacked cache.
  11. Yep, Fan Control was going to be my suggestion unless you have access to AquaSuite. Had never heard of Argus.
  12. I don't think it's a complete disaster per se. Intel is bringing some interesting things to the market with this launch, such as CUDIMM compatibility. Things like that will certainly keep the enthusiast market that likes to tweak and overclock happy. However, that's still a niche market in the grand scheme of things, and I don't think the general gaming market will regard this launch well at all.
  13. Like Zen 1 for AMD, this could be the start of something great for Intel, but it's a tough sell in the current market. AM5 is an established platform at this time with Zen 5 X3D on the way, while LGA1851 is gonna be an expensive new platform, and at least from the rumors I've heard, I'm not sure about its longevity following Arrow Lake. At least for gaming, Arrow Lake seems like it won't be a compelling option vs 9800X3D platforms. On the other hand, one thing that seems like a red flag to me is the fact that Intel 7 (12th, 13th, 14th gen) is a 10nm node and they are dropping all the way down to TSMC's N3B (3nm), and yet they can't seem to exactly surpass 14th gen. It's... concerning.
  14. While there is certainly a portion of responsibility owned by the consumer for buying Nvidia vs AMD, it ain't that cut and dry. AMD has continually botched opportunities to really capture market share. RDNA2 was probably the closest, but they still have to contend with the fact that they are clearly the second-rate graphics company playing catch-up to Nvidia. Sure, you can argue how much things like RT matter... and certainly at lower to mid-range price points it's less relevant. But there is still a clear delineation for who is on top, and the market knows this. AMD could have done better by doing any number of the following things:
     • Not simply slotting launch MSRPs for RDNA3 into Nvidia's structure. The $899 7900 XT just made everyone laugh and call it hot garbage until the price dropped. After that it was a fantastic card, but the damage in mind share was done.
     • Having FSR3 ready to go for RDNA3, or at least a clearer path to release. Again, a very visible "we're playing catch-up" signal to the market.
     • Dropping the high end for RDNA4 may work for the strategy they want, but it still reeks of "we're giving up" again.
     All in all, AMD doesn't just get a pity buy because they aren't Nvidia if they haven't earned it. Now for some, they have, and that's fine. Different people want different things. For me, I think I am just done buying or worrying about upgrading graphics anytime soon. Nvidia has a monopoly, and AMD can take part of the blame for that.
  15. Yep, you are absolutely right. In fact VRAM doubled at every segment (which is another fun way to track the Fermi to Pascal gens). The GTX 680 at 2GB doubled the 1GB GTX 560. From there, the GTX 980 doubled to 4GB, and then doubled again to 8GB with the GTX 1080. The 70 cards saw a similar doubling, as during these gens they were cut-downs of the 80. Most notoriously, the 970 was cut down in a very bizarre way, but did still actually have 4GB on it. Looking at the high end then, the GTX 580 standard config was 1.5GB, its successor the 780 Ti was 3GB, the 980 Ti was 6GB, and then an oddity... the 1080 Ti is 11GB. Titans during these gens were often double the VRAM of the 80 Ti card as part of their segmentation strategy. With Maxwell, the Titan X at 12GB was double the 980 Ti at 6GB. But then the Pascal Titans do something weird... we get 12GB again. And so they disable a memory channel on the 1080 Ti to make it 11GB. 20-series, you are absolutely right... the 8GB is reused again for the 2070 and 2080. The 6GB 2060 is double the original 3GB 1060, but let's not forget the 1060 also had a 6GB release. The Titan RTX finally sees a doubling to 24GB. But the 2080 Ti also keeps the weird 11GB configuration of the 1080 Ti. And then we all know what happened with the 30, 40, and now 50 series.

     Nvidia has absolutely skimped, and I think you are right about their DLSS and bigger cache strategies. I don't think it will work out though. Devs are starting to use these Nvidia technologies like DLSS and frame gen to get lazy about optimizing their games. You want to laugh? Go take a look at the minimum and recommended specs for Monster Hunter Wilds at even just 1080p. We're in for a bad time with game requirements vs hardware specs if this becomes the trend. The irony of frame gen is that you need a beefy card to start from a good number and make it better. So the fact they now want you to use it to even get 1080p 60fps is a joke. Hopefully a one-off joke, but you know how these things go.
  16. I think it gets a little muddier than that looking back. You indicate a "was" in each number line item, but let's take a look at different eras and how chip/spec/branding played a part.

     Speaking strictly on traditional branding, prior to really the 30-series, "8" was always the "flagship" or "halo" number. Whether we are talking the 6800 Ultra, 8800 GTX, or GTX 580... 8 was the number in the product name you looked for to indicate, usually, the highest-end Nvidia part for that gen. Some gens you got refreshes that used "9"; the FX 5900 series and 7900 series come to mind. But until some dual-GPU cards made use of the "9" branding, that was about it, and 8 would have been the number you looked for. During this time, the chip and spec for any "8"-branded card for a given gen was the biggest and most flagship of whatever they had at the time.

     Now, until the 200-series you didn't really have "7" branding. Instead you had, for example, the 8600-series and 8800-series cards that had multiple card types indicated by their suffix, like GTS, GT, GTX, Ultra, etc. In any case, they usually all used the same chip but with various levels of segmentation / cutting down of the chip / specs, until you went down a number in the series, like from 8 to 6, and now you are on a smaller die.

     Once you get to the 200-series, you see the general formation of what we have these days in terms of product naming, albeit with some differences due to die-shrink re-releases, and competition from AMD at the time necessitating, most notably, a refresh of the 260. The funniest thing is that even the GTX 260... uses the same big-die chip as the bigger 270, 275, 280, 285. It was just cut down so much.

     Then you get Fermi with the 400 and 500-series. As expected, the 80 is the flagship / halo / enthusiast card. The 70 emerges as a slight cut-down of what the 80 is. And the 60 is definitely the mid-range card, as it and the 50 move to the middle chips of the Fermi gen. It's pretty easy to follow just based on chip segmentation. The GTX 480, 470, and later 580 and 570 use the big-die GF100 / GF110. The 60-class cards use the GF104 / GF114 die, while the 50-class and below use GF106 / GF116, with maybe some smaller chips for the low-low end.

     Then something funny happens once the 600-series Kepler generation releases. Still billed as the "80" (and most notably, priced the same), which up to this point in Nvidia's entire history since FX at least had been associated with the highest-end cards with the highest-end specs, Nvidia releases the GTX 680. Sure, it's faster than the GTX 580, but it's odd. It uses a much smaller die than the GTX 580, and indeed, equipped with GK104, it sits on the successor die to the GTX 560-class GF114 cards. With this move, the "80" card had effectively become the mid-range card. They still had a successor die to the GF110-equipped GTX 580, but that would wait. Nvidia would release the first-ever $1000 consumer card, the GTX Titan, with it, then continue to sandbag the hell out of the chip with the GTX 780, and finally give you the full die with the 780 Ti / Titan Black. But notice these are now $699 and $999 products vs the old $499 MSRP of both the GTX 480 and GTX 580. The GK104 chip persisted as a rebranded card... the 770, which was just a 680.

     With that, Nvidia found a new model for Maxwell. The GTX 980 released for $550, but was equipped with GM204. Notice the trend with the "4" in the chip name? It's the mid-range chip again. You wouldn't actually get the flagship chip until the later releases of the 980 Ti and Titan X.
     What was the 70-class at this point? The cut-down x04 die of the 80-class card. This would be the same thing for the 10-series. At all these points, Nvidia wants you to still think of "80" as the flagship it once was, but in actuality it has been nothing more than what was previously branded (and priced) as mid-range in Fermi and before.

     Turing comes around, and the 20-series raises prices across the stack, and yet the RTX 2070 drops to the TU106 die! We see somewhat of a correction with the release of the Super refresh, but yet again... the flagship is reserved for the 2080 Ti / Titan RTX, while the 2080 / 2080 Super and 2070 Super are on TU104... the middle die. The mid-range of the generation. The 2060 Super and below are on TU106. We see the same trend. Each gen has three main dies that can be broken down following the general principle:
     • x00 / x02
     • x04
     • x06
     And then sometimes you got some x07 / x08 tiny dies for the really low end. You know... el cheapo. But for the mainstream of gaming products, it's generally these three, so it is very easy to see which are actually the high-end, mid-range, and low-end products, and it's funny how much the numbers such as "80", "70", "60", etc. stop mattering. In no way could a GTX 680 be considered high end for Kepler. Yet it's called "80".

     The 30-series comes around and now you have 90. But what really is 90? Well, the 2080 Ti was $1200. The 3090 came in and, while billed as a "Titan" in marketing, very clearly was not (at the time of release at least), because it actually performed worse than the Titan RTX at certain professional workloads that Titan cards were used for with their special drivers, which the 3090 did not have. It was very clearly a rebrand and re-pricing of the "80 Ti" cards. Something else interesting happens in the 30-series. Using a cheaper, but most definitely inferior, node compared to TSMC 7nm, the base RTX 3080 card is suddenly back on GA102 silicon, which means the 70-class card is further separated from the 80-class card than it had been previously.

     With the 40-series, we go back to the 4090 occupying the space that Titans and 80 Ti cards had held since the 700-series, which is the big-die product, in this case AD102. The 4080 is noticeably way cut down on specs vs the 4090 and gets its own die, AD103. AD104 is then used for the 70-class cards. And finally you get AD106 for the 60-class cards.

     Where am I going with all of this? If one recognizes the fact that the 80-class (non-Ti) cards for all generations since Kepler, except for the 700-series and 30-series, have all been the "middle" die, and the 70-class card was either a cut-down of that or even a die tier lower, it becomes clear what the truth is (see the die mapping sketch after this list). We're conditioned to still think "80" and "70" mean high end, but is that reality? Nope. Once branding and marketing are put aside, take a look at the 5080 vs the 5090. The 5080 is very literally HALF of what a 5090 is, and the 5070 will be even less than that. Make no mistake: the 80 and 70 are most definitely the "mid-range" of today and have been ever since the 600-series, with the exceptions noted above. The only difference is that now Nvidia wants you to pay $600-$800 or more and $1000-$1200 or more for literal mid-range products.
  17. Dumb configs for sure. Watch the Super / Ti refreshes of each come with 3GB modules for an "upgrade" to 24GB for the 5080 Ti / Super and 18GB for the 5070 Ti / Super (see the capacity math after this list). All in all, very unimpressive. The halo keeps getting better while everything else languishes. EDIT: And it's not just the VRAM. The 5080 is literally half of a 5090.
  18. Mainly just want something lightweight that doesn't lag or run slowly or suck down RAM. And ad blocking would be nice.
  19. I'm looking for a new browser. Not big on Edge or Firefox to be honest, but when looking at most of the options you quickly find they are all Chromium.
  20. Maybe in one of those wood accented cases, but otherwise...
  21. I still have the game sitting in my library. I've not really progressed past the intro part of the story.
  22. That's the problem. Probably sat in a hot Amazon truck.
  23. This isn't the first time AMD ditched the high end for the "sweet spot" market. This only works if they actually do it right, and well, count on AMD to not miss an opportunity to miss an opportunity, as far as the Radeon division is concerned. They need to do better than simply letting Nvidia dictate the entire pricing structure of the whole product stack and then just "slotting in". That won't gain market share, and they still have to be careful about brand perception as the "second-rate" company if they want to grow their market share. Personally, I think not having a high-end product hurts that a bit. But... the majority of the market isn't the high end, so they really just have to do their mid-range and sweet-spot segments correctly and it could be a good strategy for them. Especially with Nvidia's largely overpriced and under-performing (not to mention 8GB, lol) blundering of the mid-range with the 40-series. I'd say don't launch RDNA4 until FSR4 is ready, and really push the price war. Not this BS $900 7900 XT garbage that the market later price-corrects. Come out of the gate swinging. If they can deliver an 8700 XT or 8800 XT (or whatever it ends up being called) that at least meets the performance of the 7900 XTX / 4080 Super, with much better RT than RDNA3, at say $500-$600, that would be a great thing for the market.
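
A quick numeric sketch of the efficiency point from post 3: a card can draw far more total power than its predecessor and still be the better part on performance per watt. Only the 250W and 450W board-power figures come from the post; the relative performance number below is a hypothetical placeholder purely for illustration.

    # Hedged illustration: board-power figures are from the post above;
    # the relative performance value is an assumed placeholder.
    cards = {
        "older 250W flagship": {"power_w": 250, "relative_perf": 1.0},
        "450W 4090-class card": {"power_w": 450, "relative_perf": 2.5},  # assumed
    }

    for name, c in cards.items():
        perf_per_watt = c["relative_perf"] / c["power_w"]
        print(f"{name}: {perf_per_watt:.4f} relative performance per watt")

    # Drawing 1.8x the power while delivering ~2.5x the performance still works
    # out to roughly 40% better perf/W, which is how a 450W card can be the
    # "most efficient" part while also being the hungriest one ever shipped.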
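For post 5, here is a minimal sketch of why die size drives cost, using the classic Poisson yield approximation (yield ≈ e^(−area × defect density)) and the standard dies-per-wafer estimate. The wafer price and defect density are hypothetical placeholders, not TSMC figures; the die areas are the ones mentioned in the post (plus a ~609mm2 stand-in for the "just over 600mm2" big dies).

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        # Standard estimate: gross wafer area over die area, minus a
        # correction term for partial dies lost at the wafer edge.
        radius = wafer_diameter_mm / 2
        return int(math.pi * radius**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def poisson_yield(die_area_mm2, defects_per_mm2):
        # Poisson yield model: probability that a given die has zero defects.
        return math.exp(-die_area_mm2 * defects_per_mm2)

    wafer_cost_usd = 20_000   # hypothetical wafer price, not a real quote
    defect_density = 0.001    # hypothetical defects per mm^2 (0.1 per cm^2)

    for area in (257, 609, 754):  # 14900K-ish, big-die-ish, TU102-sized dies
        gross = dies_per_wafer(area)
        good = gross * poisson_yield(area, defect_density)
        print(f"{area} mm^2: {gross} dies/wafer, {good:.0f} good, "
              f"~${wafer_cost_usd / good:.0f} per good die")

The takeaway matches the post: roughly tripling the die area cuts the number of good dies by far more than 3x once edge losses and defect probability compound, so cost per good die climbs steeply on expensive nodes.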
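To make the segmentation argument in post 16 easier to scan, the mapping below restates the die assignments described there: the plain 80-class card's die next to the biggest consumer die of that generation. The die names are the ones given in the post (plus the well-known GP104/GP102 pair for Pascal); nothing here is new data.

    # 80-class (non-Ti) die vs. that generation's biggest consumer die,
    # per the post above. From Kepler onward, the plain "80" usually sits
    # on the middle (x03/x04) die rather than the flagship (x00/x02) silicon.
    eighty_class_die = {
        "GTX 580":  ("GF110", "GF110"),  # pre-Kepler: the 80 still got the big die
        "GTX 680":  ("GK104", "GK110"),  # big die held back for Titan / 780 Ti
        "GTX 980":  ("GM204", "GM200"),  # flagship reserved for 980 Ti / Titan X
        "GTX 1080": ("GP104", "GP102"),
        "RTX 2080": ("TU104", "TU102"),
        "RTX 3080": ("GA102", "GA102"),  # exception: back on the big die
        "RTX 4080": ("AD103", "AD102"),
    }

    for card, (die, flagship) in eighty_class_die.items():
        tier = "flagship die" if die == flagship else "middle die"
        print(f"{card}: {die} ({tier}; generation's big die = {flagship})")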
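The 24GB and 18GB figures in post 17 fall straight out of how GDDR capacity works: each memory device sits on a 32-bit slice of the bus, so capacity = (bus width / 32) × module size. A tiny sketch, assuming the commonly reported 256-bit and 192-bit bus widths for the 80- and 70-class parts:

    def vram_gb(bus_width_bits, module_gb):
        # One GDDR device per 32-bit slice of the memory bus,
        # so total VRAM = device count x capacity per device.
        return (bus_width_bits // 32) * module_gb

    # Assumed bus widths; the 2GB -> 3GB module swap is the "upgrade" in question.
    for name, bus in (("80-class, 256-bit", 256), ("70-class, 192-bit", 192)):
        print(f"{name}: {vram_gb(bus, 2)} GB with 2GB modules, "
              f"{vram_gb(bus, 3)} GB with 3GB modules")

Which is exactly why a refresh can jump from 16GB to 24GB (or from 12GB to 18GB) without touching the bus width at all.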