Everything posted by Sir Beregond

  1. Man, all those SMDs. I haven't watched the video yet, but I imagine this one would be difficult to delid.
  2. Yeah, see, I misunderstood you when you asked in Discord the other day. I had thought you were trying a chiller or LN2 or something. Make your room as cold as you want; you won't have any condensation issues since you are not using below-ambient cooling (see the dew-point sketch after this list).
  3. This should be pretty easy for CPU blocks, but it feels like every GPU block I see these days is nickel-plated except for some Heatkiller stuff. Perhaps there are more, but I feel like I don't ever see any copper GPU blocks outside of them. That said, I 100% agree that getting nickel out of watercooling is a great idea. Really, only use it if you want to liquid-metal something. For any other normal use, copper is the way to go, and I will likewise be looking at switching to copper if possible next time I build something.
  4. Yeah, when you look at a lot of benchmarks, 1080p and 1440p often favor the Radeon cards, whereas 4K often swings over to Nvidia, with some exceptions. I imagine if they are expanding the bus width with RDNA3 + Infinity Cache, that should be much better and hopefully help eliminate that bottleneck. At that point it will be down to GDDR6 vs GDDR6X, but my guess is this is where Infinity Cache can further help, paired with a proper bus width for high resolutions. And this is why I am really perplexed by the 192-bit 4080 12GB (see the bandwidth sketch after this list). Imagine buying that for $900-$1k and it not doing great at 4K. I agree: if Moore's Law is indeed dead, then a monolithic approach to GPU dies is a problem, and chiplet designs, wherein the less important pieces go to a less advanced/expensive node (the I/O-die equivalent for GPUs) while the advanced node is used for the compute chiplets, are the solution.
  5. My buddy's son has a Zotac 1080 Ti and it's still working what...6 years later? So I guess it is working out fine. I don't otherwise know much about them. I had bought a Zotac 3070 on a drop in Feb 2021 before prices really started to skyrocket. At the time it was basically MSRP ($550?) plus the tariff from Jan 2021. But I ended up trading it, a water block, an old radiator (and a slight bit of $$$) for this 3080 Ti. I also wonder about the long term with such a compact board for such a powerful card. This FE 3080 Ti is denser in board components than any card I've ever had before.
  6. I can't hear this without also hearing "Aaaaand now, your world champion Chicago Bulls!"
  7. Yeah, I get real strong BFG vibes. I also perused their Glassdoor profile to see what the employees were saying, and that didn't inspire much confidence. $1600 is way beyond the pale. To be clear, I didn't spend $1200 on this 3080 Ti either; I got it from a trade and a little cash, but still well below that number.
  8. I am just not buying, period. I didn't get this 3080 Ti just to spend an arm and a leg again. I'll see what the 50-series and RX 8000 end up looking like. Being at 4K now, I'm guessing I won't get the same service life as I did out of my 980 at 1080p, but the pricing is out of hand for an upgrade every gen.
  9. I'm still skeptical of a lot of the charts and claims from Nvidia yesterday, and of their heavy use of "relative performance" instead of raw FPS differences. A lot of those charts were also DLSS in performance mode in the small print, which no one in their right mind should be using. I imagine actual reviews are going to paint a different picture of how much better the 40-series cards actually are. If I had to predict RDNA3, I'd guess raster performance should be pretty competitive across the stack. I'd venture that ray tracing will still be faster on the Nvidia cards, but as long as RDNA3 can at least beat Ampere in RT, even if it doesn't match Lovelace, that still makes it a compelling option. The biggest thing here is that if AMD can price similarly to, or only slightly above, RDNA2's MSRPs, that would be really compelling in the current market, and is probably AMD's best chance at gaining more market share while Nvidia keeps trying to sell an overstock of 30-series cards and overpriced 40-series cards everyone is hating.
  10. It would be one thing if they were the same chip, but given the vast disparity between the two 4080s, it's definitely a slap in the face to consumers...especially the ones who won't know better and will just assume the only difference is VRAM. For me personally, it sounds pretty cool, but I will wait and see how the DLSS 3 tech plays out. From what I've heard, 30-series owners should still be able to run DLSS 3 titles in DLSS 2. More than just the cost, I find the concept of a 450W card, with what sound to be optional 600W+ cards from the AIBs, to be ridiculous. I know enthusiasts will like that for OC sessions, but as a general gaming card to use daily, that just seems flat-out irresponsible to me given rising energy costs, especially in places like Europe, as well as inflation, etc. Just very poor taste in my opinion.
  11. And to think I used to have a working 7500 and 9700 Pro.
  12. No, that will come later as a cut-down AD102 with 20GB on a 320-bit bus (i.e. what the 4080 Ti will likely be, in my mind). They could also do 22GB on 352-bit, but the first seems more likely to me. But I agree with your sentiment. That said, wasn't the GTX 285 a die shrink of the GTX 280, or am I misremembering?
  13. I don't know, I think @Avacado might be on to something. Given the 4080 12GB specs, I think it's extremely likely that the 3090/3090 Ti will outperform it at 4K. Probably just a way for them to sell unsold stock of 30-series cards.
  14. Given the "Jensen 2X" for the 3080 over the 2080 when it released, I likewise take these claims with a grain of salt. The 4090 is actually cheaper than I was expecting. I thought for sure they'd push $1999 on it. I think the bigger travesty is the pair of 4080's. The 16GB is being priced like the 2080 Ti / 3080 Ti before it, yet is not the same die as the 4090 and features a really mid-range 256-bit bus I'd expect to see with the 104 die cards. In my mind, that is really bad. Even RDNA2 with it's Infinity Cache showed deficiencies at 4K using the 256-bit bus. To be fair, 80-class cards having 256-bit buses and 104 dies is not unusual. The 680, 980, 1080, 2080 are all examples of this. However what is new is the astronomical price increase. None of those cards were $1200. The biggest slap to the face to consumers is the $900 12GB 4080. At a measly 192-bit bus I can't possibly see how this won't have problems for higher resolution gaming...something people in this price range are going to want to be able to do. A lot of people around are calling it a 4070 rebranded as a 4080 and in a sense that is certainly true, but it's even worse than that. Have we ever had a 70-class card that was 192-bit? Seems to me that was usually the 60-class card. So in my mind they are taking 60-class card specs, pumping a ton of power through it, and calling it "80-class". Maybe its just me, but feels like Kepler all over again. Anyone remember the GTX 680? And maybe this is an extreme reaction, but I sincerely hope people let these 4080's rot on shelves.
  15. Wish I could help, E, but my last experience with Intel overclocking was a 4790K. I will say I found it far easier to work with than the 5900X. While an all-core OC on AMD did great in some multi-threaded benchmarks, I found it hampered my single-thread performance, where PBO/CO could achieve higher single-core clocks. I guess some newer BIOS versions let you get the best of both worlds now, but I haven't updated to try it out. Contrast that with my 4790K system, where it was as easy as setting a number while syncing all cores, bumping the voltage up if necessary, and testing. Manually tuning Curve Optimizer on AMD, on the other hand, was extremely tedious. I am not sure if Intel is as simple as it used to be given we have moved past the quad-core era. But I guess I can just say good luck with your OCing endeavors.
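
A side note on post 2: the reason "no below-ambient cooling" means "no condensation risk" is that condensation only starts when a surface drops below the air's dew point, and the dew point always sits below the ambient (dry-bulb) temperature. Here is a minimal sketch using the standard Magnus approximation; the 15 °C / 50% RH figures are purely illustrative assumptions, not measurements from the thread.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Dew point in deg C via the Magnus approximation."""
    a, b = 17.62, 243.12  # commonly used Magnus constants for water vapor
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# A 15 C room at 50% RH has a dew point of roughly 4.7 C.
# A plain water loop can never drop below room (ambient) temperature,
# so the coolant and blocks always stay above the dew point: no condensation.
print(round(dew_point_c(15.0, 50.0), 1))  # ~4.7
```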
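
To put numbers behind the bus-width complaints in posts 4, 12, and 14: theoretical bandwidth scales directly with bus width, and capacity is tied to it as well, since each 32-bit channel carries one 1GB or 2GB GDDR6/6X chip (which is why 192-bit pairs with 12GB and 320-bit with 20GB). The per-pin data rates below are assumptions chosen only to illustrate the gap between 192-bit and wider buses, not quoted specs.

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical memory bandwidth: (bus width / 8) bytes per transfer * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

def capacity_gb(bus_width_bits: int, gb_per_chip: int = 2) -> int:
    """VRAM implied by the bus: one chip per 32-bit channel."""
    return (bus_width_bits // 32) * gb_per_chip

# Assumed per-pin rates (Gbps) purely for illustration.
configs = [("192-bit", 192, 21.0), ("256-bit", 256, 22.4),
           ("320-bit", 320, 21.0), ("384-bit", 384, 21.0)]
for name, bus, rate in configs:
    print(f"{name}: {capacity_gb(bus)} GB, {bandwidth_gb_s(bus, rate):.0f} GB/s")
# 192-bit: 12 GB, 504 GB/s
# 256-bit: 16 GB, 717 GB/s
# 320-bit: 20 GB, 840 GB/s
# 384-bit: 24 GB, 1008 GB/s
```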