Everything posted by J7SC_Orion

  1. Thanks for that - bookmarked the vid for future reference as that UX Pro Ultra looks good. I might add some work-related cards in the near future, but for my regular machines, I'll probably wait for RTX 5K in late 2024 / early 2025 (unless AMD comes up with something extremely tempting before then). But whatever the timing, there will be thermal putty involved... As already alluded to, the 2 GB GDDR6X chips on the 4090 are weird in that they actually like some heat for top oc. My th.putty application for that card works almost a little too well. But in any case, all GPUs I have with th.putty (including two from early 2021) are performing great over time > no net change with the same oc, load and standardized ambient temps. FYI, whether th.putty or th.pads (such as Thermalright Odyssey 16.8 W/mK), I always add a little bit of thermal paste on top of either - could be overkill, but it seems to work fine... In addition, I add th.putty onto the back of the PCB at strategic spots as well as the back of the GPU chip itself before mounting the metal backplate, which in turn gets an additional big heatsink (pics in the posts above). With the hot summer weather and the power-slurping GPUs, I am glad that my equipment stays relatively cool.
  2. I put away two unopened vacuum-sealed TG-PP10 jars in another vacuum-sealed bag... they are residing in the bottom drawer of our fridge. While it might not last, given the rated shelf-life of unused TG-PP10, the way it is packed and stored it just might - we'll see. In any case, good to see some of the alternative thermal putties you showed / mentioned since TG-PP10 has been discontinued, tx for that! Yea, thermal putty is great stuff - not least as it 'conforms' to the available space and ends up providing additional cooling to surrounding bits. It is of course harder to clean up for disassembly, but so far on my three late-gen GPUs where I used thermal putty, it is working perfectly. Even after two-plus years of use, temps are 'steady as she goes'. The only 'problem' is the VRAM on the 4090... as pointed out above, it actually likes a bit of warmth (~55 C) for best overclocking results. That said, my card's VRAM OCs quite well even below that, and for top bench runs I can always run memtest_vulkan just before the bench (a small warm-up sketch of that routine is included after this list of posts).
  3. Some more Dolby Atmos 'Ray Tracing Audio' goodies; some of the visuals are also stunning. The second vid is an hour-long Dolby Vision HDR one, again with some gorgeous scenes in it. I also had another look at the 55 inch LG C3 and G3; the C3 is hard to distinguish from the earlier OLED 'C' line. All are quite stunning, but the G3 is brighter than the others, likely due to the addition of MLA (micro lens array).
  4. ...now I'm digging deeper - this Abit IC7 is from 2003. It still works (booted it up late last year).
  5. ...4x Swiftech Micro V2 reservoirs from 2012; all still in use - not too many moving parts that could wear out.
  6. ...still rumbling away in a corner somewhere... a.) an old back-up dev server from latish 2013 with a 4930K proc, 64 GB of GSkill Ripjaws, Corsair TX850 (btw, I have 3x of those PSUs, all still running fine). We keep that oldie running for some strange reason... b.) a Toshiba laptop from 2011ish; used perhaps once or twice a year these days, but still hanging in there... has to run on the wall plug though, as the battery is... c.) 3x of many Antec 302 cases from 2012, still in use...
  7. A 'weighty affair', or 'honey, does this make me look fat...?' Beyond the cooling and physical size of modern GPUs discussed (and fyi, updated) in the earlier posts, it is time to look at weight and mounting orientation...
Top-tier GPUs used to be a lot lighter, whether factory-air-cooled or water-cooled... take for example the Antec 302 cases on the top right in the next pic, two of which had quad GPUs in them, mounted vertically (a terrible idea re. heat build-up and retention, but that was all the room in the 'office' I had, apart from being just plain silly)... even the then-new high-end GPUs (ie. 980s) such as the EVGA Classifieds came with a cold plate, depicted below on the top right. That in turn meant that a universal water-block (Swiftech shown below) combined with 120mm fans blowing down on strategic bits of the cold plate was all that was really needed for top performance. By the time the 2080 Ti rolled around, things really started to change... a single factory-water-blocked 2080 Ti was about as heavy as the previous heaviest card I used, a water-blocked dual-GPU AMD 8990 (both pictured lower left). In fact, the 2x 2080 Ti project was the last time I built anything with a 'normal horizontal mounting orientation' (mobo vertical, GPU horizontal, per build-log photo lower right). FYI, I ended up supporting those two 2080 Tis with custom-cut pieces of the super-dense foam from the boxes GPUs typically come with - useful stuff, that (as long as it is really dense).
Nowadays, the weight problem is even worse, IMO: four of the EVGA Classified GPUs shown above with cold-plates and universal water-coolers weigh only a bit more combined than one water-blocked 4090... which with its 600+ W capability needs much more massive cooling areas (including metal). Apart from the sheer maximum wattage, there is also the fact that the heat is much more concentrated in a smaller area of the die, with more layers in the PCB. In addition, the overall VRAM capacity is also much higher these days, which on its own produces additional heat, never mind more heat per VRAM chip. The worst VRAM offender I have in use was/is the 3090 with double-sided 24x 1 GB VRAM... the chips on the back of the 3090 PCB were ready for barbequing until I added an additional metal heatsink and a few other cooling tricks... Finally, the VRMs also produce more heat since they provide more power output.
The increased weight and size have also led to a new type of problem according to several folks, including serious commercial repair operators such as Krisfix.de / YouTube: there is a marked increase in latest-gen cards 'dying' due to cracked and torn layer/s in the PCBs (especially some long, air-cooled 4090s) around the areas highlighted with the red circles below... I am fortunate to never have experienced that, but it does not have to be an outright, visible crack / tear with a card that no longer works at all - sometimes it can also be dropped PCIe lanes, such as a 16x card only showing 8x or even 4x in software monitoring even when nothing else is competing for the lanes, and after initially verified 16x functionality. ...it is not only the weight itself, but also the fact that the PCIe 'fingers' of the card are offset to one side with the new ultra-long & heavy cards and their cooler overhang when water-cooled - that creates a lever-force effect that is responsible for much of the tearing.
OEM custom-ordered systems with these cards are even more prone to this kind of damage, due to 'shipping'. As @T.Sharp already pointed out in an earlier post, at the very least it is wise to add a support bracket of some kind on the right side if the card is mounted the traditional, horizontal way. In any case, I switched over to vertical mounting over the last four-plus years. This does mean using PCIe 4.0 risers, though, or mounting the mobo horizontally. On risers, I ran several tests with both a 6900XT and a 3090 Strix using quality PCIe 4.0 risers (LinkUp) on a temporary test-bench below before water-cooling and the permanent / final install. I found no reduction in fps or score at all compared to a direct PCIe slot mount of the same card in the same conditions. Finally, with my latest dual-mobo build (on the bottom in the pic below), there is zero weight and zero lever / twisting force on the PCIe slots / risers.
...at the end of the day, there is no such thing as a 'free lunch', at least when it comes to top-tier GPUs in each generation. While there is no doubt that GPU efficiency has increased greatly over the last few generations, it is also true that overall power consumption is still going up - which is not surprising if you consider this table...
  8. Another Infrared thermometer w/Laser pointer...
  9. Another mandatory FS2020 patch > extra scenery for E. Europe... so I checked out Vienna instead
  10. That is a very nice build! One quick question: I cannot quite make out how the GPU + w-block is supported (I am working on a post about the weight of new-gen GPUs for this series); how are you supporting that GPU/w-block combo? As to your point about moving some bits 'out of the case' for external positioning, it adds gobs of flexibility - my earlier project-insanity taught me that... Just one example would be my previous 'primary' system (it now serves as a secondary work+play setup)... it has two mobos (front and back) on a heavily modded TT Core P5, plus 5x 360mm x 62mm rads, 4x D5 pumps, plus an AIO etc. That was the last time I tried to stuff everything on/in one case. The problem is also the sheer and awkward-to-move weight (well over 100 pounds). Here is an early pic of that build on the left, followed by an update on how it looks now (a w-cooled 3090 Strix joined the 2x 2080 Ti for some productivity tasks)...
My current 'primary' work+play build is also a dual-mobo one (2x X570 regular ATX), but side-by-side this time per the pic below, using a TT Core P8. I already used Koolance QD4 quick-disconnects on the previous build (btw, the horizontal GPU cooler tubes on the front above are actually copper pipes...). Learning from my prior projects, this current primary setup has 5x D5 pumps and all the rads (6 of them, with a combined total of 2520mm x 63mm) on a separate little wheeled 'table'. The 4x QD4s also pictured below make it easy to update / move / clean etc., noting that the reservoirs stayed on the case, high up and in the back. The 'cooling table' is only 2 feet away, yet I hear only the faintest whoosh of air, not actually any of the 40+ push-pull 120mm fans (fixed at 1800 rpm) or D5 pumps (fixed at 70%). With your MORA 420, you are pretty much on the path of a 'segmented' build anyway - it is very liberating to be able to build more outside a regular 'case' and customize any which way one sees fit. At the end of the day, the footprint at the personal work space is also better.
  11. ...that's why I hammered away in this thread at caution and vigilance re. adapter flush seating. The 12VHPWR 'saga' started with several of the 4-in-1 dongle cables melting on 4090s last fall; then came problems with some cables and also with special adapters - especially the CM 180 degree ones for Asus (which has the connector flipped around on their cards).
  12. ...want...that...in that nice system with 2x 96 core VCache CPUs
  13. I won't tell anyone about the traitor thing; too busy drooling over those new H100 NVLs
  14. Of dimples, dongles and power arcs... Power connections for GPUs with a stock vbios of 600W or more (never mind 700+ W transient spikes) are a major recurring problem theme. Before touching on the 12VHPWR saga (also already touched on by @T.Sharp), it is worth pointing out that melt problems can arise even with traditional connectors. Below is a pic of the 3x 8-pin PCIe connectors of my 3090 Strix with a custom vbios that could peak at ~600W... there was a loose connection and some arcing / minor melting which must have occurred after the original mounting and before other parts tugged on it and obscured the view of it. Fortunately, each single connector carried 200W at most, and it all still works great (including folded electrical tape to fix the area around the arrow).
The 12VHPWR connector introduced last year is a whole different ballgame... NVidia and also select Intel cards use that connector, but it is with the highest-power cards such as RTX 4090s that potential weaknesses have become real problem cases - and initially at least, there were a lot of 'guesstimations' as to what the underlying issue actually was (or issues were). By now, vendors have already been installing modified 12VHPWR connectors (for ~ three months now) without any official announcements. It is a running change, mostly for RTX4K cards. This new 12VHPWR hybrid connector comes with 4 sense pins that are now shorter; the sense pins are the 4 smaller ones on top in the pic below of my Giga-G-OC. This all gets down to one major point: making absolutely certain that the connector is 100% flush-mounted, and secured as such. The original, longer sense pins in the early 12VHPWR connectors could be exposed in anything but a perfect flush mount and start arcing... after all, that connector carries 600W+ in a much smaller package than, for example, the 3x 8-pin (6+2) standard connectors used on the 3090 above (a quick per-pin arithmetic sketch is included after this list of posts). So even a slight bump of the cable could cause issues, but things got even worse with certain 180 degree adapters - a fair number of melted power connections (and the odd fire) were the result...
Now there will be a new connector design, the so-called 12V-2x6 connector, which is incidentally backward-compatible with the 12VHPWR ones. Still, most 12VHPWR connectors have functioned fine as long as there is no tension on the cable and it is a perfectly flush mount, secured to stay that way. Also, it is probably wise not to add superfluous bits such as adapters for cosmetic reasons (each additional connection point is another potential point of failure). In addition, especially with the early 'dimple' designs, the fewer times the cable is mounted / unmounted into / from the GPU connector the better, because eventually that can create a bit of the dreaded 'play' in the connection via the dimples wearing and affecting the flush mount.
Apart from the power capabilities, most 4090 cards are also humongous in terms of width (and weight, per upcoming post) when air-cooled, but their PCB is actually among the narrowest I have worked with (the rest of the wide air-cooler is extra metal and overhang beyond the PCB). The difference is almost comical when prepping one of these monsters for water-cooling. In terms of depth, these air-cooled cards are basically 4-slot cards. Per the second post above that showed various 4090 GPU and hotspot temp deltas, I find it highly advisable to water-cool any GPU that can pull more than 400W (also keeping in mind transient spikes). 
When it comes to the latest narrow-PCB, high-power designs by NVidia, it is also worth keeping in mind that right below the already 'sensitive' 12VHPWR connector are major VRM components that run very hot in their own right and also affect the power connection region right above them. On my 4090, I placed thermal putty behind the VRM and the power connector area where it meets the backplate. Said backplate has an additional heatsink, and 2x 80mm fans above it.
Originally, I ran the 4090 with the 4-into-1 dongle (aka the squid, on the right in the pic below) that came in the GPU retail box... the dongle worked fine, but it could only be a temporary solution given the unique build (dual mobo) and the related power cable routing. The dongle got quite warm at a sustained 640W (ie. ROG Furmark/Vulkan) at the joint where the four 'arms' flowed into one, though it never got outright hot. The 12VHPWR-to-2x 8-pin PCIe cable on the left in the pic below has not been used yet (it was supplied by Seasonic for my PX1300W Platinum PSU). The white CM 12VHPWR-to-4x 8-pin PCIe cable is also custom-made for my PSU, and that is what I have been using since I converted the card to water-cooling and installed a custom vbios for ~670 W. That cable has never been taken out, so this is still the 1st and only mount of the cable, and only the 2nd mount for the GPU connector itself. The white CM cable has been problem-free since early December '22. As you can tell from the pics below, there is no stress on it at all, and I can easily check the flush mount without touching it.
All in all, it is a no-brainer that hi-po GPUs require extra attention re. their new power connections, and of course cooling. On cooling, there is also a bit of a quirk with 4090s: like the 3090, the 4090 has 24 GB of GDDR6X, but unlike the 3090, which uses double-sided 1 GB VRAM chips, the 4090 uses single-sided 2 GB VRAM chips. The VRAM on the 4090 is faster from the get-go and can be overclocked quite well - but for really high OC speeds, the 4090 VRAM should be in the mid-50 C range or higher; it likes a bit of fuzzy warmth
  15. I start with cooling a 4090 (original air cooling, then water). The key variable I am interested in for this post is the delta between GPU and hotspot temps. This parameter can be quite telling, also if it deteriorates over time when running the same app and oc (a small log-parsing sketch for tracking that delta is included after this list of posts). More to come later, and your comments are welcome. 4090 clocks shown with core voltages (1.00, 1.05, 1.075, 1.10) range from 2775 MHz to 3240 MHz, PL up to 650W. Apart from keeping overall temps in check for a 600W+ power limit (even if one usually sets it well below that), it is also a question of keeping the temp delta between GPU and hotspot relatively low - typically less than 20 degrees for a water-cooled GPU. Another parameter to watch out for with the newer power-hungry GPUs is how long 'heat soak' can be delayed... the bigger the cooling surface of the rads, the liquid volume in the loop and also the number (and type) of fans, the longer heat soak can be delayed. The 4090 system above has a total of 1320mm x 63mm of 'triple core' copper / brass rads. Below is a comparison between a single-core aluminum 360 AIO rad and a triple-core copper/brass rad. One guess which one cools better...
When it comes to cooling modern GPUs with gobs of power, the GPU itself (rather than just rads, pumps etc.) needs attention. Obviously, there is the water-block itself, but also the thermal paste for the die - I typically use Gelid GC Extreme for CPU and GPU dies, but there are other good pastes out there. For VRAM, I switched to thermal putty a while back and so far the experience has been great - even after 2.5-plus years, identical temps show in stress tests at the same adjusted ambient temp. The only real drawback of thermal putty is that it is a bit messy to clean up - but I rarely take a GPU apart after water-blocking it; I have some water-blocked cards in regular use for more than four years without disassembly. For VRM components, I also use either thermal putty or Thermalright thermal pads. Finally, I add thermal putty to the back of the GPU and then mount an extra heatsink on the back - it does help with power-hungry GPUs that generate a lot of heat over a small area. Collage pic of 2080 Tis top left, 6900XT bottom left, 3090 top right, 4090 bottom right:
  16. Hello GPU aficionados! I am starting this thread to look at modern GPUs and some of the things to look out for with cooling, power and also mounting... Given the various 12VHPWR issues which have crept up, as well as 'cracked' PCBs and such, I figured a closer look is warranted...
  17. That new Threadripper looks yummy - even though it 'only' has 8-channel DDR5, instead of the 12 channels the new Epycs carry. But if you can't wait, how about...
  18. What is amazing is that these are regular Zen4 cores (just binned) but with ludicrously low power consumption. Being regular cores means no funny stuff with scheduling et al. So yeah, 32-core Ryzen laptops are a real possibility. Personally, I'd settle on a slightly different config: a nice 32-core oc-able Threadripper Pro w/ VCache and 12-channel DDR5 'desktop'
  19. @bonami2 ...very interesting ! I better start picking the parts for an AM5 platform beast of burden
  20. ...yeah, probably one of these chaps (YT time-stamp) gets some extreme cooling going for one of the new monsta CPUs...
  21. Well, these will definitely play solitaire really well. @bonami2 will get me two of each
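
A note on the VRAM warm-up routine mentioned in post 2: below is a minimal Python sketch of the idea, assuming memtest_vulkan is used as the VRAM load before a bench run. The binary path, the benchmark launcher and the 5-minute warm-up window are placeholders / assumptions rather than anything from the posts; memtest_vulkan normally runs until interrupted, so the sketch simply terminates it once the warm-up window is over.

```python
# Minimal sketch of "warm the VRAM, then bench": run memtest_vulkan for a
# fixed warm-up period, stop it, then launch the benchmark.
# Paths and the warm-up duration are placeholders -- adjust to your setup.
import subprocess
import time

WARMUP_SECONDS = 300                   # assumed ~5 min warm-up window
MEMTEST_CMD = ["./memtest_vulkan"]     # path to the memtest_vulkan binary (adjust)
BENCH_CMD = ["./run_benchmark.sh"]     # hypothetical benchmark launcher

warmup = subprocess.Popen(MEMTEST_CMD)  # start loading / heating the VRAM
time.sleep(WARMUP_SECONDS)
warmup.terminate()                      # stop the VRAM load
warmup.wait(timeout=30)

subprocess.run(BENCH_CMD)               # launch the bench while the VRAM is still warm
```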
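On why flush seating matters so much more on the 12VHPWR connector (post 14), a quick back-of-the-envelope per-pin current calculation helps. The sketch below assumes an even current split across the 12 V conductors (6 on a 12VHPWR plug, 3 per 8-pin PCIe plug, i.e. 9 across a 3x 8-pin setup) and uses the 600 W example load from the post; real cards and cables will not split current perfectly evenly.

```python
# Rough per-pin current comparison between a 12VHPWR connector and 3x 8-pin
# PCIe connectors at an example 600 W draw, assuming an ideal even split
# across the 12 V conductors.

def amps_per_pin(watts: float, volts: float, twelve_volt_pins: int) -> float:
    """Total current divided evenly across the 12 V conductors (ideal case)."""
    return (watts / volts) / twelve_volt_pins

LOAD_W = 600.0   # example sustained load from the post
RAIL_V = 12.0

print(f"12VHPWR  (6x 12V pins): {amps_per_pin(LOAD_W, RAIL_V, 6):.1f} A per pin")
print(f"3x 8-pin (9x 12V pins): {amps_per_pin(LOAD_W, RAIL_V, 9):.1f} A per pin")
# -> roughly 8.3 A vs 5.6 A per pin, which is why a single poorly seated pin
#    on the smaller connector is punished much harder: its share of the
#    current gets shed onto its neighbours.
```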
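For the GPU-to-hotspot delta tracking described in post 15, here is a minimal Python sketch that scans a sensor-log CSV (e.g. one exported from HWiNFO64 or GPU-Z) and flags samples where the delta exceeds roughly 20 C. The file name and the column headers are assumptions and differ between tools and versions, so they need to be matched to the actual log's header row.

```python
# Minimal sketch: scan a sensor-log CSV and flag samples where the
# GPU-to-hotspot temperature delta exceeds a threshold.
# LOG_PATH, GPU_COL and HOTSPOT_COL are assumptions -- adjust to your log.
import csv

LOG_PATH = "sensor_log.csv"               # hypothetical log file name
GPU_COL = "GPU Temperature"               # assumed column header
HOTSPOT_COL = "GPU Hot Spot Temperature"  # assumed column header
DELTA_LIMIT = 20.0                        # ~20 C target for a water-cooled GPU

with open(LOG_PATH, newline="") as f:
    for i, row in enumerate(csv.DictReader(f), start=1):
        try:
            gpu = float(row[GPU_COL])
            hotspot = float(row[HOTSPOT_COL])
        except (KeyError, TypeError, ValueError):
            continue  # skip rows with missing or non-numeric readings
        delta = hotspot - gpu
        if delta > DELTA_LIMIT:
            print(f"sample {i}: GPU {gpu:.0f} C, hotspot {hotspot:.0f} C, "
                  f"delta {delta:.0f} C <- worth a look (paste / mount?)")
```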