Everything posted by tictoc

  1. PLX on the motherboard won't necessarily help. I know there were some modded X79 BIOSes floating around that had PCIe bifurcation added, but I'm not sure ASUS ever enabled it in any of the official BIOSes. PCIe bifurcation is common on server boards, but on the consumer side it is really only common on X299, X399, and TRX40. ASRock is really the best at this; I was able to get a modded BIOS directly from ASRock support that let me run 4 GPUs plus an HBA and a NIC without using ridiculously expensive active PCIe splitters. About two months after ASRock sent me the modded BIOS, they added the various bifurcation configurations to the official BIOS. I'm pretty sure the X99-E-WS doesn't support PCIe bifurcation, so if you use the passive M.2 expander card in that board, only the first NVMe drive will show up (there's a quick way to check what actually enumerated in the first sketch after this post list).
  2. In order for the 4x M.2 card to work, the motherboard needs to support PCIe bifurcation on an x16 slot. There is no PLX chip on the expander card, so you need to be able to set the slot to x4x4x4x4 in the BIOS. ASRock has very good bifurcation support in their BIOS, and ASUS has it enabled on many of their high-end boards.
  3. All of the M.2 slots come off the CPU. The chipset handles LAN, SATA, WiFi, Bluetooth, and USB. If you can run lstopo, either in Linux or WSL, you can see which of your PCIe slots are connected to the NUMA nodes that have direct access to memory. That can be more of a bottleneck than x8 vs x16. Depending on the workload, the added latency of PCIe->CPU->Infinity Fabric->Memory is noticeable even outside of benchmarks. *Edit* I can check my X399 Taichi later today, since I think they are the same. I misremembered how X399 is split up: all of the PCIe slots can have direct access to memory, but you can still check/lock your processes to run on cores with direct memory access. On Linux you can use numactl to bind processes to specific NUMA nodes (a rough equivalent is sketched after this post list), and on Windows I think you can use something like Process Lasso.
  4. I would go top slot for the M.2 card, 3rd slot for the primary GPU, and 4th slot for the secondary GPU. You lose a little bandwidth on the 2nd GPU, but there's no way around that if you want full bandwidth for the M.2 card. I'm not sure if SLI will work with one card at x8 and one at x16, but with the above layout you can still use the x1 slot, and you can drop a NIC in the 2nd slot.
  5. You should take a look at Mullvad and ProtonVPN. Mullvad has a pretty stellar privacy policy, open-source clients, easy fully anonymous sign-up (you can mail them cash in an envelope), a no-log policy, decent speeds, independent audits, and easy WireGuard and OpenVPN configuration. ProtonVPN has most of the same positives as Mullvad, except for the easy anonymous sign-up, and it currently does not support WireGuard. ProtonVPN also owns a number of its own servers and data centers rather than running everything on shared hosts/data centers.
  6. I wouldn't sweat cable management too much. The most important thing in a rack case like that is just keeping airflow moving front to back. In the last decent-sized machine I had running in my rack, I actually used the pile of wires to direct some airflow from the outside fan towards the HBA. :laugh_laugh:
  7. I have the same setup on my TRX40 Creator (ASRock): 2x120mm fans pulling in fresh air for the VRMs and the 10G NIC, which is actually the hottest thing in my system. I had a Heatkiller VRM block on my X399 Taichi, but after testing the Creator it didn't seem necessary.
  8. I agree one pump should be just fine, unless you want a second for redundancy/failover. I have a single D5 in a loop with a CPU, 3 GPUs, and 2 radiators, and it has been going more or less 24/7 for the last three years with various CPU and GPU combinations.
  9. Nice build. :thumbs_thumbup: With only one CPU and two GPUs feeding dual D5s in each loop, what speed do you run your pumps?
  10. With all the storage purchases, everyone must be hoarding data while staying at home. Just purchased another pair of 2TB MX500's for my file server.
  11. You will see a marginal boost in ppd for CPU tasks in a VM. Running GPU tasks in a VM is a bit more complicated, because you need to pass the GPU through from the host to the guest OS (a quick IOMMU-group sanity check for that is sketched after this post list). You will get near-native performance, but for a dedicated machine, just installing Linux is much easier.
  12. If you add a radiator, the order won't really matter. Whichever of the two options is easier and/or looks better will be fine. The only time I have seen it have much of an effect is with a machine that didn't really have enough rad space for the heat load. In that scenario, I had a pair of 7990s in series and the 2nd card had an OC that was barely stable. To keep it at max clocks, I had to go Res->Pump->CPU->GPU->Rad->GPU->Rad, because the card was right at the edge of stability and, with the warmer water coming off the CPU and the other GPU, there wasn't enough cooling capacity to keep it stable. That was a rather specific scenario, and it could probably have been solved with an additional radiator. With what you are running, you won't encounter anything like that.
  13. A single 420 radiator would probably be OK, depending on how much you are OCing the CPU and GPU, and your tolerance for fan noise. You might need to turn up the fans and/or go push/pull when both the CPU and GPU are under heavy load.
  14. With only a single GPU in the loop I would just go with whatever is easiest and/or looks the best. Overall temps won't be that different either way. The water temp will be slightly elevated after the CPU, but unless you are right on the ragged edge of stability with a GPU OC it won't matter.
  15. Two more WUs.
      p16905 (core_21)
        PCIe 3.0 x16: TPF - 02:15 | ppd - 686,311
        PCIe 2.0 x1: TPF - 02:26 | ppd - 610,229
      p16445 (core_22)
        PCIe 3.0 x16: TPF - 01:20 | ppd - 892,930
        PCIe 2.0 x1: TPF - 01:31 | ppd - 736,022
  16. The ppd hit is not too bad, all things considered (the percentage deltas are worked out in the last sketch after this post list). The cost of going with a different platform probably far outweighs the extra performance. I'll keep posting results as I see different WUs. Not sure if I saw this anywhere, but you are definitely going to want to run Linux on that box. Not only is it more performant than Windows, it will also handle the gear with much less brain damage. Hit me up if you need any help getting the OS up and running.
  17. First core_22 WU. This is a good ppd WU, so due to the QRB, the ppd penalty is larger when dropping down to PCIe x1. The way the WU queues and transfers data is also different: the TPF on every fourth frame increases by 11 seconds, so I averaged all the frame times for the WU.
      OS: Arch Linux | Kernel: 5.6.8 | Driver: 440.82 | GPU: GTX 1070 1974core|7604mem
      p14253
        PCIe 3.0 x16: TPF - 02:30 | ppd - 837,797
        PCIe 2.0 x1: TPF - 02:53 | ppd - 676,404
  18. The first WU that I am testing is a core_21 WU. I'm booting off this GPU, so there might be a little more performance to be had. The difference should be pretty minimal, since I'm not running X and the GPU is only rendering the console while testing. Silly NVIDIA and their P2 clocks. I forgot to bump the memory up to 8008MHz, so I left it the same for the x1 and x16 tests.
      OS: Arch Linux | Kernel: 5.6.8 | Driver: 440.82 | GPU: GTX 1070 1974core|7604mem
      p16906
        PCIe 3.0 x16: TPF - 02:13 | ppd - 686,675
        PCIe 2.0 x1: TPF - 02:23 | ppd - 615,920
      I'll compare a few more WUs as they come in. Hopefully I can grab a few of the more demanding and higher-ppd core_22 WUs.
  19. I'll post some comparison numbers from a running WU later today.
  20. I would definitely go with NVIDIA GPUs. As it stands, F@H is quicker on NVIDIA GPUs, and if we see CUDA-capable core_22 WUs in the future, the gap is only going to get bigger. The upstream project (OpenMM) has CUDA builds, and the performance is much better than OpenCL. I don't have any inside knowledge on whether the devs at F@H are going to push out a new core with CUDA support, but if maximum performance is the goal, they should. The x1 link will be a bottleneck, but I don't have any recent testing to compare against. In Linux, running a GTX 1070 on a PCIe 3.0 x4 link reduces performance by 1-2% vs PCIe 3.0 x16. I can post up some numbers later today of performance on an x1 riser.
  21. I'm not actively monitoring coolant temps right now, because my temp probe broke. I do spot check temps if I notice CPU or GPU temps going up.
  22. The junction temps only start to creep up that high under very heavy loads when overclocked. By very heavy, I am talking about loads that match or exceed FurMark, with voltage set to 1.28v and the card power limits raised to 450W. One of my cards has a very convex die, and I actually have to under-volt that card to keep temps in check. Right now with the cards running lighter fp32 work, temps are sitting at 37C edge and 45C junction. 5700XT is at 47C edge and 55C junction.
  23. GPU temps on the Radeon VIIs are around 50-55C edge and 85-90C junction, but those temps are when running the VIIs at 2121-2150MHz | 1.265v | 370W with a very heavy fp64 load. With reasonable clocks, volts, and power limits, the VIIs are around 40C edge and 75-80C junction. The CPU has a static 4.3GHz OC and runs at 67C under a heavy AVX2 load. Just threw the 5700XT back in the rig after having it shelved due to the unstable drivers. Seems to be running OK now, but that blower cooler is awfully loud.
  24. That was a lie. Upgraded it about one month after that post. Current state of my main machine, now with more copper!
  25. Not the last thing I purchased, but the one I'm playing with right now. M.2 to PCIe x4 adapter
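A quick way to verify the NVMe enumeration mentioned in posts 1 and 2: this is a minimal sketch, assuming a Linux host with the standard /sys/class/nvme sysfs layout, that lists every NVMe controller the kernel sees. In a slot without x4x4x4x4 bifurcation, a passive 4x M.2 carrier will typically show only one entry here.

```python
# List NVMe controllers visible to the kernel, with the PCIe address each one
# sits on. Assumes Linux and the standard /sys/class/nvme sysfs layout.
from pathlib import Path

def list_nvme_controllers():
    root = Path("/sys/class/nvme")
    if not root.exists():
        print("No NVMe controllers found (or not a Linux host).")
        return
    for ctrl in sorted(root.iterdir()):
        model = (ctrl / "model").read_text().strip()
        # "device" is a symlink to the underlying PCI device; its name is the
        # bus address, which tells you which slot/lane group the drive landed on.
        pci_addr = (ctrl / "device").resolve().name
        print(f"{ctrl.name}: {model} @ {pci_addr}")

if __name__ == "__main__":
    list_nvme_controllers()
```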
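For the NUMA binding mentioned in post 3, numactl is the usual tool on Linux. As a rough illustration only, here is a minimal Python sketch that pins the current process to the CPUs of one node; it assumes Linux with the stock /sys/devices/system/node layout, and it only sets CPU affinity (it does not set the memory policy the way numactl --membind would).

```python
# Pin the current process to the CPUs of one NUMA node, roughly what
# `numactl --cpunodebind=N` does for CPU placement.
import os
from pathlib import Path

def cpus_of_node(node):
    """Parse /sys/devices/system/node/nodeN/cpulist, e.g. '0-7,16-23'."""
    text = Path(f"/sys/devices/system/node/node{node}/cpulist").read_text().strip()
    cpus = set()
    for part in text.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

if __name__ == "__main__":
    node = 0  # pick the node that lstopo shows closest to your GPU/NIC
    os.sched_setaffinity(0, cpus_of_node(node))
    print(f"PID {os.getpid()} bound to node {node} CPUs: {sorted(os.sched_getaffinity(0))}")
```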
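On the GPU passthrough mentioned in post 11: before handing a GPU to a guest, you generally want it (and its audio function) isolated in its own IOMMU group. A minimal sketch, assuming a Linux host with the IOMMU enabled (intel_iommu=on or amd_iommu=on) and the standard sysfs layout, that prints each group and the devices in it:

```python
# Print IOMMU groups and the PCI devices in each, a quick sanity check before
# attempting VFIO GPU passthrough.
from pathlib import Path

def print_iommu_groups():
    root = Path("/sys/kernel/iommu_groups")
    if not root.exists():
        print("No IOMMU groups found; the IOMMU is probably disabled.")
        return
    for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(dev.name for dev in (group / "devices").iterdir())
        print(f"Group {group.name}: {', '.join(devices)}")

if __name__ == "__main__":
    print_iommu_groups()
```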
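To put the x16 vs x1 results in posts 15, 17, and 18 into percentage terms, here is a quick calculation over the ppd figures quoted in those posts:

```python
# Percent ppd lost when dropping from PCIe 3.0 x16 to the x1 riser,
# using the (x16, x1) ppd pairs quoted in the posts above.
results = {
    "p16906 (core_21)": (686_675, 615_920),
    "p16905 (core_21)": (686_311, 610_229),
    "p16445 (core_22)": (892_930, 736_022),
    "p14253 (core_22)": (837_797, 676_404),
}

for wu, (x16, x1) in results.items():
    drop = (x16 - x1) / x16 * 100
    print(f"{wu}: {drop:.1f}% ppd lost on the x1 riser")
```

This works out to roughly a 10-11% drop for the core_21 WUs and 17-19% for the core_22 WUs, in line with the larger QRB penalty noted in post 17.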