
Everything posted by tictoc

  1. You are correct, two 6-pins would probably be a bad idea. This shouldn't be a huge issue for most people, since most new PSUs come with at least two 8-pin connectors. Ultimately there is little difference either way, since even most high-quality, high-wattage PSUs currently run a single set of wires to power dual 8-pin connectors. Regardless of what the connector is rated for, I find it pretty hard to believe that NVIDIA is going to release a GPU that pulls over 500 W at stock volts and temps. Cooling something like that would be pretty ridiculous on air.
  2. With decent pins and cables, converting 2x 6-pin or 8-pin cables shouldn't be an issue. Digi-Key has the connectors for about $1.50. https://www.digikey.com/products/en/...=1&pageSize=25
  3. If you find yourself needing more RAM, shoot me a message. I have a bunch of 8GB ECC RDIMMs. https://www.samsung.com/semiconductor/dram/module/M393B1K70DH0-YH9/ The sticks are 1333-rated DIMMs, but I ran them at 1866 MHz with a bump in voltage on my old 2P ASRock server.
  4. I know you just got this up and running, but I would nuke the steamcache containers and start over with lancache. The quick start guide at lancache will get you up and running in just a few minutes, since you already have your VM and storage pool set up. The only additional setup step will be installing Docker Compose. http://lancache.net/docs/
     *Edit* Additionally, I would install all of the Docker components from the Ubuntu repos. On Ubuntu 20.04 I think everything is in the official repos, so you don't need to add the Docker repos. This should result in a setup that is easier to maintain over the long term, without having to manually update packages and repos.
     Remove existing Docker versions that were previously installed:
     $ sudo apt remove docker docker-engine docker.io containerd runc docker-compose
     Install the Docker engine and Docker Compose:
     $ sudo apt install docker.io docker-compose
     Verify that Docker is working:
     $ docker run hello-world
     Verify that docker-compose is working:
     $ docker-compose version
     Check whether your user is in the docker group. I'm not sure if Ubuntu's package automatically adds your user to it.
     $ groups
     Add your user to the docker group if it isn't already there. This lets you run docker commands without sudo. You will need to log out and log back in after adding your user to the group.
     $ sudo usermod -aG docker $USER
     This step might not be necessary, but I don't know how Ubuntu's docker service is set up. Set the docker service to start at boot:
     $ sudo systemctl enable --now docker
     Now just follow the steps on lancache, and you should be good to go with a nice stable setup.
     *Edit 2* Since this is just a minimal VM to run Docker containers, I wouldn't bother with a full Ubuntu Desktop install. I would just install Ubuntu Server: https://ubuntu.com/download/server Everything will be done from the terminal, so there really isn't any need to run a full graphical environment, unless that is easier for you.
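     For reference, a minimal sketch of the lancache quick start itself, assuming the lancachenet/docker-compose repo and its stock .env variable names (the IP and paths below are placeholders; check lancache.net/docs for the current layout and names):
     $ git clone https://github.com/lancachenet/docker-compose lancache
     $ cd lancache
     # Edit .env so the cache binds to your VM's static IP and lives on your storage pool, e.g.:
     #   LANCACHE_IP=192.168.1.50
     #   DNS_BIND_IP=192.168.1.50
     #   CACHE_ROOT=/mnt/cachepool/lancache
     $ docker-compose up -d
     # Then point your router or clients at the VM for DNS so game downloads hit the cache.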
  5. All of the ready-made solutions that I am aware of are Linux based. Docker on Windows, from what I know, is still a pretty painful experience. You might be able to set up a WSL2 Ubuntu install, but it will be more complicated (maybe impossible?) to do the DNS redirects from a container inside of WSL. The easiest road would probably be to set up an Ubuntu Server VM with VirtualBox or Hyper-V. Once you have your VM up and running, you just need to get Docker installed and running. https://docs.docker.com/engine/install/ubuntu/ After that you can just follow the guide at lancache to get your containers installed and running. It might seem a bit involved, but it should be fairly painless to get it up and running. Not sure on any ready-made and performant Windows solutions, but maybe someone else that runs Windows will chime in.
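     If you do go the Ubuntu Server VM route, one shortcut that Docker documents is the convenience script; treat this as a rough sketch and double-check the page linked above for the current instructions:
     $ curl -fsSL https://get.docker.com -o get-docker.sh
     $ sudo sh get-docker.sh
     $ sudo usermod -aG docker $USER   # then log out and back in to pick up the group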
  6. Not sure if you are still looking to do this, but lancache is probably the easiest way to set this up. Downloading games in a few seconds is fun.
  7. If the system is truly headless, you will have to do a bit of Xorg hackery to be able to control fans and clocks with no display server running. The GPUs will fold at full speed (in the P2 state) without any dummy adapters. To make things easier, I would recommend having the primary GPU hooked up to a monitor at least through install and testing. Things are a little more complicated if this is going to be a truly headless machine, but not impossible. There are some gains, since there is zero display overhead if you are truly running headless with no display server.
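     As a rough sketch, the headless Xorg hackery on an NVIDIA box usually looks something like this (the coolbits value, fan speed, and offset below are generic examples rather than settings from this build, and the performance-level index in the offset varies by GPU):
     $ sudo nvidia-xconfig --enable-all-gpus --cool-bits=28 --allow-empty-initial-configuration
     # With an X server running on :0 (even with no monitor attached), drive fans and clocks through it:
     $ DISPLAY=:0 nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=70"
     $ DISPLAY=:0 nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=800"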
  8. Everything that I had previously tested at x1 was at PCIe 2.0 x1.
  9. Here's a WU running on a GPU that is more than 2x as powerful as the 1070 I tested before.
     OS: Arch Linux | Kernel: 5.6.14 | Driver: 440.82 | GPU: RTX 2080 Super
     p13406 (core_22)
     PCIe 3.0 x16: TPF - 01:09 | ppd - 2,512,601 || PCIe Utilization - 18% | GPU Utilization - 97% | GPU Power - 220W | Clocks - 1935core|7500mem
     PCIe 3.0 x4: TPF - 01:12 | ppd - 2,357,211 || PCIe Utilization - 34% | GPU Utilization - 95% | GPU Power - 217W | Clocks - 1935core|7500mem
     PCIe 3.0 x1: TPF - 01:28 | ppd - 1,744,509 || PCIe Utilization - 50% | GPU Utilization - 84% | GPU Power - 190W | Clocks - 1935core|7500mem
     PCIe 2.0 x1: TPF - 01:50 | ppd - 1,248,271 || PCIe Utilization - 55% | GPU Utilization - 77% | GPU Power - 168W | Clocks - 1965core|7500mem
     While I was testing this, I realized that the earlier numbers I reported with a 1070 are not accurate. I was using the x1 slot in my X399 board, and that slot is PCIe 2.0 x1. That means that the performance at PCIe 3.0 x1 is going to be quite a bit better than what I posted before.
  10. I've been running ZMT tubing for about the last 5 years. Prior to that I ran Norprene tubing. The ZMT does tend to have an OD that is on the large side of 16mm|5/8". I've yet to have a single fitting not work or leak, but some are a bit tough to tighten down all the way. Currently running the following fittings with 10/16mm ZMT: EK, Monsoon, Bitspower, and Barrow.
  11. PLX on the motherboard won't necessarily help. I know that there were some modded X79 BIOSes floating around that had PCIe bifurcation added. I'm not sure if ASUS ever enabled it in any of the official BIOSes. PCIe bifurcation is common on server boards, but on the consumer side it is really only common on X299, X399, and TRX40. ASRock is really the best at this, and I was able to get a modded BIOS direct from ASRock support to allow me to run 4 GPUs plus an HBA and a NIC without using ridiculously expensive active PCIe splitters. About 2 months after ASRock sent me the modded BIOS, they added the various bifurcation configurations to the official BIOS. I'm pretty sure that the X99-E WS doesn't support PCIe bifurcation, and if you use the passive M.2 expander card in that board, only the first NVMe drive will show up.
  12. In order for the 4x M.2 card to work, the motherboard will need to support PCIe bifurcation on a x16 slot. There is no PLX chip onboard the expander card, so you will need to be able to set the slot to x4x4x4x4 in the BIOS. ASRock has very good bifurcation support in their BIOS and ASUS has it enabled on many of their high end boards.
  13. All of the M.2 slots come off the CPU. The chipset handles LAN, SATA, Wi-Fi, Bluetooth, and USB. If you can run lstopo, either in Linux or WSL, you can see which of your PCIe slots are connected to the NUMA nodes that have direct access to memory. That can be more of a bottleneck than x8 vs x16. Depending on the workload, the increased latency of PCIe->CPU->Infinity Fabric->Memory is noticeable even outside of benchmarks. *Edit* I can check my X399 Taichi later today, since I think they are the same. I misremembered how X399 is split up. All of the PCIe slots can have direct access to memory, but you can check/lock your processes to run on cores with direct memory access. On Linux you can use numactl to bind processes to certain NUMA nodes (rough example below), and on Windows I think you can use something like Process Lasso.
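     A quick sketch of how I'd check and pin things on Linux; the binary at the end is just a placeholder for whatever process you want to bind:
     $ lstopo --no-io                 # show which cores and PCIe devices hang off which NUMA node
     $ numactl --hardware             # list NUMA nodes and their local memory
     $ numactl --cpunodebind=0 --membind=0 ./FAHClient   # run a process on node 0's cores and memory only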
  14. I would go top slot for the M.2 card, 3rd slot for the primary GPU, and 4th slot for the secondary GPU. You lose a little bandwidth on the 2nd GPU, but there's no way around that if you want full bandwidth for the M.2 card. I'm not sure if SLI will work with a x8 and a x16 card, but with the above layout you can still use the x1 slot and you can drop a NIC in the 2nd slot.
  15. You should take a look at Mullvad and ProtonVPN. Mullvad has a pretty stellar privacy policy, open-source clients, easy sign-up with 100% anonymity (you can mail them cash in an envelope), a no-log policy, decent speeds, independent audits, and easy WireGuard and OpenVPN configuration. ProtonVPN has most of the same positives as Mullvad, except for the easy anonymous sign-up, and they do not currently support WireGuard. ProtonVPN also owns a number of its own servers and data centers rather than everything operating on shared hosts/data centers.
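     For what it's worth, a WireGuard client config (for Mullvad or any provider that supports it) boils down to something like the sketch below. Every value here is a placeholder; the real keys, addresses, and endpoint come from the provider's config generator:
     $ cat /etc/wireguard/wg0.conf
     [Interface]
     PrivateKey = <your-private-key>
     Address = 10.64.0.2/32
     DNS = 10.64.0.1
     [Peer]
     PublicKey = <server-public-key>
     AllowedIPs = 0.0.0.0/0
     Endpoint = vpn.example.net:51820
     $ sudo wg-quick up wg0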
  16. I wouldn't sweat cable management too much. The most important thing in a rack case like that is just keeping optimal airflow front to back. In the last decent-sized machine I had running in my rack, I actually used the pile of wires to direct some airflow from the outside fan towards the HBA.
  17. I have the same setup on my TRX40 Creator (ASRock): 2x 120mm fans pulling in fresh air for the VRMs and the 10G NIC, which is actually the hottest thing in my system. I had a Heatkiller VRM block on my X399 Taichi, but after testing the Creator it didn't seem necessary.
  18. I agree, one pump should be just fine, unless you want a second for redundancy/failover. I have a single D5 in a loop with a CPU, 3 GPUs, and 2 radiators, and it has been going more or less 24/7 for the last three years with various CPU and GPU combinations.
  19. Nice build. With only one CPU and two GPUs on dual D5s in each loop, what speed do you run your pumps?
  20. With all the storage purchases, everyone must be hoarding data while staying at home. Just purchased another pair of 2TB MX500's for my file server.
  21. You will see a marginal boost in ppd for CPU tasks in a VM. Running GPU tasks in a VM is a bit more complicated, because you need to pass the GPU through from the host to the guest OS (rough sketch below). You will get near-native performance, but for a dedicated machine just installing Linux is much easier.
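     Very roughly, the host-side prep for GPU passthrough on a Linux/KVM host looks like this; the PCI IDs are examples from a hypothetical card, and IOMMU support also has to be enabled in the BIOS (plus intel_iommu=on on Intel hosts):
     $ lspci -nn | grep -i nvidia      # note the [vendor:device] IDs for the GPU and its audio function, e.g. 10de:1b80 and 10de:10f0
     # Bind those IDs to vfio-pci at boot by adding to the kernel command line:
     #   vfio-pci.ids=10de:1b80,10de:10f0
     # Then attach the device to the guest, e.g. via virt-manager's "Add Hardware -> PCI Host Device".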
  22. If you add a radiator, the order won't really matter. Whichever of the two options is easier and/or looks better will be fine. The only time that I have seen it have much of an effect is with a machine that didn't really have enough rad space for the heat load. In that scenario, I had a pair of 7990s in series and the 2nd card had an OC that was barely stable. To keep it at max clocks, I had to go Res->Pump->CPU->GPU->Rad->GPU->Rad, because the card was right at the edge of stability and the warmer water coming straight off the CPU and the other GPU was enough to push it over. That was a rather specific scenario, and could have probably been solved with an additional radiator. With what you are running, you won't encounter anything like that.
  23. A single 420 radiator would probably be OK, depending on how much you are OCing the CPU and GPU, and your tolerance for fan noise. You might need to turn up the fans and/or go push/pull when both the CPU and GPU are under heavy load.
  24. With only a single GPU in the loop I would just go with whatever is easiest and/or looks the best. Overall temps won't be that different either way. The water temp will be slightly elevated after the CPU, but unless you are right on the ragged edge of stability with a GPU OC it won't matter.
  25. Two more WUs.
     p16905 (core_21)
     PCIe 3.0 x16: TPF - 02:15 | ppd - 686,311
     PCIe 2.0 x1: TPF - 02:26 | ppd - 610,229
     p16445 (core_22)
     PCIe 3.0 x16: TPF - 01:20 | ppd - 892,930
     PCIe 2.0 x1: TPF - 01:31 | ppd - 736,022