
tictoc

Members
  • Posts: 642
  • Joined
  • Last visited
  • Days Won: 35
  • Feedback: 0%

Everything posted by tictoc

  1. The ppd hit is not too bad, all things considered. The cost of going with a different platform probably far outweighs the benefit of the higher performance. I'll keep posting results as I see different WUs. Not sure if I mentioned this anywhere, but you are definitely going to want to run Linux on that box. Not only is it more performant than Windows, it will also handle the gear with much less brain damage. Hit me up if you need any help getting the OS up and running.
  2. First core_22 WU. This is a good ppd WU, so due to QRB the ppd penalty is larger when dropping down to PCIe x1. The way the WU queues and transfers data is also different. The TPF on every fourth frame increases by 11 seconds, so I averaged all the frame times for the WU. (A rough TPF-to-ppd sketch follows this list.)
     OS: Arch Linux | Kernel: 5.6.8 | Driver: 440.82 | GPU: GTX 1070 1974core|7604mem
     p14253
     PCIe 3.0 x16: TPF - 02:30 | ppd - 837,797
     PCIe 2.0 x1: TPF - 02:53 | ppd - 676,404
  3. The first WU that I am testing is a core_21 WU. I'm booting off this GPU, so there might be a little more performance to be had. The difference should be pretty minimal since I'm not running X, and the GPU is only rendering the console while testing. Silly NVIDIA and their P2 clocks. I forgot to bump the memory up to 8008MHz, so I left it the same for the x1 and x16 test.
     OS: Arch Linux | Kernel: 5.6.8 | Driver: 440.82 | GPU: GTX 1070 1974core|7604mem
     p16906
     PCIe 3.0 x16: TPF - 02:13 | ppd - 686,675
     PCIe 2.0 x1: TPF - 02:23 | ppd - 615,920
     I'll compare a few more WUs as they come in. Hopefully I can grab a few of the more demanding and higher ppd core_22 WUs.
  4. I'll post some comparison numbers from a running WU later today.
  5. I would definitely go with NVIDIA GPUs. As it stands, F@H is quicker on NVIDIA GPUs, and if we see CUDA-capable core_22 WUs in the future, the gap is only going to get bigger. The upstream project (OpenMM) has CUDA builds, and the performance is much better than OpenCL. I don't have any inside knowledge of whether the devs at F@H are going to push out a new core with CUDA support, but if maximum performance is the goal, they should. The x1 link will be a bottleneck, but I don't have any recent testing to compare. In Linux, running a GTX 1070 on a PCIe 3.0 x4 link reduces performance by 1-2% vs PCIe 3.0 x16. I can post up some numbers later today of performance on an x1 riser.
  6. I'm not actively monitoring coolant temps right now, because my temp probe broke. I do spot check temps if I notice CPU or GPU temps going up.
  7. The junction temps only start to creep up that high under very heavy loads when overclocked. By very heavy, I am talking about loads that match or exceed FurMark, with voltage set to 1.28v and the card power limits raised to 450W. One of my cards has a very convex die, and I actually have to under-volt that card to keep temps in check. Right now, with the cards running lighter fp32 work, temps are sitting at 37C edge and 45C junction. The 5700XT is at 47C edge and 55C junction.
  8. GPU temps on the Radeon VIIs are around 50-55C edge and 85-90C junction, but those temps are when running the VIIs at 2121-2150MHz | 1.265v | 370W with a very heavy fp64 load. With reasonable clocks, volts, and power limits, the VIIs are around 40C edge and 75-80C junction. The CPU has a static 4.3GHz OC and runs at 67C under a heavy AVX2 load. Just threw the 5700XT back in the rig after having it shelved due to the unstable drivers. Seems to be running OK now, but that blower cooler is awfully loud.
  9. That was a lie. Upgraded it about one month after that post. Current state of my main machine, now with more copper!
  10. Not the last thing I purchased, but the one I'm playing with right now. M.2 to PCIe x4 adapter
  11. That is not an error on your end. That is on Stanford's end, and it has to do with the massive increase in participation since they launched the COVID-19 WUs. F@H has greatly increased its bandwidth and server capacity thanks to donations from large companies like Microsoft and Oracle. The infrastructure is able to keep up now, but volunteers are outpacing the rate at which researchers can generate new work.
     F@H total points, February 2020 - 85,698,427,840
     F@H total points, first part of April 2020 - 365,840,831,320
     https://folding.extremeoverclocking....summary.php?s=
  12. Looks like it is time to start opening up some windows in my office. With ambient temps at 21°C, my CPU temp goes up about 5°C when the three Radeon VIIs are pegged at 300W. A cold front is moving in with lows around -15°C, so it will be cooler next week. I will probably have to add one more radiator to my loop this summer if I want to keep the 3960X running at 4.3GHz. My previous systems with 2 power-hungry GPUs were running Xeons, so I rarely had to worry about increased CPU temps.
  13. If you are folding on your CPU, the points you earn per WU will be lower than with GPU folding. The reason for this is twofold:
     • Current CPU WUs are fairly small with a low TPF (time per frame), so they complete quickly.
     • Much of protein folding can be highly parallelized, so with an optimized GPU application, GPUs are able to do more work in a shorter amount of time than CPUs.
     Additionally, faster GPUs and CPUs will see more points per WU. This is because of QRB (quick return bonus), which awards more points the faster the work is turned around from server to client and back to the server (see the ppd sketch after this list). With the newer CPU WUs, the divide is not nearly as big as it was a few years ago, before the newer a7 tasks were released.
  14. That is correct, it is just for distros that are installed via WSL. Now that the new version of WSL is running on Hyper-V, and is really just an integrated VM, this makes sense. It was already possible to do this, but this is definitely a UX improvement. I did die a little inside when I saw the Tux icon in Windows Explorer. Too bad we aren't going to see native ext4, xfs, btrfs, etc. file system mounting in Windows. Samba is definitely faster than it used to be, but there are so many other file systems that are far superior to NTFS in my opinion. *Edit after re-reading the post above*
  15. Manual 4.3GHz OC here on my 3960X, but my CPU only gets a short break once every few weeks when I have to reboot for some updates. For my situation, where the CPU is always at a minimum of 90% load, the overshoot/undershoot voltage swings with auto-boost and PBO were never really stable.
  16. Not many tweaks to do, but you will want to adjust the number of threads per CPU slot. You can do that using FAHControl. There are a few things that GROMACS (the program used behind the scenes for CPU folding) doesn't play nicely with: primarily any thread count above 32, and also any thread count that is a prime > 7 or has a large prime factor. Also, it is best to set up multiple CPU slots so that you will get the most work. What can happen if you have too high of a thread count is that you get assigned work that does not use all of the available threads. I have found that the best max number of threads for each slot is 24. Any higher (up to 32) and you can still get work, but sometimes the WUs will use fewer than 32 threads. Not sure how many GPUs you have in the system, but each NVIDIA GPU will need to be given at least 1 thread (I recommend 2, i.e. one whole core). Here is how I would set up a 32-core processor with two GPUs:
     • First CPU slot - 16
     • Second CPU slot - 16
     • Third CPU slot - 16
     • Fourth CPU slot - 10
     That leaves a few cores to feed the GPUs and a core for general overhead. To set the number of threads per slot:
     • Go to Configure -> Slots.
     • Highlight your CPU slot and then click Edit. The default is -1, which just sets it to fold on all threads minus one. Set that to 16, then click OK.
     • Now click Add, select CPU, and set it to 16.
     • Do the same thing again until you have added all the slots you want, then click Save.
     That should be it, and you should start to pick up work for each slot. (A sketch of the resulting config.xml follows this list.) Also, for anyone else folding on their CPU, all of the new tasks, which are a_7 type work units, do use AVX instructions. This means you are going to see quite a bit more heat and power usage. The old version of FAHBench is compiled with an older type of CPU task that does not use AVX instructions.
  17. E beat me to it. I've never actually used the web client, so I am no help on that front.
  18. Additionally, Zoom is not end-to-end encrypted, regardless of what their marketing info says. The only encryption happening in video calls is of the data streamed over the line, no different really than any https connection. All of the data on Zoom's servers is available to anyone at Zoom. They did update their privacy policy to specify that the malware (ad tracking and selling) is only on their web pages, and that they are not mining calls and chats to sell to 3rd parties. https://blogs.harvard.edu/doc/2020/03/27/zoom/ E2E encryption is hard for spur-of-the-moment teleconferencing, but the fact that they have tried to redefine what the term means is pretty shady in my opinion.
  19. On my 980, I've only had one instance of failing to get work that lasted more than a few seconds in the past 5 days. Although, after looking, I am running beta work on that GPU.
  20. To get into the top 800, it would take 133 million points, which isn't too much with today's GPUs.
  21. You can just grab the txt file and put it in the F@H data directory. After you get the file, just stop and then restart the client. *Edit* If you can't download the file, you can just copy the list and then save it as "GPUs.txt".
  22. The P400 is whitelisted, but being a 30W card, performance, if you can get it running, is probably somewhere in the neighborhood of a GT 1030. Here is a link to the latest list of GPUs that are whitelisted for F@H - https://apps.foldingathome.org/GPUs.txt (a quick whitelist-check snippet follows this list). If you want to post the beginning of the log, we can see what drivers are installed and whether the client is picking up the card.
  23. I'll probably have some time to sink into Minecraft. Go ahead and add me to the list. :drunk: Minecraft handle is TickTock1780.
  24. Did you ever pick up any work? The only WUs that Navi can fold are core_22 WUs. I know that Stanford is prioritizing distribution of the COVID-19 WUs, which are all based on the new core_22. I imagine your laptop CPU and Quadro are picking up regular core_21 WUs in addition to the core_22 WUs. Not sure if you have the amdgpu-pro drivers, but you will need those in order to run OpenCL projects in Ubuntu.
  25. More of a pain for the application as well, since it will need a bit of re-writing. We did make a few changes to optimize it for its current environment. Previously it was running on a server with a pair of E5-2695 v4 CPUs and 512GB of memory. That was me. First time I've fired up Minecraft in probably 3-4 years.
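For posts 2, 3, and 13 above, here is a rough sketch of how TPF maps to ppd. It is a hypothetical calculator built on the commonly cited quick return bonus formula, credit = base_credit * max(1, sqrt(k * deadline / elapsed)); the base_credit, k, and deadline_days values below are made up for illustration and are not the real constants for p14253 or p16906.

    import math

    def estimate_ppd(tpf_seconds, base_credit, k, deadline_days, frames=100):
        """Estimate points per day for a 100-frame WU from its time per frame (TPF)."""
        wu_seconds = tpf_seconds * frames        # time to finish one WU
        elapsed_days = wu_seconds / 86400        # turnaround time in days
        bonus = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
        credit = base_credit * bonus             # points earned for one WU
        wus_per_day = 86400 / wu_seconds
        return credit * wus_per_day

    # Compare the x16 and x1 TPFs from post 2 (2:30 vs 2:53), using made-up project constants.
    for label, tpf in [("PCIe 3.0 x16", 150), ("PCIe 2.0 x1", 173)]:
        print(f"{label}: ~{estimate_ppd(tpf, base_credit=9405, k=0.75, deadline_days=3):,.0f} ppd")

Because the bonus grows as turnaround shrinks, ppd scales roughly with TPF^-1.5 rather than TPF^-1, which is why the ppd penalty from the x1 link is larger on high-ppd WUs; the x16 vs x1 numbers in post 2 line up well with that scaling.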
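Post 16 walks through the slot setup in the FAHControl GUI. As a rough sketch only, the resulting config.xml for the 32-core, two-GPU example might look something like the following; exact option names and defaults can vary between client versions, and the user and team values are placeholders.

    <!-- Hypothetical FAHClient config.xml matching the slot layout in post 16. -->
    <config>
      <user v="Anonymous"/>
      <team v="0"/>

      <!-- Four CPU slots: 16 + 16 + 16 + 10 threads, leaving a few threads
           free to feed the GPUs and cover general overhead. -->
      <slot id="0" type="CPU"><cpus v="16"/></slot>
      <slot id="1" type="CPU"><cpus v="16"/></slot>
      <slot id="2" type="CPU"><cpus v="16"/></slot>
      <slot id="3" type="CPU"><cpus v="10"/></slot>

      <!-- One slot per GPU. -->
      <slot id="4" type="GPU"/>
      <slot id="5" type="GPU"/>
    </config>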
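For posts 21 and 22, here is a small snippet that pulls the whitelist from the GPUs.txt URL quoted above and does a plain substring search. The "P400" search term is just an example; it has to match how the card is spelled in the file.

    import urllib.request

    URL = "https://apps.foldingathome.org/GPUs.txt"

    def whitelist_matches(search_term):
        """Return every line of GPUs.txt that contains the search term (case-insensitive)."""
        with urllib.request.urlopen(URL) as resp:
            lines = resp.read().decode("utf-8", errors="replace").splitlines()
        return [line for line in lines if search_term.lower() in line.lower()]

    for line in whitelist_matches("P400"):
        print(line)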