tictoc

Members
  • Posts: 639
  • Days Won: 33
  • Feedback: 0%

Everything posted by tictoc

  1. That is pretty normal for the latest WUs on AMD GPUs. Some WUs are a little better at utilizing the GPU, but that is about average for most WUs.
  2. Are we still taking the OS into account for the multiplier, or are we just using the All OS average PPD?
  3. Sorry for continuing the OT. What @Avacado said for 2.0 x16 vs 3.0 x16. I don't have a thread, but I did some testing for the first Community Folding Project thread. I use nvidia-smi to log GPU data (see the logging sketch after this list). Some of the queries depend on the GPU architecture, the driver version, and the OS (Windows or Linux). Once upon a time, PCIe utilization was reported as a percentage by the driver (via nvidia-smi), but that is no longer the case (NVIDIA said it was noisy and not accurate). Now you can only get the Tx and Rx throughput in MB/s.
  4. Linux and Windows performance with AMD GPUs is pretty much equal now. What are you using for a hypervisor? I regularly pass through some different AMD GPUs for testing, and the overhead is negligible. All of my testing has been with ESXi or KVM.
  5. I tested this last year with a 2080 Super. Performance would be somewhere between the 3.0 x1 and 3.0 x4 results, so probably a 15-20% hit in ppd (a quick breakdown of these numbers is sketched after this list).

     OS: Arch Linux | Kernel: 5.6.14 | Driver: 440.82 | GPU: RTX 2080 Super | WU: p13406 (core_22)

     PCIe 3.0 x16: TPF - 01:09 | ppd - 2,512,601 || PCIe Utilization - 18% | GPU Utilization - 97% | GPU Power - 220W | Clocks - 1935core|7500mem
     PCIe 3.0 x4:  TPF - 01:12 | ppd - 2,357,211 || PCIe Utilization - 34% | GPU Utilization - 95% | GPU Power - 217W | Clocks - 1935core|7500mem
     PCIe 3.0 x1:  TPF - 01:28 | ppd - 1,744,509 || PCIe Utilization - 50% | GPU Utilization - 84% | GPU Power - 190W | Clocks - 1935core|7500mem
     PCIe 2.0 x1:  TPF - 01:50 | ppd - 1,248,271 || PCIe Utilization - 55% | GPU Utilization - 77% | GPU Power - 168W | Clocks - 1965core|7500mem
  6. GPU projects have started for the BOINC Pentathlon, so my output is going to drop to zero for the next 7 days. After that I'll start testing a few different GPUs to see what card is the best.
  7. Beta BIOS #6 looks to be a winner. In addition to the x4 split now working, there are also x8x4x4 and x4x4x8 splits for PCIE2/3 and PCIE4/5. This should make it possible to split all the slots, including the two slots that are x8 electrical. I haven't fully tested all the options, but this works. Using some old drives to make sure that the final storage configuration will work.

     Drive layout:
     • 6x SATA HDDs on an HBA in PCIe slot 5
     • 4x SATA HDDs on the mini-SAS connector (chipset)
     • 2x SATA SSDs (chipset)
     • 1x SATA SSD (ASMedia SATA DOM)
     • 4x NVMe SSDs on an ASUS Hyper M.2 X16 v2 in PCIe slot 6
     • 2x M.2 NVMe SSDs (onboard slots)
     • 1x U.2 Optane 905p on a PCIe adapter in PCIe slot 3

     All of the above is detected and working with 2 GPUs also installed in PCIe slots 2 and 4. The only empty slot is the top x4 slot, which is wired to the chipset.
  8. Sweet, then I can do whatever is best in the Wildcard slot. @BWG I promise to switch GPUs no more than once per week.
  9. I don't think the Radeon VII is going to be a very good card in this comp (it should probably get back on money-making duty). There aren't really any recent entries on LARS, and if there were, the multiplier for this card would probably be something more like 4.4. The actual average ppd is probably down something like 20-25% compared to what it was a few months ago. I'll be testing out some other GPUs over the next few weeks, but maybe a GTX 980 or an R9 290 makes more sense. @Avacado, no matter what I do you will be folding on an NVIDIA GPU, correct? And @axipher, do you think you'll stick with the 5700?
  10. Congrats on the major milestone.
  11. That is a bummer on the GPU block. I have been mostly all Heatkiller for the last little bit, but they never made a Radeon VII block. I always go with acetal/copper blocks when possible, since everything I build is more about performance and reliability than looks. Right now it's a little tough, since it's mostly get what you can get, as soon as you can get it. Hopefully the RMA goes quickly and you get the new block soon.
  12. My output will be down for the next 2 weeks due to the BOINC Pentathlon. At some point in time it will probably drop to zero for 5-7 days.
  13. I can't read your post. Do you want all the TC folders to PM @BWG or @BWG?
  14. Without a BMC, initial setup will probably be a bit painful, with a whole bunch of connecting and disconnecting. Once you have everything set up, definitely save the profile, and then never update the BIOS. As far as cooling goes, I would swap out the AIO for something like an NH-C14S if you have the clearance. If not, you could punch some holes in the side of the case and mount the AIO in the bottom of the chassis. With the AIO out of the way you will have proper front-to-back airflow, and with 3-4k+ RPM fans you can probably keep the GPUs from throttling. It's not going to be quiet, but it can be done. If that's not an option, then I would hunt around on eBay for some 980 Ti/Titan heatsinks. https://www.ebay.com/itm/154408987068
  15. Make sure you have Above 4G Decoding enabled in the BIOS. That board should allow you to save BIOS profiles to a USB stick. Then it's just a matter of loading the profile from the USB stick if you have to clear the CMOS.
  16. That is looking great. I didn't know that MM was still making cases. I sold my MM case more than a decade ago. It is awesome to see an OG enthusiast company still around. Now you've got me configuring a custom case for my server. lol
  17. Just realized that if I swap the switch and the patch panel in the rack, I can keep the wires from crossing and not have to label the patch panel 2,1,4,3,6,5... Now we have achieved maximum OCD relief.
  18. These p177x units are a real ppd killer on AMD gear. 2M-ish average ppd down to 440k.
  19. It would be nice to see the raw points. Tooltips would make it easy to see the comparison side by side, w/o having to add a column to one of the tables.
  20. For all you cable management lovers, here's a little preview of my new 12U network/power rack:
  21. Niche board with a niche use case, but every other Threadripper board I've run worked without a hitch. I'm not sure why this doesn't work on this board. I don't care about using NVMe RAID mode or booting the OS off the drives (RAID will be handled by the OS); I'm just looking for what has become a pretty standard option on most server motherboards. Right now I don't really need this to work, but I figured I might as well try to iron out all the possible kinks before I load the hardware in the box.
  22. After you watch that video you are then forced to go down the "Vocal Coach Reacts to Nightwish" rabbit hole. *Edit* They are an epic band to see live, even if you're not into that style of metal.
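A minimal sketch of the kind of nvidia-smi logging described in post 3, assuming Python 3 and a recent NVIDIA driver. The query fields used here (utilization.gpu, power.draw, clocks.sm, clocks.mem) are standard --query-gpu fields; as the post notes, PCIe utilization is no longer reported as a percentage, and on current drivers the Tx/Rx throughput in MB/s is exposed separately via `nvidia-smi dmon -s t` rather than through --query-gpu. The file name and sample interval are arbitrary choices for the example.

```python
import csv
import subprocess
import time

# Standard --query-gpu fields; exact availability varies by GPU
# architecture, driver, and OS, as noted in the post above.
FIELDS = "timestamp,utilization.gpu,power.draw,clocks.sm,clocks.mem"

def sample():
    """Return one row of GPU telemetry from nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().split(", ")

with open("gpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(FIELDS.split(","))
    while True:                 # Ctrl-C to stop logging
        writer.writerow(sample())
        time.sleep(5)           # one sample every 5 seconds
```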
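And to put numbers on the "somewhere between the 3.0 x1 and 3.0 x4 results" estimate in post 5, a quick back-of-the-envelope using only the ppd figures from that post:

```python
# ppd figures copied from the 2080 Super results above (p13406, core_22)
baseline = 2_512_601            # PCIe 3.0 x16

results = {
    "3.0 x4": 2_357_211,
    "3.0 x1": 1_744_509,
    "2.0 x1": 1_248_271,
}

for link, ppd in results.items():
    hit = (1 - ppd / baseline) * 100
    print(f"PCIe {link}: {hit:4.1f}% ppd hit vs 3.0 x16")

# Prints roughly: 6.2% (3.0 x4), 30.6% (3.0 x1), 50.3% (2.0 x1).
# A PCIe 2.0 x4 link landing between the 3.0 x4 and 3.0 x1 results
# is consistent with the 15-20% estimate in the post.
```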