Everything posted by tictoc

  1. Missed this over the weekend. If it's not too late, and you are interested in something like an X99-WS or an X399 Taichi, let me know. I also have a spare Xeon that can go with the X99 board. I'm across the pond, but if time isn't an issue...
  2. Should have a few more GPUs to throw at this when I get home tonight. On the CPU front, my 3960X folding on 40 threads is at about 1M ppd.
  3. Don't forget about the M.2 slots if you want to shove a few more GPUs in the X399 machine. https://extremehw.net/topic/50-post-your-last-purchase/?do=findComment&comment=4205
  4. This ^^ Unpopular opinion incoming. One of the reasons something like Ethereum was designed to be ASIC-resistant was so that the distribution of ETH would be very wide and not consolidated with just a few people holding all the coin. This implementation appears to only target the Ethereum algorithm, so other coins could still be mined. The real question is what NVIDIA does moving forward. Locking hardware functions behind a closed driver and/or BIOS for market segmentation never benefits the consumer, and is only done to increase the company's bottom line. This is nothing new for NVIDIA; they've been doing it to professional users with their Quadro GPUs and drivers for years. Same silicon, but 10x worse performance in many professional applications without a Quadro and the secret sauce in the drivers. What's next, throttling all CUDA, OpenCL, and ML functions so that exactly zero GPGPU can be done on RTX cards?
  5. Probably too late, but the ASRock X399 Taichi is a great option for multiple GPUs with bifurcation support. I ran 5 GPUs in mine with no issues. Not sure if the latest BIOS has x8x8 bifurcation for all slots, but ASRock support sent me a BIOS with x8x8, x4x4x8, and x4x4x4x4 options for all the slots. I would definitely recommend a board that has auxiliary power for the PCIe slots if you plan on running more than 3 GPUs 24/7. Otherwise you have a chance of burning up the 24-pin connector on the motherboard (at best) or toasting the board (at worst) if your GPUs are pulling full slot power (a rough slot-power estimate is sketched after this list). I've had great luck with these risers https://peine-braun.net/shop/index.php?route=common/home, ever since the maker first went into full production over on [H].
  6. Here is the link to the Phoronix article where the info came from: https://www.phoronix.com/scan.php?page=news_item&px=AMD-Sched-Invariance-Fix-Merged and some benchmarks: https://www.phoronix.com/scan.php?page=article&item=linux511-regress-over&num=1 It should be noted that many users would not have been affected by the performance regression. I have always just used the performance governor on my AMD machines (a minimal governor sketch follows after this list). The schedutil governor has pretty much been a hot mess since it was added to the kernel, and the ondemand and performance governors still outperform it. There are also still schedutil bugs with Zen 3 CPUs whose fixes probably won't land in the kernel for a few more weeks.
  7. That script basically matches my burn-in procedure for new HDDs, except that I usually do short S.M.A.R.T. -> extended S.M.A.R.T. -> badblocks -> extended S.M.A.R.T. (a minimal version is sketched after this list).
  8. I think you are thinking of 10 or 10/100 Mbps Ethernet, and yes, that is slow, but 10 Mbps was blazing fast in 1982. Once you get up to high enough speeds the copper itself does become a limiting factor, but for consumer gear, looking at the slow adoption of 10 Gbps, that is probably a decade away. There are very few switches on the market for consumers (at a reasonable price) that can do 10 Gbps over standard RJ45 copper connectors. Anything with more than one or two 10G RJ45 ports is going to be $300+.
  9. I have a 10 Gbps copper switch and it has no issues running at full rated speed on CAT 6A. Longest run is about 30 meters, but 6A should be good out to 100 meters.
  10. ^^This. Distilled plus a few drops of Mayhems Biocide+ and Inhibitor+ keeps all of my machines running nice and clean. A 15 mL bottle of each is enough to treat something like 30 L of distilled, so basically a lifetime of fluid if you only have one system (the dosing math is sketched after this list).
  11. It might be the client. I had issues with the latest one, and didn't take the time to sort it out. Maybe try rolling back to an older version of the F@H client.
  12. Sorry for the somewhat OT double post. @Diffident I'm not a fan of the way the latest F@H client is packaged in the AUR, and I think it might actually cause some issues when running F@H on ROCm. I didn't have time to fully test it, so I've just continued to use the older client. The new client in the AUR broke some scripts that I have for F@H, and there are a few other things I don't really like about how it's installed. I'm currently running the 7.5.1 client. If you want to give that a go, here's my PKGBUILD along with the post-install script and systemd service. foldingathome.zip
  13. Not sure what the special sauce is for F@H on ROCm. Previously I had no issues running the closed-source bits of the pro driver, but now the performance for F@H has really tanked. Just took a quick look at the rocm-opencl-runtime Gentoo package, and as far as I can tell, all of the necessary libs, compilers, and applications should be getting built. I build a bit more of the ROCm stack, but that is all for HIP, profiling, and debugging. FWIW, I've only tested and run F@H + ROCm 4.0 on kernels 5.10.7 and 5.10.8.
  14. ROCm is AMD's open source compute software stack. https://www.amd.com/en/graphics/servers-solutions-rocm OpenCL is still very hit and miss on the open source stack. I am actually testing some HIP code that was ported from CUDA, and it is working surprisingly well. For F@H the ROCm opencl-runtime is definitely the way to go, at least on anything Vega. Here's a quick comparison of some WUs on a Radeon VII, first on the AMD closed-source OpenCL driver (amdgpu-pro-20.30_1109583): Slot 01 p14914 - tpf=3:17; Slot 00 p14911 - tpf=3:10; Slot 02 p14912 - tpf=5:31; Slot 03 p13442 - tpf=11:46. Then on the ROCm 4.0 OpenCL driver: Slot 01 p14914 - tpf=1:14; Slot 00 p14911 - tpf=1:17; Slot 02 p14912 - tpf=1:16; Slot 03 p13442 - tpf=1:42 (the speedup math is sketched after this list). The p1491x WUs are pretty brutal on AMD GPUs, only about 800-900k ppd on a Radeon VII. That is at least tolerable compared to the 250k ppd on the amdgpu-pro driver. Some other WUs are actually pretty decent, like p17321, which is around 2.9M ppd.
  15. I'll probably have to upgrade my bulk media storage sooner rather than later. I grabbed a few remastered Blu-ray box sets, and that is going to eat up a few TBs of storage. On a side note, the X-Files Blu-rays look really good, especially since I had originally ripped Season 1 and 2 from VHS.
  16. Have a few GPUs folding for EHW while testing the latest ROCm. I'll keep them going through the comp.
  17. This build is still alive, but it's stuck on my test bench for now while I work out the software stack and make some final decisions on hardware. New RAM has been dead stable for 3 weeks running at 2933 CL19. Still not sure which way to go on the GPUs in the server. I had plans to swap some hardware around, but the lack of GPU availability has kind of slowed progress.
  18. Would be nuts if you could unlock the disabled CCD. I'll jump on the train if it can actually happen. My 960T clocked better than my 1055T or my 1090T.
  19. More speed is always a good thing, but for everyday user systems the benefits are pretty much nonexistent. PCIe 4.0 SSDs have been pretty underwhelming for really any workload. Hopefully the updated controllers will change that for the next round of SSD launches.
  20. Had a bit of an accident late last winter with my single-burner lantern, so I gave myself an early Xmas present. NEMO Sonic 0
  21. I could have saved myself hours of testing if I had just replaced the garbage stick sooner. Swapped out the bad stick for one of the new sticks, loaded a 2866 CL16 profile that I had saved, and I am 30 minutes into a stressapptest run with zero errors. With the old stick in, I would start to see errors within 20 seconds of starting a stress test.
  22. Have you tried playing on your 5650X with the 1.05 patch?
  23. Two new sticks are set to be delivered on Wednesday, so hopefully I can get it sorted out then. One of the other sticks also showed a single error, but I'm pretty sure that only showed up when trying to get 128GB to run at 3200. I already have mismatched sticks, so more than likely, even if the replacement is good, I will be maxed out at 2866-2933. Four of the sticks (including the bad one) have some funky Micron Rev H? chips that I'm pretty sure are some sort of ES ICs. This late in the life cycle of DDR4 you start to see some funny stuff getting put on modules, especially with something like ECC UDIMMs, which are not really made in any real quantity.
  24. I've run it all the way up and down from 1.25v to 1.6v while testing. That stick just doesn't want to run stable at anything over 2666, even at CL22.
  25. One less-than-stellar stick of RAM. That stick won't really run at anything over 2666. Rolled the dice on a new pair of sticks, and we'll see what ICs I get.
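
A rough sketch of the slot-power concern from post 5. It assumes the usual PCIe CEM budget of roughly 75 W per physical x16 slot (that figure is not from the post itself) and simply adds up what several GPUs could route through the motherboard when no auxiliary PCIe slot power is connected.

```python
# Back-of-the-envelope estimate of how much power flows through the
# motherboard (and ultimately its 24-pin / aux connectors) when several
# GPUs draw their full PCIe slot budget. 75 W is the usual per-slot CEM
# limit; treat the numbers as illustrative, not measured.

SLOT_BUDGET_W = 75  # max a GPU may pull from a physical x16 slot

def board_slot_load(num_gpus: int, per_gpu_slot_draw_w: float = SLOT_BUDGET_W) -> float:
    """Total power routed through the motherboard's PCIe slots."""
    return num_gpus * per_gpu_slot_draw_w

for gpus in (3, 4, 5):
    print(f"{gpus} GPUs at full slot draw ~ {board_slot_load(gpus):.0f} W through the board")
```

Three to five GPUs at full slot draw is roughly 225-375 W pushed through the board itself, which is why the post recommends a board with auxiliary PCIe slot power for anything beyond three GPUs running 24/7.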
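For post 6, a minimal sketch of checking and switching the Linux cpufreq governor through sysfs. The sysfs paths are the standard cpufreq policy files; which governors (performance, ondemand, schedutil) are available depends on the kernel config, and writing the governor requires root.

```python
# Minimal helper for inspecting/setting the cpufreq scaling governor
# through sysfs. Run as root to actually change the governor.
import glob

POLICY_GLOB = "/sys/devices/system/cpu/cpufreq/policy*"

def current_governors() -> dict[str, str]:
    """Return {policy: governor} for every cpufreq policy."""
    out = {}
    for policy in sorted(glob.glob(POLICY_GLOB)):
        with open(f"{policy}/scaling_governor") as f:
            out[policy] = f.read().strip()
    return out

def set_governor(governor: str = "performance") -> None:
    """Write the requested governor to every policy (needs root)."""
    for policy in sorted(glob.glob(POLICY_GLOB)):
        with open(f"{policy}/scaling_governor", "w") as f:
            f.write(governor)

if __name__ == "__main__":
    print(current_governors())
    # set_governor("performance")  # uncomment when running as root
```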
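For post 7, a minimal sketch of that burn-in order (short SMART test -> extended SMART test -> badblocks -> extended SMART test), assuming smartctl and badblocks are installed and the target disk holds no data, since badblocks -w is destructive. The sleep durations are placeholders; real SMART self-tests take many hours on large drives, so in practice you would poll the test status instead.

```python
# Destructive HDD burn-in sketch: short SMART self-test, extended SMART
# self-test, a badblocks write pass, then a final extended SMART test.
# WARNING: badblocks -w erases the drive. Run as root against a disk
# with nothing on it, e.g.  python3 burnin.py /dev/sdX
import subprocess
import sys
import time

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def smart_test(dev: str, kind: str, wait_s: int) -> None:
    run(["smartctl", "-t", kind, dev])   # start the self-test
    time.sleep(wait_s)                   # crude wait; poll the status in real use
    run(["smartctl", "-a", dev])         # dump attributes and self-test results

def burn_in(dev: str) -> None:
    smart_test(dev, "short", 5 * 60)
    smart_test(dev, "long", 8 * 60 * 60)
    run(["badblocks", "-wsv", dev])      # destructive write+verify pass
    smart_test(dev, "long", 8 * 60 * 60)

if __name__ == "__main__":
    burn_in(sys.argv[1])
```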
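The dosing arithmetic behind post 10, assuming the "15 mL treats ~30 L" figure from the post (about 0.5 mL per litre); the 1.5 L loop volume is just an example, not from the post.

```python
# How far a 15 mL bottle goes if 15 mL treats ~30 L of distilled water.
BOTTLE_ML = 15
TREATS_L = 30
dose_per_litre_ml = BOTTLE_ML / TREATS_L        # ~0.5 mL per litre

loop_volume_l = 1.5                             # example loop size
dose_per_fill_ml = dose_per_litre_ml * loop_volume_l
print(f"Dose per fill: {dose_per_fill_ml:.2f} mL")
print(f"Fills per bottle: {BOTTLE_ML / dose_per_fill_ml:.0f}")
```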
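And a quick sketch of the speedup implied by the TPF numbers in post 14. It only converts the quoted times per frame into an amdgpu-pro vs ROCm ratio; TPF alone does not give PPD, since that also depends on base credit and the quick-return bonus.

```python
# Speedup of ROCm 4.0 over amdgpu-pro, using the TPF values quoted in
# post 14 (Radeon VII). Lower TPF = faster.
def tpf_to_seconds(tpf: str) -> int:
    minutes, seconds = map(int, tpf.split(":"))
    return minutes * 60 + seconds

# project: (amdgpu-pro TPF, ROCm 4.0 TPF)
RESULTS = {
    "p14914": ("3:17", "1:14"),
    "p14911": ("3:10", "1:17"),
    "p14912": ("5:31", "1:16"),
    "p13442": ("11:46", "1:42"),
}

for project, (pro, rocm) in RESULTS.items():
    speedup = tpf_to_seconds(pro) / tpf_to_seconds(rocm)
    print(f"{project}: {speedup:.1f}x faster on ROCm 4.0")
```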