tictoc

Members · Posts: 637 · Days Won: 33 · Feedback: 0%

Everything posted by tictoc

  1. Back up and running. I just remembered that I had a spicy BIOS flashed to the second slot on my Fury X, so now I am running that and slowly increasing the clocks a bit. I also forgot just how good of a clocker this card is relative to other Fiji GPUs.
  2. All of my QD3s are 4 years old or older. I wouldn't worry too much about my recent experience. I used to run a bunch of the QD3s without issue, and I sold all of my other ones when I was getting rid of a bunch of unused fittings a few years back. While I'm here, here's a run of the CPU-Z bench in Linux via Wine. The multi-threaded bench is a little bugged, with CPU usage topping out at 68%.
  3. I have a few that stick open and one with a slow drip, which is why I currently don't have any on the main bench. I'm using CPC QDCs on my main machine and on the testing bench, and they have performed flawlessly.
  4. k4m1k4z3 was a member of the legendary team Infinity during our reign of dominance.
  5. You should be able to do this all from the command line without too many issues. Can you post your current working xorg.conf?
  6. I'll be back up and running later today. The 2080S and a Fury-X are currently on the bench, and I'm just giving everything a bit of a cleaning with some Blitz Part 2. @BWG I'll let you know if I'll still be temping in a slot. I have a few messages out, but so far no replies. ;)
  7. No points from me to finish out December, since the year ended with an extended power outage. There must still be some blown transformers or downed lines, because the power is kind of back now at 90-95v.
  8. I'm going to call it #1 in the world.
     Anonymous - default setting for new installs
     ICCluster - Spare time on the GPU compute cluster at EPFL
     NVIDIA SaturnV - NVIDIA AI/ML clusters
     EarlRuby - VMWare engineer who works on GPU accelerated containers. I'm assuming this is all running somewhere in one of VMWare's datacenters.
     BWG - One dude crushing it from his home office.
  9. Missed the bot on this one. Nice to see you here @LarsL.
  10. I'm looking to swap out the Radeon VII for either an R9-290 or a Fury-X. I'll let @BWG know when/if I make the switch. Now that I've pulled my server board off my main bench, the 2080 and whatever AMD GPU I end up going with will be under water for January.
  11. Very nice. Did you have all those cards OC'd? Your ppd is not much less than my ppm.
  12. Small update. Pumps are installed, and tubing is run for everything except pump->CPU and GPU->radiator. Not sure if I want to keep the drain on the pump inlet or swap it to the outlet. Either way, there will be some tipping/flipping of the case in order to get the CPU and GPUs drained for the inevitable GPU swap out. Looking to pull the machine off my test bench later today. Then I'll crack open the CPU/GPU blocks and give them a good cleaning. After that I'll get the system installed into the case, leak test it, and then give it a final clean and flush with Blitz Part 2. I might actually have this thing up and running in the next couple of days.
  13. Maybe just bad luck on the pumps. Both of my test benches have D5s mounted vertically using the EK fan mounting bracket. One is an older EK combo unit that is basically the same as what you have, and the other has an EK top with a male-male fitting attaching the pump top to the reservoir. I also ran a D5 with the XSPC Photon pump top/reservoir combo for a number of years without issue. I guess that pump top could be causing some sort of weird cavitation, but that seems pretty unlikely since I see quite a few rigs using those pump/res combos. Looks like the pics are visible now.
  14. Generator takes care of the essential circuits, so no worries on that front. Power is back on now, and folding has resumed.
  15. Power has been out for the last 8 hours, so the folding machines have been shut down. No ETA on when it will be back up, but probably not until midday tomorrow at the earliest. I somehow still have internet, at least for now.
  16. Looks like I'm back in business running on a slightly older version of ROCm OpenCL.
  17. I had another 7 hours of nothing on the Radeon VII today. I'm going to see if there is an older ROCm driver that has ppd at least in the neighborhood of this old amdgpu-pro driver. Even if the ppd is less, I know that every ROCm version after 4.0 was rock stable, so there won't be any downtime.
  18. I have the pieces and parts to add a pi-kvm to the main workstation, but like my other current projects, it is just waiting until work/life slows down a bit. If I had waited until I was done with the new home server to offline my other machines, this wouldn't have been an issue since I had a monitoring and alerting stack in place, along with remote ssh access via my home VPN. I hope to get all that sorted out and everything back online in the next month.
  19. The only downside to the old driver I'm running in Linux is that it will randomly crash and not recover, and the only way to get the GPUs back online is a full system reboot. That led to a bit of downtime a few days ago, because I didn't notice that the GPUs had halted until I got home and noticed how cold it was in my office. (A rough sketch of the kind of watchdog that would catch this is at the end of this list.)
  20. So, I had a few minutes this morning to play around with the commit that I had reverted to solve the networking issue. The core of the problem is that the tsc clocksource was being marked unstable at boot, which caused the kernel to fall back to hpet, resulting in the abysmal network performance. Initially I just increased the MAX_SKEW back to 100µs and the problem went away. I then played with it a bit and found that 70µs was sufficient to keep the kernel from switching to hpet. Getting ready to file a bug report, and it looks like there are some patches that have been worked on over the last few weeks. For now I'll just roll with my small patch and see what the final patches look like (a quick check for the hpet fallback is sketched after this list). https://lkml.org/lkml/2021/11/18/938
  21. So I swapped out the ROCm drivers on the VII for the old closed-source amdgpu-pro drivers, after remembering that performance was much better on the old drivers the last time I tested them. The card might actually be somewhat competitive in the AMD category now. p18201 | ROCm 4.5: 1.9M ppd | amdgpu-pro 20.30: 3M ppd
  22. The Radeon VII was down for a day, but it is back up and running now. I also bumped the clocks up a notch or two on the 2080S, and it has been stable for the last few days.
  23. For now I just reverted the commit with a patch, and I'm using that patch to build/run stable and mainline kernels for testing. I also have a few Aquantia 10Gbps NICs, and I'll throw one of those in the server tomorrow to see if it suffers from the same issue as the Intel NICs. Currently I am running 4x 1TB NVMe on an ASUS Hyper M.2 card, in md RAID-0 for a steamcache, squid, and other "victim" data. The OS is running on 2x 1TB NVMe drives in a btrfs RAID1 array (the layout is sketched at the end of this list). I'm not well versed in running something like that on Windows, so I'm not sure how complex and robust it would be. I know that there were (are?) quite a few issues with the fake RAID (motherboard/BIOS) on AMD, but I've never used it.
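
A rough sketch of the kind of watchdog that would catch the halted-GPU situation described above (driver crashes, the cards stop folding, and nothing recovers without a reboot). The gpu_busy_percent file is the stock amdgpu sysfs attribute; the poll interval, idle threshold, and reboot hook are placeholder choices, so treat this as an outline rather than a finished tool.

```python
#!/usr/bin/env python3
# Rough sketch only: reboot the box if the GPUs look dead for too long.
# gpu_busy_percent is the stock amdgpu sysfs attribute; the thresholds,
# poll interval, and reboot hook are placeholder choices.
import glob
import subprocess
import time

POLL_SECONDS = 60
MAX_IDLE_POLLS = 15      # ~15 minutes of zero activity before giving up

def any_gpu_busy() -> bool:
    """True if any amdgpu card reports non-zero utilization."""
    busy = False
    for path in glob.glob("/sys/class/drm/card*/device/gpu_busy_percent"):
        try:
            with open(path) as f:
                busy |= int(f.read().strip()) > 0
        except OSError:
            pass             # unreadable after a driver crash counts as idle
    return busy              # note: no amdgpu sysfs entries at all also reads as idle

idle_polls = 0
while True:
    idle_polls = 0 if any_gpu_busy() else idle_polls + 1
    if idle_polls >= MAX_IDLE_POLLS:
        # A full reboot is the only thing that brings the GPUs back here.
        subprocess.run(["systemctl", "reboot"])
        break
    time.sleep(POLL_SECONDS)
```

Running it from a systemd unit or cron keeps it out of the way; anything that calls systemctl reboot needs to run as root.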
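For the tsc/hpet clocksource issue above, this is a quick way to check whether a given kernel has fallen back to hpet without digging through dmesg. The clocksource sysfs files are the standard kernel interface; the printed advice is just a convenience.

```python
#!/usr/bin/env python3
# Quick check for the tsc -> hpet fallback described above.
# The clocksource sysfs files are the standard kernel interface.
from pathlib import Path

cs = Path("/sys/devices/system/clocksource/clocksource0")
current = (cs / "current_clocksource").read_text().strip()
available = (cs / "available_clocksource").read_text().split()

print(f"current clocksource:    {current}")
print(f"available clocksources: {' '.join(available)}")

if current == "hpet":
    print("Running on hpet - the watchdog most likely marked tsc unstable "
          "at boot, which is where the terrible network throughput comes from.")
```

If carrying a patch is not appealing, booting with tsc=reliable should also keep the watchdog from demoting the tsc, at the cost of disabling that runtime verification entirely.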
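And for reference, the cache/OS disk layout from the last post boils down to a couple of commands; wrapped up, it looks roughly like the following. The device names and the filesystem on the md array are placeholder assumptions, not what is actually in the box.

```python
#!/usr/bin/env python3
# Rough sketch of the storage layout described above.
# Device names and the filesystem on the md array are placeholders;
# do not point this at disks you care about.
import subprocess

CACHE_DISKS = ["/dev/nvme2n1", "/dev/nvme3n1",
               "/dev/nvme4n1", "/dev/nvme5n1"]   # 4x 1TB on the Hyper M.2 card
OS_DISKS = ["/dev/nvme0n1", "/dev/nvme1n1"]      # 2x 1TB for the OS

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# md RAID-0 across the cache drives (steamcache, squid, other "victim" data)
run(["mdadm", "--create", "/dev/md0", "--level=0",
     f"--raid-devices={len(CACHE_DISKS)}", *CACHE_DISKS])
run(["mkfs.xfs", "/dev/md0"])                    # filesystem choice here is just an example

# btrfs RAID1 for the OS drives, mirroring both data and metadata
run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1", *OS_DISKS])
```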