
tictoc (Members) · Posts: 647 · Days Won: 43 · Feedback: 0%

Everything posted by tictoc

  1. I'm looking to swap out the Radeon VII for either an R9 290 or a Fury X. I'll let @BWG know when/if I make the switch. Now that I've pulled my server board off my main bench, the 2080 and whatever AMD GPU I end up going with will be under water for January.
  2. Very nice. Did you have all those cards OC'd? Your ppd is not much less than my ppm.
  3. Small update. Pumps installed, and tubing run for everything except pump->CPU and GPU->radiator. Not sure if I want to keep the drain on the pump inlet or swap it to the outlet. Either way, there will be some tipping/flipping of the case in order to get the CPU and GPUs drained for the inevitable GPU swap out. Looking to pull the machine off my test bench later today. Then I'll crack open the CPU/GPU blocks and give them a good cleaning. After that I'll get the system installed into the case, leak test it, and then give it a final clean and flush with Blitz Part 2. I might actually have this thing up and running in the next couple of days.
  4. Maybe just bad luck on the pumps. Both of my test benches have D5s mounted vertically using the EK fan mounting bracket. One is an older EK combo unit that is basically the same as what you have, and the other has an EK top with a male-male fitting attaching the pump top to the reservoir. I also ran a D5 with the XSPC Photon pump top/reservoir combo for a number of years without issue. I guess that pump top could be causing some sort of weird cavitation, but that seems pretty unlikely since I see quite a few rigs using those pump/res combos. Looks like the pics are visible now.
  5. Generator takes care of the essential circuits, so no worries on that front. Power is back on now, and folding has resumed.
  6. Power has been out for the last 8 hours, so the folding machines have been shut down. No ETA on when it will be back up, but probably not until midday tomorrow at the earliest. I somehow still have internet, at least for now.
  7. Looks like I'm back in business running on a slightly older version of ROCm OpenCL.
  8. I had another 7 hours of nothing on the Radeon VII today. I'm going to see if there is an older ROCm driver that has ppd at least in the neighborhood of this old amdgpu-pro driver. Even if the ppd is less, I know that every ROCm version after 4.0 was rock stable, so there won't be any downtime.
  9. I have the pieces and parts to add a pi-kvm to the main workstation, but like my other current projects, it is just waiting until work/life slows down a bit. If I had waited until I was done with the new home server to offline my other machines, this wouldn't have been an issue since I had a monitoring and alerting stack in place, along with remote ssh access via my home VPN. I hope to get all that sorted out and everything back online in the next month.
  10. Only downside to the old driver I'm running in Linux is that it will randomly crash and not recover, and the only way to get the GPUs back online is a full system reboot. That led to a bit of downtime a few days ago, because I didn't notice that the GPUs had halted until I got home and noticed how cold it was in my office.
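A tiny cron-style watchdog can catch this kind of silent halt. This is only a sketch: the log path, the 30-minute threshold, and the idea of using log-file mtime as a liveness signal are my assumptions, not anything built into the folding client.

```python
import time
from pathlib import Path

# Hypothetical log location -- adjust for your client/distro.
LOG_PATH = Path("/var/lib/fahclient/log.txt")
STALE_AFTER_S = 30 * 60  # no log writes for 30 minutes -> assume the GPUs halted

def log_is_stale(path, threshold_s, now=None):
    """True if the file hasn't been modified within threshold_s seconds."""
    now = time.time() if now is None else now
    try:
        age = now - path.stat().st_mtime
    except FileNotFoundError:
        return True  # a missing log counts as stale
    return age > threshold_s
```

Run it from cron or a systemd timer and fire whatever alert fits (mail, webhook). It won't fix the crash, but it turns hours of silent downtime into minutes.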
  11. So, I had a few minutes this morning to play around with the commit that I had reverted to solve the networking issue. The core of the problem is that the tsc clocksource was being marked unstable at boot, which caused the kernel to fall back to hpet, resulting in the abysmal network performance. Initially I just increased the MAX_SKEW back to 100µs and the problem went away. I then played with it a bit and found that 70µs was sufficient to keep the kernel from switching to hpet. Getting ready to file a bug report, and it looks like there are some patches that have been worked on over the last few weeks. For now I'll just roll with my small patch and see what the final patches look like. https://lkml.org/lkml/2021/11/18/938
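For anyone wanting to check whether their kernel has made the same tsc-to-hpet fallback, the active and available clocksources are exposed under sysfs at /sys/devices/system/clocksource/clocksource0/. A small helper, parameterized on the base path so it can be pointed at a test copy:

```python
from pathlib import Path

# Real sysfs location on Linux; taking it as a parameter keeps the helper testable.
CLOCKSOURCE_DIR = Path("/sys/devices/system/clocksource/clocksource0")

def current_clocksource(base=CLOCKSOURCE_DIR):
    """The clocksource the kernel is actually using (e.g. 'tsc' or 'hpet')."""
    return (base / "current_clocksource").read_text().strip()

def available_clocksources(base=CLOCKSOURCE_DIR):
    """All clocksources the kernel considers usable on this machine."""
    return (base / "available_clocksource").read_text().split()
```

If current_clocksource() reports hpet on hardware where tsc should be fine, you are likely hitting the same skew-detection fallback described above.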
  12. So I swapped out the ROCm drivers on the VII for the old closed-source amdgpu-pro drivers, after remembering that performance was much better on the old drivers the last time I tested them. The card might actually be somewhat competitive in the AMD category now. p18201 | ROCm 4.5: 1.9M ppd | amdgpu-pro 20.30: 3M ppd
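For scale, the driver swap above works out to roughly a 1.6x throughput gain (figures from the post):

```python
# ppd figures quoted above for p18201 on the Radeon VII
rocm_ppd = 1.9e6        # ROCm 4.5
amdgpu_pro_ppd = 3.0e6  # amdgpu-pro 20.30

speedup = amdgpu_pro_ppd / rocm_ppd
print(f"amdgpu-pro: {speedup:.2f}x the ppd of ROCm "
      f"({(speedup - 1) * 100:.0f}% more)")
```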
  13. The Radeon VII was down for a day, but it is back up and running now. I also bumped the clocks up a notch or two on the 2080S, and it has been stable for the last few days.
  14. For now I just reverted the commit with a patch, and I'm using that patch to build/run stable and mainline kernels for testing. I also have a few Aquantia 10Gbps NICs, and I'll throw one of those in the server tomorrow to see if it suffers from the same issue as the Intel NICs. Currently I am running 4x 1TB NVMe on an ASUS Hyper M.2 card, in md RAID-0 for a steamcache, squid, and other "victim" data. The OS is running on 2x 1TB NVMe drives in a btrfs RAID1 array. I'm not well versed in running something like that on Windows, so I'm not sure how complex and robust it would be. I know that there were (are?) quite a few issues with the fake RAID (motherboard/BIOS) on AMD, but I've never used it.
  15. No actual build progress, but I have had most of the software stack up and running for testing. This hit a bit of a stall when I ran into some networking performance regressions with newer kernels. I had previously been running on the 5.10.xx LTS kernels. There are some btrfs improvements in newer kernels that haven't been back-ported to the LTS kernel, so I decided to switch over to the stable kernel. This led me down the rabbit hole of figuring out why my 10Gbps ports were maxing out at 1Gbps. I have bisected the kernel back to the offending commit, and it is a bit of an oddball. It is probably a Threadripper-specific issue, due to some timer/clocksource changes. Reverting this commit: https://lore.kernel.org/all/162437391793.395.1913539101416169640.tip-bot2@tip-bot2/ allows the 10Gbps ports to operate at full speed. I'm playing around with it this evening, and I'll submit a bug report and/or a patch sometime tomorrow. I have a few 10Gbps NICs with the same controller as the onboard ports on the X399D8A-2T. I tested those NICs on two other systems, and those systems did not have the same issue as my Threadripper X399 machine. The regression occurs with the onboard 10Gbps ports and with the PCIe X550-T2 NIC on the Threadripper machine.
  16. So far this is the year of no winter here. High temp today was 62 (17°C), and I still have the window open in my office. Usually by this time of year daytime highs are around freezing or a bit above, with overnight lows anywhere from 0-20 (-18°C to -7°C). I'm ready for winter, but I can't really complain about wearing a t-shirt in December.
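The Fahrenheit/Celsius pairs above follow from the standard conversion formula; a quick check:

```python
def f_to_c(f):
    """Convert Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

for f in (62, 20, 0):
    print(f"{f}F = {f_to_c(f):.1f}C")
```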
  17. An interesting little CPU for a new router.
  18. I'm not sure what would be compatible with your network/provider, but in the quick search I did on that router there were quite a few posts discussing options for gear that would work as a transparent bridge. Unfortunately nearly every ISP handles this differently, so the only thing I can suggest is to search for other users on your same ISP who are running their modem/router as a transparent bridge and using their own equipment for routing and firewall.
  19. Right on. Logs attached for posterity, but no need to add the points (30.2M) to the official total. We'll just use this as a teaching moment. lol 2080S-etf_log.txt 2080S-etf_points.txt
  20. I don't know anything about that router/modem, but from a quick search it looks like it cannot be configured as a bridge. I'm assuming this is ADSL or VDSL2, so you should be able to use some other router/modem that can be configured as a bridge, and have your pfSense box take care of all the routing and firewall. I am in a similar situation in the States, but luckily there are a number of different routers that I can use as dumb bridges, rather than using the ISP-provided gear.
  21. The price on that switch is pretty good. Multi-gig 10GbE switches can get pretty expensive, and the price hasn't really come down in the last 5 years. You might want to double-check the firmware version on that switch. The original firmware had a bug that caused the fan to ramp up and down, which could get pretty annoying if you have the switch in your office. Depending on how hard you are pushing the switch, you might also be able to swap out the stock fan for a quieter Noctua fan and still keep temps in check.
  22. No HFM, but I do have the log. I'm not too worried about getting my points back for the ETF; I figure part of being in the competition is keeping on top of what's happening, especially since I am a team captain. I'll just have to up the clocks a few notches to make sure I can chart a win next month. I knew that the client would bork my slots, but when I double-checked it I didn't compare the config to my list of passkeys. I just saw that it correctly kept all of my options, and didn't think to double-check that the passkey in the slot was the correct passkey.
  23. Last week I pulled the 980 out of the machine with my 2080S, and the client, in its infinite wisdom, somehow managed to swap my passkey from the removed 980 to the 2080's slot. I've got it sorted now, but that's why I had no points on the 2080S over the last few days. There's no way I'll be able to make up the ground and get back ahead of @Bastiaan_NL.
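A small config sanity check could have caught the swap before the points were lost. The XML below is a simplified stand-in, not FAHClient's actual config.xml schema; the per-slot passkey element and the slot_passkeys/mismatched_slots helpers are assumptions for illustration only.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a client config with per-slot passkey options.
SAMPLE_CONFIG = """
<config>
  <user v='tictoc'/>
  <slot id='0' type='GPU'>
    <passkey v='aaaa1111'/>
  </slot>
  <slot id='1' type='GPU'>
    <passkey v='bbbb2222'/>
  </slot>
</config>
"""

def slot_passkeys(xml_text):
    """Map slot id -> passkey (None when the slot has no passkey option)."""
    root = ET.fromstring(xml_text)
    keys = {}
    for slot in root.iter("slot"):
        pk = slot.find("passkey")
        keys[slot.get("id")] = pk.get("v") if pk is not None else None
    return keys

def mismatched_slots(xml_text, expected):
    """Return slot ids whose passkey differs from the expected mapping."""
    actual = slot_passkeys(xml_text)
    return sorted(sid for sid, pk in actual.items() if expected.get(sid) != pk)
```

Diffing the parsed slots against a known-good mapping after any hardware change would flag exactly the kind of silent passkey shuffle described above.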
