tictoc · Members · 639 posts · 33 days won · 0% feedback

Everything posted by tictoc

  1. So, I had a few minutes this morning to play around with the commit that I had reverted to solve the networking issue. The core of the problem is that the tsc clocksource was being marked unstable at boot, which caused the kernel to fall back to hpet, resulting in the abysmal network performance. Initially I just increased MAX_SKEW back to 100µs and the problem went away. I then played with it a bit and found that 70µs was sufficient to keep the kernel from switching to hpet (sketch below). While getting ready to file a bug report, I found that some patches for this have already been worked on over the last few weeks. For now I'll just roll with my small patch and see what the final patches look like. https://lkml.org/lkml/2021/11/18/938
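     A minimal sketch of my workaround, with the caveat that the full define name and location (WATCHDOG_MAX_SKEW in kernel/time/clocksource.c) are taken from the kernels I happened to test, and 70µs is just the smallest margin that kept this machine's tsc from being flagged:

          /* kernel/time/clocksource.c -- local workaround, not the upstream fix.
           * The clocksource watchdog marks the tsc unstable once it drifts more
           * than WATCHDOG_MAX_SKEW from the watchdog clock, at which point the
           * kernel falls back to hpet. Widening the margin keeps the tsc. */
          #define WATCHDOG_MAX_SKEW (70 * NSEC_PER_USEC)  /* 100µs also works here */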
  2. So I swapped out the ROCm drivers on the VII for the old closed-source amdgpu-pro drivers, after remembering that performance was much better on the old drivers the last time I tested them. The card might actually be somewhat competitive in the AMD category now. p18201: 1.9M PPD on ROCm 4.5 vs 3M PPD on amdgpu-pro 20.30.
  3. The Radeon VII was down for a day, but it is back up and running now. I also bumped the clocks up a notch or two on the 2080S, and it has been stable for the last few days.
  4. For now I just reverted the commit with a patch, and I'm using that patch to build/run stable and mainline kernels for testing. I also have a few Aquantia 10Gbps NICs, and I'll throw one of those in the server tomorrow to see if it suffers from the same issue as the Intel NICs. Currently I am running 4x 1TB NVMe on an ASUS Hyper M.2 card, in md RAID-0, for a steamcache, squid, and other "victim" data; the OS is running on 2x 1TB NVMe drives in a btrfs RAID1 array (rough sketch below). I'm not well versed in running something like that on Windows, so I'm not sure how complex or robust it would be. I know that there were (are?) quite a few issues with the fake RAID (motherboard/BIOS) on AMD, but I've never used it.
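     A rough sketch of that storage layout; the device names are placeholders that will differ per system, and I've left out the filesystem on top of the md array since that choice is incidental:

          # 4x 1TB NVMe on the Hyper M.2 card -> md RAID-0 for steamcache/squid/"victim" data
          mdadm --create /dev/md0 --level=0 --raid-devices=4 \
              /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

          # 2x 1TB NVMe -> btrfs RAID1 for the OS
          mkfs.btrfs -m raid1 -d raid1 /dev/nvme4n1 /dev/nvme5n1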
  5. No actual build progress, but I do have most of the software stack up and running for testing. This hit a bit of a stall as I ran into some networking performance regressions with newer kernels. I had previously been running the 5.10.xx LTS kernels, but there are some btrfs improvements in newer kernels that haven't been back-ported to LTS, so I decided to switch over to the stable kernel. This led me down the rabbit hole of figuring out why my 10Gbps ports were maxing out at 1Gbps. I have bisected the kernel back to the offending commit, and it is a bit of an oddball: it looks to be a Threadripper-specific issue, due to some timer/clocksource changes. Reverting this commit: https://lore.kernel.org/all/162437391793.395.1913539101416169640.tip-bot2@tip-bot2/ allows the 10Gbps ports to operate at full speed (the bisect routine is sketched below). I'm playing around with it this evening, and I'll submit a bug report and/or a patch sometime tomorrow. I have a few 10Gbps NICs with the same controller as the onboard ports on the X399D8A-2T. I tested those NICs on two other systems, and those systems did not have the same issue as my Threadripper X399 machine. The regression occurs with the onboard 10Gbps ports and with the PCIe X550-T2 NIC on the Threadripper machine.
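     The bisect itself was the standard git routine, roughly as follows; the good/bad endpoints here are illustrative (my actual starting points were the last working LTS build and the first failing stable build), and the pass/fail test was simply whether the 10Gbps ports could push more than 1Gbps:

          git bisect start
          git bisect bad               # stable kernel where the ports cap at ~1Gbps
          git bisect good v5.10        # LTS kernel that ran at full 10Gbps
          # build and boot each candidate, test throughput, then mark it
          # with 'git bisect good' or 'git bisect bad' until one commit remains;
          # then carry a local revert of that commit:
          git revert <offending-commit>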
  6. So far this is the year of no winter here. High temp today was 62°F (17°C), and I still have the window open in my office. Usually by this time of year daytime highs are around freezing or a bit above, with overnight lows anywhere from 0-20°F (-18°C to -7°C). I'm ready for winter, but I can't really complain about wearing a t-shirt in December.
  7. An interesting little CPU for a new router.
  8. I'm not sure what would be compatible with your network/provider, but in the quick search I did on that router there were quite a few posts discussing options for gear that would work as a transparent bridge. Unfortunately nearly every ISP handles this differently, so the only thing I can suggest is to search for other users on your same ISP who are running their modem/router as a transparent bridge and using their own equipment for routing and firewall.
  9. Right on. Logs attached for posterity, but no need to add the points (30.2M) to the official total. We'll just use this as a teaching moment. lol 2080S-etf_log.txt 2080S-etf_points.txt
  10. I don't know anything about that router/modem, but from a quick search it looks like it can't be configured as a bridge. I'm assuming this is ADSL or VDSL2, so you should be able to use some other router/modem that can be configured as a bridge, and have your pfSense box take care of all the routing and firewall. I'm in a similar situation in the States, but luckily there are a number of different routers I can use as dumb bridges, rather than using the ISP-provided gear.
  11. The price on that switch is pretty good. Multi-gig 10GbE switches can get pretty expensive, and prices haven't really come down in the last 5 years. You might want to double-check the firmware version on that switch. The original firmware had a bug that caused the fan to ramp up and down, which could get pretty annoying if the switch is in your office. Depending on how hard you are pushing the switch, you might also be able to swap out the stock fan for a quieter Noctua fan and still keep temps in check.
  12. No HFM, but I do have the log. I'm not too worried about getting my points back for the ETF; I figure part of being in the competition is keeping on top of what's happening, especially since I am a team captain. I'll just have to up the clocks a few notches to make sure I can chart a win next month. I knew that the client would bork my slots, but when I double-checked I didn't compare the config to my list of passkeys. I just saw that it had correctly kept all of my options, and didn't think to check that the passkey in the slot was the correct one.
  13. Last week I pulled the 980 out of the machine with my 2080S, and the client, in its infinite wisdom, somehow managed to swap my passkey from the removed 980 to the 2080S's slot. I've got it sorted now, but that's why I had no points on the 2080S over the last few days. There's no way I'll be able to make up the ground and get back ahead of @Bastiaan_NL.
  14. Just in case there's any confusion about the ECC that is standard with DDR5: the ECC included in those modules is on-die ECC, which can automatically correct in-chip bit flips. It is mostly there to preserve stability as process nodes get smaller and bandwidth increases with DDR5. On-die ECC won't correct in-transit errors, nor will it report in-chip errors to the platform or the OS; for that there will still be traditional RDIMMs/LRDIMMs. On-die ECC is closer in concept to the error correction in NAND SSDs. *Edit* On the topic of the ADATA XPG, I had a few DDR3 XPG kits and had no issues with them.
  15. If you are seeing any throttling, then adjusting the offset in tandem with the max turbo might keep it at a higher, more consistent clock. It can be a bit tedious to dial in, but might be worth it if you can squeeze out a bit more performance.
  16. No direct experience with it, but the Arctic Liquid Freezer II is supposed to be a very performant CLC and is less expensive. I don't think you'll see a huge performance increase over the D15; maybe a few degrees cooler, but not a night-and-day difference. Have you tried adjusting your OC and AVX offsets? CPU tasks for F@H are now AVX-accelerated, and are quite a bit more stressful than the older CPU WUs.
  17. Progress has been slow to nonexistent over the last few months. I did get the pumps installed, and plumbed up the res, pumps, and radiators last night. Hopefully I'll drop the board in tonight and get the CPU and GPU in the loop for leak testing. The downsizing from my 42U rack to the 12U rack is mostly complete. Once I have the server done I'll finish up the rack, which will house the following gear: two 2U UPSes, one 1U UPS, one 24-port patch panel, one 12-port patch panel, a 24-port 1G switch, an 8-port 10G switch, a 1.5U router/firewall, and a 1U PDU. I'll probably throw up a thread on the router/firewall and the build-out for the rack once this machine is up and running.
  18. The latest BIOS looks to be identical to the beta BIOS that ASRock sent me back in May. I was actually able to save my settings before flashing to the new BIOS, and all of the settings were intact after the update.
  19. @AllenG ASRock Rack has released an official BIOS with the additional PCIe bifurcation options. I haven't tested it yet to see how it compares to the beta BIOS I got from ASRock, but I assume it is basically the same. I will be moving forward with this build over the next few days, so if anyone is still interested, there will actually be some progress incoming.
  20. Thanks E. I was offline for the last week, and wasn't able to follow up. What we have certainly works, and thanks again for trying a few different options.
  21. I think you already noticed this, but I don't see the button in the editor. --Edit-- I was on my phone for that test. The code snippet button is visible when I'm not on mobile.
  22. It had been in the editor since the restart of the site, and I used it when I made the OP.
  23. We did have a button for code blocks with syntax highlighting, and it worked in both light and dark modes. I think the button was next to the justify buttons, or maybe next to the emoji button. It looked like this: <>
  24. The input for this looks more or less like the current code block, but it doesn't appear to have a background behind the code sections. I took a look at the plugin, and I don't think it's quite what I was asking about. The code blocks that we had before worked well; it was really more of a formatting issue. With inline code, only a portion of a sentence or paragraph is highlighted, rather than an entire block. In markdown, words surrounded by a single backtick will be highlighted like the screenshot I posted above, and triple backticks (or four preceding spaces, depending on the flavor of markdown) will render a code block like what we had with the code <> flag; a quick example is below. Reference: https://www.freecodecamp.org/news/how-to-format-code-in-markdown/ It really isn't a big deal, and what we had before definitely worked fine in standard and dark modes. No need to waste your time or dev time on something that isn't used that often here.
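      A quick example of the two forms, shown as raw markdown source (exact rendering depends on the markdown flavor):

           Inline: wrap a term like `MAX_SKEW` in single backticks.

           Fenced block (triple backticks):
           ```
           git bisect good v5.10
           ```

           Indented block (four leading spaces, in flavors that support it):
               git bisect good v5.10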