Posts: 640 · Days Won: 34 · Feedback: 0%
Everything posted by tictoc
-
Things have been busy, but I had some time over the weekend to get this partially up and running. I'm leaving the PCIe cables in place in case this chassis gets repurposed some day; the rest of the cables are stashed under the PSU.

Currently stress testing (mprime, y-cruncher, and stressapptest) on a minimal Arch install. At stock clocks the CPU boosts to 4050 MHz all-core while running mprime blend, with the CPU fan set to "Standard" mode in the BIOS. Looks like there is quite a bit of headroom on the CPU, so I will be OC'ing the CPU on the overkill router. Power usage is sitting at 85 W at the wall while running mprime.

Both NICs are detected and all ports are working with the PCIe slot set to x8/x8. Bifurcation options in the BIOS are x8/x8, x8/x4/x4, and x4/x4/x4/x4.

The ECC UDIMMs booted right up at 3200 MHz, and can be monitored for errors via rasdaemon. Totally unnecessary for a router, but I have a bunch of extra ECC UDIMMs, so into the router they go. No issues at 3200 with JEDEC timings, so it looks like I will also be OC'ing the RAM.

List of things still to complete:
- Fab a custom top panel with filtered cut-outs for the cooler intake and an intake above the NICs
- Replace the secondary 8-pin EPS with a custom-length 4-pin EPS for the bifurcation card
- Fab/install a power switch, power LED, and activity LED
- Install and wire up the remote power break-out board for the PiKVM

Once I finish the rest of the build, it will be on to testing VyOS.
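For anyone curious, the ECC monitoring side is just the rasdaemon package doing its thing. A minimal sketch of how I have it set up on Arch (service name and `ras-mc-ctl` subcommands are from the rasdaemon package; output obviously depends on your platform's EDAC support):

```shell
# Install and start the RAS event daemon (Arch: pacman -S rasdaemon)
sudo systemctl enable --now rasdaemon

# Quick per-DIMM summary of corrected/uncorrected ECC error counts
sudo ras-mc-ctl --error-count

# Full logged error history from rasdaemon's sqlite database
sudo ras-mc-ctl --errors
```

Corrected errors ticking up under a memory OC is the early warning that the RAM clocks or timings are too aggressive, long before anything actually crashes.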
-
This would be pretty sweet if it was like first-gen Ryzen. I jumped on the 1700, threw it under water, OC'd it, and then it magically turned itself into an 1800X :)
-
Added 44 threads of my 3960X to the mix, and that looks to be good for about 1M PPD. I forgot how little power AMD GPUs use for F@H compared to other compute work where they are actually being pushed hard. Currently running F@H on the following:
- 6900 XT @ 2800 core / 1075 mem
- 2x Radeon VII @ 2080 core / 1200 mem
- 3960X (44 threads) @ 4200 MHz

Total system power load (as reported by the UPS): 1140 W
-
Current state of affairs on the STi. The mess on the old block was from a power steering pump leak that started about a year ago and that I never fixed. IHI VF48 Hi-Flow, ported/polished, with a billet wheel and ceramic-coated hot side. I'm just waiting to get my decked and rebuilt heads back from the shop, and then I'll be able to put the top end back together and get her back on the road.
-
Sorry I couldn't make the time to get some subs in. You guys all did great, and made a strong showing for your first time competing in a team-wide comp.
-
I do have an AM3+ setup that I could throw on the bench. It will be a few days, but I should be able to get something up in the next week or so.
-
I might join in. I'll take a look through the thread and see what we need to run. Off the top of my head, I think I could do quad 7970s, or dual on some wickedly high-clocking 290s. I'll mess around with a stripped-down Windows 7 install tonight, and then see what I can get up and running over the next week.
-
EVGA Exiting GPU Market, Citing Abusive Treatment by NVIDIA as Reason
tictoc replied to Mr. Fox's topic in Hardware News
Sorry for the double. I can't possibly agree with this more. Nickel coating inevitably flakes and corrodes, plexi cracks, and I couldn't care less about RGB. The biggest enemies of long-term uptime on loops are corrosion and growth. Copper/acetal blocks (which look the best anyhow), coupled with EPDM tubing and some biocide and inhibitor, will let you have loops running 24/7 for more than a year with the only maintenance being topping off the res.
-
The last twenty minutes of that video is a big part of what pushed me away from NVIDIA. That, coupled with NVIDIA's disregard for open source GPU drivers, and more importantly open source user-space, has made it easy for me to mostly (I still have a number of NVIDIA GPUs running) move away from NVIDIA for my personal machines and projects. With AMD I can easily run things like this on my Radeon VIIs.

As far as the overall topic is concerned, I hope to see EVGA return to manufacturing high-quality, top-of-the-line PSUs like the SuperNOVA G2 1300. I still have two of these running, and they have been going more or less 24/7 at 90+% capacity for nine years (still one year left on the warranty).
-
I don't disagree with that. I had the 400 out of my '78 Bronco in about 30 minutes. I am about 25/75, flat-four to V8, over the last twenty years. All the V8s except one have been pre-1980, and they are a breeze to work on. I also must tinker with everything, so having AccessTUNER Race to dial in the Subies is pretty great.
-
While it's a bit more involved than pulling a small block out of a pre-'80s vehicle, it's actually pretty easy to pull the motor on these cars. I've pulled at least 10 EJs, and the way the motor is designed, nearly everything can stay attached and the whole thing comes out. It is pretty much as simple as:
- Remove hood
- Disconnect and remove battery
- Remove intercooler
- Disconnect fuel lines
- Disconnect and pull radiator
- Remove A/C and PS belts
- Disconnect downpipe from turbo
- Disconnect main harness
- Disconnect misc. other wiring harnesses
- Unbolt pitch-stop
- Disengage clutch
- Unbolt A/C compressor and PS pump and flip them out of the way
- Remove the 6 bellhousing-to-block bolts
- Remove 2 engine mount nuts

Personally, without a lift I think it is faster and easier to yank the motor when replacing the clutch, since there is probably some other maintenance to be done that will be easier with the motor out: plugs, timing belt, water pump, etc. This motor has been out of the car twice; once for a new clutch, and once to do the head gaskets. All in, it's 1-1/2 to 2 hours to pull the motor.
-
I posted a big guide in a spoiler, and it got eaten by the forum. I'm following this thread now, so hit me up if you have any Linux questions. I'm not super familiar with Unraid (my VM hosts are running Debian or Arch), but I would probably go the container or VM route for folding on Unraid. For last month's folding comp I ran F@H on my GPUs via podman, with a slightly modified Docker container from the official F@H containers: https://github.com/FoldingAtHome/containers

I was running on AMD GPUs, but the NVIDIA container is fairly similar. It should be pretty easy to set up if you have any interest in running containers and want to isolate the different GPUs in their own environments. There is a thread in the Unraid forum, but it appears to be dead now. I haven't looked at the Unraid community container, so I'm not sure why it isn't working. The official GPU container from F@H worked fine for me.
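Roughly, the setup looks like this on an AMD box. This is just a sketch: the image name is my reading of the linked repo (check its README for the current tag), and `/dev/kfd` plus `/dev/dri` are the standard ROCm compute/render device nodes you need to pass through for the GPU to be visible inside the container. Swap in your own user/team/passkey.

```shell
# Pull the GPU client image from the official FoldingAtHome containers
# (image name is an assumption based on the linked repo; verify in its README)
podman pull docker.io/foldingathome/fah-gpu:latest

# Run one client per GPU, passing through the AMD compute/render devices.
# Trailing args are standard FAHClient options (--user, --team, --gpu, --smp).
podman run -d --name fah-gpu0 \
  --device /dev/kfd --device /dev/dri \
  docker.io/foldingathome/fah-gpu:latest \
  --user=Anonymous --team=0 --gpu=true --smp=false
```

For NVIDIA the idea is the same, but you'd use the NVIDIA container toolkit's device hooks instead of passing `/dev/kfd`/`/dev/dri` by hand.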
-
The indestructible STi has traveled its last mile. Rod went a-knocking yesterday. I have a fresh shortblock to put in it, so I might post some build pics, and some pics of whatever carnage there is once I pull and disassemble the dead motor.
-
Docking station for the Deck. Downloading from my SteamCache at about 600 Mb/s is a workout for that little CPU. Load is roughly 90% and temps are at 90°C. Not sure where the bottleneck is, but 600 Mb/s beats the heck out of my 40 Mb/s internet service.
-
The gem of this build is the 1.5U chassis. This oddball chassis size will allow for quiet 60 mm fans and a dead-silent cooler for the CPU. This router will slot right into the top of my quiet networking/power rack. The switches and UPSes in the rack have already had transplant surgery to swap out the stock 40 mm and 80 mm fans for some quieter Noctua fans.

Parts list:
- CPU: AMD Ryzen 3 PRO 4350G
- Motherboard: ASRock B550M-ITX/ac
- RAM: 2x 16GB Micron DDR4-3200 ECC UDIMM
- SSD: ADATA XPG SX8200 Pro 500GB
- NICs: Intel X550-AT2 10GBASE-T; Intel E1G44HT 10/100/1000
- PSU: FSP FlexGURU 500W

The stripped-down chassis. There will be some mods:
- IO will face the front along with the 2 NICs; the provided plates don't have provisions for that setup.
- The PSU will be relocated to what will now be the rear, for tidy power cable management.
- Add a filtered intake to the top for the CPU cooler.

I am also going to try and stuff one of my PiKVMs into the chassis. I'm not sure if I'll be able to squeeze it between the PSU and the front of the chassis, but there should be just enough room to get it in there.
-
QDCs installed, and now I'm mostly ready to put it into full production. I have been running it through various testing scenarios over the last few months. Now I just need to spend a day finishing the OS setup, and then I can finally migrate all my data.
-
Looks like I only have some pics from leak testing after adding the 6900 XT. Someday I'll pull the 6900 XT and give it a bit o' copper shine to match the Radeon VIIs.
-
Ended up on a much longer than planned for break from F@H, and the forum in general. Not sure when/if I'll be back up folding anything. I'll update the thread when I figure out what I am going to do. @axipher it looks like your 750ti is still folding away.