tictoc

Members · Posts: 633 · Days Won: 33 · Feedback: 0%

Everything posted by tictoc

  1. @daddydoitall it looks like you might have a bit of extra hardware folding on your ETF passkey.
  2. I use an open source authenticator app, but I've never had any issues logging in.
  3. Thinkpad L13 Yoga Gen 3, Ryzen 7 Pro 5875U. It replaces my Thinkpad Yoga L390, which has a short somewhere in the motherboard that causes it to randomly power off.
  4. @axipher I'll be up and running again all this month, so we should have a good chance to take the top spot.
  5. With the integrated heat sink on the 905p, temps are nothing to worry about as long as you have some airflow. I just wrote 600GB to one of the 905p drives and temps never hit 50°C. Most of the enterprise-grade U.2 SSDs are in metal enclosures, so in a desktop case with any airflow at all, temps should never be an issue.
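    If you want to keep an eye on temps during a big write on Linux, the kernel (5.5+) exposes NVMe sensors through hwmon, so no vendor tools are needed. A rough Python sketch; the hwmon layout below is typical but not guaranteed, and `nvme smart-log` from nvme-cli reports the same data:

```python
# Rough sketch: poll NVMe drive temps via Linux hwmon sysfs. Kernels 5.5+
# register NVMe sensors with the hwmon name "nvme"; temp1_input is the
# composite temperature in millidegrees C. Paths are assumptions for a
# typical setup.
from pathlib import Path
import time

def nvme_temps():
    """Return {hwmon_name: temp_C} for every hwmon device named 'nvme'."""
    temps = {}
    for hw in Path("/sys/class/hwmon").glob("hwmon*"):
        if (hw / "name").read_text().strip() == "nvme":
            temps[hw.name] = int((hw / "temp1_input").read_text()) / 1000
    return temps

if __name__ == "__main__":
    while True:
        print(nvme_temps())
        time.sleep(5)  # sample every few seconds during the write
```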
  6. This is my daily, so no big power; just rolling the same base that I've had for the last 5 years.
    Exhaust: Stock UEL manifold, Grimmspeed up-pipe, TurboXS catted downpipe (v2), 3" mid-pipe, Nameless muffler delete with dual 3.5" tips
    Fuel: Cobb Accessport with a pro tune, DW65c fuel pump, ID1050 injectors, Cobb flex fuel
  7. I have a handful of 6.4TB U.2 drives, with only a few PB written, that I haven't put to use yet. Right now I am running two 1TB Optane 905p SSDs in my server on a dual U.2-to-PCIe x8 adapter. There are some other dual adapters that flip the drives, trading length for added height. As long as your board supports bifurcation, it's easy to add a few more drives if you have empty PCIe slots.
  8. Brand new 2.5L type RA short block and a new IHI VF48 Hi Flow turbo. Most everything else I swapped from the old motor. Other new parts included pretty much anything that touches oil and can't be easily cleaned, so new oil pump, AVCS solenoids, AVCS gears, and oil cooler.
  9. Been away for a bit, but the STi is back on the road.
  10. It is kind of a bright red, so... maybe 10 HP.
  11. Decided to add 5 HP to the turbo while I'm waiting to get the heads back, which is supposed to happen this week.
  12. I think I might get something back up and folding. Maybe the 2080S can return to folding duty. I'll figure it out in the next week or so.
  13. tictoc

    RyzenRouter

    Things have been busy, but I had some time over the weekend to get this partially up and running. I'm leaving the PCIe cables in place just in case this chassis gets repurposed some day. The rest of the cables are stashed under the PSU.

    Currently stress testing (mprime, y-cruncher, and stressapptest) on a minimal Arch install. At stock clocks the CPU boosts to 4050MHz all-core while running mprime blend, with the CPU fan set to "Standard" mode in the BIOS. Looks like there is quite a bit of headroom on the CPU, so I will be OC'ing the CPU on the overkill router. Power usage is currently sitting at 85W at the wall while running mprime.

    Both NICs are detected and all ports are working with the PCIe slot set to x8/x8. Bifurcation options in the BIOS are x8/x8, x8/x4/x4, and x4/x4/x4/x4.

    The ECC UDIMMs booted right up at 3200MHz and can be monitored for errors via rasdaemon (sketch below). Totally unnecessary for a router, but I have a bunch of extra ECC UDIMMs, so into the router they go. No issues at 3200MHz with JEDEC timings, so it looks like I will also be OC'ing the RAM.

    List of things still to complete:
    • Fab a custom top panel with filtered cut-outs for the cooler intake and an intake above the NICs
    • Replace the secondary 8-pin EPS with a custom-length 4-pin EPS for the bifurcation card
    • Fab/install a power switch, power LED, and activity LED
    • Install and wire up the remote power break-out board for PiKVM

    Once I finish the rest of the build, it will be on to testing VyOS.
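    For the ECC side, rasdaemon logs the events, but the kernel's EDAC counters can also be read straight out of sysfs. A minimal Python sketch, assuming the standard Linux EDAC layout (the mc* instance names and paths are typical, not guaranteed):

```python
# Minimal sketch: read the kernel's EDAC error counters, the same data
# rasdaemon consumes. Uses the standard Linux EDAC sysfs layout; the mc*
# instance names are assumptions for a typical single-socket board.
from pathlib import Path
import time

EDAC_MC = Path("/sys/devices/system/edac/mc")

def ecc_counts():
    """Return {controller: (corrected, uncorrected)} ECC error counts."""
    counts = {}
    for mc in sorted(EDAC_MC.glob("mc[0-9]*")):
        ce = int((mc / "ce_count").read_text())  # corrected errors
        ue = int((mc / "ue_count").read_text())  # uncorrected errors
        counts[mc.name] = (ce, ue)
    return counts

if __name__ == "__main__":
    while True:
        for mc, (ce, ue) in ecc_counts().items():
            print(f"{mc}: corrected={ce} uncorrected={ue}")
        time.sleep(60)  # once a minute is plenty while stress testing
```

    rasdaemon is still the better long-term logger, since it timestamps and persists events; this is just a quick way to watch the counters move during a stress run.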
  14. This would be pretty sweet if it was like first-gen Ryzen. I jumped on the 1700, threw it under water, OC'd it, and then it magically turned itself into an 1800X :)
  15. Added 44 threads of my 3960X to the mix, and that looks to be good for about 1M ppd. I forgot how little power AMD GPUs use for F@H compared to other compute work where they are actually being pushed hard. Currently running F@H on the following:
    • 6900XT @ 2800 core / 1075 mem
    • 2x Radeon VII @ 2080 core / 1200 mem
    • 3960X (44 threads) @ 4200MHz
    Total system power load (as reported by the UPS) = 1140W
  16. Current state of affairs on the STi. The mess on the old block was from a power steering pump leak that started about a year ago and that I never fixed. IHI VF48 Hi Flow, ported/polished, with a billet wheel and ceramic-coated hot side. I'm just waiting to get my decked and rebuilt heads back from the shop, and then I'll be able to put the top end back together and get her back on the road.
  17. tictoc

    2022 Team Cup!

    Sorry I couldn't make the time to get some subs in. You guys all did great and made a strong showing for your first time competing in a team-wide comp.
  18. It's a four-way splitter that cleans up the fan cable management on a MORA.
  19. tictoc

    2022 Team Cup!

    I do have an AM3+ setup that I could throw on the bench. It will be a few days, but I should be able to get something up in the next week or so.
  20. tictoc

    2022 Team Cup!

  20. I might join in. I'll take a look through the thread and see what we need to run. Off the top of my head, I think I could do quad 7970s, or dual cards on some wickedly high-clocking 290s. I'll mess around with a stripped-down Windows 7 install tonight, and then see what I can get up and running over the next week.
  21. Sorry for the double. I can't possibly agree with this more. Nickel coating inevitably flakes and corrodes, plexi cracks, and I couldn't care less about RGB. The biggest enemies of long-term uptime on loops are corrosion and growth. Copper/acetal blocks (which look the best anyhow), coupled with EPDM tubing and some biocide and inhibitor, will let you run loops 24/7 for more than a year with the only maintenance being topping off the res. If you don't care about bling, then you can pay less and end up with a more reliable loop.
  22. The last twenty minutes of that video is basically what pushed me away from NVIDIA. That, coupled with NVIDIA's disregard for open source GPU drivers, and more importantly open source user-space, has made it easy for me to mostly move away from NVIDIA for my personal machines and projects (I still have a number of NVIDIA GPUs running). With AMD I can easily have things like this on my Radeon VIIs (see the sketch below).
    As far as the overall topic is concerned, I hope to see EVGA make a return to manufacturing a high quantity of top-of-the-line PSUs like the SuperNOVA G2 1300. I still have two of these running, and they have been going more or less 24/7 at 90+% capacity for nine years (still one year left on the warranty).
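    To give a concrete idea of what "things like this" means: the open amdgpu driver exposes GPU telemetry as plain sysfs/hwmon files, so a few lines of Python can read temps, power, and fan speed with no proprietary tooling. A rough sketch; the card path and sensor file names are assumptions (the Radeon VII reports power via power1_average, other cards may use power1_input):

```python
# Rough sketch: read amdgpu telemetry from sysfs/hwmon. Card path and
# sensor file names are assumptions for a typical Linux setup; power is
# reported in microwatts and temperature in millidegrees C.
from pathlib import Path

def amdgpu_stats(card="card0"):
    """Read edge temp, power draw, and fan speed for one GPU via hwmon."""
    hwmon = next(Path(f"/sys/class/drm/{card}/device/hwmon").glob("hwmon*"))
    read = lambda f: int((hwmon / f).read_text())
    return {
        "temp_c": read("temp1_input") / 1000,        # edge temperature
        "power_w": read("power1_average") / 1_000_000,
        "fan_rpm": read("fan1_input"),
    }

if __name__ == "__main__":
    print(amdgpu_stats())
```

    Point something like a cron job or a metrics exporter at those files and you get per-card telemetry without any closed-source agent.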
  23. I don't disagree with that. I had the 400 out of my '78 Bronco in about 30 minutes. Over the last twenty years my split has been about 25/75, flat-four to V8. All the V8s except one have been pre-1980s, and they are a breeze to work on. I also must tinker with everything, so having AccessTUNER Race to dial in the Subies is pretty great.