Posts posted by tictoc

  1. On 31/03/2022 at 21:24, ArchStanton said:

    I've convinced myself that I do need to revisit my RAM and actually tune it to my system, rather than rely on the settings I plagiarized from J7.  In addition, I've got some QD's coming from Koolance itself.  I did some digging and found these:

     

    [Image: liquid-cooling quick-disconnect product listing]

     

    But pricing and availability didn't seem better than what's available in the DIY PC enthusiast market currently, so I didn't bother researching whether chrome plated brass would function for our purposes or not (I'm guessing yes, but it would be prudent to be 100% sure).

     

    Here are links to a brochure and the main website if anyone wishes to dig deeper:

    Liquid Cooling Connectors for Thermal Management : CPC (cpcworldwide.com)

    Non-Spill Liquid Cooling Connectors | CPC (cpcworldwide.com)

     

    I think I will likely make more changes to my loop in the future.  Probably going LM on both GPU die and CPU IHS.  More robust cooling on GPU backplate (I do not anticipate active, just "more").  Planning to use QD's to make maintenance far easier and facilitate connection to a "chiller" (maybe on the "redneck" side of the spectrum of possibilities).  I know the gains might be minimal, but for squeezing out every possible drop of cooling performance, I think I will have one of my larger radiators between the CPU and GPU blocks (primarily for the small benefit it might provide when not using a "chiller" of any sort).  Aesthetics are going down the toilet, but I am okay with that now as the "benching bug" has bitten me and I am infected.

     

    I have a bunch of the CPC QD's and they work great with not a drop spilled or any issues in the 4 years (maybe longer??) that I've been using them with the external radiators for my main workstation.  There was a time when the pricing on those was really good.  I think that all ended when EK started selling them stand-alone and with their Predator kits.

  2. On 06/04/2022 at 08:11, Supercrumpet said:

     

     

    That PC is on Windows and since folding is very much a secondary thing on that PC, I don't even know if I have HFM on it. Can you pull logs from the base F@H software?

    Otherwise I'd be fine with just using averages. Sorry for putting extra work on you guys.

     

    No worries on any extra work; it's pretty much just a one-liner from the log file.

     

    The client stores all the logs locally, and you shouldn't need HFM.  I haven't folded on Windows in something like 8 years, but unless something has changed, the logs should be in the FAHClient data folder at:

     

    C:\Users\{username}\AppData\Roaming\FAHClient

     

     

  3. It should be pretty easy to sort out.  Just need to parse the logs from the start of this month to when you removed the passkey, add up the points for the slot with your 2080ti, and then subtract it from your total.  The credit estimate in the logs is usually really close to the awarded credit.

     

    @Supercrumpet if you want to attach your logs, I can give them a little grep-fu.

     

    -EDIT-

     

    If you're running Linux, here's an easy one-liner.  Just substitute in the correct slot number.

    journalctl --utc --since "2022-04-03" -g 'FS00.*points'
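To sketch the bookkeeping described above, here is a hedged example against an invented log excerpt (the "Final credit estimate" wording, slot IDs, and field layout are assumptions; check the exact phrasing in your own FAHClient log before adapting it):

```shell
# Build a fake FAHClient-style log to demonstrate the idea (lines are invented).
cat > /tmp/fah_sample.log <<'EOF'
21:04:11:WU00:FS00:0x22:Final credit estimate, 135034.00 points
22:10:42:WU01:FS01:0x22:Final credit estimate, 250000.00 points
23:55:03:WU02:FS01:0x22:Final credit estimate, 175500.00 points
EOF

# Sum the credit estimates for one slot (FS01 here), i.e. the amount
# to subtract from the month's total.
grep 'FS01.*credit estimate' /tmp/fah_sample.log \
  | awk -F', ' '{ gsub(/ points/, "", $2); total += $2 } END { print total }'
```

The same grep pattern is what the `journalctl -g 'FS00.*points'` one-liner relies on, just applied to a plain log file instead of the journal.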

     

  4. 4 hours ago, ArchStanton said:

    Anyone have time to explain the steps required to correctly (if possible) run dual PSUs in a single system (specifically a single motherboard, not one of the "2 systems in one box" builds)?  Say, GPU on its own dedicated PSU and the rest of the system on the "primary" PSU.  I can envision splitting the output from the "power on/reset/etc." buttons no problem, but do we run into issues with "desynchronized" signals from the two PSUs?  I wouldn't think so but can't be 100% sure by force of imagination alone.

     

     

    I've been using the Add2PSU boards for a long time. https://www.amazon.com/Multiple-Adapter-Connector-Genetek-Electric/dp/B0711WX9MC

     

    Below is the small add-on-board in my workstation. The top 1000W PSU powers two GPUs and the bottom 1300W PSU powers the rest of the system.  Absolutely zero issues.

    [Image: Add2PSU add-on board mounted in the workstation]

     

  5. It's just silly for Corsair to list anything other than the max flow rate.  There is almost no way to estimate what the flow rate will be in an "average" loop, at least without doing some pretty involved fluid dynamic calculations.  It looks like Corsair did change the spec page and now it says: "800L/h at 2.1m pressure head (1500L/H theoretical max)".  The AquaComputer spec sheet is nice, since it includes some numbers for what can be expected in the real world, alongside the actual rating of the pump.

     

    A well "calibrated" bucket and clock, with the return line running to the bucket, will give you an exact measurement. :wink:
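As a quick sanity check on the arithmetic behind the bucket-and-clock method (the numbers below are invented, not a real measurement):

```shell
# Flow rate from a timed bucket fill: liters collected / seconds * 3600 = L/h.
awk 'BEGIN {
  liters = 2.5; seconds = 60     # invented example measurement
  printf "%.0f L/h\n", liters / seconds * 3600
}'
```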

     

    At the end of the day, as long as water and component temps are acceptable, and the pump doesn't have an unreasonably short life, then that's all that really matters.

  6. Ready to leak test.  All the difficult to access wiring is done, and I'll slap the drives in after it gets a leak test and Blitz Part 2.

     

    This case really isn't big enough for everything that is going in it, but since I've come this far, I'm just going to go ahead with it for now.  I'll probably end up dropping everything in a different case in the not too distant future.

     

    Here are a few not-so-great pics. 🙂

     

    [Image: leak testing, photo 1]

    [Image: leak testing, photo 2]

     

    Edit: Looking at the pics, and I just noticed that one of the hdd fans is backwards. :confused_frusty2:

  7. I didn't get this up and running, but I do have everything except for the wiring done.  I will post some pics when I get home this evening.

     

    Part of my goal on downsizing was to offline some data from my main file server, but I didn't get rid of as much data as I thought I would. 

    Final main storage pool will be an 8-drive btrfs raid10 with 4x 8TB Seagate EXOS and 4x 16TB Seagate EXOS.  I'll update the OP once I finalize the rest of the storage.

  8. 31 minutes ago, damric said:

    People running them at max overclock while in dark mode skewed the PPD?

     

    But it's weird that there are even 2 entries for the same GPU.

     

    There are technically two variants of the 750ti.  Not much info on the one you posted, but it looks to be an OEM only card with a GK106 die vs the regular 750ti which has a GM107 die. I don't think that card ever even made it out into the wild, except as maybe an engineering sample.  It is listed in the NVIDIA driver, and apparently someone must have one, since there are entries in LARS.

     

    750ti OEM: https://www.techpowerup.com/gpu-specs/geforce-gtx-750-ti-oem.c2462

    750ti: https://www.techpowerup.com/gpu-specs/geforce-gtx-750-ti.c2548

  9. 7 hours ago, Scc28 said:

    would be my 3090, as CPU folding isn't cutting it points-wise on the 5950?

     

    Right on.  The ETF handbook is here: https://forums.extremehw.net/topic/1090-extreme-team-folding-manual/#comment-21241

     

    If all that sounds good to you and you are down to fold 24/7, just PM me the following info, and I'll get you added to the team. 👍

     

    EHW Name:
    Folding Name:

    Folding Team:
    Unique Passkey:
    Hardware:

     

     

  10. 52 minutes ago, damric said:

    I'd hate to help you guys but you should check with @BWG and be sure that handicap multi is right on that GTX 750 Ti. It seems like it should be a lot higher, like much closer to the huge ass multi that Michele has with her GTX 750.

     

    the PPDs are like:

    FOLDING.LAR.SYSTEMS: F@H GeForce GTX 750 Ti performance as of 1/25/2022. Averages across all projects: 81,018 PPD.

    and

    FOLDING.LAR.SYSTEMS: F@H GeForce GTX 750 performance as of 1/25/2022. Averages across all projects: 69,207 PPD.

     

    So something looks off because your PPDs look right but your handicap multi seems too low? I didn't math it, just estimating.

     

     

     

     

    41 minutes ago, Avacado said:

    Unless the rules have changed, going to need about 350 sample units before that 750Ti is eligible right?

     

    I'm guessing this is a similar situation to the two Radeon VIIs that are in the database.  The 750ti was/is a wildly popular card, so the one to look at is here: https://folding.lar.systems/gpu_ppd/brands/nvidia/folding_profile/gm107_geforce_gtx_750_ti_1389

     

  11. 3 hours ago, Scc28 said:

    Hi all popped over from Overclock.

     

    Can I lend some folding power and join the team? Folding for fun again after a recent rig refresh!

     

    [Image: screenshot of the rig]

     

    Thanks

     

    Simon

    We would love to have you on the team. What are you folding on for the comp?

  12. Quote:

    Nvidia Corp. is quietly preparing to abandon its purchase of Arm Ltd. from SoftBank Group Corp. after making little to no progress in winning approval for the $40 billion chip deal, according to people familiar with the matter

     

    https://www.bloomberg.com/news/articles/2022-01-25/nvidia-is-said-to-quietly-prepare-to-abandon-takeover-of-arm?srnd=premium

     

    Non-paywalled secondary source: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Reports-No-Arm

     

  13. 1 hour ago, NBrock said:

    I'll have to check it out. Right now I pretty much only have the 3090 in my main rig and the A2000. The 3090 sees some down time when I game.

     

    I did notice on the 3090 where I had been bouncing off the power limiter even with the slider at 114% I am no longer having that issue on some of the more demanding WUs (since the shunt mod), like project 17257. It's cranking out an estimated PPD of 10342797 now (TPF is 52 seconds). This is in Windows 10 with my normal every day image. 

     

    10M ppd is just nuts.  How much power is it pulling on the more demanding WUs?

  14. 3 minutes ago, ArchStanton said:

    As a water-cooling novice, I also snagged a Corsair XD5 pump/res combo like firedfly.  My loop consists of pump->360x64 bottom radiator->480x40 front radiator->360x20.5 top radiator->CPU block->GPU block->Koolance INS-FM19 flowmeter (w/ ADT-FM03 frequency adapter).  At 100% PWM, the flowmeter indicates the pump is circulating 3100 mL/min (186 L/hr).

     

    That is adequate flow, assuming your meter is accurate.  0.5 gallons/minute is about the minimum acceptable flow rate, and above 1 gallon/minute you'll see increasingly diminishing returns.

     

    You are right in the middle, so it should be fine as long as your water and component temps are good.
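For reference, the unit conversion behind that comparison (1 US gallon = 3785.41 mL):

```shell
# Convert the reported 3100 mL/min to L/h and US gal/min.
awk 'BEGIN {
  ml_per_min = 3100
  printf "%.0f L/h\n", ml_per_min * 60 / 1000      # mL/min -> L/h
  printf "%.2f gal/min\n", ml_per_min / 3785.41    # mL/min -> US gal/min
}'
```

3100 mL/min works out to roughly 0.82 gal/min, squarely between the 0.5 and 1.0 GPM guideposts.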

  15. On 15/01/2022 at 14:44, Avacado said:

    So far no joy. The XOC BIOS bricked one of the 3 vBIOSes, and I had to get out the trusty GT710 to save it.

     

    In the future you shouldn't have to use a secondary GPU to recover a bad BIOS flash.  All you should have to do is switch to a good BIOS, boot, and then once in the OS flip the switch to the BIOS that you want to flash.

     

    To make things easier I would just run on the OC BIOS, flip both the voltage switches to on, and see how far you get with Precision X.  I think this is what I ultimately ended up doing on Windows, since I was able to pretty much max the card out that way.

    I think you might have to use an old driver to use the xoc BIOS I posted.  I honestly don't really remember for sure.  I wasn't nearly as thorough in taking notes and keeping track of changes back then.  You might try the 347.88 drivers.


     

  16. I guess you're running Windows on the headless machine?  Do you have ssh set up?  You can flash the GPU from a terminal (even on Windows), so if you plan on keeping it headless, I would set up ssh.  I'm not positive if the Windows NVIDIA driver has the bits necessary to OC from the terminal, but you can definitely use nvidia-smi to monitor the card.
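A minimal sketch of what that remote monitoring could look like (the username and hostname below are placeholders; the `nvidia-smi` query fields are standard, but verify them against `nvidia-smi --help-query-gpu` on your install):

```shell
# Build the monitoring invocation; running it needs a real host and GPU,
# so this sketch just prints the full command you'd run.
cmd='nvidia-smi --query-gpu=temperature.gpu,power.draw,clocks.sm,utilization.gpu --format=csv -l 5'
echo "ssh user@headless-rig \"$cmd\""
```

The `-l 5` flag re-polls every 5 seconds, which makes it usable as a lightweight dashboard inside an ssh session.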

     

    As far as where you're at right now, the easiest thing to do if you have another machine, would be to throw the card in that machine and flash back to the original BIOS.  It looks like that GPU only has a single BIOS, so rather than blind flashing the card, throwing it in another machine would be the easiest. 

     

    Once you're back up and running, I would personally just tweak a stock BIOS with Maxwell BIOS editor, rather than trying to cross-flash.  Unless you have really good cooling in your 4u chassis, you will probably be temp limited before you are volt limited.  The other benefit to flashing your own BIOS to the card is that it will just be a set it and forget setup.  You can just set the card to run at the max stable frequency, power, and fan speed, rather than messing around with clocks and fans from the OS.

     

     
