
NavJack27
Premium Platinum - Lifetime

  • Posts: 68
  • Joined
  • Last visited
  • Days Won: 4
  • Feedback: 0%

Everything posted by NavJack27

  1. NICE DUDE! I'm glad you've finally been able to witness the power!
  2. Here are some designs that I've made so far. I'll just edit this post when I make more or change previous ones. (I'm not a scrub; no 3rd-party tools were used to help make these.)
  3. *long sigh* yes, I'm getting one. 3080 Ti.
  4. dang weed growers are stealing your electricity!
  5. Annoyingly, if you want to adjust THESE right here, you have to do it every work unit. If you're just setting the overall priority and the selected cores, you can automate that with some utilities.
  6. I've got this one work unit right now that is just confusing. It's a GPU one: P16433 R490 C0 G8. It won't go over about 60-70% GPU usage, the PPD is ultra low, and the ETA is long. Just a heads up if you end up getting one that has your rig running oddly cool with low usage.
  7. Good to know about the packet-size thing. It's annoying and time-consuming to test all the variations, as you know. :-P
  8. Process Hacker is the only tool I'm aware of that lets you adjust the affinity of the individual threads of a running process. Not sure why other utilities don't do this.
  9. For whatever reason, every time I use an Intel iGPU I get the feeling that they don't "cheat" with actual driver performance tweaks... if you get what I'm saying. Like, AMD and Nvidia have spent years doing Game Ready-type tweaks to make games run more optimally, and my feeling is that with Intel, if you use their GPU you're getting something akin to a 'reference rasterizer' driver, for better or worse. I could be wrong. But yeah, I was way more hyped for Intel graphics when I met the team a year ago at GDC 2019... but now, not so much, especially with all this Xe and Ponte Vecchio stuff going to datacenters and compute first. I mean, that is kind of neat, but people are expecting gaming performance, and if they don't make a showing for that... yeah. But I met Bob Swan in person and DAMN he has nice teeth, so I could be entirely wrong about everything! I met Jim Keller... that's cool.
  10. That could definitely be true. But that brings up a further complication in terms of using something like Process Lasso to keep foreground applications off the dedicated cores/threads for a totally optimal setup.
  11. I'm going to start this thread off by testing my theory about GPU folding and program thread affinity. (A rough Python sketch of the manual pinning step described here appears after this list.)

      Program Thread Affinity Effect on Folding@Home GPU Folding

      Programs running during the tests:
      • HFM.NET
      • Process Hacker (2 windows: the main window and the fahcore_22.exe properties window)
      • Notepad.exe
      • TimerTool set to 0.5 ms

      Hardware specs:
      • 8700K, 5 GHz core, 4.4 GHz cache
      • 2070 Super with a tiny overclock on both memory and core, and a static 100% fan speed

      Let me explain: manual program thread affinity is basically me giving the FAHCORE_22 program thread that uses the most CPU its own processor to work with. That is processor 10 of 11 on my 8700K. I also give the rest of the program threads their own affinity by deselecting processors 10 and 11.

      Work unit: P16434 (R168, C0, G14)

      The results (percent complete - frame time):
      • Manual affinity (idle priority for everything except the main program thread, which is on highest priority): 06% - 00:01:46, 07% - 00:01:43, 08% - 00:01:44, 09% - 00:01:45, 10% - 00:01:43
      • Auto affinity (normal priority): 11% - 00:01:45, 12% - 00:01:44, 13% - 00:01:45, 14% - 00:01:43, 15% - 00:01:45
      • Manual affinity (normal priority for everything): 16% - 00:01:44, 17% - 00:01:46, 18% - 00:01:43, 19% - 00:01:43, 20% - 00:01:44
      • Manual affinity (everything on one processor, a la Task Manager, with high priority): 21% - 00:01:47, 22% - 00:01:44, 23% - 00:01:45, 24% - 00:01:44, 25% - 00:01:48
      • Auto affinity retest (paused the work unit and started it back up, kept everything running): 30% - 00:01:43, 31% - 00:01:43, 32% - 00:01:43, 33% - 00:01:46, 34% - 00:01:43

      Observed testing issues: once I set affinity for a program thread, that thread gets what is called an ideal processor. This is very sticky and hard to undo so that the scheduler just picks a new processor on its own. Some of the results might not be totally indicative of real-world behavior because of this and other factors, like other things running at the same time. But it would be unfair to run one set of tests without the programs that were running during the other sets.

      • Manual affinity retest (paused the work unit and closed everything except HFM.NET and Notepad once affinity was set): 37% - 00:01:46, 38% - 00:01:43, 39% - 00:01:43, 40% - 00:01:43, 41% - 00:01:46
      • Auto affinity retest (paused the work unit and closed everything except HFM.NET and Notepad): 44% - 00:01:42, 45% - 00:01:45, 46% - 00:01:42, 47% - 00:01:42, 48% - 00:01:43

      Thoughts and conclusions: either my theory was correct at one time and Stanford has since updated things, or it was never right to begin with, or the last time I messed with this I was on Maxwell and Pascal GPUs and maybe the drivers work differently... any number of things. But at least my results show that leaving affinity on auto and not changing a damn thing nets the best results on an otherwise idle computer.
  12. It's hard to find time to bench when there are so many proteins to fold.
  13. Beta cuz beta. Gimme the beta. I'll be your testing monkey! Next unit at 100% so nothing sits in a queue losing time and points. The packet-size thing: in the past I noticed that setting it to small would get me quicker, less point-heavy CPU work units. I just do the opposite for the GPU (big), cuz reasons.
  14. I've been running with these config flags. I'd like to know if other people notice a difference running these. CPU: GPU: Oh, and also don't forget to set that checkpoint to 30 min instead of the default 15. (A hypothetical sketch of where these options live in config.xml appears after this list.)
  15. Again, just post your scores up in here and maybe I'll get around to setting up a Google Sheet to keep track of everyone's bidness.
  16. Yeah, overclocking these never seems all that engaging. Do a +whatever offset to the core, and if it's cold enough maybe it clocks higher, maybe it doesn't. It's never a big change, but in some benchmarks even 25 MHz more makes a difference. Strange times we live in.
  17. Blah blah blah, basically a copy of that other forum's same thread, except I posted it first, and who knows if I'll get a Google Sheet going to keep track of who owns what, with what overclocks and benchmark scores.
  18. Post your friend code stuff in here so you can be less lonely.
  19. This thread will be about our adventures with our villagers and our islands in the newest entry of the Animal Crossing series. Post your ID thing for the game if you want. Link to little clips you've captured in the game. Complain about wasps! Share strategies for efficiency. Share design QR codes. Just have fun up in here!
  20. Honestly? LTT, I'll assume, just has a bunch of hardware posers and minimal actual high-throughput optimized rigs. We can beat most of the teams, even though we have fewer members, if we just help each other out. I might do a thing and post it in the Discord, or maybe gather a voice chat to brainstorm. If we can get the people with the top hardware who want to use their rigs for half-day or full-day folding to also do a couple of other tweaks... I've got a good feeling.
  21. Oh, @ENTERPRISE, going by WU count is quite cheatable. Just set up a bunch of 2- or 4-core CPU workers to crunch through WUs like crazy, and make as many workers as you can on all your rigs. I'm sure there's a way you could also split up GPU resources, with a VM or something, or maybe just natively with two folding installs, or one in Sandboxie.
  22. I gotta say I'm extremely impressed by the output this 2070 Super 15.5 GHz-memory edition has. It only brings my system up to something like 250 W from the wall with just the GPU folding. I'd get a bunch of these if I had the money, just to do folding. Seems efficient and cost-effective.
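Posts 8 and 11 above describe using Process Hacker to pin the busiest FAHCore_22 thread to its own logical processor and to keep the other threads off it. Purely as an illustration of what that manual step does, here is a minimal Python sketch under the assumptions of post 11: Windows, a 12-thread 8700K with the "feeder" thread given processor 10, and the GPU core running as FAHCore_22.exe. The helper function, masks, and use of the third-party psutil package are illustrative choices, not anything from the original thread.

```python
# Sketch: pin the busiest FAHCore_22 thread to logical processor 10 and keep the
# rest away from processors 10-11, roughly what Process Hacker does by hand.
# Assumes Windows and the psutil package; names and masks are illustrative only.
import ctypes
from ctypes import wintypes
import psutil

THREAD_SET_INFORMATION = 0x0020
THREAD_QUERY_INFORMATION = 0x0040

kernel32 = ctypes.windll.kernel32
kernel32.OpenThread.restype = wintypes.HANDLE
kernel32.OpenThread.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
kernel32.SetThreadAffinityMask.restype = ctypes.c_size_t
kernel32.SetThreadAffinityMask.argtypes = [wintypes.HANDLE, ctypes.c_size_t]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

MAIN_MASK = 1 << 10                                       # logical processor 10 only
OTHER_MASK = ((1 << 12) - 1) & ~((1 << 10) | (1 << 11))   # CPUs 0-9 on a 12-thread 8700K

def set_thread_affinity(tid, mask):
    """Pin a single thread (by thread id) to the CPUs in `mask` via the Win32 API."""
    handle = kernel32.OpenThread(THREAD_SET_INFORMATION | THREAD_QUERY_INFORMATION, False, tid)
    if not handle:
        raise OSError(f"could not open thread {tid}")
    try:
        if kernel32.SetThreadAffinityMask(handle, mask) == 0:
            raise OSError(f"could not set affinity for thread {tid}")
    finally:
        kernel32.CloseHandle(handle)

# Find the running GPU folding core by process name.
core = next(p for p in psutil.process_iter(["name"])
            if (p.info["name"] or "").lower() == "fahcore_22.exe")

# Treat the thread with the most accumulated CPU time as the GPU "feeder" thread.
threads = sorted(core.threads(), key=lambda t: t.user_time + t.system_time, reverse=True)
feeder, rest = threads[0], threads[1:]

set_thread_affinity(feeder.id, MAIN_MASK)      # feeder thread gets CPU 10 to itself
for t in rest:
    set_thread_affinity(t.id, OTHER_MASK)      # everything else stays off CPUs 10-11
print(f"pinned thread {feeder.id} to CPU 10; moved {len(rest)} other threads to CPUs 0-9")
```

As post 11 notes, once a thread has been given an explicit affinity it keeps an "ideal processor", so results after undoing the pinning may not return to true auto behavior until the core restarts.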
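The actual CPU and GPU flags in post 14 were attachments that did not survive this archive, so they are not reproduced here. Purely as a hypothetical sketch of where the options mentioned across posts 13 and 14 live, this is roughly what the relevant parts of a FAHClient v7 config.xml look like with the 30-minute checkpoint, next-unit-percentage at 100, small/big packet sizes, and the beta client type; the slot ids and user details are placeholders.

```xml
<config>
  <!-- Sketch only: the original flags are not preserved. The values below are the
       ones mentioned in the posts; everything else is a placeholder. -->
  <user v="YourName"/>
  <team v="0"/>

  <checkpoint v="30"/>             <!-- checkpoint every 30 min instead of the default 15 -->
  <next-unit-percentage v="100"/>  <!-- don't fetch the next WU until the current one finishes -->
  <client-type v="beta"/>          <!-- opt in to beta work units -->

  <slot id="0" type="CPU">
    <max-packet-size v="small"/>   <!-- smaller, quicker CPU work units -->
  </slot>
  <slot id="1" type="GPU">
    <max-packet-size v="big"/>     <!-- bigger GPU work units -->
  </slot>
</config>
```

FAHClient typically rewrites config.xml on exit, so it is safest to stop the client before editing the file by hand, or to set the same options through Advanced Control instead.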
