Everything posted by tictoc

  1. Manual 4.3 OC here on my 3960X, but my CPU only gets a short break once every few weeks when I have to reboot for some updates. For my situation, where the CPU is always at a minimum of 90% load, the overshoot/undershoot voltage swings with auto-boost and PBO were never really stable.
  2. Not many tweaks to do, but you will want to adjust the number of cores/threads per CPU slot. You can do that using FAHControl. There are a few things that GROMACS (the program that is used behind the scenes for CPU folding) doesn't play nicely with: primarily any thread count above 32, and also any thread count that is a prime > 7 or has a large prime factor. It is also best to set up multiple CPU slots, so that you will get the most work. If you set too high a thread count, you can be assigned work that does not use all of the available threads. I have found that the best max number of threads for each slot is 24. Any higher (up to 32) and you can still get work, but sometimes the WUs will use fewer than 32 threads.

     Not sure how many GPUs you have in the system, but each NVIDIA GPU will need to be given at least 1 thread (I recommend 2, i.e. one whole core). Here is how I would set up a 32-core processor with two GPUs:

     First CPU slot - 16
     Second CPU slot - 16
     Third CPU slot - 16
     Fourth CPU slot - 10

     That leaves a few cores to feed the GPUs and a core for general overhead. To set the number of cores per slot, go to Configure -> Slots, highlight your CPU slot, and then click Edit. The default is -1, which just sets it to fold on all threads minus one. Set that to 16, then click OK. Now click Add, select CPU, and set it to 16. Do the same thing again until you have added all the slots you want, and then click Save. That should be it, and you should start to pick up work for each slot (a sketch of the resulting config is at the end of this post).

     Also, for anyone else that is folding on their CPU, all of the new tasks, which are a_7 type work units, do use AVX instructions. This means you are going to see quite a bit more heat and power usage. The old version of FAHBench is compiled with an older type of CPU task that does not use AVX instructions.
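     For reference, here is a rough sketch of what the slot section of FAHClient's config.xml could end up looking like for the layout above. FAHControl writes this file for you, and the exact tags can vary a bit between client versions, so treat it as illustrative rather than something to paste in verbatim:

         <config>
           <!-- user, team, and passkey settings omitted -->
           <slot id='0' type='CPU'>
             <cpus v='16'/>
           </slot>
           <slot id='1' type='CPU'>
             <cpus v='16'/>
           </slot>
           <slot id='2' type='CPU'>
             <cpus v='16'/>
           </slot>
           <slot id='3' type='CPU'>
             <cpus v='10'/>
           </slot>
           <slot id='4' type='GPU'/>
           <slot id='5' type='GPU'/>
         </config>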
  3. E beat me to it. I've never actually used the web client, so I am no help on that front.
  4. Additionally, Zoom is not end-to-end encrypted, regardless of what their marketing info says. The only encryption that is happening in video calls is on the data that is streamed over the line, no different really than any https connection. All of the data on Zoom's servers is available to anyone at Zoom. They did update their privacy policy to specify that the malware (ad tracking and selling) is only on their web pages, and they are not mining calls and chats to sell to 3rd parties. https://blogs.harvard.edu/doc/2020/03/27/zoom/ E2E encryption is hard for spur-of-the-moment teleconferencing, but the fact that they have tried to redefine what the term means is pretty shady in my opinion.
  5. On my 980, I've only had one instance of failing to get work that lasted more than a few seconds in the past 5 days. Although, after looking, I see that I am running beta work on that GPU.
  6. To get in the top 800, it would be 133 million points, which isn't too much with today's GPUs.
  7. You can just grab the txt file and put it in the F@H data directory. After you get the file just stop and then restart the client. *Edit* If you can't download the file, you can just copy the list, and then save it as "GPUs.txt".
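     If you would rather script that, here is a minimal Python sketch; the destination path is an assumption for a default Linux install (on Windows the data directory usually lives under AppData\Roaming\FAHClient):

         import urllib.request

         # Grab the current GPU whitelist and drop it into the F@H data directory.
         # /var/lib/fahclient is the default data dir for the Linux packages; adjust
         # the path for your own install, then stop and restart the client so it
         # re-reads the list.
         url = "https://apps.foldingathome.org/GPUs.txt"
         dest = "/var/lib/fahclient/GPUs.txt"
         urllib.request.urlretrieve(url, dest)
         print("Saved", dest)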
  8. The P400 is whitelisted, but being a 30W card, performance (if you can get it running) is probably somewhere in the neighborhood of a GT 1030. Here is a link to the latest list of GPUs that are whitelisted for F@H - https://apps.foldingathome.org/GPUs.txt If you want to post the beginning of the log, we can see what drivers are installed and if the client is picking up the card.
  9. I'll probably have some time to sink into Minecraft. Go ahead and add me to the list. :drunk: Minecraft handle is TickTock1780.
  10. Did you ever pick up any work? The only WUs that Navi can fold are core_22 WUs. I know that Stanford is prioritizing distribution of the Covid19 WUs which are all based on the new core_22. I imagine your laptop CPU and Quadro are picking up regular core_21 WUs in addition to the core_22 WUs. Not sure if you have the amdgpu-pro drivers, but you will need those in order to run OpenCL projects in Ubuntu.
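     Once the drivers are installed, an easy way to confirm the card is actually visible to OpenCL is clinfo, or a couple of lines of Python with pyopencl (assuming you have it installed; clinfo from the Ubuntu repos does the same job):

         # Lists every OpenCL platform/device the runtime can see. The Navi card
         # should show up under the AMD platform once the amdgpu-pro OpenCL stack
         # is installed.
         import pyopencl as cl

         for platform in cl.get_platforms():
             for device in platform.get_devices():
                 print(platform.name, "->", device.name)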
  11. More of a pain for the application as well, since it will need a bit of re-writing. We did make a few changes to optimize it for its current environment. Previously it was running on a server with a pair of E5-2695 v4 CPUs and 512GB of memory. That was me. First time I've fired up Minecraft in probably 3-4 years.
  12. The 128GB is actually a little short of what I would like to have. I'm running particle simulations, and as the application has been optimized, RAM usage per 8-core task has gone up. Right now I need to allocate about 35GB per task, so now I'm just giving each task 6 cores. Peak throughput for the current jobs is sitting at 8 cores/32GB per task. I might be able to get by with offloading some of the in-memory data to NVMe drives, but that would require me to take my loop apart and add an NVMe RAID card. The card would need to run in a x16 slot, so I'd have to lose the triple-parallel bridge I am currently using. I might give it a go later this week, since a RAID card plus the NVMe drives is quite a bit cheaper than upgrading to 32GB DIMMs. In its normal day-to-day life, in addition to being my daily machine, it runs a few build servers, a host of different VMs, and does CI/CD for a few repos/applications. I had planned on bumping that machine up to 256GB with ECC UDIMMs, but I'm putting a few things on hold due to the state of the world at the moment.
  13. Snow is in Colorado. 28" last Thursday, and another 16" this past Friday. It was super windy yesterday, so I was camped out in the house most of the day.
  14. Getting used to working from home is a challenge for people that have never done it before. I'll second the getting in a routine suggestion. Doing everything that you would normally do, waking up to an alarm at your usual time, showering, eating breakfast, and getting dressed for work, or whatever your normal schedule entails, will really help to structure the day, and get started on the right track. During the lock-down, work is still work for me, with the added bonus of a short staff and long hours, since we are considered critical infrastructure. School is a bit of a mess. Restricted lab access and most of the compute resources are prioritizing Covid19 related work. I am currently running a number of different projects for other graduate and post-doc students on my personal gear. That has been an interesting challenge and change of pace. Free time is spent mostly outdoors, but I am lucky since my back yard is many square miles of public land. Tomorrow is a full day off, so I'll probably go snowshoeing, shoot the bow, and maybe fire off a few rounds. Snapshot from my phone for everyone that is stuck inside in the city.
  15. It didn't throttle at 75% fan speed, but at that speed it is probably louder than most people would want to deal with. It was getting nice cool ambient air (16°C), because the blower sits right on top of the bottom vents on my case. The posted temps were where it leveled out after 10-11 minutes of sustained load. For gaming you could probably get away with a lower fan speed, and not lock the boost clocks at the max OC.
  16. GPU is a reference model with the blower fan. Running some stress tests now using OpenMM (molecular modeling tool-set); I'll report back in a bit once the tests are done.

     5700XT @ 2050 core / 875 mem, 75% fan:

     Edge Temp (avg)     | 78°C
     Junction Temp (avg) | 102°C
     Memory Temp (avg)   | 80°C
     Power (avg)         | 216W
     Load (avg)          | 98%
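     For anyone who wants to generate a similar load, here is a minimal OpenMM sketch along the same lines (not the actual test script, just an argon-style Lennard-Jones box run on the OpenCL platform and timed as a rough ns/day, which is enough to keep the GPU pinned):

         import time
         import openmm as mm                  # OpenMM >= 7.6; older installs use: from simtk import openmm as mm
         import openmm.unit as unit

         # Build a periodic Lennard-Jones (argon-like) box as a synthetic GPU load.
         n_side, spacing = 17, 0.4            # 17^3 = 4913 particles on a 0.4 nm grid
         box = n_side * spacing
         system = mm.System()
         system.setDefaultPeriodicBoxVectors(mm.Vec3(box, 0, 0), mm.Vec3(0, box, 0), mm.Vec3(0, 0, box))
         nb = mm.NonbondedForce()
         nb.setNonbondedMethod(mm.NonbondedForce.CutoffPeriodic)
         nb.setCutoffDistance(1.0 * unit.nanometer)
         positions = []
         for i in range(n_side):
             for j in range(n_side):
                 for k in range(n_side):
                     system.addParticle(39.9 * unit.amu)
                     nb.addParticle(0.0, 0.34 * unit.nanometer, 1.0 * unit.kilojoule_per_mole)
                     positions.append(mm.Vec3(i, j, k) * spacing)   # plain Vec3s are taken as nanometers
         system.addForce(nb)

         integrator = mm.LangevinIntegrator(120 * unit.kelvin, 1.0 / unit.picosecond, 0.002 * unit.picoseconds)
         platform = mm.Platform.getPlatformByName("OpenCL")         # the 5700XT runs through OpenCL
         context = mm.Context(system, integrator, platform)
         context.setPositions(positions)
         context.setVelocitiesToTemperature(120 * unit.kelvin)

         steps = 20000                        # 40 ps at a 2 fs timestep
         start = time.time()
         integrator.step(steps)
         context.getState(getEnergy=True)     # make sure the GPU queue has drained before stopping the clock
         elapsed = time.time() - start
         print(f"{(steps * 0.002 / 1000.0) / (elapsed / 86400.0):.1f} ns/day")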
  17. I thought I might be looking at dropping a new short block in my STI, but that was not the case, and I am continuing on my march to 200k miles on the original block. I had a consistent misfire on cylinder #4 under heavy load (+50% throttle and +9 lbs of boost) and a bit of an unstable idle. If I stayed off the skinny pedal the car ran fine. Those symptoms can point to a ringland failure, and with full bolt-ons, many track and autocross days, along with generally aggressive driving I've been prepared for the motor to go boom for quite some time. Compression test was good, so I started looking at plugs, coils, and injectors. Turned out it was just a bad coil on the #4 cylinder. 20 minutes and 77 dollars later, I'm back to hooning around like a proper STI driver.
  18. Just got my 5700XT up and running. I bought it a few months ago, but the drivers were a mess, so I pulled it and set it aside while I waited for a more stable kernel driver. If anyone is contemplating running this card in Linux, you are definitely going to want to run the latest 5.5 rc kernel along with the most recent linux-firmware. Before the latest kernel the card couldn't be OC'd in Linux, and it had all sorts of bugs that caused random and somewhat frequent crashes. I've now had it up and running without crashing for the last three days. It is stable at 2085/900 under heavy OpenCL stress testing. There is still no ROCm support (that stack is generally a bit of a mess to run with upstream kernels anyway), and generally speaking, OpenCL code paths and/or AMD's OpenCL driver will probably have to be updated to be completely compatible with Navi. I haven't done any 3D or game testing, but I might fire it up if anyone is interested in Linux performance.
  19. Just pulled the trigger on an ASRock TRX40 Creator. I guess we'll see how the 8-phase VRM does with a 3960X. I've been running an OC'd 2970WX on an X399 Taichi 24/7 for the last year without any issues, so I don't anticipate any throttling issues with the Creator plus an OC'd 3960X. The Creator had everything I wanted (10GbE onboard, 4 full-length PCIe slots, ATX) with no RGB or other things that would be useless for me.
  20. My daily driver is a Filco Majestouch 2 TKL w/ Browns and Double Shot PBT caps. I also have a Max Keyboard Blackbird with Browns and a Cooler Master Quick Fire TK with Browns and white backlight that I use at work. Of the three, the Filco is the nicest to type on, but I do like the hybrid layout of the Quick Fire TK. Previously I had a full-size Filco, and it was a great board for many years. I only retired it to save a little space, and I have a separate numpad that I use when necessary. The Blackbird is a fine board, but the blue backlight drove me nuts.
  21. I usually run the 2970 at 3.9, since it really eats power/volts when I bump it up over 3.9. I keep three GPUs under water and leave the 4th slot free for testing GPUs, NICs, or HBAs.
  22. With 4 OC'd GPUs and the Threadripper OC'd to 4075, it can pull over 2000W. Even running everything with lower power limits, it can still hover around 1600W. I've never been comfortable running a PSU 24/7 right at the rated max output, especially on consumer boards where you don't have the ability to easily run multiple redundant PSUs. Running a pair of PSUs is even easier now, since my current case (PC-O11 Dynamic) is built to house two PSUs. I haven't done much gaming in CF since I was running a pair of 7970s, but all the games I play now run just fine for me on either a single Vega 64 or the VII. Back on topic. A bit of a Frankenstein bench rig, with a 980 KPE and a 7990, running on a Prime X370-Pro with an R7-1700.
  23. It would be a pretty nice upgrade, but I'm probably going to stick with it for now. Next upgrade will probably be swapping out the Vegas for two more Radeon VIIs. If I go that route I might need to swap out one of my PSUs. Currently running a 1300W and a 1000W, and if I OC everything it gets pretty close to maxed out.
  24. 2970WX in that system, and thanks for the welcome.