
tictoc
Members · Posts: 642 · Days Won: 35 · Feedback: 0%

Everything posted by tictoc

  1. It's alive!! Just wrapping up 10 hours of stressapptest on the memory at 2800MHz.
  2. I didn't watch the video, but I assume this just comes down to dual rank vs single rank. Was the 2x configuration done with single rank DIMMs? If so, then populating the board with 4x single rank DIMMs will effectively give you the same thing as running a pair of dual rank DIMMs. There is a bit of performance to be had going with dual rank DIMMs when everything else is equal. It can be a little tougher to push clocks on 4 DIMMs vs 2 DIMMs. Dual rank DIMMs on both of my Threadrippers were better than single rank. Not a huge difference, but enough that it was outside of the margin of error when I was testing various DIMMs.
  3. It seems like we should at least have a catch-all thread for Linux, BSD-based, and other open-source operating systems. Not sure where this thread will go, but it can at least serve as a landing spot for anyone that has random questions or just wants to share screenshots and/or info about their Linux/BSD/open-source OS adventures. Currently, everything I have running on bare metal is Linux. I have FreeBSD, pfSense, and Haiku VMs running, but everything else is Linux. My bare metal installs are running Arch, Gentoo, and Debian, along with Debian, CentOS, and Ubuntu VMs for testing.
  4. Time to test a new PCIe 3.0 drive. SK Hynix P31 1TB
  5. I'm probably just going to end up pretending that all the Threadripper memory settings don't exist in the BIOS, and just treat it like my C602 board. That board has a nice and easy memory OC. Set speed to 1866MHz, up memory voltage until it's stable, and then just be content that you're running cheap 1333MHz RDIMMs at 1866MHz.
  6. I forgot the post office closed early on Saturday, so I didn't make it down from my house in time to pick up my Optane drive. Looks like I'll just be tuning RAM this weekend. Tuning RAM is going to be a real pain on this board. If the RAM fails to train at a given speed/timings, it won't post, and then I have to go through the following process. Power down -> remove two sticks -> power up -> pray to the memory gods for a successful post -> adjust speed/timings -> reboot with two sticks -> verify post -> power down -> install removed sticks -> power on -> cross fingers and pray for post -> boot into OS -> rinse and repeat. If after pulling two sticks it still fails to post, then it takes a CMOS reset to get back to a state where the board will post. Running at 2400 with pretty trash timings right now, hopefully I can just take very small steps and avoid the whole shut down remove RAM reset CMOS shenanigans.
  7. I've been running the Heatkiller block since I first jumped on the Threadripper train with a 2970WX on a Taichi board. Currently using the same block on my main machine which is running a 3960X @ 4.25GHz all-core OC 24/7. This will be running a very mild OC, since my 2970WX is not a great clocker and hits a pretty steep wall at about 3.75-3.8GHz all-core. The GPU in the loop (Vega 64 for now) will be undervolted, so the pair of 360s should keep things nice and cool. Getting everything set up on air on my bench to start doing some testing today. It will be a few weeks before the build actually starts going together, since there is a bunch of testing and validation that I need to do before I migrate everything over to this build. As part of my downsizing I am purging a ton of data from storage, since I am a recovering data hoarder. The build itself is pretty straightforward, but the underlying stack will take a bit to iron out. I am going to be moving all my data from mdadm/xfs to btrfs, and before I do that there will be a fair amount of testing to do. Also, I am going to do a bit of poking at bcache, since I haven't played around with that in 4ish years.
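A migration like that (mdadm/xfs over to btrfs) can be sketched roughly as below. This is only an illustrative plan, not the author's actual procedure: the device names, mount points, and array name are placeholders, and the run() wrapper only echoes each step so the plan can be reviewed before anything destructive is executed.

```shell
#!/bin/sh
# Sketch of an mdadm/xfs -> btrfs migration plan. /dev/sdb, /dev/sdc,
# /dev/md0, and the mount points are hypothetical placeholders.
# run() only prints each command; drop the echo to execute for real.
run() { echo "+ $*"; }

migrate_plan() {
    # 1. Create a new btrfs filesystem with mirrored data and metadata
    run mkfs.btrfs -f -d raid1 -m raid1 /dev/sdb /dev/sdc
    # 2. Mount it at a temporary location
    run mount /dev/sdb /mnt/btrfs
    # 3. Copy everything, preserving hardlinks, ACLs, and xattrs
    run rsync -aHAX --info=progress2 /mnt/xfs/ /mnt/btrfs/
    # 4. Verify checksums on the new filesystem, then retire the old array
    run btrfs scrub start -B /mnt/btrfs
    run mdadm --stop /dev/md0
}

migrate_plan
```

The scrub step is the payoff of the move: btrfs checksums data as well as metadata, so bit rot that mdadm/xfs would pass through silently gets caught here.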
  8. It is the only tube I've used for the last 10 years. For the last 6-7 years it's been EK ZMT, and before that it was just some random EPDM tubing. Best tubing there is. Zero maintenance, no funky plasticizer, blocks all UV light to help minimize any growth, and can be turned at pretty tight angles without collapsing. The tubing being matte black is the icing on the cake. The pair of X550s is a nice bonus and frees up a PCIe slot. Just a few GPUs in this machine, one for the HTPC and one to be used in various VMs. Although I'm sure there will be a bit of F@H run from time to time.
  9. It has been about 6 years since I've posted anything resembling a build log online. That ended up being more of a summary due to time constraints, and that log has pretty much gone the way of the dodo. This build will be somewhat of an evolution of that machine, which housed my daily driver and my HTPC in one case. For this machine I am planning on retiring most of the gear in my rack (ASRock Rack w/ 2x 2680 v2, X99-E WS w/ 2697 v4, ASRock X570M Pro 4 w/ Ryzen 3 3200G) and consolidating down to a single machine running in a Thermaltake Core X5. I already have most of the parts for this build, so I hope to have it on the bench to start some testing this weekend.

     Parts list (arrows mark parts that were swapped as the build evolved):
     Case - Thermaltake Core X5
     Motherboard - ASRock Rack X399D8A-2T
     PSU - EVGA Supernova G2 1000W → Silverstone Strider Platinum 1200W
     CPU - AMD Threadripper 2970WX
     RAM - 4x → 8x Micron 16GB DDR4-3200 ECC-UDIMM
     GPUs - Radeon 5700XT → 2x Vega 64 → 2x Radeon VII
     SSDs - 2x Intel Optane 905P 960GB U.2 NVMe, Intel Optane 905P 480GB U.2 NVMe, ADATA SX8200 Pro 1TB NVMe, SK Hynix P31 1TB NVMe, 4x Inland Premium 1TB NVMe, 2x Crucial MX500 2TB SATA
     HDDs - 2x HGST Deskstar NAS 4TB, 1x Seagate IronWolf Pro 4TB, 5x → 1x Seagate EXOS 8TB, 1x Seagate IronWolf Pro 8TB, 4x → 8x Seagate EXOS X18 16TB, 1x 4TB Toshiba N300 (hot swap bay for backups)
     CD/DVD/BluRay - LG WH14NS40 flashed for 4K UHD

     Cooling:
     CPU - Watercool HEATKILLER IV Pro Copper
     GPU - EK Vector Radeon VII Copper/Acetal
     Radiator - 2x XSPC EX360 → 2x Watercool HEATKILLER 360-L
     Pump - 2x Watercool D5 w/ EK-XRES 100 Reservoir → Watercool Industrial Dual D5 Top w/ 100mm Watercool HEATKILLER Tube Reservoir
     Fans - Silverstone AP183 (front intake), 6x Arctic P14 PWM (rear exhaust, HDD fans, midplane), 6x Arctic P12 PWM (radiator fans)

     The first potato pic of this log will be a quick shot of the ASRock board.
  10. Back home after an unplanned two-week stay away, and now I can start testing my new do-everything home server: ASRock Rack X399D8A-2T
  11. The SX8200 Pro is a great drive (I have a pair of them). If I were buying a similar drive today it would probably be the SK Hynix Gold P31. Top of the line PCIe 3.0 performance, 5 year warranty, 750 TBW, and excellent temps. STH Review: https://www.servethehome.com/sk-hynix-gold-p31-1tb-nvme-ssd-review/
  12. NVMe only is a head scratcher, especially since there are SSDs running in the wild that are more than a decade old. Good to see that this is going to be accessible from the desktop, but if you want most of the S.M.A.R.T. details you can always jump into PowerShell.

      For temp, errors, wear level, and power on hours:
      Get-Disk | Get-StorageReliabilityCounter

      For all the S.M.A.R.T. data that Windows can see:
      Get-Disk | Get-StorageReliabilityCounter | Select-Object -Property "*"
  13. ROCm is a bit of a mess. I am actually running it on an upstream kernel, but the OpenCL implementation in ROCm is incomplete at best. Right now I'm not sure what AMD wants developers to do. AMD is pushing their HIP/HCC stack, OpenCL 3.0 rolls back the clock about a decade on the standard, and Intel is pushing oneAPI. All the while, CUDA remains the dominant platform. Hopefully there is a future for OpenCL, but right now it doesn't look great.
  14. AMDGPU clock controls have been broken in rc2-rc6. The same thing happened in the run-up to the 5.8 kernel, but at the last rc the offending patches were all reverted before the stable kernel was released. Just built rc7, so we'll see how it goes. Hoping that it will be fixed in this rc or the final rc, since the 5.9 amd-staging kernel has been working flawlessly for me for the last three weeks.
  15. My only Windows machines are VMs, so I haven't gone very deep into the weeds to look for monitoring/profiling tools. I know that there is a free trial of a Windows client for collectd, but no idea if it is any good. I think your current monitoring stack should be able to hook into your Windows clients, but no idea what all is exposed. Nice work consolidating down into a single rack.
  16. Upped the power limit, core clocks, and memory clocks on the 2080S, and now I'm sitting at 4.2M on a p17402. Also, I just threw a 5700XT on that machine, and with the power limit maxed, but no OC on the core or the memory, it is at 1.25M on a p17402.
  17. Mainly because I have to enable a bunch of scripts and canvas elements, and my Firefox install is pretty locked down.
  18. Grafana does excel on the eye candy. Now you can start down the rabbit hole of collectd, InfluxDB, and Grafana to chart performance metrics.
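The first step of that rabbit hole wires together roughly like this. A minimal collectd.conf sketch, assuming an InfluxDB instance with its collectd listener enabled; the hostname is a placeholder, and plugin choice is just an example:

```
# Minimal collectd.conf sketch -- "influxdb.example.lan" is a
# placeholder, not a real host. Samples basic system metrics every
# 10 seconds and ships them to InfluxDB, which Grafana then queries.
Interval 10

LoadPlugin cpu
LoadPlugin memory
LoadPlugin disk
LoadPlugin network

<Plugin network>
  # InfluxDB's collectd input listens on UDP 25826 by default
  Server "influxdb.example.lan" "25826"
</Plugin>
```

From there Grafana just needs the InfluxDB database added as a data source, and the dashboards are point-and-click.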
  19. Chromium for streaming sports, Firefox as the daily driver.
  20. Good call on replacing the UPS fans. I swapped out mine for some Noctuas, and it is much quieter now. I think that UnRaid uses NUT for UPS monitoring. If that's the case, you can pretty easily script custom actions based on the different states of the UPS. For example, with cron or systemd timers + ssh, you could set up scripts to do things like pause folding after a certain amount of time during an extended outage, extending the uptime of your server and/or network appliances.
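With NUT, that kind of staged response is usually handled through upssched rather than cron. A sketch of the idea, assuming a stock NUT install; the timer name, handler path, and remote host are hypothetical:

```
# upsmon.conf: hand UPS events to upssched
NOTIFYCMD /usr/sbin/upssched
NOTIFYFLAG ONBATT SYSLOG+EXEC
NOTIFYFLAG ONLINE SYSLOG+EXEC

# upssched.conf: after 5 minutes on battery, fire the handler;
# cancel the timer if mains power returns first
CMDSCRIPT /etc/nut/upssched-cmd
AT ONBATT * START-TIMER onbatt-5min 300
AT ONLINE * CANCEL-TIMER onbatt-5min

# /etc/nut/upssched-cmd (hypothetical handler): pause folding on a
# remote box over ssh to stretch the remaining runtime
#   case "$1" in
#     onbatt-5min) ssh fold-host 'FAHClient --send-pause' ;;
#   esac
```

The nice part of the timer approach is that short blips never trigger anything; only an outage that outlasts the timer runs the handler.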
  21. With the 1343x WUs the highest I've seen is 3.8M ppd. That's with stock core/mem clocks. You can probably get over 4M with a bit of an OC. Right now my memory is just crunching in the default P2 state, but with a 100MHz boost to the core and mem clocks at 8000MHz it should push it up over 4M ppd.
  22. 2080S now up and folding 24/7 for EHW.
  23. What will be really interesting to see is how much of the Google back-end web services and other misc. Google stuff has been gutted from Chromium by Microsoft. I've been building and running ungoogled-chromium for some specific web tasks for the last few years. If Microsoft hasn't replaced the Google sauce in Chromium with their own alternatives, Edge might make a good base for an unmicro-googled Chromium. **Edit** Looks like they are just shipping the binary, so my unmicro-googled Chromium fork won't happen.
