
tictoc (Members) | Posts: 637 | Days Won: 33 | Feedback: 0%

Everything posted by tictoc

  1. Back home after an unplanned two-week stay away, and now I can start testing my new do-everything home server: the ASRock Rack X399D8A-2T.
  2. The SX8200 Pro is a great drive (I have a pair of them). If I were buying a similar drive today, it would probably be the SK Hynix Gold P31: top-of-the-line PCIe 3.0 performance, 5-year warranty, 750 TBW, and excellent temps. STH review: https://www.servethehome.com/sk-hynix-gold-p31-1tb-nvme-ssd-review/
  3. NVMe-only is a head-scratcher, especially since there are SSDs running in the wild that are more than a decade old. Good to see that this is going to be accessible from the desktop, but if you want most of the S.M.A.R.T. details, you can always jump into PowerShell. For temp, errors, wear level, and power-on hours:

      Get-Disk | Get-StorageReliabilityCounter

      For all the S.M.A.R.T. data that Windows can see:

      Get-Disk | Get-StorageReliabilityCounter | Select-Object -Property "*"
  4. ROCm is a bit of a mess. I am actually running it on an upstream kernel, but the OpenCL implementation in ROCm is incomplete at best. Right now I'm not sure what AMD wants developers to do. AMD is pushing their HIP/HCC stack, OpenCL 3.0 rolls the clock back about a decade on the standard, and Intel is pushing oneAPI. All the while, CUDA remains the dominant platform. Hopefully there is a future for OpenCL, but right now it doesn't look great.
  5. AMDGPU clock controls have been broken in rc2-rc6. The same thing happened in the run-up to the 5.8 kernel, but at the last rc the offending patches were all reverted before the stable kernel was released. Just built rc7, so we'll see how it goes. Hoping that it will be fixed in this rc or the final rc, since the 5.9 amd-staging kernel has been working flawlessly for me for the last three weeks.
  6. My only Windows machines are VMs, so I haven't gone very deep into the weeds looking for monitoring/profiling tools. I know that there is a free trial of a Windows client for collectd, but no idea if it is any good. I think your current monitoring stack should be able to hook into your Windows clients, but no idea what all is exposed. Nice work consolidating down into a single rack.
  7. Upped the power limit, core clocks, and memory clocks on the 2080S, and now I'm sitting at 4.2M on a p17402. Also, I just threw a 5700XT in that machine, and with the power limit maxed but no OC on the core or the memory, it is at 1.25M on a p17402.
  8. Mainly because I have to enable a bunch of scripts and canvas elements, and my Firefox install is pretty locked down.
  9. Grafana does excel on the eye candy. Now you can start down the rabbit hole of collectd, InfluxDB, and Grafana to chart performance metrics.
  10. Chromium for streaming sports, Firefox as the daily driver.
  11. Good call on replacing the UPS fans. I swapped out mine for some Noctuas, and it is much quieter now. I think that UnRaid uses NUT for UPS monitoring. If that's the case, you can pretty easily script custom actions based on the different states of the UPS. For example, with cron or systemd timers + ssh, you could set up scripts to do things like pausing folding after a certain amount of time on battery, extending the uptime of your server and/or network appliances during an extended outage. A rough sketch is below.
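      As a minimal sketch, assuming UnRaid really is using NUT and that upssched is configured to call this script (the host, user, and the FAHClient pause command are placeholders for whatever your folding box actually runs):

      #!/bin/sh
      # /etc/nut/upssched-cmd -- run by NUT's upssched when a timer fires.
      # Assumes upssched.conf contains something like:
      #   AT ONBATT * START-TIMER onbatt 300
      #   AT ONLINE * CANCEL-TIMER onbatt
      case "$1" in
          onbatt)
              # 5 minutes on battery: pause folding on the GPU box to shed load.
              ssh user@foldingbox 'FAHClient --send-pause'
              ;;
          *)
              logger -t upssched-cmd "Unhandled event: $1"
              ;;
      esac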
  12. With the 1343x WUs the highest I've seen is 3.8M ppd. That's with stock core/mem clocks. You can probably get over 4M with a bit of an OC. Right now my memory is just crunching in the default P2 state, but with a 100MHz boost to the core and mem clocks at 8000MHz, it should push it up over 4M ppd. (Example commands below.)
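      For reference, on Linux those tweaks can be applied with nvidia-smi and nvidia-settings, roughly like this (a sketch: the GPU index, the perf-level index [3], and the power limit and offset values are assumptions for a Turing card, and Coolbits has to be enabled in Xorg first):

      Raise the power limit (in watts):
      $ sudo nvidia-smi -pl 250

      Apply a +100MHz core offset and a memory transfer rate offset on the highest perf level:
      $ nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'
      $ nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1000'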
  13. 2080S now up and folding 24/7 for EHW.
  14. What will be really interesting to see is how much of the Google back-end web services and other misc. Google stuff has been gutted from Chromium by Microsoft. I've been building and running ungoogled-chromium for some specific web tasks for the last few years. If Microsoft hasn't replaced the Google sauce in Chromium with their own alternatives, Edge might make a good base for an unmicro-googled Chromium. **Edit** Looks like they are just shipping the binary, so my unmicro-googled Chromium fork won't happen.
  15. You are correct, two 6-pins would probably be a bad idea. This shouldn't be a huge issue for most people, since most new PSUs come with at least two 8-pin connectors. Ultimately there is little difference in the end, when you consider that even most high-quality, high-wattage PSUs are currently running a single set of wires to power dual 8-pin connectors. Regardless of what the connector is rated for, I find it pretty hard to believe that NVIDIA is going to release a GPU that pulls over 500W at stock volts and temps. Cooling something like that on air would be pretty ridiculous.
  16. With decent pins and cables, converting 2x 6- or 8-pin cables shouldn't be an issue. Digi-Key has the connectors for about $1.50. https://www.digikey.com/products/en/...=1&pageSize=25
  17. If you find yourself needing more RAM, shoot me a message. I have a bunch of 8GB ECC RDIMMs. https://www.samsung.com/semiconductor/dram/module/M393B1K70DH0-YH9/ The sticks are rated for 1333, but I ran them at 1866MHz with a bump in voltage on my old 2P ASRock server.
  18. I know you just got this up and running, but I would nuke the steamcache containers and start over with lancache. The quick start guide at lancache (outlined below) will get you up and running in just a few minutes, since you already have your VM and storage pool set up. The only additional setup step will be installing Docker Compose. http://lancache.net/docs/

      *Edit* Additionally, I would install all of the Docker components from the Ubuntu repos. On Ubuntu 20.04 I think everything is in the official repos, so you don't need to add the Docker repos. This should result in a setup that is easier to maintain over the long term, without having to manually update packages and repos.

      Remove existing Docker versions that were previously installed:
      $ sudo apt remove docker docker-engine docker.io containerd runc docker-compose

      Install the Docker engine and Docker Compose:
      $ sudo apt install docker.io docker-compose

      Verify that Docker is working:
      $ docker run hello-world

      Verify docker-compose is working:
      $ docker-compose version

      Check whether your user is in the docker group. I'm not sure if Ubuntu's package adds your user automatically:
      $ groups

      Add your user to the docker group if you aren't already in it. This allows you to run docker commands without using sudo. You will need to log out and log back in after adding your user to the group:
      $ sudo usermod -aG docker $USER

      This step might not be necessary, but I don't know how Ubuntu's docker service is set up. Set the docker service to start at boot:
      $ sudo systemctl enable --now docker

      Now just follow the steps at lancache, and you should be good to go with a nice stable setup.

      *Edit 2* Since this is just a minimal VM to run Docker containers, I wouldn't bother with a full Ubuntu Desktop install. I would just install Ubuntu Server: https://ubuntu.com/download/server Everything will be done from the terminal, so there really isn't any need to run a full graphical environment, unless that is easier for you.
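      As for the lancache quick start mentioned above, it boils down to a handful of commands. Roughly (a sketch based on the lancachenet/docker-compose repo; the values you set in .env depend on your network):

      $ git clone https://github.com/lancachenet/docker-compose lancache
      $ cd lancache
      $ nano .env        # set LANCACHE_IP, DNS_BIND_IP, and CACHE_ROOT
      $ docker-compose up -d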
  19. All of the ready-made solutions that I am aware of are Linux based. Docker on Windows, from what I know, is still a pretty painful experience. You might be able to set up a WSL2 Ubuntu install, but it will be more complicated (maybe impossible?) to do the DNS redirects from a container inside of WSL. The easiest road would probably be to set up an Ubuntu Server VM with VirtualBox or Hyper-V. Once you have your VM up and running, you just need to get Docker installed and running. https://docs.docker.com/engine/install/ubuntu/ After that you can just follow the guide at lancache to get your containers installed and running. It might seem a bit involved, but it should be fairly painless to get it up and running. Not sure on any ready-made and performant Windows solutions, but maybe someone else that runs Windows will chime in.
  20. Not sure if you are still looking to do this, but lancache is probably the easiest way to set this up. Downloading games in a few seconds is fun.
  21. If the system is truly headless, you will have to do a bit of Xorg hackery to be able to control fans and clocks with no display server running (see the sketch below). The GPUs will fold at full speed (in the P2 state) without any dummy adapters. To make things easier, I would recommend having the primary GPU hooked up to a monitor, at least through install and testing. Things are a little more complicated if this is going to be a truly headless machine, but not impossible. There are some gains, too, since there is zero display overhead if you are truly running headless with no display server.
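      The hackery itself is roughly this (a sketch for NVIDIA cards; the GPU/fan indices and the fan speed are placeholders, and X still has to be started on the headless box, e.g. via a display manager):

      $ sudo nvidia-xconfig --allow-empty-initial-configuration --enable-all-gpus --cool-bits=28

      Then, after restarting X, point nvidia-settings at the headless display:

      $ DISPLAY=:0 nvidia-settings -a '[gpu:0]/GPUFanControlState=1' -a '[fan:0]/GPUTargetFanSpeed=75'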
  22. Everything that I had previously tested at x1 was at PCIe 2.0 x1.
  23. Here's a WU running on a GPU that is more than 2x as powerful as the 1070 I tested before.

      OS: Arch Linux | Kernel: 5.6.14 | Driver: 440.82 | GPU: RTX 2080 Super | p13406 (core_22)

      PCIe 3.0 x16: TPF - 01:09 | ppd - 2,512,601 || PCIe Utilization - 18% | GPU Utilization - 97% | GPU Power - 220W | Clocks - 1935core|7500mem
      PCIe 3.0 x4:  TPF - 01:12 | ppd - 2,357,211 || PCIe Utilization - 34% | GPU Utilization - 95% | GPU Power - 217W | Clocks - 1935core|7500mem
      PCIe 3.0 x1:  TPF - 01:28 | ppd - 1,744,509 || PCIe Utilization - 50% | GPU Utilization - 84% | GPU Power - 190W | Clocks - 1935core|7500mem
      PCIe 2.0 x1:  TPF - 01:50 | ppd - 1,248,271 || PCIe Utilization - 55% | GPU Utilization - 77% | GPU Power - 168W | Clocks - 1965core|7500mem

      While I was testing this, I realized that the earlier numbers I reported with a 1070 are not accurate. I was using the x1 slot on my X399 board, and that slot is PCIe 2.0 x1. That means the performance at PCIe 3.0 x1 is going to be quite a bit better than what I posted before.
  24. I've been running ZMT tubing for about the last 5 years. Prior to that I ran Norprene tubing. The ZMT does tend to have an OD that is on the large side of 16mm (5/8"). I've yet to have a single fitting not work or leak, but some are a bit tough to tighten down all the way. Currently running the following fittings with 10/16mm ZMT: EK, Monsoon, Bitspower, and Barrow.