
xlen

Members · Posts: 76 · Days Won: 6 · Feedback: 0%

Posts posted by xlen

  1. I might be able to help. I know of some good fans, and of course some not so good. Airflow is all about pressure differentials: higher-pressure air flows into lower-pressure areas. First thing (I don't know if you know this or not): a fan's static pressure rating is how much pressure the fan can push into a sealed container at its full speed. To put a 1.5 mm H2O rating into perspective, it is about as much difference in pressure as there is between our waist and our feet when standing at sea level. 1.5 mm H2O is almost no pressure at all, and fans need to be spinning relatively fast (1000-1500 RPM) to achieve 1.5 mm H2O of pressure.

     

    I know about that: fan size and RPM are directly linked, and blade shape directly changes noise, airflow and pressure. I also know that airflow-optimized fans (the Kaze Flex comes to mind) are almost always rated at 1.5-2 mm H2O, while pressure-optimized fans can do many times higher pressure (pretty much any Delta fan comes to mind here).

    I myself am annoyed that people who do fan and cooler reviews have no idea which metrics they should use and fall back on the popular but largely meaningless CFM and dB: CFM tells you little without the pressure it is delivered against, and a plain dB figure tells you little without the frequency content of the noise. Low-humming fans like Noctua's are pretty inaudible even at a higher dB reading, while something whining at a higher frequency will be really annoying even at a low dB reading, so dB(A) is what decides whether you love or hate the noise coming from a fan; yet most reviewers measure plain dB and misleadingly write it up as dB(A). (There is a lot of good information about A-weighting here: https://en.wikipedia.org/wiki/A-weighting .)
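
    To make the dB vs dB(A) point concrete, here is a minimal sketch of the standard IEC 61672 A-weighting curve (the formula from the Wikipedia article linked above); it shows how much a plain dB reading gets discounted or boosted at a given frequency before it becomes a dB(A) figure:

        # A-weighting gain in dB for a pure tone at f_hz (IEC 61672 formula,
        # as given on the Wikipedia page linked above).
        import math

        def a_weighting_db(f_hz: float) -> float:
            f2 = f_hz ** 2
            ra = (12194.0 ** 2 * f2 ** 2) / (
                (f2 + 20.6 ** 2)
                * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
                * (f2 + 12194.0 ** 2)
            )
            return 20.0 * math.log10(ra) + 2.00  # +2.00 dB so that A(1 kHz) = 0 dB

        for f in (100, 500, 1000, 4000):
            print(f"{f:>5} Hz: {a_weighting_db(f):+5.1f} dB")
        # 100 Hz (a low hum) is discounted by roughly 19 dB,
        # while 4 kHz (a whine) counts slightly above face value.

    In other words, a fan whose noise sits mostly below a few hundred Hz will measure far quieter in dB(A) than its raw dB level suggests.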

  2. This is going to be a fun and informative thread! :cool:

     

    I am particularly interested in fast scenery-change games, like flight sims and racing games at ultra graphics / 4K, preferably NVLink-capable (CFR even better). Here's my short list:

     

    MS Flight Simulator 2020

    Various Crytek-engine games ('does Ampere play Crysis?', hehe)

    Metro Exodus

    RDR2

    Various 'Need for Speed' titles (good for stressing graphics at speed), maybe also The Crew 2 and Forza Horizon 4.

     

    BTW, most of the above work with NVLink/SLI (some even with the future 'CFR' rather than just 'AFR'). Also, my two water-cooled 2080 Tis run at between 2145 and 2205 MHz on NVLink, so for now I don't feel a rush to buy Ampere on release; instead I'm watching threads such as this to figure out what / when to buy.

    Speaking of 2080 Ti NVLink, here are two vids of RDR2 @ 4K ultra :D

     

     

     

     

    I'm sorry to bring bad news, but NVLink/SLI won't keep scaling in games; nVidia has been killing it off for gaming, and the same is true of AMD's CrossFire.

  3. LOL No, nothing that technical.

     

    My knowledge is more about the application of airflow: how to set up a system so that its airflow supplies cool air to the components, with case airflow that carries heated air out without mixing it with (and heating) the cool air coming in from the intake fans. Done well, you get a nice cool and quiet system.

     

    Oh well, I was just hoping you'd be the one person who knows more about airflow vs pressure optimization, etc. than what I know and have been able to find online. Still, I hope we can learn from each other ;)

  4. Heard about EHW from Enterprise so decided to join. Hoping my background in air cooling can help others.

     

    Hello! Am I right to take your background in air cooling as something related to designing fans?

  5.  

    If that were the case, wouldn't that flaw enable access to hardware-level control such as voltage, etc., in order to kill them? Or are we talking about another vector?

     

    It doesn't require hardware access, but it does require elevated privileges and some registry trickery. He managed to pump in 1.7 V while the CPU thought it was getting 1.2 V, so my guess is that if anybody weaponizes it, that's a whole lot of dead Intel CPUs. Fortunately, from the discussion we had, I understood that Ryzen CPUs wouldn't be affected and it is not possible to repeat it on them with current knowledge, but Ryzen APUs might be affected in some way; further investigation is still required.

  6.  

    Yeah, I'm just not looking forward to migrating to a new cache drive. I'm thinking a pair of the same SX8200s would be an okay cache drive unless you can recommend something else. I think 512 GB might still be okay for now, as I have a separate 1 TB 660p mounted to my Windows VM for game servers and the like.

     

    Any TLC SSD with a DRAM cache should do okay; both the SX8200 and the Pro will have pretty decent performance as well.

  7.  

    Well, I blew through my 512 GB 660p's rated 100 TBW endurance in about a year on my Plex server as the cache drive:

    Critical warning:                 0x00
    Temperature:                      39 Celsius
    Available spare:                  100%
    Available spare threshold:        10%
    Percentage used:                  40%
    Data units read:                  18,890,734 [9.67 TB]
    Data units written:               262,780,475 [134 TB]
    Host read commands:               193,405,751
    Host write commands:              2,680,048,673
    Controller busy time:             18,679
    Power cycles:                     30
    Power on hours:                   11,446
    Unsafe shutdowns:                 6
    Media and data integrity errors:  0
    Error information log entries:    0
    Warning comp. temperature time:   0
    Critical comp. temperature time:  0

     

    That Cache drive hosts:

    - Two VM boot drives

    - Plex Library files (not media)

    - Plex Temporary Transcode directory

    - Docker Image and Docker App Data (InfluxDB, Tautulli, Unifi Controller, etc.)

    - Write Cache for network writes

     

     

    I should probably look at replacing those soon, as Intel only rates them for 100 TBW or 5 years. No errors yet, but I'm not super keen on risking things on my server...

     

    QLC as a cache drive is a bad idea, and damn, you are already 34 TBW over the 100 TBW it's rated for. It won't live for long now; it will probably begin to throw write errors within the next 15 TBW.

    I'd get an MLC/TLC SSD with DRAM as a cache drive; with this kind of write pattern it will live much longer.
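
    As a side note on reading those SMART numbers: NVMe counts "data units" in blocks of 1000 × 512-byte sectors (512,000 bytes), which is how the raw counter above maps to the bracketed ~134 TB. A minimal sketch of that conversion and of the headroom against the 100 TBW rating:

        # Convert the NVMe SMART "Data units written" counter to terabytes.
        # One NVMe data unit = 1000 sectors * 512 bytes = 512,000 bytes.
        DATA_UNIT_BYTES = 512_000

        def data_units_to_tb(units: int) -> float:
            return units * DATA_UNIT_BYTES / 1e12  # decimal TB

        written_units = 262_780_475   # "Data units written" from the table above
        rated_tbw = 100               # endurance rating of the 512 GB 660p
        written_tb = data_units_to_tb(written_units)
        print(f"{written_tb:.1f} TB written, "
              f"{written_tb - rated_tbw:.1f} TB over the {rated_tbw} TBW rating")
        # -> 134.5 TB written, 34.5 TB over the 100 TBW rating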

  8.  

    I have in fact been reconsidering. I'm thinking about a 1080 Ti, which does seem to have aged quite well. Additionally, I'd save a few hundred, and I'm not entirely sure I need 2080 Super/Ti levels of power.

     

    The 2080S and Ti only bring RTX to the table, which tbh is why they won't age well: Ampere will handle ray tracing much better and Turing will become obsolete.

    I'd go for the 1080 Ti and upgrade in 3-6 years.

  9.  

     

    I also noticed this with version 1.7.3 as I tried to configure my G.Skill F4-3600C15-8GTZ, which is a Samsung B-die module. The system would not POST after a soft reboot, and after long testing I figured out the problem was RTT_WR being set to OFF. When left on Auto, everything suddenly started to work on my Asus Strix X570-E Gaming motherboard with a Ryzen 5 3600, even at a DRAM voltage of 1.4 V. Of course, Gear Down Mode had to be enabled, otherwise BSOD.

     

    https://photos.app.goo.gl/AN2gEBTrUzfemk4x7

     

    GDM must be enabled because of the odd timings you have; any timing that doesn't divide by 2 (like the CL15 of that kit) will need GDM for stability.

    Again, I'd rather see people try their luck with manual OCing than use the Calculator; it's not good, and manual OCing isn't that hard if you do some reading of Integral's guide.
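
    To make the "doesn't divide by 2" rule concrete, here is a tiny sketch that flags the odd primaries in a timing set; the CL15 comes from the kit's model number, while the other primary values are assumed purely for illustration:

        # Per the rule above: any primary timing that is odd needs Gear Down Mode.
        # CL15 is taken from the F4-3600C15 model number; the other values are
        # assumed here only as an example.
        def odd_timings(timings: dict) -> dict:
            return {name: value for name, value in timings.items() if value % 2}

        kit = {"tCL": 15, "tRCD": 15, "tRP": 15, "tRAS": 35}
        for name, value in odd_timings(kit).items():
            print(f"{name}={value} is odd -> GDM required")
        # Note: with GDM enabled, an odd tCL is rounded up to the next even value.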

  10.  

    Nice, thanks for the info. I'll have to look into those as well in the future then; for a moment I could not remember the company responsible for XPG. As for stats other than Backblaze, not that I know of; that was the only one I knew of, and since then I have not needed to look. There might be one I do not know of. Hahah, well, maybe one day we can look into an SSD endurance review of sorts :p

     

     

    XPG is ADATA's gaming brand.

  11.  

    I've used ADATA drives before for friends and family builds, and the reviews have always been great. I probably wouldn't use an ADATA drive in my server environment without more long-term reliability data, but they have higher endurance than the Intel 660p's I'm currently using and still a 5-year warranty, so either the products are actually great, or they're cheap enough that ADATA can afford multiple warranty returns that they somehow keep quiet (I highly doubt that in today's age of loud negative internet reviews).

     

    Does anyone know of good stats similar to something like Backblaze's hard drive reports?

     

    Does @ENTERPRISE want to fund an SSD reliability project? :p

     

    The Intel 660p is a QLC drive; pretty much any TLC drive will have a much longer lifespan. I did the math on an average QLC drive: if you use it as a main game drive, with games like COD:MW constantly getting large updates, you can realistically write 8 TB per month to a 1 TB SSD, which is enough to wear it out in about 2 years. A TLC drive like the SX8200 Pro at the same 8 TB per month would live 6.67 years before reaching the manufacturer's rated EOL TBW (the arithmetic is sketched at the end of this post).

    Overall, I don't want to recommend QLC SSDs to anybody for anything beyond something like quarterly backups, where they would serve long and well.

     

    Speaking of Backblaze, I haven't heard of anybody running something like that for SSDs; the main cause is probably that there are way too many models on the market.
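
    For reference, the arithmetic behind those estimates, using the endurance ratings assumed here (200 TBW for the 1 TB 660p, 640 TBW for the 1 TB SX8200 Pro) and the 8 TB/month write load from above:

        # Rough SSD lifetime from rated endurance and a monthly write load.
        def months_of_life(rated_tbw: float, tb_per_month: float) -> float:
            return rated_tbw / tb_per_month

        WRITE_LOAD = 8  # TB written per month, as assumed above
        for name, tbw in (("1 TB 660p (QLC)", 200), ("1 TB SX8200 Pro (TLC)", 640)):
            months = months_of_life(tbw, WRITE_LOAD)
            print(f"{name}: {months:.0f} months (~{months / 12:.2f} years)")
        # -> 25 months (~2.08 years) for the QLC drive,
        #    80 months (~6.67 years) for the TLC drive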

  12.  

    They are better in some ways, far worse in others.

     

    Now ReLive is just on by default, I think, and they added a bunch of hotkeys, so I had to go in and disable all of them since some lined up with in-game shortcuts I have set up. Also, if you grew up in the Catalyst Control Center days and then early Adrenalin, you will have a hard time with how they moved stuff around again in the latest driver software, and some things are only accessible via the little settings gear icon in the top right (like the hotkeys I mentioned), which is annoying.

     

    nVidia's ShadowPlay is also enabled by default; quite a lot of people use these features, so it kind of makes sense.

    The old CCC was much better designed than the Adrenalin software is now, but Adrenalin is still better overall than nVidia's software...

  13. I have not used AMD's GPU driver suite in yonks; I hear their drivers did get better though.

     

    Tbh, AMD drivers are more sensitive to RAM instability, which is why a lot of people running XMP (unstable and untested) complain about random issues.

    nVidia drivers have never been superior either: nVidia tends to ignore quite a lot of driver problems that have been there for generations (mostly related to multi-monitor systems). There is a reason they get called 'novideo', and it's still true to this day.

  14. I've been saying I was going to look for a used 20XX card late in the year, after new hardware comes out and old prices hopefully drop, but if AMD comes out with something with a competitive price/performance ratio that won't break the bank, I'll happily switch back over. I'm getting pretty sick of using Nvidia Surround; their control panel is a pretty crap piece of software.

     

    Please don't go for a used 20XX; they won't age well, and even nVidia themselves said that.

  15.  

    Have you tried just applying the settings on the main page of the calculator and not the settings on the Advanced page?

     

    Speaking from personal experience, sometimes applying some of the more granular settings can actually be more trouble than they are worth. I tend to "cheat". I apply the main page settings and let the motherboard take care of the rest.

     

    Also, to remove voltage as a factor, I tend to set my RAM to 1.45-1.5 V, at least while testing to see if the proposed settings will even work. SOC I set to 1.1 V.

     

    I also set the VDDG CCD, VDDG IOD and CLDO VDDP to 1.1 V across the board, again for testing. You could reduce them later if you wish.

     

    1.45 V is the max I'd recommend for most dies; B-die, however, can go up to 1.8 V with no additional cooling, but I recommend that only to people who know what they are doing.

  16.  

     

    Yeah, those are the safe calculations. That's why I thought maybe I did something wrong.

    I'll try the step-backs like you suggested.

     

    As I said, I consider the DRAM calc a problem on top of a haystack of problems rather than a solution; if you do decide to use it, then please stress test everything with TM5 to make sure it's actually stable.

     

    Speaking of the motherboard's memory recommendation list, that simply means the kit is validated to work with that mobo and should have no real impact on how the RAM performs; the combination of CPU, mobo and some luck will have a much bigger impact on what your RAM will achieve. For example, on my first-gen Zen I can't go over 2933 C14, but if I changed mobo or CPU I'd probably get either higher or lower clocks, depending on my luck.

  17. I think I've already said this, but I am strongly against the use of the DRAM calc; it is a bunch of stolen XMP profiles. XMP on its own already isn't a great thing, as on around 60% of systems it's not stable, and people don't stress test XMP! When it comes to memory OCing, the best approach is to set loose timings like 22-22-22-50-80 and search for the highest stable frequency, raising the voltage up to around 1.45 V if needed; then tighten the primary timings and work down through the secondaries and tertiaries following the general guidelines. Here is a guide from Integral, it's very well written:

    https://github.com/integralfx/MemTestHelper/blob/master/DDR4%20OC%20Guide.md
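
    To spell out the order of operations, here is a schematic sketch of that manual flow; apply_bios_settings() and run_tm5() are hypothetical placeholders for changing values by hand in the BIOS and running a TM5 pass yourself, and the numbers match the loose 22-22-22-50-80 starting set and 1.45 V ceiling above (the 1.35 V starting voltage is assumed):

        # Schematic outline of the manual DDR4 OC flow described above.
        # Nothing here touches hardware; the two helpers are placeholders
        # for manual BIOS changes and a TM5 stability run.

        def apply_bios_settings(freq_mts: int, timings: dict, vdimm: float) -> None:
            print(f"Set {freq_mts} MT/s, {timings}, {vdimm:.2f} V, then reboot")

        def run_tm5() -> bool:
            return input("Did TM5 pass? [y/n] ").strip().lower() == "y"

        def find_max_frequency(start_mts: int = 3000, step_mts: int = 66,
                               max_vdimm: float = 1.45) -> tuple:
            """Step 1: loose timings, raise frequency (and voltage if needed)."""
            timings = {"tCL": 22, "tRCD": 22, "tRP": 22, "tRAS": 50, "tRC": 80}
            freq, vdimm = start_mts, 1.35          # assumed starting voltage
            while True:
                apply_bios_settings(freq + step_mts, timings, vdimm)
                if run_tm5():
                    freq += step_mts               # stable: keep the higher clock
                elif vdimm < max_vdimm:
                    vdimm = round(vdimm + 0.05, 2) # unstable: try a bit more voltage
                else:
                    return freq, timings, vdimm    # ceiling reached

        def tighten_primaries(freq: int, timings: dict, vdimm: float) -> dict:
            """Step 2: drop primaries one by one, keeping only TM5-stable values.
            Secondaries and tertiaries follow the same loop, per the guide above."""
            for name in ("tCL", "tRCD", "tRP", "tRAS", "tRC"):
                while timings[name] > 1:
                    trial = dict(timings, **{name: timings[name] - 1})
                    apply_bios_settings(freq, trial, vdimm)
                    if not run_tm5():
                        break                      # last stable value stays
                    timings = trial
            return timings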

  18.  

    Not sure exactly, but I'm pretty sure it would require soldering, as it was intermittent for a few weeks before it finally stopped working completely. I've already got the DT 1770 Pros now; I don't think it's going to be easy to go back to the 770s?

     

    Yeah, it does require soldering, but it's pretty quick and easy to solder a new cable onto DT770s. If mine ever breaks, I will add an XLR jack for the convenience of a removable cable.
