
Threadripper Home Server


tictoc

16 minutes ago, tictoc said:

Just a little update. It looks like this is going to turn into a very overkill server. Thinking about swapping out a few things on my workstation, so I might be adding a pair of Radeon VIIs to this machine. For now just testing with a pair of Vega 64s.

 

[Image: tRipperServer_testing3.jpg]

overkill = moar = betta?

[Image: moar4x.jpg]


On 12/9/2020 at 3:33 AM, J7SC_Orion said:

  

overkill = moar = betta?

[Image: moar4x.jpg]

What GPUs are those again?


6 hours ago, ENTERPRISE said:

What GPUs are those again?

   

980 Classies... those four found new pastures (a pair each) in recent retro-builds after getting their air-cooler dress back on...

  

[Image: TT120_5960x_bU.jpg]

  

[Image: StrykerBlu.jpg]

Edited by J7SC_Orion

17 hours ago, J7SC_Orion said:

   

980 Classies... those four found new pastures (a pair each) in recent retro-builds... [images snipped]

Those were great GPUs back in their day!


13 hours ago, tictoc said:

One less-than-stellar stick of RAM.

[Image: tRipperServer_weakStick.png]

 

That stick won't really run at anything over 2666.  Rolled the dice on a new pair of sticks, and we'll see what ICs I get.

 

 

Hopefully you can get that working. Have you tried just upping the voltage on the memory and then bringing it back down? Sometimes that can just magically fix a DRAM stick. I know it's not actually magic, but it has worked for me in the past.


On 12/19/2020 at 3:06 PM, axipher said:

 

Hopefully you can get that working. Have you tried just upping the voltage on the memory and then bringing it back down? Sometimes that can just magically fix a DRAM stick. I know it's not actually magic, but it has worked for me in the past.

 

I've run it all the way up and down from 1.25 V to 1.6 V while testing. That stick just doesn't want to run stable at anything over 2666, even at CL22.


9 hours ago, tictoc said:

 

I've run it all the way up and down from 1.25 V to 1.6 V while testing. That stick just doesn't want to run stable at anything over 2666, even at CL22.

 

That's a real pain... The last time I had RAM issues was on my X6-1100T: one bad stick that would not run at stock voltage, and unfortunately you don't get per-module voltage settings on pretty much any consumer motherboard.

 

Nowadays, I just take the PSU and RAM to be the two most stable parts in a system, as they have given me near-zero issues over 15 years of building.


Two new sticks are set to be delivered on Wednesday, so hopefully I can get it sorted out then. One of the other sticks also showed a single error, but I'm pretty sure that only showed up when trying to get 128GB to run at 3200.

 

I already have mismatched sticks, so more than likely, even if the replacement is good, I will be maxed out at 2866-2933. Four of the sticks (including the bad one) have some funky Micron Rev. H(?) chips that I'm pretty sure are some sort of ES ICs. This late in the life cycle of DDR4 you start to see some funny stuff getting put on modules, especially with something like ECC UDIMMs, which are not really made in quantity.
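Since these are ECC UDIMMs, corrected-error counts are worth keeping an eye on once the box is in service. Below is a minimal sketch for reading the counters the Linux EDAC subsystem exposes in sysfs; it assumes the platform's EDAC driver (amd64_edac on Threadripper) is loaded and actually reporting, which not every board/BIOS combination does.

```python
#!/usr/bin/env python3
"""Report ECC error counters exposed by the Linux EDAC subsystem."""
from pathlib import Path

EDAC_MC = Path("/sys/devices/system/edac/mc")

def count(path: Path) -> int:
    """Read an integer counter file, treating absence as zero."""
    try:
        return int(path.read_text().strip())
    except (FileNotFoundError, ValueError):
        return 0

if not EDAC_MC.exists():
    print("No EDAC memory controllers found (is the EDAC driver loaded?)")
else:
    for mc in sorted(EDAC_MC.glob("mc*")):
        # ce = corrected errors (a stick is suspect), ue = uncorrected (data at risk)
        print(f"{mc.name}: corrected={count(mc / 'ce_count')} "
              f"uncorrected={count(mc / 'ue_count')}")
        for dimm in sorted(mc.glob("dimm*")):
            label_file = dimm / "dimm_label"
            label = label_file.read_text().strip() if label_file.exists() else dimm.name
            print(f"  {label}: ce={count(dimm / 'dimm_ce_count')}")
```

A steadily climbing ce_count on one DIMM is exactly the kind of early warning that would have fingered the weak stick without hours of manual testing.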


I could have saved myself hours of testing if I had just replaced the garbage stick sooner.

 

Swapped out the bad stick for one of the new sticks, loaded a 2866 CL16 profile that I had saved, and I am 30 minutes into a stressapptest run with zero errors. With the old stick in, I would start to see errors within 20 seconds of starting a stress test.
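For anyone wanting to script that kind of soak test, here's a minimal sketch that loops stressapptest in short bursts and bails on the first failure. It assumes stressapptest is installed and exits non-zero on failure; the burst length, memory size, and total duration are placeholder values to tune for your own system.

```python
#!/usr/bin/env python3
"""Loop stressapptest in short bursts; stop on the first failure."""
import subprocess
import sys
import time

BURST_SECONDS = 300       # short bursts, since a weak stick errored within ~20 s
MEGABYTES = 8192          # placeholder: size this to your free RAM
TOTAL_SECONDS = 4 * 3600  # overall soak length

start = time.time()
while time.time() - start < TOTAL_SECONDS:
    # -s: test duration, -M: MB of memory to exercise, -W: more CPU-stressful copy
    result = subprocess.run(
        ["stressapptest", "-s", str(BURST_SECONDS), "-M", str(MEGABYTES), "-W"]
    )
    elapsed = int(time.time() - start)
    if result.returncode != 0:
        print(f"stressapptest FAILED after {elapsed}s of cumulative testing")
        sys.exit(1)
    print(f"clean so far: {elapsed}s")
print("soak test passed")
```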


13 minutes ago, tictoc said:

I could have saved myself hours of testing if I had just replaced the garbage stick sooner. [...]

 

Woohoo :)


5 hours ago, tictoc said:

I could have saved myself hours of testing if I had just replaced the garbage stick sooner. [...]

That, my friend, is always the bloody way lol. We IT people can be stubborn haha.


  • 4 weeks later...

This build is still alive, but it's stuck on my test bench for now while I work out the software stack and make some final decisions on hardware. 

New RAM has been dead stable for 3 weeks running at 2933 CL19.
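As a rough sanity check on those numbers: first-word latency is CAS latency × 2000 / data rate (MT/s), so 2933 CL19 is comfortably quicker than the bad stick's ceiling of 2666 CL22. A quick back-of-the-envelope in Python:

```python
# First-word latency (ns) = CL * 2000 / data rate (MT/s)
for mts, cl in [(2666, 22), (2866, 16), (2933, 19)]:
    print(f"{mts} MT/s CL{cl}: {cl * 2000 / mts:.2f} ns")
# 2666 CL22: 16.50 ns; 2866 CL16: 11.16 ns; 2933 CL19: 12.96 ns
```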

 

Still not sure which way to go on the GPUs in the server.  I had plans to swap some hardware around, but the lack of GPU availability has kind of slowed progress.


I'll probably have to upgrade my bulk media storage sooner rather than later. I grabbed a few remastered Blu-ray box sets, and that is going to eat up a few TBs of storage. On a side note, the X-Files Blu-rays look really good, especially since Seasons 1 and 2 I had originally ripped from VHS.

Edited by tictoc

3 hours ago, tictoc said:

I'll probably have to upgrade my bulk media storage sooner rather than later. [...] On a side note, the X-Files Blu-rays look really good, especially since Seasons 1 and 2 I had originally ripped from VHS.

 

I ripped my X-Files on S-VHS - 400 lines of horizontal resolution - WOW.

 

On the other hand, 400 lines of resolution these days are sort of like...


Edited by J7SC_Orion


  • 2 months later...

I am hoping to get this build kick-started this weekend. I have basically just given up on GPUs for now, so this will probably just be running a pair of Vega 64s until hardware availability frees up.

 

My 42U rack has been stripped of everything but network gear, and I have sold most of the gear that was in the rack. I am downsizing to a small 12U rack for my network gear (router, 10G switch, 1G PoE switch) and power distribution (PDU and UPSes). I'll include some pics of that once I get things set up.


Solid server there. I'm running a similar setup: air-cooled with a 1920X downclocked and locked to 3.2 GHz, w/ ECC RAM @ CL17 2866. It runs my whole home setup: a few virtualized routers, a PBX, a game server, and a NAS/DLNA box with an LSI HBA passed through to it. The whole setup sits at about 100 W from the wall that way. The integrated 10G was a massive bonus, especially for the price of the board when I got it; a Taichi + X550 would have been almost $480 anyway. It's been in service about a year... not one hiccup, love it. It replaced a dual X5650 setup, and it's a night-and-day difference in every aspect: less power, less heat, boots quicker, and quite a bit snappier.

 

The only troublesome things I have found are:

1) This board lacks proper PCIe bifurcation options for using PCIe x16 4x NVMe cards. Maybe if two of us bug them about it at the same time, we can get them to release a new BIOS with those options available. I'm super surprised they overlooked this on a board of this caliber.

 

2) It doesn't play nice when trying to run Windows 7, 8, 8.1, or 10 LTSB natively. 7 & 8 will run, but on almost every reboot Windows will re-install all your hardware. 8.1 and 10 LTSB flat out won't boot; they sit spinning forever at bootup. LTSC and newer worked fine, and all the Linux and Unix variants I tried worked fine too. All of the OSes that won't run natively will run fine via virtualization, though.

 

Interested to hear more from your further testing.

 


  • 2 weeks later...
On 4/6/2021 at 8:53 PM, AllenG said:

Solid server there. I'm running a similar setup: air-cooled with a 1920X downclocked and locked to 3.2 GHz, w/ ECC RAM @ CL17 2866. [...] Interested to hear more from your further testing.

 

 

I'll shoot ASRock a message. I was able to get a beta BIOS for my X399 Taichi a few months before they released the official BIOS with proper bifurcation options (x4x4x4x4, x4x4x8, x8x8, x4x4) for all the slots.

 

Ultimately this machine will be running pretty much everything for my home (a quick reachability sketch follows the list):

  • VM with GPU pass-through for my main TV
  • Multiple testing VMs (Debian, Ubuntu, CentOS Stream w/ and w/o GPU pass-through)
  • Local mirror and build server for all my Arch boxes
  • Build server for my Gentoo and Arch Pi's
  • Containers or VMs for: NextCloud, lan-cache, unifi, VPN, matrix, irc, and cameras
  • File and backup server running btrfs (48TB-RAID-10 and 8TB-RAID-1)
  • Monitoring and Log server - TICK/TIG and possibly Prometheus
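With that many services on one box, even a dumb TCP reachability check is handy between proper monitoring setups. The sketch below is purely illustrative; every hostname and port is a hypothetical placeholder to swap for wherever NextCloud, unifi, matrix, and the rest actually end up listening.

```python
#!/usr/bin/env python3
"""Dumb TCP reachability check for a pile of home-server services."""
import socket

# Every hostname/port below is a hypothetical placeholder, not from the build.
SERVICES = {
    "nextcloud": ("nc.lan", 443),
    "unifi":     ("unifi.lan", 8443),
    "matrix":    ("matrix.lan", 8448),
    "mirror":    ("mirror.lan", 80),
}

def up(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in SERVICES.items():
    state = "UP" if up(host, port) else "DOWN"
    print(f"{name:10s} {state:4s} {host}:{port}")
```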

Currently I am working through some bugs/regressions on ROCm, but once I get that ironed out, I will hopefully get this machine off the test bench and into its new home. :)

 

*EDIT* @AllenG You should send ASRock Rack a support message. http://event.asrockrack.com/tsd.asp   There is a beta BIOS floating around with more bifurcation options.  I haven't tested it yet.

 

Edited by tictoc

Flashed to a beta BIOS with additional PCIe bifurcation options. I've yet to test the additional options other than double-checking the link speed. No odd splits like x4x4x8 or lower like I have on some Intel platforms, but 4x or 8x NVMe for a smoking-fast DB server is now possible.

 

[Images: bif1.png, bif2.png, bif3.png]
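If you'd rather double-check negotiated link widths from a shell than from BIOS screens, lspci reports them per device. A minimal sketch, assuming pciutils is installed and that `lspci -vv` (ideally run as root) prints the usual LnkCap/LnkSta lines:

```python
#!/usr/bin/env python3
"""Print capable vs. negotiated PCIe link speed/width for every device."""
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

addr = name = ""
for line in out.splitlines():
    if line and not line[0].isspace():   # device header, e.g. "41:00.0 VGA ..."
        addr, _, name = line.partition(" ")
    m = re.search(r"Lnk(Cap|Sta):.*?Speed ([\d.]+GT/s).*?Width (x\d+)", line)
    if m:
        kind, speed, width = m.groups()
        label = "capable" if kind == "Cap" else "running"
        print(f"{addr:12s} {label:8s} {speed:8s} {width:5s} {name[:48]}")
```

A card that should be split x4x4x4x4 but shows a single device running x16 is exactly the failure mode described a few posts down.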

Edited by tictoc

5 minutes ago, tictoc said:

Flashed to a beta BIOS with additional PCIe bifurcation options. [...]

This interests me. 


Yeah, I ended up getting that beta BIOS too after e-mailing a few times. Unfortunately, in my case the functions don't seem to work correctly. The closest thing I can get working is dividing slot PCIE6 into x8/x8... this sees NVMe slots 1 & 3 of a 4x NVMe AIC such as the ASRock or Asus card. x4x4x4x4 does not work; it stays in x16 mode.

 

The other slots (PCIE2/3 and PCIE4/5) are actually already bifurcated by ASRock, so I am not surprised I don't get working results out of them. All four of those x8 slots actually stem from two x16 links that have each been split into two x8s to make the four slots. PCIE6, on the other hand, is a full native x16 slot without any of that splitting going on, so really they could get that one working fully... but I think they just copied some code from the Taichi that is close, but not quite right, and don't want to check it further. PCIE6 is really the ticket to getting a 4x NVMe card running.

To be honest, bifurcation options on those shared slots wouldn't really function correctly anyway from what I can tell, as the board is always auto-detecting a device in the slave slot and changing bifurcation options on its own, every boot. My testing below also shows some oddities on the shared slots, so I'm wondering if they wired things a little non-standard for the slave slots, making an x4x4x4x4 split out of the masters of those master/slave ports physically impossible? Not entirely sure; all I know is it definitely didn't work as you'd think it would or should. LOL

 

For those interested, here is what I found and let ASRock know... but they still haven't said anything back after about two weeks. It kinda seems like they want to drop the issue even though it's not fully fixed. It's kind of a long list of testing and outcomes... but hey, it might help.

 

Spoiler

Tools I had available to test:
1 x Asus Hyper M.2 x16 card w/ 3 NVMe SSDs in slots 1-3
1 x single M.2 x4 NVMe AIC w/ 1 NVMe SSD

 

-PCIE6:
x16 works correctly.
x8x8 works correctly; sees NVMe slots 1 & 3 of the AIC.
x4x4x4x4 does not work; stays in x16 mode.

 

-PCIE2/3:
x4x4x4x4 does not work; only one PCIe slot can be used at a time in this config, and it doesn't seem to care which.
Testing in PCIE2 (master, x16) sees only NVMe slot 1 of the AIC.
Testing in PCIE3 (slave, x8) sees only NVMe slot 2 of the AIC (it will not see the first x4 in this config, only the 2nd x4 of the x8 in the slot; confirmed by using the additional single x4 NVMe AIC).

x8x8 mode does the same thing as x4x4x4x4: only one PCIe slot can be used at a time. It no longer divides the x16 into the two x8 slots properly like it used to, effectively making one slot useless.

Auto Switch mode behaves the same way.

 

-PCIE4/5: exact same behavior as PCIE2/3. The master (PCIE4) sees only NVMe slot 1 of the AIC, the slave (PCIE5) sees only slot 2, and x8x8 and Auto Switch modes both stop dividing the x16 properly, leaving one slot useless.

 

-PCIE6 is close to working. Since it has no slave slot attached, getting that one fully working in x4x4x4x4 mode would be a big help on its own. I can tell that dividing up the other two x16 master slots (PCIE2, PCIE4), which have x8 slave slots (PCIE3, PCIE5), might be harder or maybe impossible. Those two slot sets definitely took a big step backwards and were better left working the way they had been... if it's a trade-off issue, I understand it would have to disable the slave x8 slots in order to run the two master x16s in x4x4x4x4 mode.

 

(This was the meat of the testing correspondence I sent to ASRock.)
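A quick way to see exactly which AIC positions enumerated after each BIOS change is to map NVMe controllers to their PCI addresses. A minimal sketch assuming the standard Linux sysfs layout (correlating addresses to physical slots is still a board-manual exercise):

```python
#!/usr/bin/env python3
"""Map each NVMe controller to the PCI address it enumerated at."""
import os
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    # /sys/class/nvme/nvmeX/device is a symlink into the PCI tree;
    # the last path component is the controller's PCI address.
    pci_addr = os.path.basename(os.path.realpath(ctrl / "device"))
    model_file = ctrl / "model"
    model = model_file.read_text().strip() if model_file.exists() else "?"
    print(f"{ctrl.name}: {pci_addr}  {model}")
```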

 

 

 

Edited by AllenG

BWG (Folding@Home Staff):

I was looking at the board pics and scratching my head at their choice of colors for an AMD board. Kinda glad to see they were hidden after you installed things. I see you're a fan of Monsoon fittings too; I really enjoy their stuff. It's different and easy to work with.


 

3 hours ago, AllenG said:

Yeah, I ended up getting that beta BIOS too after e-mailing a few times. Unfortunately, in my case the functions don't seem to work correctly. [...]

The board does have some limits on the number of addressable peripherals. On the previous BIOS I tested with all the PCIe slots populated and PCIE2 split into two slots with a passive splitter. In order for all the devices to be recognized, I had to disable onboard audio and the USB 3 header. I've run into this issue on other platforms, and depending on what you need, there can be workarounds with some things disabled.

 

Your testing seems pretty thorough, but I'll give it a go with an ASRock Ultra Quad M.2 card.

 

1 hour ago, BWG said:

I was looking at the board pics and scratching my head at their choice of colors for an AMD board. [...]

 

The choice of colors is pretty standard for a server motherboard. :) I don't think I've ever seen an ASRock Rack board that is anything other than green with white, blue, or black RAM and PCIe slots.
