Welcome to ExtremeHW
Posts: 2,210 · Days Won: 96 · Feedback: 0%
Everything posted by J7SC_Orion
-
Below...
1. Updated DLSS performance (3DM Port Royal as before, but added the DLSS feature test).
2. Thinking of installing the MSI Aero 4x M.2 add-in card that came with the X399 Creation mobo for extra storage (7x M.2 total)...just need to figure out which slot (given dual GPUs) and whether to trust the PCIe riser cable that came included with the TT Core P5.
-
IMO, nuclear power technology has matured, with newer and very different systems that either use waste from the prior (current) generation of nuclear plants and/or do not produce the same type and level of byproducts and waste. They are also much safer, with emergency shutdowns based on natural physical laws rather than relying on a (possibly submerged or damaged) diesel generator... One problem seems to be that a lot of public- and private-sector utilities placed humongous financial bets (50-year bonds etc.) on the original 'dirty' tech, which stemmed from earlier applications (such as nuclear subs) where space was at a premium - and in the process acquired a horrible environmental reputation with the public, not to mention Fukushima Daiichi, Chernobyl, Three Mile Island... If they can solve the hydrogen storage and transportation issues, I do believe hydrogen could be a game changer, not least as a lot of the 'globally installed tech base' could easily be converted (ie. most types of internal combustion engines). This is akin to the battery technology race in electric mobility. Time will tell, though with the current 'cov-virus' leading to massive public-sector debt and reduced economic activity, energy-related R+D and implementation budgets could get quite dented.
-
Yeah, the RTX launch was a bit odd. That includes the closely-spaced release of the Titan RTX with only marginal gains for twice the cost compared to the 2080 Ti (unless the extra VRAM was crucial to given apps)...perhaps all rushed because of the mining collapse? I'm taking my time re. a potential 3080 Ti upgrade, as I want to know more about AMD's upcoming flagship and even NVidia's 'Hopper'. Add to that the much stronger performance and applicability of DLSS 2.0, even for current RTX cards... That said, on the 'mini-enterprise' level, NVidia's Ampere DGX A100 'mGPU' system actually looks like a deal for a given set of companies...8x A100 GPUs, 600 GB/s NVLink bridges (non-SLI), 2x 64c/128t AMD CPUs and on and on...about half the price of the current version but with much higher performance...since NVidia is not in the charity business, they must be expecting some stiff competition in that lucrative space from Intel and/or AMD and their next-gens (and are perhaps even pulling Hopper ahead for that). All in all, this should mean a whole busload of architectural trickle-down tech for consumer and prosumer cards
-
Tx...I also wonder what the specs of the 3080 Ti will be, including on power consumption...not least as the just-announced Ampere A100 7nm workstation card, at 6912 CUDA cores / 40GB HBM2, is rated at 400W @ ~1.41 GHz according to AnandTech (repeated below) - and that apparently is still not the full-fat GA100 die, which has not yet been revealed. The 3080 Ti is unlikely to have the 6912 CUDA and respective Tensor cores (though it is all just guessing at this stage) and will likely come with a smaller GDDR6 VRAM set. But still - 400W @ ~1.41 GHz on 7nm is a lot, though it has 54 billion transistors...Maybe I should stock up on SuperFlower's LEADEX Platinum 2000W PSUs, HeHe
-
I know what you mean...I have been thinking about upgrading from the 2950X / X399 / dual 2080 Tis but there really is no need, at least not yet. The 2950X is a good sample (low-v, high oc, strong IMC) and I use 4K monitors, so GPUs carry a bit more weight in upgrade decisions. Also, I need HEDT work-wise. That superb 3950X / X570 Aqua should last you for quite a while; still, it is good to have an upgrade path if / when you want to exercise that option. BTW, how do you like that EVGA 1600 Supernova P2 ?
-
3D Bench Mark, Older but still relative....
J7SC_Orion replied to schuck6566's topic in Benchmarking General
...cold and rainy Saturday + some Covid restrictions make for some good benching weather. I re-ran my favourite (Port Royal) and also the DLSS feature test for the first time (which is basically 2x Port Royal in a row, with / without DLSS). That makes it a nice long bench to test cooling system performance...all those pumps and rads did their job; the GPU loop is not heat-soaking and temps remained fairly flat at under 30 °C / full load, which makes a big difference with 2080 Tis
-
Nice! When I checked the pricing for the Aquacomputer D5 Next, I started to feel a bit guilty, but then checked other D5 pump and water-cooling gear prices - wow, what happened (rhetorical question)? The good part about D5s is that they last a long time, but since we bought a bunch back in 2013-2014 (same for XSPC RX 360 rads, fittings, GentleTyphoon and other hi-po fans), I haven't really kept up on pricing...just clean and recycle the old stuff. That said, if/when more D5s are needed, it's going to be the Aquacomputer D5 Next with embedded flow meter. Looking forward to your full build pics - those Asrock X570 Aqua full-block mobos are cool(ed) and gorgeous (not to mention ready for the next-gen Ryzen)
-
Dual loops? What is EHW coming to? Elsewhere @ EHW, there was a bit of coverage of why it can make sense to have dual D5 pumps in series per loop under certain circumstances. As context, this includes fail-over (in particular when dealing with workstations and servers), as well as usually higher and more consistent flow, and more resilience to air bubbles - especially if there are multiple rads and CPU + GPU blocks. That said, D5s rarely break (unless they ran dry for extended periods), and dual D5 pumps are certainly not a necessity even for a high-end enthusiast build project. Still, I prefer to build all HEDT (work and play) systems with dual D5s per loop.

But what about separate loops for the CPU (w/ perhaps VRM) and the GPUs? Why does that even make sense, especially if a given loop layout already has dual pumps? From a cooling temp point of view, it really isn't necessary, and it also costs a bit more (extra reservoir, fittings, tubing, pumps...). But it can have advantages:
- First, when building a complex water-cooled system with a single loop serving 4+ rads, a CPU block and dual (or more) GPU blocks, there will invariably be a lot of twists and turns via tubing and/or angled fittings. Splitting the loop into two discrete ones is easier to lay out (depending on the case, of course), avoids tighter turns such as 90-degree ones, and also reduces the number of overall impedances per loop.
- Second, loop maintenance becomes a bit easier (ie. draining, air bubble bleeding).
- Third, changing, say, the CPU and/or mobo but keeping the GPU(s) later on becomes easier...ditto for changing the GPU(s) but keeping the rest.
- Fourth, it is fun!
- Fifth, I like anchovies on my pizza, so there... apparently, Iamjanco likes anchovies, too: Do Orcas eat anchovies?
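For a rough sense of why two pumps in series raise flow but don't double it, here's a minimal sketch: series pumps add head at a given flow, while the loop's resistance rises with the square of flow, so the operating point only moves partway up. The D5-ish head/flow figures and the loop constant below are illustrative assumptions, not vendor specs.

```python
import math

# Illustrative figures only (rough D5 ballpark, NOT vendor specs):
H_MAX = 3.9     # max head of one pump, metres (at zero flow)
Q_MAX = 1500.0  # max flow of one pump, L/h (at zero head)

def operating_flow(n_pumps: int, k: float) -> float:
    """Flow (L/h) where n identical series pumps meet a loop curve H = k*Q^2.

    Pump curve modelled as H(Q) = n * H_MAX * (1 - (Q/Q_MAX)**2); pumps in
    series add head at the same flow. Equating pump head to loop head and
    solving for Q gives a closed form.
    """
    nh = n_pumps * H_MAX
    return math.sqrt(nh / (k + nh / Q_MAX**2))

# Pick a loop constant k so a single pump lands around ~250 L/h, a
# plausible figure for a restrictive multi-rad CPU + GPU loop.
k = H_MAX / 250.0**2 - H_MAX / Q_MAX**2

q_single = operating_flow(1, k)
q_dual = operating_flow(2, k)
print(f"single D5: {q_single:.0f} L/h, dual in series: {q_dual:.0f} L/h")
```

With these toy numbers the second pump adds roughly 40% more flow, not 100% - which matches the usual advice that the second D5 is mainly about fail-over and consistency rather than doubling flow.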
-
...Tx. And in the interest of public safety, I also already warned innocent bystanders of potentially contracting pumpritis and fanritis after prolonged exposure, here & below
-
...I usually set all pumps to the same speed, though I have read that w/ D5-type pumps it is not that big an issue if speeds are different, other than maybe efficiency...the argument is along the same lines as why 'fail-safe' works w/ D5s, ie. once one pump is out, flow still works via the other... ...if you've got the room next to the pump / rez combo and don't end up with tight '90 degree' bends etc., that would work fine. Pictures / vid when you're done!
-
...oh, one more thing: A note of caution : Once you get into multiple pumps, you might contract 'pumpritis' and 'fanritis', like the ongoing build of my buddy 'Iamjanco' underscores
-
Well, fresh from Jensen Huang's kitchen per this morning's event: Not much on Ampere 'gaming' yet but, as expected, plenty on data-center versions. The top Ampere data-center version seems to have 6912 Cudas (not 8192 as per earlier leaks) - for now, anyway ;-) That's still a lot of room for various lower-level versions, such as Titan and Ti. It seems the whole generation will run on the same base architecture (thus including 'GeForce'), but obviously w/ different Cuda and tensor counts and VRAM types (HBM2 vs GDDR) and amounts.
-
...probably doesn't really matter, as long as both (D5) pumps are below the reservoir (and thus always get 'fed' and lubricated). As far as I recall, your current pump is a combo / at the bottom of the reservoir, so if you can find some space in the case 'near it' and vertically below the reservoir for the 2nd pump, it should not make a difference cooling-wise - flow will equalize in the loop anyhow. If there's enough room at the back of your case and some space for tube pass-throughs, that's another option as well. Finally, within the above parameters, I would put the second pump between the CPU block and GPU block, but only if I had the choice. BTW, an example of a nice free-standing D5 with OLED display (incl. rpm, L/hr)... (YouTube: ya-8E7Hzj3s)
-
...along the lines of what happened with DLSS 1.0 > DLSS 2.0 ? Per Nvidia's site, " The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games."
-
...depends on pricing. But with RTX, the 2080 Ti has 4352 Cuda / 544 tensor cores versus the Titan RTX, Quadro RTX 8000 et al. that have 4608 Cuda / 576 tensor (the Tesla V100 is a different beast / architecture). So the much 'cheaper' (lol) 2080 Ti is already very close to the full-fat RTX, and some custom PCBs can match the Titan RTX etc. via higher clocks. It only really depends on whether the extra Titan RTX VRAM and select other features are needed by the user. If the Ampere rumours pan out, the new top-spec GA100 has 8192 Cuda and 1024 tensor vs. the GA102 (3080 Ti ?) at 5376 Cuda and (? tensor)...a much more dramatic difference. But Nvidia will undoubtedly charge for this extra oomph. One wildcard re. purchasing decisions is what AMD's Big Navi and Intel's Xe end up bringing to the table and at what price...another one is 'Hopper' lurking
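To put those gaps in perspective, quick arithmetic on the core counts (figures as quoted above; the Ampere ones were rumours at the time):

```python
def pct_gain(base: int, full: int) -> float:
    """Percentage increase of the full-fat part over the cut-down one."""
    return 100.0 * (full - base) / base

# Turing: 2080 Ti (4352 Cuda) vs Titan RTX / Quadro RTX 8000 (4608 Cuda)
turing_gap = pct_gain(4352, 4608)
# Rumoured Ampere: GA102 at 5376 Cuda vs top-spec GA100 at 8192 Cuda
ampere_gap = pct_gain(5376, 8192)

print(f"Turing gap: {turing_gap:.1f}%")           # ~5.9% - 'very close'
print(f"Rumoured Ampere gap: {ampere_gap:.1f}%")  # ~52.4% - 'much more dramatic'
```

A ~6% core deficit is easily covered by a good custom PCB and higher clocks; a ~52% one is not, which is the whole point about Nvidia charging for the extra oomph.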
-
...good point "that the API fragmentation regarding DX, Vulkan, OpenGL has not helped matters either. It is great to have the competition for sure but multi-gpu has been even more hit and miss over these different API's." What I'm thinking, based on some rather preliminary and speculative pieces, is that future mGPU will not so much be separate cards (like current NVLink/SLI), but more like multiple (ie 5nm) chiplets on a single card, thus the renewed need for some mGPU drivers...and when properly implemented, CFR has some advantages over AFR. But as with anything else 'forecasting is difficult, especially when applied to the future'...
-
...things are definitely heating up in the 'pre-Ampere' release arena - apart from NVidia's briefing event tomorrow, which may be more focused on next-gen pro cards. Also, similar info from TweakTown a few days ago. If all this is true, NVidia seems to be re-establishing a clearer line between their consumer and pro models: GA100 *Specs* - 8192 CUDA cores, 1024 Tensor Cores, 256 RT Cores. That one might tide me over until Hopper
-
Yeah, I saw that...reminds me a bit about the Ferrari situation in the 1966 "Grand Prix" film (hopefully with a different ending). I always wanted to see Seb at Mercedes...though it would get quite crazy between Hamilton & Seb. Also wondering re. the Covid-19 race schedule this year
-
...we have some proprietary and somewhat complex databases that started up in 1996 (to present)....GPU acceleration is for our data analytics and deep learning. At the time of the GPU purchase, NVidia had just extended relevant software to run on Titan RTX and 2080 Ti...
-
Well Sir, you might have seen Orca and his pod come up for air here and there before... ...and yes, lonely slim 120mm above VRM and DRAM is supposed to help w/ airflow, though not really necessary, given temp sensors...still, why not add to the 20+ 120mm fans anyhow in this build to make 'Orca' really comfy?
-
...single pump ? My inner Bonobo is offended (...then again, he's offended quite easily...) But seriously, never had a single D5 fail in operation at work or home in 10 years plus; though I still run two of them in series for fail-over (just in case..)
-
GAME: Ban the Above User for a Reason - EHW Edition
J7SC_Orion replied to Simmons's topic in Chit Chat General
banned coz..you're doing well with this weight lifting program, even if Leo & Gus are chikinz-less in their tummies for now...keep it up ! -
Admins please note: This thread is intended to settle somewhere between 'GPU' and 'Benchmarking'.

...Ah, the joys of mGPU (multi-GPU), such as NVidia SLI/NVLink and AMD Crossfire/Quadfire...and why CFR (checkerboard / tile-based) mGPU vs AFR (traditional Alternate Frame Rendering) matters...maybe not now, but certainly in the not-too-distant future.

Yes, yes - there is the chorus that SLI/mGPU "is dead"...while not entirely true, it certainly is the case that a single GPU will usually be far more painless to optimize for a given game, or really be the only option for other games...that said, I recently switched from four Quad-SLI / Quad-Fire systems to 'only' dual NVLink (2x 2080 Tis), and while I do not play as many games on this HEDT system (which also does 'work') as others do, I have relatively few problems with NVLink in my fav games, such as various NFS titles and also Metro Exodus, never mind 3DM and other benches.

Yet this thread is NOT intended to convert anyone to mGPU. Instead, it is pointing out that there seems to be a movement afoot by various GPU producers (NVidia, Intel and likely AMD) to introduce 'mGPUs' in future generations of their GPUs. Think AMD Ryzen 7nm and soon smaller 'chiplets' vs Intel's difficulties with large 10nm 'all-in-one' giant and complex dies in the CPU realm. Likely, the next gens of GPUs will still be single-die, but sooner or later it will be mGPUs for the middle- and upper-class performance graphics cards - for which you need extremely well-performing mGPU drivers.

As such, NVidia released, rather quietly, CFR capability in their drivers as of late 2019 - for RTX only. CFR is actually not new, but was supplanted by the easier-for-developers AFR. Yet with future GPUs, CFR seems to be the far more capable ticket for future mGPU generations than AFR... Below are some early CFR vs AFR comps with the current gen of RTX GPUs.

I have already run benchmarks of my own, such as 8K Unigine Superposition with CFR vs AFR, but much more (tedious mod and setup) work is needed. I will update this post as I get more results of my own, time permitting... In the meantime, I will note that CFR is not always faster than AFR in outright FPS, but it seems to have better frame times...and below are some YouTube vids someone else ran for DX12 (Titan RTX) and Tom Clancy's Division 2 (DX11)...have fun and plan your next mGPU (oops )
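The AFR-vs-CFR distinction can be sketched in a few lines - purely a toy illustration of the two work-distribution schemes, not NVidia's actual driver logic:

```python
# Toy sketch: how AFR and CFR split rendering work across GPUs.

def afr_assignment(n_frames: int, n_gpus: int) -> dict:
    """AFR: each GPU renders whole frames in alternation (frame i -> GPU i mod n).
    A slow frame on one GPU shows up directly as frame-time stutter."""
    return {frame: frame % n_gpus for frame in range(n_frames)}

def cfr_assignment(tiles_x: int, tiles_y: int, n_gpus: int) -> dict:
    """CFR: every frame is split into tiles handed out checkerboard-style,
    so all GPUs co-render each frame and per-frame load is shared -
    one reason CFR tends to give more even frame times."""
    return {(x, y): (x + y) % n_gpus
            for y in range(tiles_y) for x in range(tiles_x)}

print(afr_assignment(4, 2))     # frames alternate between GPU 0 and GPU 1
print(cfr_assignment(2, 2, 2))  # each frame's tiles split across both GPUs
```

The practical upshot: AFR can post higher peak FPS (frames pipeline nicely), while CFR's shared-frame approach helps frame pacing - consistent with the CFR-not-always-faster-but-smoother observation above.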
-
lol ...point well taken - plug away for EHW ! I do point out though that this build is in part a workstation that, well, works for a living. Also, from my perspective, I was being prudent with 'Orca', coming from at least four prior HEDT projects that were either NV Quad-SLI or AMD Quad-Fire, needing multiple PSUs and sturdy circuit-breakers...
-
I used to live in places with high humidity, but I can now confirm that ocean breezes are nice re. humidity, apart from keeping the mosquitoes at bay... As to the overall theme of global warming, 'human activities' clearly play a very big role, and '''we''' have to think about how we can contribute to solving this by looking at our daily behavior, especially with 7.8 billion people now on our earth. All that said, there are also 'natural cycles' which drive climate change...even trickier when they combine with human-made climate impacts.

BUT...at the end of the day, there needs to be a discussion about how technology can best be employed (and its potential risks mitigated) to check the rise of (if not lower) temps, humidity and greenhouse gasses. IMO, things have already progressed so much and moved us so much closer to tipping points that we need technical solutions, not just 'behavior modification'. The back-and-forth between 'man-made climate impacts vs natural climate cycles' is somewhat irrelevant anyway - because it needs to be dealt with one way or the other. Consider the billions of people that live in cities and areas near oceans and river deltas (where much of early civilization was established). Rising temps will lead to, among other things, higher water levels and flooding - and it makes little difference what caused it. It is happening and we have to deal with it, instead of suggesting, rightfully or wrongfully, that 'it wasn't man-made, so ignore it'...

One day, we'll manage to get to the hydrogen economy...since the rest of the universe seems to run on hydrogen as an energy source, clearly 'not a bad' idea. The overriding challenge seems to be how to SAFELY store and transport it...now back to my overpowered 1000+ watt computer

