
Posts posted by PontiacGTX

  1. This feels like a downgrade if there isn't a single model with more VRAM; the RX 5500 has 8 GB models, and if this one has 6 GB it will be slower once games use 6 GB or more. On the bandwidth side, it will be interesting to see whether Navi has good enough utilization and compression, but technically the performance with the same core count should be pretty much the same as a 5700 non-XT, provided it isn't bandwidth limited and the BIOS isn't limiting the power. A proof of that is how much better an RX 5700 performs once it is running an RX 5700 XT BIOS, boosting performance by around 9%.

  2. The full specifications for the card are listed and it is indeed based on the 7nm Navi RDNA graphics architecture. The surprising thing is that the Radeon RX 5600 XT has a much better GPU specification than we expected. Rocking 36 compute units, or 2304 stream processors, this chip offers the same core count as the Radeon RX 5700. The specific variant for the Radeon RX 5600 XT has not yet been confirmed, but it could either be a low-binned Navi 10 SKU or a totally different GPU.

     

     

    Coming to the memory design, this is where we start seeing major differences between the Radeon RX 5700 and the Radeon RX 5600 XT. While the Radeon RX 5700 rocks 8 GB of GDDR6 memory on a 256-bit wide bus interface, the Radeon RX 5600 XT would rock 6 GB of GDDR6 memory on a 192-bit bus interface. The Radeon RX 5700 also delivers a higher 448 GB/s of bandwidth, utilizing 14 Gbps DRAM dies, while the Radeon RX 5600 XT would offer 288 GB/s of bandwidth, utilizing slower 12 Gbps DRAM dies.
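
    A quick back-of-the-envelope check of those figures (a minimal Python sketch of my own, using only the bus widths and data rates quoted above):

    def gddr6_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        # Peak bandwidth (GB/s) = bus width in bytes * per-pin data rate (Gbps)
        return (bus_width_bits / 8) * data_rate_gbps

    print(gddr6_bandwidth_gb_s(256, 14))  # Radeon RX 5700: 448.0 GB/s
    print(gddr6_bandwidth_gb_s(192, 12))  # Radeon RX 5600 XT (rumoured): 288.0 GB/s
    # For reference: 36 CUs * 64 stream processors per CU = 2304, matching the RX 5700.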

     

    Source

  3. A new flood of leaks has flowed onto the net, so it increasingly looks like Intel will launch its 10th-Gen Core CPUs for the desktop, codenamed Comet Lake, at CES in early January. Most notably, according to the latest leaked information, the i9-10900K will feature 10 cores with a maximum "velocity boost" of 5.3GHz. The new chips should also mark the debut of the new 400-series chipset.

    According to Informatica's slides, the K-series of overclockable CPUs all have a 125W TDP, Hyper-Threading, UHD Graphics 630, and support for DDR4-2933 and 40 total platform PCIe 3.0 lanes. The slides also point to enhanced core and memory overclocking support and ‘Active Core Group Tuning.’ Other notable listings include Intel Rapid Store Technology (likely IRST), and Wi-Fi 6 and 2.5G Ethernet support.

     

     

    Logically, the Core i9-10900K would succeed the i9-9900KS. It features 10 cores and 20 MB of cache and has a base frequency of 3.7GHz, but this is improved via a new boost technology for the desktop. The single-core turbo frequency is listed at 5.1GHz, but this is extended to 5.2GHz with Turbo Boost Max Technology 3.0, and further improved to 5.3GHz with Thermal Velocity Boost (TVB), which has now apparently expanded from the mobile segment to the desktop. If Intel sticks to the same tactic it used for the mobile space, the TVB opportunistic boost will kick in when the processor falls below a 50°C temperature threshold, so much like the standard Turbo Boost frequencies, it won't be guaranteed in all conditions. The slides also list the all-core turbo at 4.9GHz.
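
    As a rough illustration of how those boost tiers stack (a hypothetical Python sketch built only from the leaked figures above; Intel's actual boost algorithm is more involved and not public):

    def single_core_turbo_ghz(core_temp_c, favoured_core=True):
        freq = 5.1                  # Turbo Boost 2.0 single-core turbo
        if favoured_core:
            freq = 5.2              # Turbo Boost Max 3.0 on the best-binned cores
            if core_temp_c < 50:
                freq = 5.3          # Thermal Velocity Boost below ~50°C, not guaranteed
        return freq

    print(single_core_turbo_ghz(45))  # 5.3 -> cool enough for TVB to kick in
    print(single_core_turbo_ghz(65))  # 5.2 -> TVB headroom lost above the threshold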

     

    Source

     

    Unless Intel prices this accordingly, I don't see people spending more for a 10-core than for a 12-core CPU.

     

     

  4. Exascale for Everyone

     

     

    Intel says that it is hard not to notice the ‘insatiable’ demand for faster, more power efficient compute. Not only that, but certain people want that compute at scale, specifically at ‘exascale’. (It was disclosed at a high-performance supercomputing event, after all). For 2020 and beyond, Intel has designated this the ‘Exascale’ era in computing, where no amount of compute is good enough for leading edge research.

     

    [Intel DevCon 2019 presentation slide]

     

    On top of this, Intel points to the number of connected devices in the market. A few years ago, analysts were predicting 50 billion IoT devices by 2020-2023, and in this presentation Intel is saying that by mid-2020 and beyond, there will be 100 billion devices that require some form of intelligent compute. The move to implementing AI, both in terms of training and inference, means that performance and computational ability have to be ubiquitous: beyond the network, beyond the mobile device, beyond the cloud. This is Intel’s vision of where the market is going to go.

     

    [Intel DevCon 2019 presentation slide]

     

    Intel splits this up into four specific categories of compute: Scalar, Vector, Matrix, and Spatial. This is certainly one part of the presentation I can say I agree with, having done high-performance programming in a previous career. Scalar compute is the standard day-to-day compute that most systems run on. Vector compute moves to parallel instructions, while Matrix compute is the talking point of the moment, with things like tensor cores and AI chips all working to optimize matrix throughput. The other part of the equation is Spatial compute, which is derived from the FPGA market: for sparse compute that is complex and can be optimized with its own non-standard compute engine, an FPGA solves it. Obviously Intel’s goal here is to cover each of these four corners with dedicated hardware: CPU for Scalar, GPU for Vector, AI for Matrix, and FPGA for Spatial.
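
    To make the first three of those categories concrete (a small NumPy illustration of my own, not something from Intel's slides; Spatial/FPGA compute doesn't map to a few lines of Python):

    import numpy as np

    a = np.random.rand(1024)
    b = np.random.rand(1024)

    # Scalar: one multiply-accumulate per loop iteration, the general-purpose CPU case
    total = 0.0
    for x, y in zip(a, b):
        total += x * y

    # Vector: the same dot product expressed as a single parallel operation
    total_vec = np.dot(a, b)

    # Matrix: batched multiply-accumulate, the workload tensor cores and AI accelerators target
    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    C = A @ B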

     

    [Intel DevCon 2019 presentation slide]

     

    One of the issues with hardware, as you move from CPU to FPGA, is that it becomes more and more specialized. A CPU for example can do Scalar, Vector, Matrix, and Spatial, in a pinch. It’s not going to be much good at some of those, and the power efficiency might be poor, but it can at least do them, as a launching point onto other things. With GPU, AI, and FPGA, these hardware specializations come with different amounts of complexity and a higher barrier to entry, but for those that can harness the hardware, large speed-ups are possible. In an effort to make compute more ubiquitous, Intel is pushing its oneAPI plan with a singular focal resource for all four types of hardware. More on this later.

     

    [Intel DevCon 2019 presentation slide]

     

    Intel’s Xe architecture will be the underpinning for all of its GPU hardware. It represents a new fundamental redesign from its current graphics architecture, called ‘Gen’, and pulls in what the company has learned from products such as Larrabee/Xeon Phi, Atom, Core, Gen, and even Itanium (!). Intel officially disclosed that it has its first Xe silicon back from the fabs, and has performed power cycling and basic functionality testing with it, keen to promote that it is an actual thing.

     

    [Intel DevCon 2019 presentation slide]

     

    So far, the latest ‘Gen’ graphics we have seen is the Gen11 solution, found on the newest Ice Lake consumer notebook processors. These are out in the market, ready to buy today, and offer roughly 2x the performance of the previous Gen9/Gen9.5 designs. (I should point out that Gen10 shipped in Cannon Lake but was disabled: this is the only graph where I’ve ever seen Intel officially acknowledge the existence of Gen10 graphics.) We have seen diagrams, either potentially from Intel or elsewhere, showing ‘Gen12’. It would appear that ‘Gen12’ was just a holding name for Xe, and doesn’t actually exist as an iteration of Gen. When we asked Raja Koduri about the future of Gen, he said that all the Gen developers are now working on Xe. There are still graphics updates coming to Gen, but the software developers who could be transferred to Xe already have been.

     

    Since I didn't find a thread related to this, here is the link to the analysis

  5. What settings are you guys running? We should establish a default so our scores are a bit more normalized... Crytek should show relevant specs in the results ;)

     

    What a nice benchmark. The reflections look absolutely stunning, no difference that I can see between that and the "established" raytracing w/ RT cores.

     

    Here is a 980 Ti @ 1420 MHz core / 3900 MHz mem (at default and 1080p), 4000 MHz mem (4K)... It performs shockingly well at 1080p; this tech would easily make a 1080 Ti viable in games for a few more years. Not sure how the 980 Ti would fare once more assets are thrown into the mix, probably fairly well.

     

    Default (I believe that was 1366x768), RT ULTRA, windowed, no loop

    [ATTACH=JSON]{"alt":"Click image for larger version Name:\tneon noir.PNG Views:\t0 Size:\t517.1 KB ID:\t3122","data-align":"none","data-attachmentid":"3122","data-size":"medium"}[/ATTACH]

     

    1080P, RT ULTRA... windowed, no loop

    [ATTACH=JSON]{"alt":"Click image for larger version Name:\tneon 1080.PNG Views:\t0 Size:\t821.8 KB ID:\t3123","data-align":"none","data-attachmentid":"3123","data-size":"medium"}[/ATTACH]

     

    For giggles... 4K, RT Ultra, windowed, no loop... that ran about how you would expect such a benchmark to run on contemporary hardware :p

    [ATTACH=JSON]{"alt":"Click image for larger version Name:\tneon 4k.PNG Views:\t0 Size:\t2.09 MB ID:\t3124","data-align":"none","data-attachmentid":"3124","data-size":"medium"}[/ATTACH]

     

    Interesting result, so a 980 Ti does better than my stock RX Vega 56. Crytek has to improve their DX11 performance, or at least release this "raytracing feature" for DirectX 12 and Vulkan.
