9800X3D Specs leaked



Quote

Advanced Micro Devices (AMD) has announced the upcoming release of its Ryzen 9000X3D desktop processor, scheduled for November 7. 

https://www.guru3d.com/story/amd-set-to-launch-ryzen-9000x3d-desktop-processor-on-november-7/

 

 

 

Quote

Advanced Micro Devices (AMD) has officially implemented price reductions across its Ryzen 9000 "Granite Ridge" desktop processor lineup. The adjustments apply to all four current models, making these high-performance CPUs more accessible to consumers through retail channels. Notably, the flagship Ryzen 9 9950X, featuring 16 cores and 32 threads, has seen a price decrease of up to $50 from its initial launch cost, now available for approximately $600. Similarly, the Ryzen 9 9900X with 12 cores and 24 threads has been reduced by up to $30, bringing its price down to around $470. The Ryzen 7 9700X (8 cores/16 threads) and Ryzen 5 9600X (6 cores/12 threads) have also experienced price cuts of up to $30, with new prices set at approximately $330 and $250, respectively. These reductions are effective immediately, allowing consumers to benefit from the updated pricing without delay.

https://www.guru3d.com/story/amd-reduces-prices-for-ryzen-9000-series-desktop-processors/
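For a quick side-by-side, the figures in that quote work out to roughly the following (a throwaway Python sketch; the "previous" prices are just new price + cut, so they're approximations, not confirmed launch MSRPs):

```python
# Price cuts from the quoted Guru3D article: (approx. new price, "up to" cut).
cuts = {
    "Ryzen 9 9950X": (600, 50),
    "Ryzen 9 9900X": (470, 30),
    "Ryzen 7 9700X": (330, 30),
    "Ryzen 5 9600X": (250, 30),
}

for cpu, (new, cut) in cuts.items():
    # Implied earlier price is simply the new price plus the quoted cut.
    print(f"{cpu}: ~${new} (down up to ${cut} from ~${new + cut})")
```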

 

 

 

Definitely curious to see how the X3D chips shake out.
  • UltraMega changed the title to 9000X3D Specs leaked

Update:

 

Quote

AMD’s upcoming CPU will reportedly utilize eight Zen 5 cores with a 4.70GHz base clock speed, which can be boosted up to 5.20GHz - a 0.20GHz jump over the AMD Ryzen 7 7800X3D, and with the CPU multiplier unlocked, users will be able to manually overclock this new processor. This wasn’t possible with previous AMD CPUs that utilized 3D V-Cache, which could be a significant boon for overclocking enthusiasts.

 

The leaked specifications reference 96MB of 3D V-Cache, identical to the 7800X3D (which is among AMD’s best processors), but the aforementioned marketing leak suggested that the 9800X3D may have an 8% performance boost over AMD's current high-end gaming CPU. This is reportedly powered by ‘Next-Gen 3D V-Cache’, but the Geizhals leak doesn’t appear to add any more information.

 

AMD’s Ryzen 7 9800X3D full specifications have leaked, and it’s great news for gamers and overclockers alike

 


  • UltraMega changed the title to 9800X3D Specs leaked
On 28/10/2024 at 14:21, UltraMega said:

The most interesting rumor to me here is the unlocked multiplier. Definitely interested to see what it's actually capable of when pushed, if they've figured out a way to deal with thermals on the stacked cache.


On 29/10/2024 at 21:02, Sir Beregond said:

The most interesting rumor to me here is the unlocked multiplier. Definitely interested to see what it's actually capable of when pushed, if they've figured out a way to deal with thermals on the stacked cache.

Hopefully that comes with an unlocked core voltage beyond 1.2 V, and VSOC beyond 1.3 V as well.


21 hours ago, mouacyk said:

Hopefully that comes with an unlocked core voltage beyond 1.2v and vsoc beyond 1.3v also

Good call out, and agreed.


Whenever I can walk into Microcenter and grab one for under $400, I'll do so. I'm not really in a rush since I already have a 7800X3D rig. I also have a 7700X on the test bench that I could run as a placeholder, and go ahead and rebuild the 5900X rig soon-ish. Then I could toss the 9800X3D in the Gene and bench it before it gets swapped into the main rig.

 

I guess it'll depend on how unlocked it is, whether I want to set aside time to bench it first. Just haven't been as interested in PC stuff the past year or so.


30 minutes ago, Fluxmaven said:

Just haven't been as interested in PC stuff the past year or so. 

 

I'm just going to take this as an opportunity to go on a mini-rant:

 

We're at the point where Moore's Law is dead. 

 

I think Moore's Law actually died at 16nm. It's my understanding that there are parts of a CPU that just cannot get smaller than 16nm, whereas others can; individual parts of a CPU have different limits on how small they can be. While we're not quite at the limit for how small transistors can be, we're close, and we are already at the limits for a lot of other parts of modern chips. Ever since we started hitting these limits, there's been less and less to gain and less and less to be excited about with new hardware.

 

That's why efficiency gains are a bigger deal now than ever before: partly because increasing efficiency is one way to reduce heat and thus push chips harder, and also because there aren't big wins left for raw performance.

 

 

But then there are NPUs.

 

I honestly think Nvidia did not have a plan for DLSS when it launched the first NPU/tensor-enabled graphics cards. I don't know this to be true and it's pure speculation, but following the development of this stuff as it went on, I felt like DLSS was Nvidia trying to create something to justify the extra cost of the tensor cores. I think Nvidia knew they would be useful for something, and it wanted to build a foundation of hardware that could support a market for things like LLMs and all the other stuff NPUs can do. Nvidia was lucky that DLSS worked out so well, much better than I think they had any idea of when they first released the RTX 2000 series.

 

Fun fact about DLSS


At one point I did a deep dive into DLSS and how it compared to other stuff like FSR and XeSS to try to answer the question of "Could DLSS work on AMD GPUs that don't have tensor cores?" And the answer is definitely no.

 

DLSS hammers the tensor cores with data that gets processed extremely fast. The tensor cores can appear to be doing almost nothing during a DLSS workload because they process the data so quickly that there's a lot of idle time between loads. The data has to be processed super-fast for it to work at the level DLSS does, with temporal data and "AI"-driven inferencing and all that.

 

Another fun fact: XeSS on an Intel GPU actually processes way more data than DLSS, like ten times as much. On other GPUs it uses a lower-quality fallback method, but on an Intel GPU it's doing a ton of work.

 

AMD GPUs just don't have that capability right now. They can do AI workloads, but not real-time AI-driven 3D rendering workloads, because they just don't have a component that can push through huge amounts of inferencing data in real time.

 

I think NPUs are a very interesting future direction for hardware, much more so in relation to gaming than anything else going on at the moment. We're still in the early days of what NPUs can bring to the table for 3D rendering. NPUs could do a lot to speed up the actual rendering process, and so far we're just using them for image enhancement, so it's basically a post-processing effect in a lot of ways (though not entirely).

 

Nvidia was basically able to get an 8x multiplier of rendering power, in the sense that DLSS 3 with frame gen renders only 1 out of every 8 pixels you actually see while getting roughly comparable quality to native rendering, though with some obvious trade-offs. I don't think most people realize how impressive that is, because the end of Moore's Law makes these feel like just normal gains, when they're actually much more fascinating gains based on new hardware that is still really early yet extremely capable.
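The back-of-the-envelope math behind that "1 in 8" figure, assuming 4K output with Performance-mode upscaling (1080p internal render, i.e. 1/4 of the output pixels) plus frame generation interpolating every other frame:

```python
# Pixels displayed per frame at 4K vs. pixels actually rendered per frame
# at the 1080p internal resolution of DLSS Performance mode.
native = 3840 * 2160     # displayed pixels per frame
internal = 1920 * 1080   # rendered pixels per frame (1/4 of native)

# Frame generation interpolates every other frame, so over 2 displayed
# frames only 1 frame's worth of pixels is actually rendered.
rendered = internal * 1
displayed = native * 2

fraction = rendered / displayed
print(f"rendered fraction: {fraction}")  # 0.125, i.e. 1 of every 8 pixels
```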


28 minutes ago, UltraMega said:

 


We're at the point where Moore's Law is dead. 

 

I think Moore's Law actually died at 16nm. It's my understanding that there are parts of a CPU that just cannot get smaller than 16nm, whereas others can; individual parts of a CPU have different limits on how small they can be. While we're not quite at the limit for how small transistors can be, we're close, and we are already at the limits for a lot of other parts of modern chips. Ever since we started hitting these limits, there's been less and less to gain and less and less to be excited about with new hardware.

Yeah, I think in many ways yes, in some ways no.

 

Analog components (your I/O, etc.) stopped scaling with new nodes back at 28nm, and maybe even before then. Cache was scaling well until around 7nm and has been leveling off since. Logic (your CPU/GPU cores, etc.), on the other hand, continues to scale with each new process shrink, and actually looks like it might start scaling even better as we continue.

 

That seems to suggest scaling favors logic these days, while cache and analog transistors are already so much smaller than logic transistors that they're probably at their physical limits; in that sense, yeah, Moore's Law is dead.

 

And then you mention the efficiency part, which is interesting, because on one hand companies are really trying to market themselves as efficient, while the actual base power draw is so much higher these days than it used to be. Top-end GPUs used to max out at 250W. Now you have a 450W (default) 4090 that is arguably one of the most efficient performance-per-watt GPUs Nvidia has released to the consumer market, yet it's still drawing far more than we used to.

 

Anyway, I'm interested in hardware still to an extent, but likewise hard to get excited about stuff when large market segments have stagnated and only gotten more expensive.

 

[Attached image: chart from AMD]

 

Based on this chart from AMD, you can see one of the reasons chiplets made sense: put the logic on the most advanced node you want to use, and offload the I/O components to older nodes, because there's no reason to use the more advanced/expensive ones for them.


46 minutes ago, Sir Beregond said:

Yeah, I think in many ways yes, in some ways no.

 

Analog components (your I/O, etc.) stopped scaling with new nodes back at 28nm, and maybe even before then. Cache was scaling well until around 7nm and has been leveling off since. Logic (your CPU/GPU cores, etc.), on the other hand, continues to scale with each new process shrink, and actually looks like it might start scaling even better as we continue.

 


The gains are getting smaller and smaller, and further apart. Even if smaller nodes scale well, we're still almost at the end of the road. We're at 3nm right now, right? We can't get much smaller than 1nm, and even if we can, it'll be just barely. Once we get there, that's it: the only gains left will be architecture improvements, until and unless we find some new way to make chips.


I thought predictions about compute chips (CPU/FPU then; NPU/APU now, etc.) from some 25 years ago (1999 computer mags) envisioned that chips would become independent rather than combined, once proper interconnects (the theory then was graphene, photonics, etc.) could handle the data sets at the speeds required for the "future" of computing to continue.

 

We're seeing this beginning to happen now, no? 🤔
