
Nvidia drivers reveal AI-powered downscaling feature called DLDSR


UltraMega


Quote

Dynamic Super Resolution is Nvidia’s method for letting users easily downsample games. It renders a game at a higher resolution, then shrinks it back down to the native resolution of your monitor. This results in extremely effective anti-aliasing but only performs well if a GPU has the extra horsepower to run at that higher resolution. It’s a good way to make older games with outdated anti-aliasing technology look cleaner.

 

Deep Learning Dynamic Super Resolution (DLDSR) uses RTX graphics cards’ Tensor cores to make this process more efficient. Nvidia’s announcement claims using DLDSR to play a game at 2.25x the output resolution looks as good as using DSR at 4x the resolution, but achieves the same framerate as 1x resolution.

Source: WWW.TECHSPOT.COM

Will be interesting to see where this goes. 
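For a bit of concreteness: the DSR/DLDSR factors apply to the total pixel count, so 2.25x means rendering at 1.5x per axis and 4x means 2x per axis. A quick back-of-the-envelope in plain Python (just arithmetic, nothing from Nvidia's stack):

NATIVE_RESOLUTIONS = [(1920, 1080), (2560, 1440), (3840, 2160)]  # common monitor resolutions

for factor in (2.25, 4.0):                 # the two factors compared in the announcement
    per_axis = factor ** 0.5               # pixel-count factor -> per-axis scale
    for w, h in NATIVE_RESOLUTIONS:
        rw, rh = int(w * per_axis), int(h * per_axis)
        print(f"{w}x{h} @ {factor}x -> renders at {rw}x{rh} "
              f"({rw * rh / (w * h):.2f}x the pixels to shade)")

So a 1080p display at 2.25x renders at 2880x1620, while 4x renders at full 3840x2160; the claim in the announcement is that the learned downscale of the former can look comparable to a plain downscale of the latter.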

 

 


11 minutes ago, UltraMega said:

This results in extremely effective anti-aliasing but only performs well if a GPU has the extra horsepower to run at that higher resolution. It’s a good way to make older games with outdated anti-aliasing technology look cleaner.

It's really sounding like:

Render at 2.25x native resolution -> AI downscale to native resolution -> display

rather than

Render at native resolution -> AI upscale to 2.25x -> downsample -> display

 

What does AI downscaling provide in the former that normal downscaling doesn't? Proper edge detection for meaningful anti-aliasing?
It is unlikely to be the latter, because motion vectors or game-specific training would be needed.
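To make the contrast concrete, here is a toy sketch of the two data flows (numpy/scipy only; cubic resampling stands in for whatever filter the driver actually applies, and render() is a placeholder, so none of this is Nvidia's pipeline):

import numpy as np
from scipy.ndimage import zoom

NATIVE = (270, 480)      # tiny "native" resolution so the toy runs instantly
FACTOR = 1.5             # 1.5x per axis = 2.25x the pixel count

def render(shape):
    # Stand-in for the game's renderer: more pixels means more real detail.
    return np.random.default_rng(0).random(shape)

def former_pipeline():
    # Render at 2.25x, then shrink back to native. DLDSR would presumably
    # swap this cubic filter for a learned one running on the tensor cores.
    high = render((int(NATIVE[0] * FACTOR), int(NATIVE[1] * FACTOR)))
    return zoom(high, 1 / FACTOR, order=3)

def latter_pipeline():
    # Render native, upscale (an AI upscaler in the hypothetical), then
    # downsample. No new detail appears without motion vectors or training,
    # which is why this reading seems unlikely.
    native = render(NATIVE)
    up = zoom(native, FACTOR, order=3)
    return zoom(up, 1 / FACTOR, order=3)

print(former_pipeline().shape, latter_pipeline().shape)  # both end up at NATIVE

Either way the output size is the same; the question is whether the filter that produces it has 2.25x worth of real samples to work from (the former) or only interpolated ones (the latter).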


1 hour ago, mouacyk said:

It's really sounding like:

Render at 2.25x native resolution -> AI downscale to native resolution -> display

rather than

Render at native resolution -> AI upscale to 2.25x -> downsample -> display

 

What does AI downscaling provide in the former, that normal downscaling doesn't do? Proper edge detection for meaningful anti-aliasing?
It is unlikely to be the latter, because motion vectors or game-specific training would be needed.

As I understand it, it's essentially like using DSR and DLSS at the same time, and that's about it. DSR gives better AA results and so does DLSS, so I'd guess this would amount to super extra (probably overkill) AA.

 

I could see this being great for 1080p and 1440p, but at 4K the need for this level of AA just isn't that great, so the performance trade-off probably wouldn't be worth it.

 

I wonder if this is going to be a game-specific feature like DLSS, or if it will work at the driver level. If it's the latter, that implies Nvidia could make DLSS a driver-level implementation if they wanted to.


I wonder what's different about this that lets it work at the driver level.


The AI downscale is likely using a neural network trained to eliminate shimmering and edge aliasing, and thus needs the tensor cores to run. However, being a downscale process, you could probably achieve just as good a result by using spatial algorithms that do not need the tensor cores.

 

Honestly, when you're downsampling, there won't be much difference between a source image at 4x and one at 2x. Even at 2x, a simple, fast sharpen filter would probably be sufficient.
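A rough sketch of that non-tensor alternative, i.e. a plain spatial downscale followed by a cheap sharpen (Pillow, with arbitrary filter settings; nothing here is Nvidia's implementation):

from PIL import Image, ImageFilter

def downscale_and_sharpen(frame, native=(1920, 1080)):
    # Lanczos resize does the spatial downscale; an unsharp mask then
    # restores some of the perceived detail lost in the shrink.
    small = frame.resize(native, Image.LANCZOS)
    return small.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))

# e.g. a 4x (2x per axis) source frame for a 1080p display:
src = Image.new("RGB", (3840, 2160))
print(downscale_and_sharpen(src).size)  # (1920, 1080)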

