INTO THE DEEP
Maximum PC | April 2020
Introduced in Turing cards, DLSS promised better images via machine learning. PHIL IWANIUK plumbs its depths

As sales pitches go, Nvidia’s deep learning supersampling (DLSS) is as revelatory, and preposterously sci-fi, as it gets. Dedicated machine learning cores (the Tensor cores baked into Turing silicon), harnessing bleeding-edge algorithms, take resource-friendly 1440p images and blow them up to super-sharp 4K proportions, all without the performance hit of actually running at that higher resolution in the first place.
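To make the cost asymmetry concrete, here is a minimal sketch of the idea in Python. Nvidia's actual network, training pipeline, and motion-vector inputs are proprietary and not shown here; a plain nearest-neighbor resize stands in for the Tensor-core inference step, and the resolutions are the 1440p-to-4K case described above.

```python
# Conceptual sketch of the DLSS idea: render each frame at a cheap
# internal resolution, then let a learned model reconstruct the
# target resolution. The upscale() below is a nearest-neighbor
# stand-in for the real neural network, which infers detail from
# training data rather than copying pixels.
import numpy as np

INTERNAL = (1440, 2560)   # rows, cols the GPU actually shades
TARGET   = (2160, 3840)   # rows, cols presented to the display

def render_frame(shape):
    """Stand-in for the game's rasterizer: random pixels here,
    but crucially ~2.25x fewer of them than native 4K."""
    return np.random.rand(*shape, 3).astype(np.float32)

def upscale(frame, target):
    """Stand-in for the DLSS network: map each output pixel back
    to its nearest source pixel."""
    rows = np.arange(target[0]) * frame.shape[0] // target[0]
    cols = np.arange(target[1]) * frame.shape[1] // target[1]
    return frame[rows][:, cols]

frame = render_frame(INTERNAL)
output = upscale(frame, TARGET)
print(output.shape)  # (2160, 3840, 3): a 4K frame from 1440p work
print(TARGET[0] * TARGET[1] / (INTERNAL[0] * INTERNAL[1]))  # 2.25
```

The last line is the whole pitch in one number: native 4K has 2.25 times the pixels of 1440p, and DLSS aims to skip shading those extra pixels while still filling them in convincingly.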

Along with quietly revolutionary real-time ray tracing, DLSS was the big selling point of Turing cards. But while the advantages are real, and available to you in a generous swathe of games, its uneven execution from title to title is confusing. In its worst moments, it has actually made games look worse with DLSS enabled than at native resolution.

But the teething problems might prove trivial when we look at deep learning’s potential to turn graphics rendering on its head. Just as real-time ray tracing proved to be something of a soft launch, offering tantalizing glimpses of its promise amid the noise of patchy game support and performance issues, so it goes with DLSS. For owners, it feels like being in on the ground floor of an important new advancement, and having to put up with adjustment inertia from software developers and driver writers.
