This E3 was somewhat anomalous. The event is slowly being transformed, becoming less and less central in a world that thrives on a constant flow of information and leaks. Still, it retains the aura of an exclusive event with the eyes of the world upon it, which makes E3 a good moment to present products to a mass of journalists and even to the general public, not just industry insiders. It is in this context that AMD decided to present its new products.

AMD supplies the CPUs and GPUs for Microsoft's and Sony's home consoles, and it will do so again for the next generation, leaving Intel and Nvidia to contest only the PC world, even if the latter has managed to grab the slice of Nintendo gamers. In this article we will try to understand what the improvements are and how they will impact future gaming consoles and PCs.

Many cores for everyone

The new AMD lineup spans both product lines, CPU and GPU, with the Ryzen 3000 series and the Radeon 5700 GPUs. The Ryzen chips based on the Zen 2 architecture had already been presented at the end of May at Computex 2019; E3 was used to unveil the flagship CPU, the Ryzen 9 3950X: 16 cores, 32 threads, 4.7 GHz boost, 3.5 GHz base, 105 W TDP, at a price of $749.

My doubt at this point is: why present it as a gaming processor? If a modern video game manages to use 6 cores, it is a miracle; here, ten of them would sit twiddling their thumbs. The answer came quickly, right at the start of the conference: streaming. Today everyone wants to broadcast their gameplay on Twitch, and encoding a video stream while playing a heavy game, while guaranteeing high quality to your viewers, requires a lot of computing power. That task falls on the CPU.

Ryzen 3000 preliminary tests with an unspecified setup: AMD shows equivalent gaming performance, so it relies on other factors to sell its 3900X.

Streamers who do it professionally often have a two-computer setup: one for the game and one to manage the stream. With 12- and 16-core processors available on a consumer platform, AMD attacks exactly this need: do everything on one computer. In AMD's demo, a 3900X (12 cores) was able to manage a full HD stream at 10 Mbps with the Slow preset while running The Division 2, a game capable of properly exploiting many cores. The Intel processor was too busy with the game to guarantee a stream of this quality, while the 3900X had plenty of power to spare.
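To get a feel for why such a stream is demanding, here is a quick back-of-the-envelope calculation in Python. The 60 fps frame rate is my assumption for illustration; the article only gives the resolution and bitrate.

```python
# Bit budget for the stream in AMD's demo: full HD at 10 Mbps.
# NOTE: the 60 fps figure is an assumption; the article only says
# "full HD at 10 Mbps with the Slow preset".
width, height, fps = 1920, 1080, 60
bitrate = 10_000_000  # bits per second

bits_per_frame = bitrate / fps
bits_per_pixel = bits_per_frame / (width * height)

print(round(bits_per_frame))     # 166667 bits for each frame
print(round(bits_per_pixel, 3))  # 0.08 bits per pixel
```

Compressing well at under a tenth of a bit per pixel is exactly the regime where slow x264 presets earn their keep, and where the CPU cost explodes.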

For professional streamers the proposal is very interesting, mainly because it drastically lowers the cost of a multicore solution. For the average user, buying such a processor just for streaming is largely pointless: the hardware encoder on Nvidia's Turing cards guarantees excellent visual quality (on par with the x264 medium software preset) with a very low resource cost. This CPU is for those who need many cores for professional workloads, but not much memory bandwidth.

Radeon DNA

The central part of the conference was the most interesting one, because it presented AMD's new family of graphics processors: Navi. It will arrive on 7 July in two variants: the 5700 and the 5700 XT. With Navi, AMD has finally changed architecture, moving from the now antiquated and inadequate GCN (born back in 2012) to RDNA. In terms of numbers, AMD declares a 25% increase in performance per clock and 50% better performance per watt. The change of architecture can also be seen in the transistor count: in just 251 mm² they managed to fit 10.3 billion transistors, an increase of 80% over the previous Radeon RX 580.

The Achilles heel of the GCN architecture has always been how efficiently its computing power could actually be used. On paper, its cards should have shredded Nvidia in every market segment; in practice, those premises were never fulfilled.
With RDNA, AMD has therefore greatly streamlined the highways that funnel data to the compute units.

Seen graphically, the change of logic in the data path between the two architectures is much more evident.

GCN had 64-wide wavefronts, and its SIMD units had 16 slots. In RDNA, both are 32 threads wide. Big words; let us try to examine the concept behind these technical changes without going too deep. A wavefront is a set of instructions grouped together and ready to be dispatched to the compute units. The old architecture needed 4 cycles to issue a full wavefront across its SIMDs (64 / 16). Moreover, if you cannot group the work into blocks of 64 threads, you waste a lot of time with parts of the GPU sitting idle.

To make everything run well, a lot of work was required from developers to find something for the stalled pipelines to do. This is why the much-discussed Async Compute has such a beneficial effect on AMD's GPUs: it lets you take advantage of those gaps. Reducing the wavefront size in RDNA makes workloads easier to group, while enlarging the SIMD means a single cycle is enough to issue them. This is the keystone that makes the architecture a real step forward in efficiency. Note that AMD has kept compatibility with the 64-wide wavefront: it costs silicon and extra logic, but it is essential to avoid breaking the console ecosystem in a single stroke and to allow backwards compatibility with all the code optimized for GCN.
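The issue-rate arithmetic above can be made concrete with a minimal Python sketch. The figures are the ones from the article; the helper functions and the 96-thread example are mine, added purely for illustration.

```python
import math

def issue_cycles(wavefront_size: int, simd_width: int) -> int:
    """Cycles needed to issue one full wavefront across a SIMD unit."""
    return math.ceil(wavefront_size / simd_width)

def idle_lanes(threads: int, wavefront_size: int) -> int:
    """Lanes left idle when a workload doesn't fill its last wavefront."""
    remainder = threads % wavefront_size
    return wavefront_size - remainder if remainder else 0

# GCN: 64-wide wavefronts on 16-wide SIMDs -> 4 cycles per wavefront.
print(issue_cycles(64, 16))  # 4
# RDNA: 32-wide wavefronts on 32-wide SIMDs -> a single cycle.
print(issue_cycles(32, 32))  # 1

# A hypothetical 96-thread job fills three RDNA wavefronts exactly,
# but on GCN it occupies 1.5 wavefronts, leaving half of one idle.
print(idle_lanes(96, 64))  # 32
print(idle_lanes(96, 32))  # 0
```

The smaller wavefront reduces padding waste, and the wider SIMD removes the multi-cycle issue: the two changes together are what the "keystone" remark refers to.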

The triangle no

The other major weakness of the GCN architecture was its polygon throughput, which has remained practically static since the second generation of GCN. This is also why games on PS4 and Xbox One have not seen a big increase in polygon counts, favoring improvements in other areas. On PC, instead, some games close to Nvidia could lean on tessellation, and therefore on many small polygons, to enrich the scene with detail at a much lower performance cost on the green team's cards. Well, RDNA improves in this field too, in two different ways.

First of all, the architecture can now handle twice as many polygons per work cycle as GCN. The values are still lower than Nvidia's with Turing, but it is a huge step forward. Then, primitive shaders were reintroduced: they were presented with Vega, but deactivated because they basically didn't work. These are shaders able to process geometry much more efficiently, culling at a high rate all the elements that are not needed in the scene. If this feature were finally put to use, we would be talking about performance improvements of around 30%.

To this is added a new cache level, which should help overall performance considerably. Turing was a big step forward for Nvidia largely because it filled the GPU with fast caches close to the compute units; AMD's approach is less aggressive, but an improvement nonetheless. The other features presented are various possible software solutions, difficult to judge without even minimal information, so I will skip over them, also because almost all such features die off with time.


I don't know about you, but all this grey for a project called Scarlett ...

Let's talk about the elephant in the room: Scarlett, and that claim of 4 times the power of the Xbox One X. Everyone immediately did quick mental math: 6 × 4 = 24 TF. SCARLETT, 24 TERAFLOPS. Now, calm down. The new Navi 5700 XT peaks at 9.75 TF, consumes only 225 W and costs $449. So what will Scarlett have inside? Considering the 25% per-clock efficiency gain mentioned earlier, a 10 TF RDNA GPU should perform like a 12.5 TF GCN one. So the 5700 XT, or some variant of it, looks like a good candidate. But the real step forward is elsewhere: the transition from the Jaguar cores to Ryzen. With some back-of-the-envelope math, we can estimate that Zen 2's IPC is about 95% higher than that of the Jaguar cores. This means a Ryzen at 1.17 GHz would match an Xbox One X core running at 2.3 GHz. Now take the 3950X's 4.7 GHz turbo: it would be the equivalent of a 9.1 GHz Jaguar. That is, almost 4 times.
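The arithmetic above can be written out explicitly. All figures are the article's own; the ~95% Zen 2 vs Jaguar IPC uplift is its estimate, not an official AMD specification.

```python
# GPU side: AMD claims +25% performance per clock for RDNA over GCN,
# so ~10 TF of RDNA should behave like ~12.5 TF of GCN.
rdna_tf = 10.0
gcn_equivalent_tf = rdna_tf * 1.25
print(gcn_equivalent_tf)  # 12.5

# CPU side: with Zen 2 IPC assumed ~1.95x Jaguar, the Zen 2 clock that
# matches the Xbox One X's 2.3 GHz Jaguar cores is:
ipc_ratio = 1.95
zen2_match = 2.3 / ipc_ratio  # ~1.18 GHz (the article rounds to 1.17)

# A 3950X at its 4.7 GHz boost, expressed in "Jaguar GHz",
# lands at roughly 9.1-9.2 GHz:
jaguar_equivalent = 4.7 * ipc_ratio
print(round(jaguar_equivalent / 2.3, 1))  # 4.0 -> almost 4x per core
```

This is where the "4 times the power" claim becomes plausible on the CPU side alone, even before counting the GPU.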

So that 4x is a bit tight: I realistically expect frequencies around 3 GHz and a core count of 8. With SMT enabled, that would mean twice the threads of the current consoles, and at 3 GHz roughly a quadrupling of CPU power.

Near the end of the show there was talk of 120 fps. And 120 fps is 30 × 4.

I would say the time has come to draw conclusions. AMD's new products could mark the beginning of a great comeback on the GPU side as well. The philosophy behind their GPUs has changed, aligning with Nvidia's: the new architecture is built for speed, with fewer but far more efficient units. The improvement in geometry performance could really bring a great polygonal leap in exclusively next-gen games. We look to the future with a keen eye, because the technology that will drive the next consoles is finally something decent, and not warehouse leftovers.