AMD vs NVIDIA – Two figures that can tell a whole story
Update September ’13: AMD is bringing out its new “Volcanic Islands” GPUs with GCN 2.0 in October. For this reason the HD 9790's price has dropped to €250. This shakes up some of the things described in this article.
Update June ’14: It has become clear that Titan is not a consumer device and should be categorised as a “Quadro for compute”. All consumer devices of both AMD and NVIDIA show relatively low GFLOPS for double precision.
AMD/ATI has always had the fastest GPU out there. Yes, there were plenty of times when NVIDIA approached the throne, or even held the crown for a while (at least theoretically), but in the end it was Radeon that had the rightful claim.
Nevertheless, some things have changed:
- AMD has focused more on the new architecture, making it easier to program while keeping the GFLOPS the same.
- AMD bets on their A-series APU with integrated GPU.
- NVIDIA has increased both memory bandwidth and GFLOPS at a steady pace.
- NVIDIA has done the nitro-trick for double precision.
With NVIDIA GTX Titan (see three of them in the image), NVIDIA snatched victory from the jaws of defeat.
I’m not saying you should jump to CUDA now; there’s more to it than GFLOPS. We should also consider cost and avoiding vendor lock-in. More particularly, I would like to show how unpredictable the market for accelerator processors is.
Let’s take a look at the figures.
Below are the fastest consumer-targeted GPUs. As you can see, AMD’s line is flat, while NVIDIA increased its performance at a fast pace.
Note that the dates are actual release dates, not the announcement dates. Also, there are a lot more differences between the dots than just the GFLOPS. There’s also architecture, memory bandwidth, memory size, PCIe-version, etc.
No, this is not a mistake: NVIDIA decided to put full double-precision support in a consumer GPU. Only the Titan has it; the rest still run double precision at 1/8th of the single-precision rate.
For the professional accelerator market see the “answer to” series on this blog.
When it comes to cost, there is a big difference between the two competitors. This could be an effect of the vendor lock-in created by CUDA, or simply a gap in the market.
Radeon HD 8970 (4 GB): €550
Radeon HD 7970 Extreme edition (6 GB): €570
NVIDIA GTX Titan (6 GB): €970
… and they all have 288 GB/s memory bandwidth.
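That shared 288 GB/s figure falls straight out of the memory interface width and the effective memory clock. A minimal sketch, assuming the 384-bit memory interface these flagship cards use (the function name and parameters are mine, purely for illustration):

```python
def bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Theoretical peak memory bandwidth in GB/s.

    bus_width_bits / 8 gives bytes transferred per memory cycle;
    multiplying by the effective (data-rate) clock in MHz and
    dividing by 1000 yields GB/s.
    """
    return bus_width_bits / 8 * effective_clock_mhz / 1000.0

# 384-bit bus at a 6,000 MHz effective memory clock:
print(bandwidth_gbs(384, 6000))  # -> 288.0 GB/s
```

Real-world throughput is of course lower; this is the theoretical ceiling the spec sheets quote.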
AMD Radeon 8970 XT
| Radeon 8970 XT advantage | 8970 XT | vs |
|---|---|---|
| Higher clock speed | 1,050 MHz | 925 MHz |
| Better floating-point performance | 5,376 GFLOPS | 3,789 GFLOPS |
| Significantly higher pixel rate | 50.4 GPixel/s | 29.6 GPixel/s |
| Higher texture rate | 168 GTexel/s | 118.4 GTexel/s |
| Significantly more render output processors | 48 | 32 |
| Slightly higher effective memory clock speed | 6,000 MHz | 5,500 MHz |
| More shading units | 2,560 | 2,048 |
| More texture mapping units | 160 | 128 |
| More compute units | 40 | 32 |
| Higher memory clock speed | 1,500 MHz | 1,375 MHz |
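The single-precision GFLOPS figures above follow directly from shader count and clock: each shading unit performs one fused multiply-add (two FLOPs) per cycle. A quick sketch of that calculation, assuming the 2-FLOPs-per-cycle FMA model (the function name is my own, for illustration):

```python
def peak_gflops(shading_units, clock_mhz, flops_per_cycle=2):
    """Theoretical peak single-precision GFLOPS.

    flops_per_cycle=2 assumes one fused multiply-add (FMA)
    per shading unit per clock cycle.
    """
    return shading_units * clock_mhz * flops_per_cycle / 1000.0

print(peak_gflops(2560, 1050))  # -> 5376.0 (the 8970 XT figure)
print(peak_gflops(2048, 925))   # -> 3788.8, i.e. the quoted 3,789 GFLOPS
```

Which also shows why these numbers are theoretical peaks: they assume every shader issues an FMA every cycle, something real workloads rarely achieve.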
The source mentions the normal 8970, but that is a mistake. See the official specs of the 8970 here [PDF].
Titan II / Ultra
NVIDIA is known for whispering specs of upcoming products long before they launch. Take, for example, 3D-stacked memory for products in 2016.
But when will the Titan II arrive? Nobody knows for sure. That the card will actually show up has already been planted as a controlled rumour. Under what name it will appear (2, II, Ultra), and with what specs, is also very difficult to tell. We will probably know more by late 2013 or early 2014.
What is sure is that this battle will continue until the discrete GPU market vanishes.
Share your thoughts! How long do you think NVIDIA can hold on to power?