Market Positioning of Graphics and Compute solutions

When compute became possible on GPUs, it was first presented as an extra feature and did not change the positioning of AMD/ATI's and Nvidia's products much. Nvidia was the first to position a server-compute product (described as "the GPU without a monitor-connector"), and AMD and Intel followed. When the expensive GeForce GTX Titan and Titan Z were introduced, it became clear that Nvidia is still thinking about positioning: the Titan is the bridge between GeForce and Tesla, a Tesla with video-out.

Why is positioning important? It is the difference between "I'd like to buy a compute card for my desktop, so I can develop algorithms that will also run on the compute server" and "I'd like to buy a graphics card for doing computations, and later run that on a passively cooled graphics card". The second request might well get a "you don't want to do that" as an answer, because graphics terminology is being used to describe compute goals.

Let’s get to the overview.

|                      | AMD          | NVIDIA                              | Intel                      | ARM              |
|----------------------|--------------|-------------------------------------|----------------------------|------------------|
| Desktop User *       | A-series APU |                                     | Iris / Iris Pro            |                  |
| Laptop User *        | A-series APU |                                     | Iris / Iris Pro            |                  |
| Mobile User          |              | Tegra                               | Iris                       | Mali T720 / T4xx |
| Desktop Gamer        | Radeon       | GeForce                             |                            |                  |
| Laptop Gamer         | Radeon M     | GeForce M                           |                            |                  |
| Mobile High-end      |              | Tegra K (?)                         | Iris Pro                   | Mali T760 / T6xx |
| Desktop Graphics     | FirePro W    | Quadro                              |                            |                  |
| Laptop Graphics      | FirePro M    | Quadro M                            |                            |                  |
| Desktop (DP) Compute | FirePro W    | Titan (HDMI) / Tesla (no video-out) | Xeon Phi                   |                  |
| Laptop (DP) Compute  | FirePro M    | Quadro M                            | Xeon Phi                   |                  |
| Server (DP) Compute  | FirePro S    | Tesla                               | Xeon Phi (active cooling!) |                  |
| Cloud                | Sky          | Grid                                |                            |                  |

* = For people who say “I think my computer doesn’t have a GPU”.

My take is that the Titan is meant to promote compute on the desktop, even though the Tesla is also promoted for that purpose. AMD serves that market with the FirePro W, aimed at both graphics professionals and compute professionals. Intel uses the Xeon Phi for anything compute, and all of it is actively cooled.

The table has some empty spots: Nvidia doesn't have an IGP, AMD doesn't have a mobile GPU, and Intel doesn't have a clear message at all (J, N, X, P and K mixed across all types of markets). Mobile GPUs from ARM, Imagination, Qualcomm and others carry a clear message differentiating high-end from low-end, whereas Nvidia and Intel don't.

Positioning of the Titan Z

Even though I think Nvidia made the right move by positioning a GPU for the serious compute hobbyist, their proposition is very unclear. AMD is very clear: "Want professional graphics and compute (and to play games after work)? Get a FirePro W for your workstation", whereas Nvidia says "Want compute? Get a Titan if you want video output, or a Tesla if you don't".

See this GeForce page, where they position it as a gamer's card that competes with the Google Brain supercomputer and a Mac Pro. In other places (especially benchmark reviews) it is stressed that it is not meant for gamers, but for compute enthusiasts (who can afford it). See for example this review on Hardware.info:

That said, we wouldn’t recommend this product to gamers anyway: two Nvidia GeForce GTX 780 Ti or AMD Radeon R9 290X cards offer roughly similar performance for only a fraction of the money. Only two Titan-Zs in SLI offer significantly higher performance, but the required investment is incredibly high, to the point where we wouldn’t even consider these cards for our Ultimate PC Advice.

As a result, Nvidia stresses that these cards are primarily intended for GPGPU applications in workstations. However, when looking at these benchmarks, we again fail to see a convincing image that justifies the price of these cards.

So Nvidia's naming convention is unclear. If the Titan is for the serious and professional compute developer, why use the GeForce brand? A "Quadro Titan" would have made much more sense, or even a "Tesla Workstation", so developers would have a guarantee that their code also runs on the server.

Differentiating from low-end compute

Radeon and GeForce GPUs are used for low-cost compute clusters. Both AMD and Nvidia prefer to sell their professional cards for that market, and have difficulty making clear that gamer cards are not designed for compute-only solutions. The one thing they have done in the past years is to reserve good double-precision performance for their professional cards. An existing difference was driver quality: Quadro/FirePro (industry quality) versus GeForce/Radeon. I think both companies have to rethink this differentiated driver strategy, as compute has changed the demands in the market.
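
To make that double-precision differentiation concrete, below is a minimal sketch (my own illustration, not code from either vendor) of how a developer can ask each OpenCL device whether it exposes double precision at all, via the standard CL_DEVICE_DOUBLE_FP_CONFIG query. Note that this only tells you whether doubles are supported; the large FP64 GFLOPS gap between gamer and professional cards only shows up in actual benchmarks.

```c
/* fp64check.c - minimal sketch: list OpenCL devices and whether they expose
 * double precision. Build on Linux with: gcc fp64check.c -lOpenCL
 * (on OS X include <OpenCL/opencl.h> and link with -framework OpenCL). */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = "";
            cl_device_fp_config fp64 = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_DOUBLE_FP_CONFIG,
                            sizeof(fp64), &fp64, NULL);
            /* A value of 0 means the device offers no double precision at all. */
            printf("%s: %s\n", name,
                   fp64 ? "double precision supported" : "no double precision");
        }
    }
    return 0;
}
```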

I expect more differences in the supporting software for the different types of users. When would I pay for professional cards?

  1. Double Precision GFLOPS
  2. Hardware differences (ECC, NVIDIA GPUDirect or AMD SDI-link/DirectGMA, faster buses, etc.)
  3. Faster support
  4. (Free) Developer Tools
  5. System Configuration Software (click-click and compute works)
  6. Ease of porting algorithms to servers/clusters (scaling up with fewer bugs)
  7. Ease of porting algorithms to gamer cards (a simulation mode for several gamer cards)

So the list starts with hardware-specific demands and then shifts to developer support. Let me know in the comments why you would (or would not) pay for professional cards.

Evolving from gamer-compute to server-compute

GPU developers are not born, but made (trained or self-educated). Most of the time they start with OpenCL (or CUDA) on their own PC or laptop.

With Nvidia the path would be hobby compute on a GeForce, then serious work on a Titan, then Tesla or Grid. AMD has a comparable growth path: hobby compute on a Radeon, then an upgrade to FirePro W, and then to FirePro S or Sky. With Intel it is Iris or Xeon Phi directly, as their positioning is not clear at all when it comes to accelerators.
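
What makes this growth path work is that the kernel source itself does not change along the way: the same OpenCL kernel that runs on a hobbyist's Radeon or GeForce also runs on a FirePro S or Tesla in the server; only the host-side device selection differs. A minimal sketch (a generic vector addition of my own, purely as illustration):

```c
/* vector_add.cl - the same kernel source runs unchanged on a desktop
 * Radeon/GeForce during development and on a FirePro S or Tesla in the
 * server; only the host code's device selection changes. */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *c)
{
    size_t i = get_global_id(0);  /* one work-item per output element */
    c[i] = a[i] + b[i];
}
```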

Conclusion

The positioning of graphics cards and compute cards is finally settling at the high level, but will certainly change a few more times in the year(s) to come. Think of the growing market of home video editors in 2015, who will probably need a compute card for video compression. Nvidia will come up with a different solution than AMD or Intel, as it has no desktop CPU.

Do you think it will be possible to have an AMD APU with an NVIDIA accelerator? Will people in 2015 need to buy an accelerator box that can be attached to their laptop or tablet via network or USB to do the rendering and other compute-intensive work (a "private compute cloud")? Or will there always be a market for discrete GPUs? Time will tell.

Thanks for reading. I hope the table makes clear how things stand as of 2014. Suggestions are welcome.


4 thoughts on “Market Positioning of Graphics and Compute solutions”

  1. dr.next

    Well, with the introduction of the Metal API by Apple, which I saw you were upset by on Twitter, I really see it as OpenCL 3.0: just C++11 with graphics shaders and buffers in a shared-memory data structure. All the types and functions are straight out of OpenCL.

    Even though AMD’s Mantle will only give at most a 40% gain in FPS for CPU-bound games, a 30% gain for dual-GPU configs and only a 10% gain for GPU-bound games, Apple’s Metal should be a little bit faster. It does mean that integrated GPUs will kill discrete GPUs up to the mid-level.

    Nvidia might be really scared of Metal being included in OpenCL and just bypassing the OpenGL cruft completely. But there must be a real battle ahead. I am sure Apple doesn’t care one bit. It all depends on whether Intel and AMD adopt it and force a standard by sheer market power.

    • StreamHPC

      This discussion does not fit the article, but I suppose this is a hint that I should put more time into writing. 🙂 Once my company has grown a bit further, I will have time again.

      Metal tries to replace OpenGL and OpenCL, *exactly* like Google tried with RenderScript. Seriously, I like languages built on top of OpenCL, as that is the way forward and that is what OpenCL is meant for. But I dislike vendor lock-in, which forces developers to write their code in several languages or lets developers fight the platform wars – my tweets were about that.

      I do personally think that OpenGL needs to break with the past and clean out the mess, to shrink the OpenGL Bible to a tenth of its current number of pages. See http://www.reddit.com/r/gamedev/comments/21mbo8/we_are_the_authors_of_approaching_zero_driver/ for how this can be done. Understand that most of the other discussions about what’s bad about OpenGL are mostly there to promote somebody’s proprietary standard.

      My preference is not to have developers quarrel over proprietary APIs, but to have the vendors fight over who has the best implementation of the open standards. Whoever sets the standards defines the discussion…

      • dr.next

        Apple released OpenCL because it needed GPU vendors to support it. Metal is built on work done on OpenCL; it can be the future version of it.

        Apple is trying to solve an engineering problem. They don’t sit around dreaming of vendor lock-in. That is why Apple gave the LLVM tech to you guys, right? There are also people on Twitter asking for Swift without Apple’s control. Most developers should be using a higher-level API, which has no business being platform-independent. If you want that, then you’d better stick to the web platform.

        You can’t expect Apple to wait two years to use Metal just to get Nvidia’s approval in the OpenCL committee. There is also the question of whether OpenCL should have a graphics pipeline without plugging into OpenGL. OpenGL is precisely the poster child of designing an API by committee. OpenGL only worked because each GPU manufacturer could stick new tech into extensions until everyone caught up and another version of OpenGL could be released.

        The GPU market is bifurcating into integrated GPUs, which can use shared memory, and high-end GPUs. Metal is the answer to the first.

        Nvidia is sitting on their hands and not supporting OpenCL 1.2, and you want to have an open standard without vendor business getting in the way.

  2. Adam Glick

    Hi Vincent – I think you’re right, some may be confused by the Titan.

    However, my guess is that most of the pro users who will end up buying the Titan Z (post/broadcast DI/finishing/color correction, 3D/VFX artists, GPU compute devs, etc.) are already buying consumer boards today. Perhaps Nvidia didn’t want to confuse these customers, and didn’t see how explicitly branding it as a “pro” card would help them sell more Titan Zs.

    In my mind, for those hell-bent on buying consumer cards, it seems that for a $3000 spend, dual Radeon R9 290X cards are still a better value for GPU compute (including double precision), 3D and media processing than the Titan Z.

    In any case, it will be interesting to see how the different vertical market segments (& related product positioning) are addressed by AMD and NV moving forward.

    -never a dull moment, that’s for sure!
