AMD vs Nvidia GPU. Probably-daft question

NickHope wrote on 1/5/2016, 9:00 AM
It's generally accepted that AMD GPUs are currently "better" for Vegas Pro than NVIDIA, but if a user doesn't use any GPU acceleration, does it matter?

Comments

john_dennis wrote on 1/5/2016, 9:44 AM
Yes. Idle power.

If your premise holds, then you would buy the simplest card that would display the pixel dimensions of your screen(s). I think it's a good experiment.
relaxvideo wrote on 1/5/2016, 9:58 AM
That's why I use a passive GT210..
It can do HDMI 1.4 out for 3D too :)

#1 Ryzen 5-1600, 16GB DDR4, Nvidia 1660 Super, M2-SSD, Acer freesync monitor

#2 i7-2600, 32GB, Nvidia 1660Ti, SSD for system, M2-SSD for work, 2x4TB hdd, LG 3D monitor +3DTV +3D projectors

Win10 x64, Vegas21 latest

NickHope wrote on 1/5/2016, 10:04 AM
Err... not quite with you John.

To elaborate, I have an AMD HD6970 that was specifically bought for Vegas Pro. It works fine, but was probably an unnecessarily out-of-date choice as I ended up never using GPU acceleration in Vegas Pro, even when editing UHD. Now I need a GPU upgrade to use DaVinci Resolve and I'm wondering if a switch to NVIDIA would inherently harm my Vegas Pro performance, assuming I continue to not use GPU acceleration. In other words, is it a level playing field between AMD and NVIDIA re. Vegas Pro if all GPU-acceleration is turned off?
Chienworks wrote on 1/5/2016, 10:39 AM
It may not be completely level, but the difference should be teensy enough to be unnoticeable. All video cards have performed unaccelerated screen updates faster than the screen refresh rate since, well, pretty much since the first XGA card for the IBM PC. In any case, once the program writes the screen changes to the video card's buffer, the program is free to continue on its merry way no matter how long the card takes to display it. Vegas never sits around waiting for the non-GPU functions to complete and return information, so the speed of the card is pretty much immaterial anyway.
john_dennis wrote on 1/5/2016, 11:08 AM
"[I]Err... not quite with you John.[/I]"

Kelly made my point very well. If the screen gets updated on time, and none of the internal processors in the video card are used, then driver stability, idle power, and other uses (other applications, etc.) become the only meaningful things left to measure. Then there is always whether you like green or red logos better.

The only draw for GPU acceleration for me is preview frame rate. I don't care about rendering. I will admit that I'm put off by the thought of adding the extra power requirement to an activity (my video editing) that has no particular value to society.
astar wrote on 1/5/2016, 3:01 PM
The main difference between AMD and NV is that the chipsets have vastly different performance abilities.


Intel Core i7-5960X (8 cores/16 threads) - 187 GFLOPS
Intel Core i7-870 (gen 1, 4 cores/8 threads) - 52 GFLOPS


NV Titan Z - 4,061 GFLOPS x2 = 8,122 (dual-chip performance)
NV 980M - 3,189 GFLOPS (laptop chip)


AMD R9 Fury X - 8,602 GFLOPS (single-chip performance)
AMD 390X - 5,914 GFLOPS
AMD HD7970 - 4,000 GFLOPS (released 2012)
AMD M395X - 2,961 GFLOPS (Try to find a laptop with this chip installed. Why would a manufacturer install it when the NV 980M is so far ahead?)

The bottom line is that OpenCL is about compute performance, and NV needs to double up just to stay relevant. No amount of optimization in OpenCL is going to close that hardware deficit.

Sony made a choice a long time ago to support the open standard, which should have meant the widest GPU support. NV's support for OpenCL has not been so good, some would say deliberately, in an effort to steer people to CUDA. Due to bad press (NV is all about marketing), only recently have they started to improve their OpenCL support and optimization. This is likely due mostly to the fact that Final Cut runs OpenCL, and NV lost the bid to have its chips in the Mac Pro. The fact that Apple chose AMD for the Mac Pro is a big indicator of performance.

If you really do not understand how a properly matched CPU+GPU combo helps, you really should not be commenting on the benefits of OpenCL and Vegas. Most people buy hardware based on price, and non-X-series AMD GPUs have compute units that are not fully enabled; that's why the card was a good deal. Vegas only accepts the OpenCL compute units on the CPU plus the compute units on one selected GPU. All you really need to do is look at the GFLOPS performance of the CPU and compare that to the GFLOPS performance of GPUs. Why would you not want to additively combine the performance of both?
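If you want to see what your own system exposes, here is a quick sketch using the pyopencl bindings (my assumption that you have them installed; this is not anything Vegas ships with) that lists every OpenCL device with its compute units:

```python
# Rough sketch using the pyopencl bindings (pip install pyopencl).
# Lists every OpenCL platform/device with its compute units and clock,
# so you can compare the CPU against the GPU(s) an app could draw on.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        print(f"  Device:        {device.name}")
        print(f"  Type:          {cl.device_type.to_string(device.type)}")
        print(f"  Compute units: {device.max_compute_units}")
        print(f"  Clock (MHz):   {device.max_clock_frequency}")
        print(f"  Global mem:    {device.global_mem_size // (1024 ** 2)} MB")
```

That makes it easy to compare the compute units the CPU contributes against those of whichever GPU you select in Vegas.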

Technically you could have one GPU for display and another for compute. Windows will use both; simply do not have any displays plugged into the 2nd GPU. The rub here is that most people again buy cheap hardware, and there are not enough PCIe lanes to support both GPUs at full bandwidth. 8x PCIe speeds seem fast, but not when you compare them to the bandwidth of system RAM, CPU L2 cache, and GPU memory.
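To put rough numbers on that bandwidth gap, here is a back-of-the-envelope sketch using published peak figures rather than anything I measured myself:

```python
# Back-of-the-envelope bandwidth comparison (peak theoretical figures,
# not measurements). PCIe 3.0 delivers roughly 985 MB/s per lane after
# encoding overhead.
pcie3_per_lane_gb_s = 0.985

bandwidths_gb_s = {
    "PCIe 3.0 x8 slot":            8 * pcie3_per_lane_gb_s,   # ~7.9 GB/s
    "PCIe 3.0 x16 slot":           16 * pcie3_per_lane_gb_s,  # ~15.8 GB/s
    "Dual-channel DDR3-1600 RAM":  25.6,
    "R9 390X GDDR5 (512-bit bus)": 384.0,
}

for link, gb_s in bandwidths_gb_s.items():
    print(f"{link:<30} {gb_s:6.1f} GB/s")
```

Even a full x16 slot is well over an order of magnitude slower than the GPU's own memory, which is why splitting work across two cards on starved slots can eat the gains.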

OpenCL assists with the following chain of events in Vegas:

* Decoding certain codecs

* Timeline events

* Encoder calculations that can be sped up.

If your goal is rapid encoding, you want the best performance in all of those areas, because one laggard means the rest are left waiting for results. This is the reason most people complain that Vegas is not utilizing their system at 100% during encodes. Clearly there are some wait states in the chain, probably due to hardware choices made at the point of purchase.
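As a crude illustration of that wait-state point (the frame rates below are made up for illustration, not measured from Vegas):

```python
# Illustrative only: a render chain moves as fast as its slowest stage.
# The frame rates below are invented numbers, not measurements from Vegas.
stage_fps = {
    "decode":   120.0,  # reading/decoding the source codec
    "timeline": 45.0,   # FX and compositing on the timeline
    "encode":   90.0,   # encoder calculations
}

effective_fps = min(stage_fps.values())
bottleneck = min(stage_fps, key=stage_fps.get)
print(f"Effective throughput: {effective_fps} fps (limited by {bottleneck})")

for stage, fps in stage_fps.items():
    # Fraction of time each stage is actually busy; the rest is a wait state.
    print(f"{stage:<9} utilization ~ {effective_fps / fps:.0%}")
```

Whichever stage is slowest sets the pace, and every other stage idles in proportion, which is exactly what less-than-100% utilization during an encode looks like.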
ddm wrote on 1/5/2016, 3:12 PM
>>>I ended up never using GPU acceleration in Vegas Pro

Why?
NormanPCN wrote on 1/5/2016, 3:29 PM
Nvidia has a lot of compute performance. They don't need to "double up" to stay relevant. As with a lot of benchmarking, one implementation performs better in one circumstance and vice versa.

In the case of Vegas OpenCL usage, AMD performs better as most have reported.

Here is a review of OpenCL benchmarks with the Fury X and GTX 980 listed among others. Nvidia wins some and AMD wins some, by varying percentages, sometimes with significant differences.

http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/24
astar wrote on 1/5/2016, 3:44 PM
The article you posted is from July 2015. The Luxmark and Vegas results are similar because they utilize OpenCL in a very similar fashion. If you are looking for Vegas improvements, because this is a Vegas forum, then you want to pay attention to those results.
john_dennis wrote on 1/5/2016, 3:52 PM
"[I]I ended up never using GPU acceleration in Vegas Pro

Why?[/I]"

I don't need to speak for Nick, but I remember the following GPU related bugs that he reported.

Bug: Sony Sharpen behaves differently with GPU

GPU bug: Pale edges of imported Photoshop layers

I also remember this advice:

"Forget that Vegas has a GPU feature. Just turn it off, everywhere you can."

I think I came to that conclusion independently, but it was nice to see it in writing from someone I revere. Only now that I am retired and profoundly bored am I considering buying a middling GPU.

NormanPCN wrote on 1/5/2016, 4:28 PM
"The Luxmark and Vegas results are similar because they utilize OpenCL in a very similar fashion."

That's a bold statement about implementation details I doubt you are privy to.

It's enough to say that Vegas and Luxmark, as of now, both perform better on AMD implementations. It is a stretch to say anything else.

Yes, the article is from July 2015. Your point?
ddm wrote on 1/5/2016, 6:58 PM
Thanks for the clarification, John. I vaguely remember those threads. I tested my preferred method of sharpening (the convolution kernel method) and that does not seem to add anything unusual when no values are entered. One more reason NOT to use Sony Sharpen, as if I needed another reason.
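For anyone wondering what the convolution kernel method boils down to, here is a minimal sketch of the generic 3x3 math with NumPy/SciPy (my own illustration, not Vegas's internal implementation of its Convolution Kernel FX):

```python
# Minimal sketch of a generic 3x3 sharpening convolution in plain CPU math,
# not Vegas's internal implementation.
import numpy as np
from scipy.ndimage import convolve

# Classic sharpen kernel: identity plus a scaled Laplacian. It sums to 1,
# so flat areas are left alone while edges get boosted.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)

# Identity kernel: centre 1, everything else 0 - passes frames through untouched.
identity = np.zeros((3, 3), dtype=np.float32)
identity[1, 1] = 1.0

def apply_kernel(frame: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve a single-channel float frame (values 0..1) and clip the result."""
    return np.clip(convolve(frame, kernel, mode="nearest"), 0.0, 1.0)

frame = np.random.default_rng(0).random((8, 8), dtype=np.float32)
print(np.allclose(apply_kernel(frame, identity), frame))  # True: neutral settings
print(np.allclose(apply_kernel(frame, sharpen), frame))   # False: edges boosted
```

The identity case (centre value 1, everything else 0) passes frames through untouched, which matches the neutral result when no values are entered.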
NickHope wrote on 1/5/2016, 9:32 PM
Thanks for the replies, everyone. I got the answer I was looking for.

"I ended up never using GPU acceleration in Vegas Pro

Mainly because I haven't needed to. With GPU acceleration turned off, my 2014 computer is able to smoothly preview heavily compressed long-GOP 4K footage with a color curve on it, even at Best/Full (shrunk to a 1920x1200 monitor). This still surprises me somewhat, and I believe it has improved during the life of this computer, possibly due to improvements in Windows, or perhaps to improvements in my mobo BIOS, chipset drivers, etc.

I think Vegas Pro's performance in this respect may well be ahead of other NLEs. It seems like doing what I do is "something you just don't do" in many NLEs, where shooting or transcoding to an intraframe format, or using proxies, is a given.

Also, besides the 2 specific GPU bugs that John kindly linked to (which aren't really show-stoppers), many reported stability issues with Vegas can be traced back to GPU acceleration. It certainly doesn't seem to improve stability, or consistency of results.

As for GPU-accelerated rendering, I never use those formats. Most of my rendering is via Debugmode Frameserver, either to Magic YUV in VirtualDub, H.264 with x264 in Megui, or MPEG2 in Cinemacraft Encoder.