New Quadro M5000 Very Little Performance Increase

Beatdemon wrote on 12/29/2015, 8:03 PM
Hopefully someone has an answer as to why I am seeing very little performance increase with a brand new Quadro M5000. I just replaced a GTX 670 in hopes of seeing a significant performance increase, particularly since the Quadro is geared toward video editing.

I'm not impressed. Does the problem lie with Vegas? I'm using version 13 64-bit (build 453). I've set my profile in the NVIDIA control panel to "3D Video Editing" and included Vegas. I've set Vegas to use the GPU and it detects it just fine.

I'm using SSDs also.

The render time is exactly the same as with the GTX, and real-time effects using the BCC plug-ins are extremely choppy in the preview window when previewing at Best (Auto). I'm at a loss here. Why is a $2,000 GPU behaving very much the same as my $400 one?

Thank you in advance for any wisdom you can share.

Comments

NormanPCN wrote on 12/29/2015, 8:36 PM
Because workstation cards commonly use the same GPU chips as consumer cards. Workstation cards have huge profit margins compared to consumer cards. Workstation GPUs do have some features enabled that are not available on consumer cards.

Things like 10-bit color. Maybe better 64-bit floating-point performance. The latter really means nothing for most applications. The drivers are supposedly better validated for stability.

Finally, you can have a GPU that is honestly 100% faster, but how much that helps your system depends on how much the GPU is actually being used. If the GPU accounts for only 20% of the time, a 100% faster card does not make that big of a difference.

Really, by default the GPU is used very little. You have to apply various effects and such to begin loading it up. If you look at the old Sony GPU demo project, you'll see they loaded it up with many effects to demonstrate what the GPU can do for you, if you do similar things.
Jamon wrote on 12/29/2015, 10:01 PM
The GTX 670 and Quadro M5000 do not share the same chips, and the M5000 benchmarks faster, similar to the Radeon R9 Fury X. It is in the top 10 fastest GPU benchmarks today. The M6000 beats it, but costs over double the price.

In old SCS benchmarks a slower AMD card has higher preview FPS than the Quadro. One could suppose that Nvidia's OpenCL implementation is very poor, but in benchmarks I've seen it seems more likely that the fault is with how SCS utilizes it.

Have you tried Catalyst? It seems to perform even worse. We can't simply say that Vegas can't utilize the GPU for most things, because people report their slower AMD cards performing better. I don't know what the problem is, but it's disappointing.
JohnnyRoy wrote on 12/29/2015, 11:21 PM
> "Hopefully someone has an answer as to why I am seeing very little performance increase with a brand new Quadro M5000. I just replaced a GTX 670 in hopes of seeing a significant performance increase,"

Because the GTX 670 has a faster clock speed (915 MHz vs. 706 MHz for the Quadro K5000), better memory bandwidth (192 GB/sec vs. the Quadro's 173 GB/sec), and almost as many CUDA cores (1,344 vs. 1,536), it's actually a faster card.

> "particularly since the Quadro is geared toward video editing. "

Actually, that's not the case. Quadro cards have zero features for video editing. They are designed for 3D and CAD work. They have nothing to offer for video. Video people use them because the drivers are more stable, but performance-wise they offer nothing and are often slower than their gaming counterparts because their clocks are set lower, as you just learned.

> "Why is a $2,000 GPU behaving very much the same as my $400 one?"

Because NVIDIA and AMD charge an insane premium for their workstation class cards which is not worth the money for video editing. If you ask me, it's a scam.

I say this from experience. I owned a Quadro 4000 and would never buy one again. My ATI Radeon HD 5870 runs much faster for a lot less money.

~jr
Jamon wrote on 12/29/2015, 11:56 PM
The M5000 has a boost clock speed of ~1,050MHz and 4.3 TFLOPs with 211 GB/s memory bandwidth and 2,048 CUDA cores; it is faster than all those cards.
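
As a sanity check on those figures, peak single-precision throughput can be estimated from core count and clock (each CUDA core does 2 FLOPs per cycle via fused multiply-add). A rough sketch using the numbers quoted in this thread; note that peak FLOPs says nothing about how well Vegas's OpenCL path actually uses the card:

```python
# Rough peak FP32 estimate: cores x clock x 2 (FMA counts as 2 FLOPs/cycle).
# Core counts and clocks are the figures quoted in this thread (approximate).
def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    return cuda_cores * clock_mhz * 1e6 * 2 / 1e12

cards = {
    "Quadro M5000 (2048 cores @ ~1050 MHz boost)": (2048, 1050),
    "GTX 670 (1344 cores @ 915 MHz base)": (1344, 915),
}

for name, (cores, mhz) in cards.items():
    print(f"{name}: ~{peak_tflops(cores, mhz):.1f} TFLOPs")

# Prints roughly 4.3 TFLOPs for the M5000 and 2.5 TFLOPs for the GTX 670,
# so on paper the M5000 is indeed the faster chip.
```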

The Quadro line is optimized for GPU computing, Vegas supposedly uses the OpenCL API, and there are advantages to using a Quadro or FirePro for video editing today.

SCS probably just doesn't utilize it correctly.
Beatdemon wrote on 12/30/2015, 12:47 AM
So before I return the Quadro and re-install my GTX 670, how does anyone edit video in Vegas using effects like BCC and still keep smooth playback, so you can actually see what's happening on the timeline? Everything is so choppy that it's impossible to see places to cut. Do you have to pre-render everything? I can't imagine all other video editing software gets this bogged down using a high-end GPU.

I am starting to wonder if Vegas just isn't utilizing the Quadro properly. What is SCS?

Also, could it be that I need to transcode my video if I'm editing MP4 files from my Canon 70D?

Beatdemon wrote on 12/30/2015, 12:48 AM
Johnny, you're referring to the older-generation Quadro. You'll notice I'm using the M5000, which is a lot faster than the GTX 670 in every respect.
astar wrote on 12/30/2015, 3:21 AM
I think the M5000 is more like 3.5 TFLOPs, which puts it in line with an AMD HD 7970, while the 390X is up closer to 6 TFLOPs. The 7970 is still a good card in Vegas; the main difference is that NV's support of OpenCL sucks compared to AMD's. Vegas is OpenCL based, not CUDA. That is not a Sony problem, but more of an Nvidia "do it my way or the highway" attitude. NV claims OpenCL support, but they clearly do not measure up when tasked to perform that way. OpenCL is the standard to which the manufacturers should be optimizing, not the other way around.

NV is more marketing than reality.

The pro lines of cards are needed if you are using a deep-color codec and a monitor to display it. There is some blurring of this line with 4K resolutions and interface standards, but the actual support is crippled at the display-driver level to maintain product tiers.
OldSmoke wrote on 12/30/2015, 3:26 AM
Beatdemon

You are right, VP isn't using OpenCL on newer Nvidia cards correctly, and it doesn't use CUDA at all for anything; only older Fermi-based cards were supported when it comes to rendering with MC AVC or Sony AVC.

At the moment, your best choice is an AMD 390X or Fury X, or a FirePro W9100 if you are willing to spend that kind of money and need 10-bit.

BCC utilizes OpenCL and will do well with the AMD card. However, all BCC plug-ins are very compute-intensive and you may still get reduced preview fps at Best/Full; 1080 30p should be fine, but 1080 60p and 4K 30p will be tough.

Jamon wrote on 12/30/2015, 7:22 AM
There's a lot of bias against Nvidia around these parts, and a lot of what people learned is outdated from several years ago before OpenCL and GPU-accelerated computation was so prolific; if someone doesn't provide recent data, I wouldn't put much faith into what they say.

SCS is Sony Creative Software.

For previews being choppy, try downloading the free Catalyst Browse and see if it will transcode to XAVC Intra. I have the M4000, and preview is too choppy with MP4 if it's long-GOP inter-frame like XAVC S. Once you transcode to XAVC-I it should perform much more smoothly for preview.

Vegas has a built-in proxy function, so try that also.
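
If you would rather script the conversion than use Catalyst Browse, the same idea (turn long-GOP MP4/MOV into an intra-frame mezzanine file) can be done with ffmpeg. A minimal sketch, not the Catalyst workflow described above; it assumes ffmpeg is installed and uses ProRes as the intra-frame codec purely for illustration:

```python
# Illustrative batch transcode: long-GOP MP4/MOV -> intra-frame ProRes,
# which is much lighter to scrub on the timeline than inter-frame H.264.
# Assumes ffmpeg is installed and on the PATH; folder names are made up.
import subprocess
from pathlib import Path

def transcode_to_intra(src_dir: str = "footage", dst_dir: str = "mezzanine") -> None:
    Path(dst_dir).mkdir(exist_ok=True)
    for src in sorted(Path(src_dir).iterdir()):
        if src.suffix.lower() not in (".mp4", ".mov"):
            continue
        dst = Path(dst_dir) / (src.stem + "_intra.mov")
        subprocess.run(
            [
                "ffmpeg", "-i", str(src),
                "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ (intra-frame)
                "-c:a", "pcm_s16le",                      # uncompressed PCM audio
                str(dst),
            ],
            check=True,
        )

if __name__ == "__main__":
    transcode_to_intra()
```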

The rendering performance varies depending on the codec and whether it's enabled for GPU acceleration. I haven't fully tested Vegas yet, but as I recall MainConcept was faster than Sony, and there are multiple GPU-accelerated options to compare. But I've been more worried about preview smoothness, since renders can be done in the background.

For SCS software specifically it might be true that AMD will perform better than Nvidia right now, but for other applications that same oddity probably won't hold true. While not the exact model you have, here is some more recent data regarding Quadro vs. FirePro in an M6000 review:



"Across all three levels of complexity, the gap between AMD's once-unbeatable FirePro W9100 and the new M6000 is like an abyss. The Hawaii-based card used to be the yardstick, but Nvidia's Maxwell architecture now lands at the top of the OpenCL food chain."

Years ago the 3D-accelerated features from graphics cards would be specifically for 3D tasks. If you weren't doing 3D, but 2D things like Photoshop and Vegas, then a higher performing 3D GPU wouldn't do much for you. But that's all changed, and now the GPU can be utilized for accelerated computation to assist the CPU with an API like OpenCL, which Quadro does perform well with. Other software is using its power, for intensive applications from video effects to machine learning.
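
As a quick diagnostic, you can see exactly which OpenCL platforms and devices your drivers expose (and their reported compute resources) with a few lines of Python. A minimal sketch using the third-party pyopencl package; it's only an inspection tool, not anything Vegas itself uses:

```python
# Diagnostic sketch: enumerate the OpenCL platforms/devices the installed
# drivers expose. Requires the third-party pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} -- {platform.version}")
    for device in platform.get_devices():
        print(f"  Device:         {device.name}")
        print(f"  Compute units:  {device.max_compute_units}")
        print(f"  Global memory:  {device.global_mem_size // (1024 ** 2)} MB")
        print(f"  OpenCL version: {device.version}")
```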

If you end up exchanging the Nvidia for an AMD card, collect as many screenshots of benchmarking data as you can so you can compare. You might also download http://sonycreativesoftware.com/vegaspro12benchmark and see what times you get for that too, as SCS provides a PDF with some of their results.
BruceUSA wrote on 12/30/2015, 9:05 AM
We are talking about the specific brand of video card that Vegas loves and performs well with. In this regard, Nvidia cards simply suck at the moment. End of story. No bias, nothing of the sort. No one here can show us a newer Nvidia card that performs well in Vegas. Benchmark charts are great, but if the card can't perform well in actual video editing work, then what good is the chart?

The OP needs to return the M5000 and pick up an AMD 390X or Fury X; it will make your days editing in Vegas happier, and your wallet too. There are many happy AMD card users here and ALL are pleased with the performance they get out of it.

Beatdemon wrote on 12/30/2015, 1:53 PM
Thank you for the quick responses. I just downloaded Catalyst Production Suite so I'm going to try transcoding all my video to see if that helps. Also, I'm a little confused by the claims that NVIDIA does not support OpenCL because their site states otherwise: https://developer.nvidia.com/opencl

In fact, according to NVIDIA they were the first ones to demonstrate it: http://www.nvidia.com/object/cuda_opencl_1.html

The problems I'm experiencing could simply be related to working with the files pulled directly from my Canon 70D instead of transcoding them first. I do like the idea of saving $1,000 on an AMD, provided it will actually do the work needed.

I'm going to try a few things and then report back. If I can't get the Quadro to work as expected, I'm sending her back.
astar wrote on 12/30/2015, 3:36 PM
You mean the M6000 has finally caught up with technology released 6 months ago; its GFLOPs rating is on par with the 390X. You would need to look at the specifics of the Luxmark tests, as NV has been known to optimize for benchmark results for marketing reasons. Notice that the Luxmark stat compares against the W9100 (which is a 290X) and not against the Fury X, which exceeds the GFLOPs rating of the M6000 by 2.5 TFLOPs. Do not fall for NV's marketing BS. Study the stats on the devices you are buying before making a purchase. I expect an AMD FirePro Fury to set a new bar once HBM production scales up, if AMD can stay in business that long.
Beatdemon wrote on 12/30/2015, 5:58 PM
OK, I just found out after doing a little research that the M5000 needs an antiquated driver from NVIDIA to use GPU Acceleration for Sony's AVC codec... one that precedes the existence of the card itself. Good hell.

Is it best to transcode all video to AVI before editing in Vegas? Will I see a performance jump by doing this? It says on the Vegas Pro 13 marketing info that my camera's codec is natively supported via their "Multi-format native editing" paragraph. Yet it seems it's exactly the MOV files from my camera that are slowing things down.
BruceUSA wrote on 12/30/2015, 8:36 PM
If you have a decent system with an AMD card, you don't need to transcode your 70D footage at all. My system chops through 5D Mark III, 60D, and everything else beautifully. The only time I transcode anything is my Samsung camera's 4K H.265 footage. Transcoding from H.265 to H.264 runs about minute-for-minute on my end.

TVJohn wrote on 1/1/2016, 9:57 AM
I have been at this a very long time. Unfortunately, NLE usage of video cards does not parallel the operation of computer games, and raw frame rate stats usually do not translate into value for $$$ in NLE operations.
Midrange video cards that are properly cooled and not excessively overclocked usually deliver the most stable, cost-effective results.
Beatdemon wrote on 1/2/2016, 3:13 PM
To be very clear here, Vegas is smoothly playing through video just fine with the Quadro (at highest quality settings) UNTIL I start adding effects plug-ins. I have to then start previewing the video in lower quality or it just gets choppy.

Is there just no way, regardless of how much you spend on a GPU, to have effects play back smoothly? Do the guys who are doing this professionally have arrays of GPUs to do this kind of work?

I'm guessing that whether it's a Quadro or a Fury, I'm going to get similar results with regards to playback and plug-in effects.
Chienworks wrote on 1/2/2016, 3:28 PM
"the Quadro is geared toward video editing"

There is an important misconception here. Anything that a video card is "geared" to be good at is specific to the design of that card and assumes that programs making use of it call the functions the card provides. However, most video card accelerators are designed purposefully for game rendering, 3D rendering, etc., and the supplied subroutines on the card must be used to gain this advantage. Video editing really has no use for such accelerators, and, conversely, most video cards do not come with accelerator subroutines that help the way Vegas works. If it's geared for video at all, it's for running programs written specifically to pipe video through its GPU architecture rather than generalized NLE work.

Without this sort of advantage, all the video card brings to the process is a bunch of fast number-crunching cores, so it's up to the program you are running to take advantage of them efficiently ... if it can. However, all the cores on the video card are distinctly less powerful than even mediocre CPU chips. The only advantage the GPU has is sheer core count, but this quickly reaches diminishing returns in video, so having dozens or hundreds of low-power cores doesn't give much benefit.

There are *some*, very few, video effect plugins that have been optimized to make good use of GPUs, but in this case they are still dependent on and wait for the host program to move forward to the next step before they can process the next frame, so even here there may not be much improvement.

If you want a faster video editing experience, don't worry about the video card. Take half the money a high end card costs and upgrade your CPU instead. Most NLEs, and Vegas in particular, thrive on CPU power more than anything else.
BruceUSA wrote on 1/2/2016, 5:13 PM
See for yourself. Here is my AMD card performing beautifully.


Corrected....new link below.

Stringer wrote on 1/2/2016, 9:00 PM
Your link requires logging into Vimeo
BruceUSA wrote on 1/2/2016, 9:11 PM
Sorry about that. Corrected. Try again; you should be able to see it now.

https://vimeo.com/116909126

OldSmoke wrote on 1/3/2016, 2:39 AM
> "If you want a faster video editing experience, don't worry about the video card."

Statements like this usually come from users who have not been able to take advantage of Vegas GPU acceleration themselves and therefore cannot see what kind of performance increase GPU acceleration has brought to the Vegas editing experience.

Vegas uses ONLY OpenCL for preview and, about 99% of the time, for rendering. The ONLY time CUDA is used is for rendering with the MC AVC and Sony AVC codecs when you set the CUDA option in the render template. However, both codecs were written a long time ago, when the NVIDIA Fermi cards had just been released, and they have never been updated to support later NVIDIA architectures.

The implementation of OpenCL is just much better in AMD cards than NVIDIA cards and that will most likely never change.

So all you need is an R9 290 or higher AMD card paired with at least a higher-end quad-core i7 and native PCIe 3.0, preferably a socket 2011 system with 32GB of RAM.

I would return the Quadro M5000 and get a Fury X, which is far less expensive, and you will be a much happier Vegas user. If you need 10-bit then you have no choice but to go for a FirePro card, W7100 or higher, but the cost goes up dramatically.

Edit: Here is a video showing VP12 performance with 2x GTX 570, rendering the SCS benchmark project with MC AVC and CUDA on a Fermi card.

BruceUSA wrote on 1/3/2016, 11:25 AM
OldSmoke A+++

Beatdemon wrote on 1/4/2016, 3:22 PM
Well this is interesting... I'm seeing that both Avid and Adobe support the new M6000, M5000, and M4000 and recommend them for video editing. Unfortunately I don't have either to compare. Sony is just behind the curve on this. Apparently the performance has more to do with how the application handles the features of the GPU than the other way around.

Sending the GPU back to Newegg is going to cost me $300 due to it being open box! So that's out of the question. Wait for Sony to support it?
Jamon wrote on 1/4/2016, 3:40 PM
1. Download http://sonycreativesoftware.com/vegaspro12benchmark
2. Set preview to Best/Full and report the FPS you see while playing
3. Render with different codecs and report the profiles and times (a small logging sketch follows below)
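
If you do run those tests on both cards, it helps to log every run in one place so the results are easy to compare and post back here. A minimal sketch; the file name and fields are just an illustration, not any SCS format:

```python
# Illustrative helper: append each benchmark run to a CSV so results from
# different GPUs, codecs, and preview settings can be compared side by side.
import csv
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("vegas_benchmark_results.csv")
FIELDS = ["timestamp", "gpu", "codec", "preview_fps", "render_seconds"]

def log_run(gpu: str, codec: str, preview_fps: float, render_seconds: float) -> None:
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "gpu": gpu,
            "codec": codec,
            "preview_fps": preview_fps,
            "render_seconds": render_seconds,
        })

# Placeholder values -- substitute your own measurements.
log_run("Quadro M5000", "MainConcept AVC", 0.0, 0.0)
```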