- Do you use a RAID?
- Hardware or Software RAID?
- What RAID level? http://www.larryjordan.biz/an-explanation-of-raid-levels
- What Brand of RAID?
- What size RAID?
- How does your RAID connect? (eSATA, USB3, FW800, etc.)
- What do you store on your RAID?
Several things about RAID: motherboard-based RAID will generally be a huge trade-off on your system bus if you have a fairly complex configuration across many board slots. You can make it work for applications where nothing else on the system is competing for speedy lane access across slots. That is the cheapest form of RAID.
Outboard RAID boxes can be great if they have enough intelligence in their own right to handle the RAID chores. THIS config is NOT cheap.
A good middle ground is a lightweight hardware RAID card; these start at about $280 per card and, again, need to be in the right slot on the motherboard to work. 6 Gb/s SATA drives are preferred (this takes special cards and special cabling). If you go with mechanical hard drives instead of SSDs, you can get more drive space inexpensively. BUT not all hard drives are suited for RAID work, and the few that are sell for about twice the price of a comparable simple drive.
I'm a big fan of RAID because my workflow has me rendering (2) VEG projects in the background while I burn (2) Blu-ray ISOs and edit (1) VEG in the foreground. I also work with a lot of uncompressed HD & output to 220 Mbps DNxHD. A lot of folks will dump on RAID on this forum but their projects do not require it.
My main editing workstation has a 12-drive 3Ware 9650SE hardware RAID (PCIe) with (4) 2TB hdds in RAID0, (6) 2TB hdds in RAID10, and (2) hdds in RAID1. All drives are in hotswap trays in a CM Stacker case which has (11) 5.25" bays.
The RAID 0 has all source video which is also copied to hotswap drives as a backup. The RAID10 has all edited footage that will be sent to clients. The RAID1 has all VEG files, BR & DVD ISOs, & other customer-related files which is also backed-up on an offsite server.
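The usable capacity of the three arrays described above can be sketched with the standard per-level rules. This is a rough illustration only, assuming the 2TB drives mentioned and ignoring formatting overhead and hot spares:

```python
# Usable capacity by RAID level for the arrays described above.
# Rough sketch: real usable space is slightly less after formatting.
def usable_tb(level, drives, size_tb=2):
    if level == "raid0":    # striping: all raw capacity, no redundancy
        return drives * size_tb
    if level == "raid1":    # mirroring: half the raw capacity
        return drives * size_tb / 2
    if level == "raid10":   # striped mirrors: half the raw capacity
        return drives * size_tb / 2
    raise ValueError(level)

print(usable_tb("raid0", 4))   # 8 TB for source video
print(usable_tb("raid10", 6))  # 6.0 TB for edited footage
print(usable_tb("raid1", 2))   # 2.0 TB for project files
```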
This system has been running daily for (3+) years. We have had a drive fail in each of the RAIDs including the RAID0. For the RAID10 & RAID1 it was a simple matter of replacing the bad drive with a spare. The day of the failure we continued to edit and allowed the replacement drive to rebuild overnight. When the RAID0 failed, we inserted the hotswap drive with the backup media, continued to edit, and rebuilt the RAID0 with a new drive at the end of the day. We plugged-in the (4) hotswap drives that had a copy of the source video and transferred the files to the new RAID0 overnight. All was good the very next day.
We get 350+ MBps read/writes which means editing uncompressed HD is buttery smooth...
Why use RAID?
Even a modest SATA should be able to handle 120 MBps continuous throughput.
Former user wrote on 3/19/2013, 10:01 PM
We use RAID 0 where I work on our Final Cut Pro systems because we have to do a lot of 10-bit uncompressed high-def files. I end up doing some uncompressed work at home and wish I had a RAID, but my budget doesn't allow me to have a lot of storage and a RAID, so I have learned patience.
The RAID we use at work is on a fiber line, but Final Cut still says it can't keep up sometimes. I would love to run Vegas on it and see what it does, but I can't since these are Macs and work computers.
I also have an internal RAID on a Digital Rapids StreamZ encoding computer. We capture uncompressed HD there to encode for theater projection. It has (4) SCSI drives in RAID 0.
RAID throughput on a fast USB 3.0-connected enclosure (SATA innards) can approach 145 Mbps, which is very, very fast. Trouble is, if you are going to spin off one stream of an uncompressed 10-bit file to your NLE, you need to be at 185 Mbps to be smooth, which a single mechanical hard drive cannot sustain. You might get that off an SSD in a 2-lane slot running at system bus speed when connected that way....
To be smooth with a couple of streams, you really need RAID 0 running at 220 Mbps, and for more than that, top-notch hardware should be running at 320 Mbps to run really well.
If we're going to continue to discuss video demand vs. drive throughput (a red herring at best), let's please stop confusing Mbps (video throughput) with MBps (drive throughput).
8 Mb = 1 MB, so in my example above (a modest mechanical SATA), the conservative sustainable real-world read/write drive throughput (120 MBps, i.e. 960 Mbps) is almost 3 times the suggested video demand (of 350 Mbps), realistically about 174% headroom.
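A quick sanity check of that conversion, using the thread's own figures (the 120 MBps drive and 350 Mbps video numbers are taken as given):

```python
# Mb (megabits) vs MB (megabytes): 8 Mb = 1 MB.
drive_MBps = 120               # conservative single-SATA sustained throughput
video_Mbps = 350               # suggested video demand from the thread

video_MBps = video_Mbps / 8    # convert bits to bytes: 43.75 MBps
ratio = drive_MBps / video_MBps
print(f"demand: {video_MBps} MBps, drive/demand ratio: {ratio:.2f}x")
```

So the drive sustains roughly 2.74 times what the video stream asks for.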
It's never been shown to my satisfaction that RAID 0 offers any advantage in a normal editing environment, because the drive is never the data bottleneck. Anyone doing full-toot 4K is already going to have the system to handle it, up front.
With the mathematical failure rate of RAID 0 being 195% over that of a single drive, why use RAID?
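That 195% figure is consistent with a simple independence model: a 2-drive RAID 0 is lost if either drive fails. A sketch, using a purely illustrative 5% annual per-drive failure rate and assuming independent failures:

```python
# Probability an n-drive RAID 0 loses data, assuming each drive fails
# independently with probability p over some period (illustrative only).
def raid0_failure(p, n):
    return 1 - (1 - p) ** n   # the array fails if ANY drive fails

p = 0.05                      # hypothetical 5% annual failure rate
single = p
striped = raid0_failure(p, 2)
print(f"single: {single:.3f}, 2-drive RAID 0: {striped:.3f}")
print(f"relative risk: {striped / single:.0%}")
```

With small p, the 2-drive risk is just under 2x the single-drive risk, hence figures like 195%.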
For smaller systems I'll use RAID 1 for redundancy. I've never encountered a situation in the last 15 years where RAID 0 was worth the risk or even resulted in a noticeable speed benefit.
At work all our DB and file servers are RAID 6 for redundancy, speed, security, and live hot-swap capability. But, as pointed out earlier, it's pricey! Our 16-bay outboard RAID boxes run about $4500 just for the eSATA host box, and the WD 2TB Black drives about $125 to $175 each, depending on what sales we find. Load it up with 6 drives and it's pushing $5400. We've got one with 15 drives that cost about $7000.*
When I get around to building my huge data storage box at home someday, it'll be RAID 1 using LVM to span huge volumes across multiple drives. It'll take eight 3TB drives and present them to the host as a single 12TB volume. No hot swap, but my home environment isn't so critical that I can't wait for an opportunity to shut it down for 2 minutes to replace a drive. We don't have that luxury with the work DBs.
*Although, that $7000 was dwarfed by the host box we got to go with it, with 24 8-core Xeon CPUs and 128GB RAM, which came in at about $41,000. Too bad I can't install Vegas on it!
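The mirrored-plus-LVM capacity math for that home box works out simply: each RAID 1 pair contributes one drive's worth of space, and LVM concatenates the pairs. A minimal sketch, assuming the eight 3TB drives mentioned:

```python
# RAID 1 pairs spanned with LVM: usable space is half the raw total,
# since each mirrored pair stores one copy of the data twice.
drives, size_tb = 8, 3
pairs = drives // 2            # 4 mirrored pairs
usable = pairs * size_tb       # LVM concatenates the pairs
print(f"{drives} x {size_tb}TB as RAID 1 pairs -> {usable}TB usable")
```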
"With the mathematical failure rate of RAID 0 being 195% over that of a single drive, why use RAID?"
Over the years my experience has been that drives are very reliable. I've had more power supplies and MOBOs fail than HDDs. The one HDD I had fail was a case of infant mortality. The first IDE drives I bought are still working.
I've used RAID 0, either from a Promise RAID controller or from a MOBO RAID controller, because that's what BMD recommend. I probably could have gotten by with a single drive, but why risk it?
Of course I'm only a small fish; there's one user here cutting multicam from something like 6 RED cameras, and I recall he needed a lot of drives in RAID 0 to get it to fly.
As for 4K, that's going to be a real challenge because Sony have upped the game to 14 bits per channel packed into 16 bits. 4K will mean 8 times the data of 1080p, maybe more.
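That 8x figure checks out as a back-of-the-envelope estimate: roughly 4x the pixels times 2x the bits per sample. A sketch, assuming UHD (3840x2160), 16-bit packing versus 8-bit, and the same frame rate and chroma subsampling:

```python
# Rough data-rate multiplier going from 8-bit 1080p to 16-bit-packed 4K.
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160        # UHD "4K": 4x the pixel count
bit_ratio = 16 / 8             # 14-bit samples packed into 16 bits, vs 8-bit

multiplier = (pixels_4k / pixels_1080p) * bit_ratio
print(multiplier)              # 8.0
```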
The raid controller is the weakest link on a RAID system.
If you lose your RAID controller/card, you're stuck. I don't do RAID for just this reason. I just lost my P4HT motherboard, which was providing a RAID 0+1 array, and now I cannot recover anything from that machine. If it were just a standard drive, or just RAID 1, I could at least remove the drives from the machine and recover.
Losing my ASUS P4 motherboard was the last time I'll RAID my workstations.
I've also lost an Adaptec RAID card and a Netcell RAID card, both of which had corrupted arrays that had to be rebuilt. The only card I've used and recommend, which provided excellent, stable RAID arrays, is 3ware. I've built a few server systems with these and none of them has failed. I've since moved on to SSD, where I load my video files to read from and render to SATA. (;
I always back-up original footage to external SATA before editing. (;
I last used RAID about 8 years ago. However, just working with regular HDV / AVCHD / MXF, regular HDs are plenty fast enough. Enough FX would bog down my CPU before the drives.
I back up every day's edit, though (touch wood) I haven't had a drive fail in years (Seagate)....
I use a 9TB CineRAID CR-H458 RAID 5 (4 x Western Digital Red WD30EFRX 3TB IntelliPower SATA 6.0Gb/s ) connected via eSATA to store all of my captured video waiting for editing and all of my completed projects. I had a RAID 0 for editing on my previous build but decided not to have one in my new build but I miss the speed and will probably go back to a 2TB (2x 1TB) RAID 0 for editing.
No raid here, either. I work with just HDV, which any drive can handle without breaking a sweat, and my renders are CPU-bound anyway.
I, too, have found hard drives to be very reliable, though I do copy the most recent veg file at the end of the day to another drive, so in theory I could lose the work drive and rebuild the entire project from scratch by just re-importing all the raw files.