Good morning everyone!
Who would be the best person to ask about the transfer speeds and use of the new server? It would be interesting to use the server to archive DMS related projects from this committee. I’m working on shooting footage of day-to-day DMS activities for future projects, and I would like to store the footage securely. Cheers!
Adrian
The new server isn’t for storage; it’s only a virtualization node. The member file server (\FILES) should be pretty snappy… feel free to test. @lukeiamyourfather is the sysadmin over the file server.
The file server that members use is an excellent place to store multimedia for space-related projects. At this point the data isn’t backed up, but there’s parity and snapshots. In other words, the data is protected against accidental deletion and malware, but not against something like multiple drive failures or complete loss of the machine (fire, theft, etc.).
The RAID-Z2 array on the machine is capable of several times the throughput of gigabit Ethernet, which is what the space is wired with. You should be able to get 100+ MB/s transfers, which is more than enough throughput for several HD streams in real time (typically 5 MB/s or so per stream). The latency should be tolerable, but not as good as a local drive. Adding more memory and an SSD for read and write caching would help with the latency; I think these upgrades will happen at some point, but not immediately.
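As a back-of-envelope check (the ~110 MB/s figure below is an assumed real-world gigabit ceiling, not a measurement of this server):

```shell
# Usable gigabit throughput is roughly 110 MB/s in practice
# (125 MB/s theoretical). At ~5 MB/s per HD stream, the rough
# number of concurrent real-time streams the link supports is:
echo $(( 110 / 5 ))   # prints 22
```

So the network link, not the array, is the likely bottleneck, and even then there’s plenty of headroom for a few editors working at once.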
Perfect! Thank you. This will be perfect for editing the footage we’re shooting and hosting the project files temporarily (6.25 MB/s per stream). For archiving finished footage and projects, it would seem prudent to host our own NAS. Do you have any specific recommendations or cost-effective solutions?
I would recommend using the file server. Setting up a dedicated NAS that’s faster would run into the thousands of dollars. At some point we’ll be upgrading it with 10GbE, at which point it will be just as fast as, if not faster than, local storage. Tonight I’ll be adding a pair of SSDs for read and write caching, which will help a lot with performance (Brooks bought some yesterday).
Luke,
Thank you for the information! Is there a way to automate file deletion every 30 days for specific directories? (I want to keep the server from filling up.) Or will everyone need to do that manually?
Adrian
Can you describe the entire workflow you envision? It’s possible to set up scripts that clean out directories based on the age of files, but I can’t really think of a good reason for doing that. Ideally, files should be organized in such a way that automated deletion isn’t necessary.
I can see what Adrian’s saying… kind of like when we set up the file server, we made the Temporary share and wanted to automatically delete anything over 7 days old. We never did that, but every so often I’ll go in there and do it manually.
Anyway, just write a bash script and cron it.
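A minimal sketch of what that script could look like; the share path and 30-day threshold are assumptions, adjust them to the actual mount point and retention policy:

```shell
#!/bin/sh
# Sketch of a cron-driven cleanup job. The path below is hypothetical;
# point TARGET at the real Temporary share on the file server.
TARGET="/mnt/files/Temporary"

# -type f limits the match to regular files, -mtime +30 matches files
# last modified more than 30 days ago, and -delete removes them.
find "$TARGET" -type f -mtime +30 -delete
```

Then run it nightly from a crontab, something like `0 3 * * * /path/to/cleanup.sh`, so nobody has to remember to do it by hand.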
Luke,
The typical optimal video editing workflow uses three distinct drives: one hosts the operating system and programs, one the footage and audio, and one the audio and video previews. The file server would be perfect for hosting video and audio files because of its capacity and speed. This comes at a cost, though: when we expand into higher-bitrate video, we’ll use up the server’s capacity more quickly. Is it possible to implement a system that automatically deletes files older than 30 days?
Adrian