Zilla nails it here: too often we are backing up when we should be archiving. We generate so much content that is pretty much Write Once, Read Never, but it sits there "just in case", getting backed up time and time again when it should go straight into the archive, or at least be moved there after a set number of days. Not only will this help with your back-ups, it will save you money.
For example, suppose all your data sits on expensive filers with expensive software licenses. It is the latter that is the killer, especially when the license is based on the capacity of the array rather than on how much of the licensed feature is actually used. It makes sense to keep the usage of that array to a minimum: get data off it as quickly as possible and onto a lower-cost medium, be it an archive array with minimal features or tape, if you so desire.
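The "move it off after a number of days" policy above can be sketched very simply. This is a minimal, hypothetical example, not a production HSM tool: the paths, the 90-day cutoff, and the use of last access time as the staleness signal are all assumptions for illustration.

```python
import shutil
import time
from pathlib import Path


def archive_stale_files(primary: Path, archive: Path, max_age_days: int) -> list[Path]:
    """Move files not accessed within max_age_days from primary to archive.

    Hypothetical sketch: uses last access time (st_atime) as the staleness
    signal and preserves the directory layout under the archive root.
    """
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for path in primary.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            dest = archive / path.relative_to(primary)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))
            moved.append(dest)
    return moved
```

In practice you would want exclusion lists, a dry-run mode, and awareness that some filesystems mount with `noatime`, which makes access time useless as a signal; but the core policy really is this small.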
But surely this is the promise of HSM or ILM? Surely this is the thing which has been talked about for years, yet everyone agrees it is too hard, the ROI doesn't stack up, and so on? As Zilla points out, though, data management doesn't have to be complicated; it can be as simple as deleting what you don't use any more and archiving more. We probably need to look at the tools and continue to simplify, but data management needs to become something we talk about a lot more.
Actually, I wonder if we are going to sleepwalk into another issue with VMs: it is getting so easy to spin up a VM for a quick piece of testing, development or whatever that people are just going to keep them hanging around, just in case. So it won't just be files we have a problem with; it's going to be whole environments.
Perhaps we should just stop making things so easy for people!?