I never really noticed before, but external disks formatted as NTFS are really slow to access under Linux.
I got a new 2TB external disk to use for backups, as all my 1TB disks were around 90% full (I was mirroring all my Clonezilla backups across three servers, as we all do, so they were all full).
Anyway, copying almost 1TB of data from one NTFS disk to another NTFS disk has been going constantly for almost two days now… and is still going. “top” shows two mount.ntfs processes (one per disk, I guess) using all the CPU; nothing else is constraining the data copy. It is so slow.
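One thing that has historically helped with exactly this mount.ntfs CPU bottleneck is ntfs-3g’s `big_writes` mount option, which lets the driver accept writes larger than 4KB at a time (newer ntfs-3g releases enable this behaviour by default, so it may do nothing on a recent distro). A sketch, assuming a hypothetical device `/dev/sdb1` and mount point `/mnt/backup`:

```shell
# Unmount the automounted disk first, then remount by hand with big_writes.
# Device name and mount point are assumptions; check yours with lsblk.
sudo umount /dev/sdb1
sudo mkdir -p /mnt/backup
sudo mount -t ntfs-3g -o big_writes /dev/sdb1 /mnt/backup
```

Worth trying before writing off a two-day copy as unavoidable, anyway.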
I had already decided to reformat the original 1TB disk as ext4 once I got the data off, and dedicate it to Bacula backups. I will do a few write tests of a couple of hundred GB to it before I do, then a few after, to see what the difference is.
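For a simple before/after comparison like that, a dd sequential write with an fsync at the end gives a repeatable number. A minimal sketch, assuming the disk is mounted at a hypothetical `/mnt/backup` (scale `count` up for a couple-of-hundred-GB test):

```shell
# Write a 1GB test file; conv=fsync forces the data to disk before dd
# reports its throughput, so the page cache doesn't flatter the result.
dd if=/dev/zero of=/mnt/backup/testfile bs=1M count=1024 conv=fsync
# dd prints a summary line ending in the effective MB/s.
rm /mnt/backup/testfile
```

Running the same command on the NTFS mount and again after reformatting to ext4 should make the difference obvious.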
Anyway, with the new 2TB one I’m thinking I’ll store all my servers’ Clonezilla backups on that only (and buy a separate one to use for mirror copies), which will free up the 1TB disks on each server just for mirroring the virtual machine disks and the major projects I am working on… except one of those I have already earmarked as a dedicated Bacula storage drive that will need to be mirrored, and I wasn’t planning on backing up VM disks with Bacula… sigh, I need more USB hubs/disks/power points.
Or perhaps I need to rethink what I back up… no, all backups of everything must be mirrored onto different physical disks; in the many years I have been playing with PCs, 99% of my hardware failures have been disk drives. And I definitely need to keep more than just the ‘latest’ backup set, as I know I have a corrupt ‘latest’ set for the server that had the last hard disk failure, and have been testing a rebuild from the prior one.
I just need more disks, which will in turn need more disks to back them up; a never-ending cycle. It was much easier when PC disk drives were small and there wasn’t much to back up.