I'm currently writing a backup utility based on cpio (the main reason for using cpio is the flexibility of selecting files with find and then piping the list into cpio). The system is RH6 (64-bit) and the cpio is 2.10-9 (I appreciate that this is downlevel, and if cpio has been changed recently to fix this problem, I'll be happy to try a newer version).
Several of the individual files in the tree being backed up are >4G and growing steadily. For this reason, I became concerned about the fact that cpio can supposedly only back up files which are 8G or smaller. So I decided to run a few experiments, and the results so far are interesting.
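For what it's worth, the 8G figure matches the ustar header format: if I understand the format correctly, the size field is 11 octal digits (plus a terminating byte), so the largest representable file size is 8^11 - 1 bytes:

```shell
# ustar size field: 11 octal digits => 8^11 - 1 = 2^33 - 1 bytes
echo $(( (1 << 33) - 1 ))   # 8589934591, i.e. just under 8 GiB
```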
I decided to start on the big side, so I created a test file which was 40G in size. I then tried to back it up using cpio (using the ustar format). It completed successfully, and the resulting cpio archive was the sort of size one would expect. I then tried to restore the file from the cpio archive. Again, it completed successfully (or at least it gave an exit code of 0) - but the restored file was 0 bytes.
I then tried backing up and restoring the same file using tar - this time, both backup and restore completed successfully.
I'll try some smaller files with cpio and see if I can establish empirically the maximum size that can be archived and restored successfully.
I would have expected cpio to complain up front that it couldn't archive the 40G file. In my opinion - given that this utility is normally used for unattended backups - the fact that it didn't complain is a disaster waiting to happen.