We have an issue with a folder containing hundreds of thousands of tiny files.
There are so many files that running rm -rf returns an error, so instead we have to do something like:
find /path/to/folder -name "filenamestart*" -type f -exec rm -f {} \;
This works, but it is very slow and frequently fails by running out of memory.
Is there a better way to do this?
=====>>
Using rsync is surprisingly fast and simple.
mkdir empty_dir
rsync -a --delete empty_dir/ yourdirectory/
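If you want to preview what would be removed before committing, rsync's standard dry-run mode works here too (a minimal sketch, using the same empty_dir layout as above):

rsync -a --delete --dry-run -v empty_dir/ yourdirectory/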
Another fast option is Perl; in benchmarks it comes out even faster than rsync -a --delete:
cd yourdirectory
perl -e 'for(<*>){((stat)[9]<(unlink))}'
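For comparison, the find command in the question is slow mainly because -exec rm -f {} \; forks one rm process per file. A sketch of two faster variants, assuming GNU find (for -delete) or any POSIX find (for the + terminator):

find /path/to/folder -name "filenamestart*" -type f -delete
# or, batching many files into each rm invocation:
find /path/to/folder -name "filenamestart*" -type f -exec rm -f {} +

Either form avoids the per-file process overhead and also sidesteps the "argument list too long" problem, since the file names never pass through a shell glob.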