Zsh Mailing List Archive
Re: listing sub-directories with most files in
On Sat, Sep 03, 2011 at 02:59:15PM -0700, Bart Schaefer wrote:
> On Sep 3, 9:23pm, Thor Andreassen wrote:
> }
> } Adding -maxdepth 1 and -type f to find should limit the result
> } correctly:
> }
> } find *(/) -maxdepth 1 -type f | cut -d/ -f1 | uniq -c | sort -n
>
> Unfortunately that's still not quite right. Because you've lost the
> path leading up to the subdirectory name, if two subtrees each contain
> a directory with an identical name, you'll either get two counts with
> no way to distinguish them, or a single count that is the sum of the
> number of files in both of those subdirectories.
My brain is working slower than usual, sorry about the confusion.
After re-reading the OP's question and properly testing your solution, I
now get it.
> Also because find prints in directory scan order, you have to be careful
> or you'll get a few files and then a subdirectory and then a few more
> files and you'll still end up with multiple counts for the same directory.
>
> You can do it this way:
>
> find *(/) -type f -exec dirname {} \; | sort | uniq -c | sort -n
>
> but that seems like an awful lot of work.
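[As an aside: with GNU find the per-file dirname process can be avoided
entirely, since -printf '%h\n' prints each matched file's parent
directory. This is a GNU extension, not POSIX, so it is a sketch that
assumes GNU findutils:]

```shell
# Print the parent directory of every regular file, then count
# occurrences per directory and sort by count (GNU find only).
find . -type f -printf '%h\n' | sort | uniq -c | sort -n
```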
Agreed. Here is a slightly improved version, though still a lot of work
(quoting "$dir" and using read -r so directory names with spaces or
backslashes survive):
find . -type d | while IFS= read -r dir; do
  find "$dir" -maxdepth 1 -type f | wc -l | tr -d '\n'
  print ":$dir"
done | sort -n
But it's nowhere near as efficient or elegant as the suggested zsh
solution. Sorry about the noise, and thank you for your patience :).
--
best regards
Thor Andreassen