David Gómez wrote [2002/10/02 09:12]:
>
> > > I created a directory with 100000 files to test the new htree patch
> > > for the ext3 filesystem, and found a possible bug when I tried to remove
> > > all the files. The command 'rm *' gave the error 'zsh: argument list too
> > > long'. If expansion doesn't support so many parameters, what is the
> > > supposed way to remove all these files without deleting the
> > > directory?
> >
> > You sure the error wasn't 'zsh: argument list too long: rm' ?
>
> You're right, that was the full error line. I didn't include the rm because
> I thought it wasn't important; my fault.
>
> > internal shell wildcard expansion has no argument limit. execve()
> > does. Either raise your kernel's limit (sorry; don't know how to do it
> > on Linux),
>
> I've been trying to raise some limits, but none of them affects the number
> of parameters execve() supports. Maybe it's possible to change it in Linux
> through some /proc variable, or maybe a kernel recompilation is needed.
>
> I just wanted to know whether this was a zsh-related problem; now I'll look
> into how to change the execve() limits.
/usr/src/linux/include/linux> grep ARG_MAX limits.h
#define ARG_MAX 131072 /* # bytes of args + environ for exec() */
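For what it's worth, you can also query the effective limit at run time
instead of digging through the kernel headers (assuming glibc's getconf is
installed); it should report the same number:

  % getconf ARG_MAX
  131072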
Hopefully, changing this value and recompiling your kernel will help...
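If you'd rather not recompile, you can also sidestep the limit by never
passing all 100000 names to a single execve(). A rough sketch, assuming
GNU find/xargs (for the -maxdepth/-print0/-0 options) are available:

  # batch the names into as many rm invocations as needed,
  # each one staying under ARG_MAX
  find . -maxdepth 1 -type f -print0 | xargs -0 rm

  # or, zsh only: if your zsh was built with the zsh/files module,
  # rm becomes a builtin and no execve() happens at all
  zmodload zsh/files
  rm *

With the xargs variant, rm is simply run several times, each time with as
many file names as fit within the limit.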
Ciao,
Thomas
--
Thomas Köhler Email: jean-luc@xxxxxxxxxxxxxxxxx | LCARS - Linux
<>< WWW: http://jeanluc-picard.de | for Computers
IRC: jeanluc | on All Real
PGP public key available from Homepage! | Starships