Zsh Mailing List Archive
unlimited file descriptors causes problems for zsh-4.0.2
- X-seq: zsh-workers 16082
- From: Matthew Braun <matthew@xxxxxxx>
- To: zsh-workers@xxxxxxxxxx
- Subject: unlimited file descriptors causes problems for zsh-4.0.2
- Date: Thu, 18 Oct 2001 19:44:11 -0400
- Mailing-list: contact zsh-workers-help@xxxxxxxxxx; run by ezmlm
Hi Folks-
I ran across a machine that has 'ulimit -n unlimited' in the ksh startup
files, and when I run zsh-4.0.2 from that shell, it causes problems.
Even though no one should do this, it shouldn't keep zsh from
functioning properly.
On Solaris 2.6 with less than 2GB of swap, this causes a seg fault. On
Solaris 8 with plenty of swap (4GB), the program runs, but won't fork
piped commands properly (though it will run simple commands). This is all
caused by zsh-4.0.2 doing a memory allocation for the max number of file
descriptors, which in this case is (2^31)-1 (2GB).
Comparing two copies of the zsh source I had easily available, I found
that the following code changed somewhere between these two versions:
zsh-3.0.8/Src/init.c:
fdtable_size = OPEN_MAX;
fdtable = zcalloc(fdtable_size);
zsh-4.0.2/Src/init.c:
fdtable_size = zopenmax();
fdtable = zcalloc(fdtable_size);
where zsh-4.0.2/Src/compat.c:zopenmax() does:
long openmax = sysconf(_SC_OPEN_MAX);
return openmax;
(sysconf returns the current "Max open files per process")
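To make the failure mode concrete, here is a standalone sketch (not zsh code) of what init.c effectively does: query sysconf(_SC_OPEN_MAX) and zero-allocate a table of that many bytes. With 'ulimit -n unlimited', sysconf reports (2^31)-1 on these Solaris boxes, so the allocation either fails or exhausts swap. The function names here are illustrative:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Try to allocate a zero-filled fdtable of `size` bytes, as init.c
 * does with zcalloc(fdtable_size). Returns 1 on success, 0 on failure.
 * (On Solaris 2.6, zsh instead segfaulted inside the out-of-memory
 * error path -- see the backtrace at the end of this message.) */
int try_fdtable_alloc(long size) {
    void *fdtable = calloc((size_t)size, 1);
    if (!fdtable)
        return 0;
    free(fdtable);
    return 1;
}

/* Print and return what zopenmax() reports on this system. */
long report_openmax(void) {
    long openmax = sysconf(_SC_OPEN_MAX);
    printf("sysconf(_SC_OPEN_MAX) = %ld\n", openmax);
    return openmax;
}
```

Running report_openmax() under a normal limit prints something like 256; under 'ulimit -n unlimited' on the affected machines it reports 2147483647, and try_fdtable_alloc() of that size is what blows up.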
Normally, if zsh needs to track more file descriptors than were allocated
at initialization, it just doubles the size of the fdtable by doing a
realloc in the zsh-4.0.2/Src/utils.c code. I'm not sure everything the
zsh-4.0.2/Src/exec.c code does with fdtable, but after a quick glance,
it doesn't look like it would try to use more file descriptors than
already exist in the fdtable.
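For illustration, the doubling growth described above can be sketched like this; the names (fdtable, fdtable_size) follow the zsh source, but the logic is paraphrased rather than copied:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned char *fdtable;   /* per-fd state table */
static int fdtable_size;         /* current capacity; must start > 0 */

/* Grow fdtable (if needed) so that `fd` indexes into it: double the
 * size until fd fits, zero-filling the newly added tail. */
static void ensure_fd_tracked(int fd) {
    if (fd < fdtable_size)
        return;
    int newsize = fdtable_size;
    while (fd >= newsize)
        newsize *= 2;                     /* double until fd fits */
    unsigned char *p = realloc(fdtable, newsize);
    if (!p) {
        fprintf(stderr, "out of memory growing fdtable\n");
        exit(1);
    }
    memset(p + fdtable_size, 0, newsize - fdtable_size);
    fdtable = p;
    fdtable_size = newsize;
}
```

This is why starting with a modest fdtable_size is safe: a table that begins at, say, 64 entries grows to 512 on first touching fd 300, without ever allocating 2GB up front.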
One thing I did notice is that the global int "max_zsh_fd" is never
explicitly initialized before its value is first used. We are currently
counting on it being automatically set to zero (which the C standard
does guarantee for file-scope variables, though explicit initialization
would be clearer). This variable should probably get initialized in the
code.
I'm wondering what folks' opinions are on making zsh work even when the
number of file descriptors is unlimited. Options I see:
1. Change the code back to "fdtable_size = OPEN_MAX;"
problem I see: fascist
2. Change the zopenmax() function to query sysconf and honor its result
up to either OPEN_MAX or some other hard-coded max like 1024.
problems I see: fascist and...
- what high-end limit should we use?
- if we limit the high end, should we limit the low end? zsh
doesn't work if you do 'ulimit -n 1', so what should the low end be?
32, 64, or what?
3. other ideas?
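To make option 2 concrete, here is a hedged sketch of what a clamped zopenmax() might look like. The ZSH_FDTABLE_MIN/MAX constants and the function name are illustrative placeholders, not from the zsh source, and the actual bounds are exactly the open question above:

```c
#include <unistd.h>

#define ZSH_FDTABLE_MIN   64      /* assumed low-end floor */
#define ZSH_FDTABLE_MAX 1024      /* assumed high-end cap */

/* Honor sysconf(_SC_OPEN_MAX), but clamp it into a sane range so an
 * "unlimited" rlimit can't drive the initial fdtable allocation to
 * 2GB. sysconf returning -1 (indeterminate limit) falls into the
 * low-end clamp as well. */
long zopenmax_capped(void) {
    long openmax = sysconf(_SC_OPEN_MAX);
    if (openmax < ZSH_FDTABLE_MIN)
        openmax = ZSH_FDTABLE_MIN;
    else if (openmax > ZSH_FDTABLE_MAX)
        openmax = ZSH_FDTABLE_MAX;
    return openmax;
}
```

Since the fdtable already grows on demand (see the realloc doubling above), clamping here only bounds the initial allocation; descriptors beyond the cap would still be picked up by the growth path.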
My personal thought is to ALWAYS make zsh work, even when people set the
file descriptor limit to something bogus. There's nothing worse than
having your shell not function, especially if it is your login shell.
Although I also hate fascist code, so I'm torn, but in this case I'd
probably rather see zsh work.
Let me know what you think and if you change the zsh code.
Thanks,
Matthew.
ps. please include me directly on replies, I'm not on the mailing list.
=====
Solaris 8, with 4GB swap:
compass:/# ulimit -n
256
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh --version
zsh 4.0.2 (sparc-sun-solaris2.8)
compass:/# ulimit -HSn unlimited
compass:/# ulimit -n
unlimited
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# /bin/ls -l /bin
lrwxrwxrwx 1 root root 9 Dec 5 2000 /bin -> ./usr/bin
compass:/# echo hi | cat
zsh: fork failed: not enough space
compass:/#
[1] done echo hi
compass:/#
And a new zsh process won't even fork when you have a small number of
file descriptors (and the current zsh won't recover enough to fork any
more zsh processes even after resetting ulimit -n higher, which is
really a separate problem):
compass:/# ulimit -n
256
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# ulimit -n 5
compass:/# echo $$
7486
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# echo $$
7486
compass:/# ulimit -n 256
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# echo $$
7486
Solaris 2.6, less than 2GB swap:
root:/# ulimit -n
unlimited
root:/# zsh
Segmentation Fault(coredump)
root:/# ulimit -n 64
root:/# zsh --version
zsh 4.0.2 (sparc-sun-solaris2.6)
I rebuilt zsh with debugging and created a core dump on the Solaris 2.6
machine; here are the backtrace and the value of fdtable_size:
sa:zsh-4.0.2/Src> gdb /tmp/zsh-4.0.2-debug/bin/zsh /tmp/core
GNU gdb 4.18
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.8"...
Core was generated by `/usr/local/pkg/zsh-4.0.2-debug/bin/zsh'.
Program terminated with signal 11, Segmentation Fault.
Reading symbols from /usr/lib/libsocket.so.1...done.
Reading symbols from /usr/lib/libdl.so.1...done.
Reading symbols from /usr/lib/libnsl.so.1...done.
Reading symbols from /usr/lib/libm.so.1...done.
Reading symbols from /usr/lib/libc.so.1...done.
Reading symbols from /usr/lib/libmp.so.2...done.
Reading symbols from /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1...done.
#0 nicezputs (s=0x0, stream=0xed5b0) at utils.c:2878
2878 while ((c = *s++)) {
(gdb) bt
#0 nicezputs (s=0x0, stream=0xed5b0) at utils.c:2878
#1 0x7e43c in zwarn (fmt=0xc6e98 "fatal error: out of memory", str=0x0, num=0)
at utils.c:78
#2 0x7e314 in zerr (fmt=0xc6e98 "fatal error: out of memory", str=0x0, num=0)
at utils.c:49
#3 0x591a8 in zcalloc (size=2147483647) at mem.c:509
#4 0x498c4 in zsh_main (argc=1, argv=0xeffffa5c) at init.c:1189
#5 0x22df0 in main (argc=1, argv=0xeffffa5c) at ./main.c:37
(gdb) p fdtable_size
$1 = 2147483647
(gdb) quit