Zsh Mailing List Archive
Messages sorted by: Reverse Date, Date, Thread, Author

Re: Why large arrays are extremely slow to handle?



On Mar 25,  2:37am, nix@xxxxxxxxxxxxxxxx wrote:
}
} I think there is a big flaw somewhere that causes the following:
} 
} #!/bin/zsh
} emulate zsh
} TEST=()
} for i in {1..10000} ; do
} TEST+="$i" # append (push) to an array
} done
} 
} --- 10K
} time ./bench
} real    0m3.944s
} 
} --- 50K BOOOM! WTF?
} 
} time ./bench
} real    1m53.321s
} 
} Any ideas why it's extremely slow?

It's not the array itself, it's the interpretation of the assignment
inside the loop.

TEST=({1..50000})

will populate a 50k-element array almost instantly.  Here's a 500,000
element array on my home desktop:

torch% typeset -F SECONDS
torch% print $SECONDS; TEST=({1..500000}); print $SECONDS
24.9600260000
25.4452710000
torch% 

Put that in a loop instead, and you're interpreting a fetch/replace of the
whole array on every cycle: each append copies every existing element, so
building n elements this way costs O(n^2) work overall.  This is in part
because array assignment is generalized for replacing arbitrary slices of
the array; append is not treated specially.  [If someone wants to try to
optimize this, start at the final "else" block in Src/params.c :
setarrvalue() -- but beware of what happens in freearray().]
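A quick way to see the quadratic growth for yourself (my own sketch, not
from the original post) is to time the append loop at a few doubling sizes;
on zsh the time per run should grow much faster than linearly.  This uses
the TEST+=("$i") form, which appends a single element in both zsh and bash:

```shell
# Sketch: time element-wise append at increasing sizes.
# In zsh, each append rewrites the whole array, so doubling n
# should roughly quadruple the elapsed time.
for n in 2000 4000 8000; do
  TEST=()
  start=$SECONDS
  for ((i = 1; i <= n; i++)); do
    TEST+=("$i")    # append one element per iteration
  done
  printf '%d elements appended in %ds\n' "$n" "$((SECONDS - start))"
done
```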

As it happens, you can get much better update performance at the cost of
some memory performance by using an associative array instead.  Try:

typeset -A TEST
for i in {1..50000} ; do
TEST[$i]=$i
done

Individual elements of hashes *are* fetched by reference without the
whole hash coming along, and are updated in place rather than treated
as slices, so this is your fastest option without a C-code change.
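One caveat worth adding (my note, not from the original post): associative
arrays are unordered, so if you need the elements back in sequence you have
to walk the numeric keys explicitly rather than iterate the hash.  A sketch,
portable to zsh and bash 4+:

```shell
# Build the hash as above, then recover an ordered array by indexing
# the keys 1..n explicitly (hash iteration order is arbitrary).
typeset -A TEST
n=1000
for ((i = 1; i <= n; i++)); do
  TEST[$i]=$i
done
ORDERED=()
for ((i = 1; i <= n; i++)); do
  ORDERED+=("${TEST[$i]}")
done
printf 'rebuilt %d ordered elements\n' "${#ORDERED[@]}"
```

The hash updates stay O(1) per element; only the final rebuild pays the
(single) array-assignment cost.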

You can also build up the "array" as a simple text block with delimiters,
then split it to an actual array very quickly.  Append to a scalar isn't
really any better algorithmically than an array, but it does fewer memory
operations.

torch% for i in {1..50000}; do TEST+="$i"$'\n' ; done
torch% TEST=(${(f)TEST})
torch% print $#TEST
50000
