The basic question is: why do you want to avoid a `for` loop?
tl;dr
For performance reasons, prefer external-utility solutions to pure shell approaches; fortunately, external-utility solutions are often also the more expressive solutions:
- For large element counts, they will be much faster.
- While they will be slower for small element counts, the absolute time spent executing will still be low overall.
The following snippet demonstrates how these two points play out (note that both commands return the 1-based index of the item found; it assumes that the array elements have no embedded newlines):
```bash
# Sample input array - adjust the number to experiment
array=( {1..300} )

# Look for the last item
itmToFind=${array[@]: -1}

# Bash `for` loop
i=1
time for a in "${array[@]}"; do
  [[ $a == "$itmToFind" ]] && { echo "$i"; break; }
  (( ++i ))
done

# Alternative approach: use external utility `grep`
IFS=$'\n' # make sure that "${array[*]}" expands to \n-separated elements
time grep -m1 -Fxn "$itmToFind" <<<"${array[*]}" | cut -d: -f1
```
`grep`'s `-m1` option makes it stop after the first match; `-Fxn` treats the search term as a literal string (`-F`), requires it to match a full line exactly (`-x`), and prefixes each match with its line number (`-n`). `cut -d: -f1` then extracts just the line number, which is the 1-based array index.
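If you need such lookups repeatedly, you could wrap the `grep` pipeline in a small function. The following is only a sketch: the name `indexOf` is arbitrary (not from the snippet above), and the same no-embedded-newlines caveat applies.

```bash
# Sketch of a reusable wrapper around the grep-based lookup.
# Usage: indexOf <item> <array elements...>
# Prints the 1-based index of the first match; exit status reflects grep's
# (0 = found, nonzero = not found).
indexOf() {
  local itmToFind=$1; shift   # item to look for; remaining args are the elements
  local IFS=$'\n'             # join the elements with newlines for grep
  grep -m1 -Fxn "$itmToFind" <<<"$*" | cut -d: -f1
  return "${PIPESTATUS[0]}"   # pass grep's exit status through
}

# Example call:
idx=$(indexOf "$itmToFind" "${array[@]}") && echo "found at index $idx"
```

Using `local IFS` keeps the newline separator confined to the function, so the caller's word splitting is unaffected.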
With the array size given above (300), the two commands perform about the same on my machine:
```
300

real    0m0.005s
user    0m0.004s
sys     0m0.000s

300

real    0m0.004s
user    0m0.002s
sys     0m0.002s
```
The specific threshold will vary (a rough way to gauge it on your own machine is sketched after this list), but, generally speaking:
- The higher the element count, the faster a solution based on an external utility such as `grep` will be.
- For low element counts, the absolute time spent will probably not matter much, even if the external-utility solution is comparatively slower.
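Here is a rough benchmark sketch for finding the crossover point on your own system; it simply reruns the two commands from above at increasing sizes and assumes `seq` is available (the sizes are arbitrary).

```bash
# Rough benchmark sketch: compare both approaches at increasing element counts.
for n in 100 1000 10000 100000; do
  array=( $(seq "$n") )        # build an n-element array: 1 .. n
  itmToFind=${array[@]: -1}    # look for the last item (worst case for the loop)
  echo "== $n elements =="

  # Pure-Bash `for` loop
  time {
    i=1
    for a in "${array[@]}"; do
      [[ $a == "$itmToFind" ]] && { echo "$i"; break; }
      (( ++i ))
    done
  }

  # grep-based lookup (run in a subshell so the IFS change doesn't leak)
  time ( IFS=$'\n'; grep -m1 -Fxn "$itmToFind" <<<"${array[*]}" | cut -d: -f1 )
done
```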
To show one extreme, here are the timings for a 1,000,000-element array:
```
1000000

real    0m13.861s
user    0m13.180s
sys     0m0.357s

1000000

real    0m1.520s
user    0m1.411s
sys     0m0.005s
```