You do have to scan the whole array, and it will take O(n) time to do so. A simple proof: in an array like [2, 1, 2, 1, 2, 1, ..., 2, 1, 2], half the values are local maxima, so any algorithm that reports them all must perform at least n / 2 operations of some sort.
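To make that concrete, here's a quick sketch (my own, not from the question) that builds such an alternating array and counts its peaks:

```javascript
// An alternating array [2, 1, 2, 1, ..., 2] of odd length n has a
// peak at every even index, so roughly half the values are peaks.
const alternating = (n) => Array .from ({length: n}, (_, i) => i % 2 === 0 ? 2 : 1)

const countPeaks = (ns) => ns .filter ((x, i) =>
  (i === 0 || ns [i - 1] < x) && (i === ns .length - 1 || ns [i + 1] < x)
) .length

console .log (countPeaks (alternating (11))) //=> 6, i.e. Math .ceil (11 / 2)
```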
Here's a fairly simple version that reports all peaks, with separate handling for the first and the last indices, and common handling for the remainder. It assumes you only want true peaks and not plateaus. If you want plateau indices, you'll have to replace some < / > signs with <= / >= ones.
const localMaxima = (ns) => [
  ... (ns [0] > ns [1] ? [0] : []),
  ... Array .from ({length: ns .length - 2}, (_, i) => i + 1)
       .filter ((i) => ns [i - 1] < ns [i] && ns [i + 1] < ns [i]),
  ... (ns [ns .length - 1] > ns [ns .length - 2] ? [ns .length - 1] : [])
]
console .log (localMaxima ([1, 2, 1, 3, 5, 6, 4])) //=> [1, 5]
//                             ^           ^
console .log (localMaxima ([8, 5, 7, 5, 3, 0, 9])) //=> [0, 2, 6]
//                          ^     ^           ^
console .log (localMaxima ([2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2])) //=> [0, 2, 4, 6, 8, 10]
//                          ^     ^     ^     ^     ^     ^
Note that the first example returns [1, 5]. Was the [2, 5] in the question simply a typo?
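For reference, a plateau-tolerant variant might look like the following sketch. This is my own variation, not code from the question: it simply relaxes the neighbor comparisons to >= / <= as described above, so every index of a flat run that isn't below a neighbor gets reported (the exact semantics depend on which comparisons you choose to relax):

```javascript
const localMaximaWithPlateaus = (ns) => [
  ... (ns [0] >= ns [1] ? [0] : []),
  ... Array .from ({length: ns .length - 2}, (_, i) => i + 1)
       .filter ((i) => ns [i - 1] <= ns [i] && ns [i + 1] <= ns [i]),
  ... (ns [ns .length - 1] >= ns [ns .length - 2] ? [ns .length - 1] : [])
]

console .log (localMaximaWithPlateaus ([1, 3, 3, 2, 5])) //=> [1, 2, 4]
```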
Update
A comment asked to make this "more readable".
There is absolutely one thing I should have done: I inlined a range function here, and there was no reason for that. It's much more readable with a separate function. (range is a simple function that returns an inclusive integer range. For instance, range (3, 12) //=> [3, 4, 5, 6, 7, 8, 9, 10, 11, 12].)
Using that instead leads to what to me is a quite readable version:
const range = (lo, hi) => Array .from ({length: hi - lo + 1}, (_, i) => lo + i)
const localMaxima = (ns) => [
  ... (ns [0] > ns [1] ? [0] : []),
  ... range (1, ns .length - 2) .filter ((i) => ns [i - 1] < ns [i] && ns [i + 1] < ns [i]),
  ... (ns [ns .length - 1] > ns [ns .length - 2] ? [ns .length - 1] : [])
]
However, I don't think that's what's being asked. I think the sort of code requested might look something like this:
const localMaxima = (ns) => {
  const includeStart = ns [0] > ns [1]
  const includeEnd = ns [ns .length - 1] > ns [ns .length - 2]
  const intermediates = range (1, ns .length - 2)
    .filter ((i) => ns [i - 1] < ns [i] && ns [i + 1] < ns [i])
  if (includeStart) intermediates .unshift (0)
  if (includeEnd) intermediates .push (ns .length - 1)
  return intermediates
}
But I don't like that at all. I try as best I can to work with immutable data, and the unshift and push calls are horrible to my eyes.
Probably the closest I could come to that and still look myself in the mirror is
const localMaxima = (ns) => {
  const includeStart = ns [0] > ns [1]
  const includeEnd = ns [ns .length - 1] > ns [ns .length - 2]
  const intermediates = range (1, ns .length - 2)
    .filter ((i) => ns [i - 1] < ns [i] && ns [i + 1] < ns [i])
  const beginning = includeStart ? [0] : []
  const ending = includeEnd ? [ns .length - 1] : []
  return beginning .concat (intermediates) .concat (ending)
}
I'm also not much of a fan of these one-off variable assignments, but in languages like JS they are sometimes hard to avoid. At least here they're still const. I would avoid replacing this:
const beginning = includeStart ? [0] : []
with this alternative
let beginning
if (includeStart) {
  beginning = [0]
} else {
  beginning = []
}
so if you simply don't like conditional operators (ternaries), we're probably never going to see eye-to-eye!
I would very much choose my post-range refactoring over these versions any day. But readability is in the eyes of the beholder, and perhaps this last one would make some happy.
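For completeness, here is that post-range refactoring assembled into one self-contained snippet (just the pieces from above, gathered so it runs on its own), checked against the examples from the top of the answer:

```javascript
// Inclusive integer range: range (3, 5) //=> [3, 4, 5]
const range = (lo, hi) => Array .from ({length: hi - lo + 1}, (_, i) => lo + i)

const localMaxima = (ns) => [
  ... (ns [0] > ns [1] ? [0] : []),
  ... range (1, ns .length - 2) .filter ((i) => ns [i - 1] < ns [i] && ns [i + 1] < ns [i]),
  ... (ns [ns .length - 1] > ns [ns .length - 2] ? [ns .length - 1] : [])
]

console .log (localMaxima ([1, 2, 1, 3, 5, 6, 4])) //=> [1, 5]
console .log (localMaxima ([8, 5, 7, 5, 3, 0, 9])) //=> [0, 2, 6]
console .log (localMaxima ([2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2])) //=> [0, 2, 4, 6, 8, 10]
```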