I was reading the following presentation:
http://www.idt.mdh.se/kurser/DVA201/slides/parallel-4up.pdf
and the author claims that the map function lends itself very well to parallelism (specifically, he supports this claim on page 3, i.e. slides 9 and 10).
If one were given the problem of incrementing each value of a list by 1, I can see how looping through the list imperatively would require an index value to change and hence cause potential race condition problems. But I'm curious how the map function makes it easier for a programmer to write correct parallel code.
Is it due to the way map is recursively defined, so that each function call can be handed off to a different thread?
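For reference, here is the standard recursive definition of map I have in mind (written in Haskell; I assume the slides use an equivalent definition):

```haskell
import Prelude hiding (map)

-- map applies f to the head of the list and recurses on the tail.
map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs

main :: IO ()
main = print (map (+ 1) [1, 2, 3])  -- [2,3,4]
```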
I'm hoping someone can provide some specifics, thanks!
Each application of `f` to an element of the input list is independent of any other application to any other element, so they can all be done independently of each other, i.e. in parallel. A hypothetical `par_map` would allocate the storage to back the resulting list, and spark execution of a new thread for each element `e` in the list, providing it a reference to the place that will need to be updated with the result of `f e`. When there are no more active threads, the `map` has finished. Of course you could make each thread work on a block of, say, 1000 `e`s, too.
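To make that concrete, here is a rough sketch of such a `par_map` in Haskell, forking one thread per element and giving each thread the slot it should fill in. The name `parMapNaive` and the use of `forkIO`/`MVar` are my own illustration, not something from the slides; each `MVar` plays the role of "the place which will need to be updated with the result of `f e`":

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Naive parallel map: fork one thread per element e, have it
-- evaluate f e and write the result into that element's slot,
-- then collect the slots in order. Compile with -threaded to
-- actually run the threads in parallel.
parMapNaive :: (a -> b) -> [a] -> IO [b]
parMapNaive f xs = do
  slots <- mapM (\e -> do
                   slot <- newEmptyMVar
                   _ <- forkIO (putMVar slot $! f e)  -- $! forces f e in the worker thread
                   return slot)
                xs
  mapM takeMVar slots  -- blocks until every worker has written its result

main :: IO ()
main = parMapNaive (+ 1) [1 .. 10 :: Int] >>= print  -- [2,3,...,11]
```

In real GHC code you would normally reach for the `parallel` package instead: `Control.Parallel.Strategies` provides `parMap`, which creates lightweight sparks rather than forking a thread per element, and `parListChunk` gives you the "block of 1000 `e`s" behaviour directly.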