
R textbooks continue to promote the use of lapply instead of loops. This is easy even for functions with arguments like

lapply(somelist, f, a=1, b=2) 

but what if the arguments change depending on the list element? Assume my somelist consists of:

somelist$USA
somelist$Europe
somelist$Switzerland

plus there is anotherlist with the same regions, and I want to use lapply with these changing arguments. This would be useful if f were, for example, a ratio calculation:

lapply(somelist, f, a= somelist$USA, b=anotherlist$USA) 

Is there a way, other than a loop, to run through these regions efficiently?

EDIT: my problem seems to be that I tried to use a previously written function without indexes...

ratio <- function(a, b) {
    z <- (b - a) / a
    return(z)
}

which led to

lapply(data,ratio,names(data))

which does not work. Maybe others can also learn from this mistake.
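To make the failure concrete (using hypothetical data, since the original data isn't shown): lapply passes the entire names(data) vector as b to every call, so ratio() ends up doing arithmetic on a character vector and errors out.

```r
# Hypothetical data illustrating the mistake: lapply() passes the whole
# names() vector, a character vector, as `b` to every single call.
data <- list(USA = 1:3, Europe = 4:6)
ratio <- function(a, b) (b - a) / a

res <- try(lapply(data, ratio, names(data)), silent = TRUE)
inherits(res, "try-error")  # TRUE: "non-numeric argument to binary operator"
```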

2 Answers


Apply over list names rather than list elements. E.g.:

somelist <- list('USA'=rnorm(10), 'Europe'=rnorm(10), 'Switzerland'=rnorm(10))
anotherlist <- list('USA'=5, 'Europe'=10, 'Switzerland'=4)
lapply(names(somelist), function(i) somelist[[i]] / anotherlist[[i]])
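For the specific case of pairing up two parallel lists, Map() is an alternative worth knowing: it zips the lists element-wise and preserves the names, assuming both lists share the same region names and order.

```r
somelist <- list(USA = rnorm(10), Europe = rnorm(10), Switzerland = rnorm(10))
anotherlist <- list(USA = 5, Europe = 10, Switzerland = 4)

# Map() applies `/` to corresponding elements of the two lists and
# returns a named list, one ratio vector per region.
ratios <- Map(`/`, somelist, anotherlist)
</imports>
```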

EDIT:

You also ask whether there is a way "except for a loop" to do this "efficiently". Note that the apply will not necessarily be more efficient: performance is usually dominated by how fast your inner function is. If you want to operate on each element of a list, you will need a loop, whether it is hidden inside an apply() call or not. Check this question: Is R's apply family more than syntactic sugar?

The example I gave above can be re-written as a for loop, and you can make some naive benchmarks:

fun1 <- function(){
    lapply(names(somelist), function(i) somelist[[i]] / anotherlist[[i]])
}
fun2 <- function(){
    for (i in names(somelist)){
        somelist[[i]] <- somelist[[i]] / anotherlist[[i]] 
    }
    return(somelist)
}
library(rbenchmark)

benchmark(fun1(), fun2(),
          columns=c("test", "replications",
          "elapsed", "relative"),
          order="relative", replications=10000)

The output of the benchmark on my machine was this:

    test replications elapsed relative
1 fun1()        10000   0.145 1.000000
2 fun2()        10000   0.148 1.020690

Although this is not a real-world application and the functions are not realistic tasks, you can see that the difference in computation time is negligible.


1 Comment

Yeah, it seemed like the most straightforward way to fix the problem. I added some discussion of for vs apply because he asked for that too...

You just need to work out what to lapply() over. Here the names() of the lists suffices, after we rewrite f() to take different arguments:

somelist <- list(USA = 1:10, Europe = 21:30,
                 Switzerland = seq(1, 5, length = 10))
anotherlist <- list(USA = list(a = 1, b = 2), Europe = list(a = 2, b = 4),
                    Switzerland = list(a = 0.5, b = 1))

f <- function(x, some, other) {
    (some[[x]] + other[[x]][["a"]]) * other[[x]][["b"]]
}

lapply(names(somelist), f, some = somelist, other = anotherlist)

Giving:

R> lapply(names(somelist), f, some = somelist, other = anotherlist)
[[1]]
 [1]  4  6  8 10 12 14 16 18 20 22

[[2]]
 [1]  92  96 100 104 108 112 116 120 124 128

[[3]]
 [1] 1.500000 1.944444 2.388889 2.833333 3.277778 3.722222 4.166667 4.611111
 [9] 5.055556 5.500000
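One small caveat: lapply() over names() returns an unnamed list. If you want the region names kept on the result, sapply() with simplify = FALSE does this, since its USE.NAMES default attaches names when iterating over a character vector:

```r
somelist <- list(USA = 1:10, Europe = 21:30,
                 Switzerland = seq(1, 5, length = 10))
anotherlist <- list(USA = list(a = 1, b = 2), Europe = list(a = 2, b = 4),
                    Switzerland = list(a = 0.5, b = 1))
f <- function(x, some, other) {
    (some[[x]] + other[[x]][["a"]]) * other[[x]][["b"]]
}

# Same computation as above, but the result is a named list.
res <- sapply(names(somelist), f, some = somelist, other = anotherlist,
              simplify = FALSE)
names(res)  # "USA" "Europe" "Switzerland"
```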

1 Comment

Too bad I can't hand out another +1 here. I had another problem and tried to ask on SO, but didn't because the suggestion box pointed me to this. Your answers helped again! Great.
