
Is there any way to make it work?

func=i_want_it_to_cache_everything(lambda a,b:a+b)

And it has to be done in one line...

Update2:

I figured out the solution (thanks to everyone who replied!). But... there is an interesting phenomenon: caching slows the program down?

import functools,datetime,timeit
@functools.lru_cache(maxsize=50000)
def euclidean_distance3(p1,p2):
    return (p1[0]-p2[0])**2+(p1[1]-p2[1])**2+(p1[2]-p2[2])**2+(p1[3]-p2[3])**2
euclidean_distance=(functools.lru_cache(maxsize=50000)(lambda p1,p2: (p1[0]-p2[0])**2+(p1[1]-p2[1])**2+(p1[2]-p2[2])**2+(p1[3]-p2[3])**2))
euclidean_distance2=lambda p1,p2: (p1[0]-p2[0])**2+(p1[1]-p2[1])**2+(p1[2]-p2[2])**2+(p1[3]-p2[3])**2
print(datetime.datetime.now())
def test1():
    for z in range(50):
        for i in range(200):
            for j in range(200):
                euclidean_distance((i,i,i,i),(j,j,j,j))
def test2():
    for z in range(50):
        for i in range(200):
            for j in range(200):
                euclidean_distance2((i,i,i,i),(j,j,j,j))
def test3():
    for z in range(50):
        for i in range(200):
            for j in range(200):
                euclidean_distance3((i,i,i,i),(j,j,j,j))
t1=timeit.Timer(test1)
print(t1.timeit(1))
t2=timeit.Timer(test2)
print(t2.timeit(1))
t3=timeit.Timer(test3)
print(t3.timeit(1))

print(euclidean_distance.cache_info())
print(euclidean_distance3.cache_info())

output:

9.989034592910151
4.936129879313011
10.528836308312947
CacheInfo(hits=1960000, misses=40000, maxsize=50000, currsize=40000)
CacheInfo(hits=1960000, misses=40000, maxsize=50000, currsize=40000)
  • Why does it have to be done in one line? Commented Apr 22, 2012 at 17:41
  • Doesn't seem like a good way of timing, you should use the timeit module. Commented Apr 22, 2012 at 18:04
  • Caching is not free: there is real code involved in managing the cache, checking for cache hits, running your code if there is no cached result, and so on. In this case you are doing a simple calculation which can execute faster than the cache implementation. If your function were more expensive, caching might be a win. Commented Apr 22, 2012 at 18:06
  • @WichertAkkerman Still, the cached function is outperformed by the plain function, even though there are ~2M hits and only 40k misses. See my update. Commented Apr 22, 2012 at 18:14
  • Putting items into a dictionary involves a bit of mathematics (hash functions) as well. Another factor is the extra function calls: a function call is not a cheap operation, and adding caching adds at least one extra call for every operation. In general you only want this kind of caching for operations that take a really long time or that require accessing an external system such as a SQL server. Commented Apr 23, 2012 at 8:02
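The trade-off described in the comments can be demonstrated directly. The sketch below uses an invented `slow_square` busy-work function as a stand-in for an expensive computation; once each call costs far more than a dictionary lookup, `lru_cache` wins despite its bookkeeping overhead:

```python
import functools
import timeit

def slow_square(n):
    # Hypothetical stand-in for an expensive computation: the busy-work
    # loop makes each call cost far more than a cache lookup.
    total = 0
    for _ in range(5_000):
        total += n * n
    return n * n

cached_square = functools.lru_cache(maxsize=None)(slow_square)

# Call each version repeatedly with a small set of arguments, so the
# cached version hits the cache almost every time.
uncached = timeit.timeit(lambda: [slow_square(i % 10) for i in range(100)], number=50)
cached = timeit.timeit(lambda: [cached_square(i % 10) for i in range(100)], number=50)
print(f"uncached: {uncached:.3f}s  cached: {cached:.3f}s")
```

With a function this expensive the cached version should come out well ahead, which is the opposite of the result in the question, where the wrapped function was cheaper than the cache machinery itself.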

1 Answer

>>> from functools import lru_cache
>>> x = lru_cache()(lambda a,b:a+b)
>>> x(2,3)
5
>>> x(4,2)
6
>>> x(2,3)
5
>>> x.cache_info()
CacheInfo(hits=1, misses=2, maxsize=100, currsize=2)
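This works because a decorator is just a function that takes a function and returns a new one, so `lru_cache()(lambda ...)` applies it inline. A minimal sketch showing the two equivalent spellings (`add`/`add2` are illustrative names):

```python
from functools import lru_cache

# One-line form: call the decorator factory's result on the lambda directly.
add = lru_cache(maxsize=None)(lambda a, b: a + b)

# Equivalent multi-line decorator form.
@lru_cache(maxsize=None)
def add2(a, b):
    return a + b

print(add(2, 3), add2(2, 3))  # → 5 5
```

`@lru_cache(maxsize=None)` is literally syntactic sugar for the first form, so both functions get the same `cache_info()`/`cache_clear()` attributes.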

1 Comment

I actually just worked out the same solution myself. But it looks like caching actually slows down the program. See edit.
