Find primes using Wilson's Theorem

The following code is an instantiation of Wilson's theorem. It uses a simple factorial-and-division test to check for primality. My issue is with the performance of the code, and there are two prominent problems. First, the cumulative time (cumtime) spent taking the remainder far exceeds the rest of the code. Second, there is a noticeable slowdown when adding 1 to the numerator, which I believe comes from updating the reference to another object for very large integers.

Is there any possible way I could fix these issues, other than, of course, making this program faster by rewriting it in C++?

n = 100000


def absolute(x: int):
    if x == 2:
        a = Diction[x - 1] * x * (x + 1)
        Diction.pop(x - 1)
        Diction.update({(x + 1): a})
        return 2
    try:
        a = Diction[x - 1] * x
        return a
    finally:
        a *= (x + 1)
        Diction.pop(x - 1)
        Diction.update({(x + 1): a})

def main():
    for i in range(3, n + 1, 2):
        if (absolute(i - 1) + 1) % i == 0:
            prime.append(i)

if __name__ == '__main__':
    import time
    import cProfile
    start = time.perf_counter()

    Diction = {1: 1}
    prime = [2]
    cProfile.run('main()')
    end = time.perf_counter()
    print(len(prime), end-start)

I expect that some other possible data holder, which uses a more optimal format, would work better in this instance, possibly numpy or an implementation of numba.
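For comparison, the usual way to keep the `%` cost down with Wilson's theorem is to reduce the factorial modulo the candidate at every multiplication, so the arithmetic never leaves machine-sized integers. This trades the shared big-integer factorial above for an O(p) loop per candidate, but each step is cheap. A minimal sketch (the `is_prime_wilson` helper is my own name, not from the code above):

```python
def is_prime_wilson(p: int) -> bool:
    # Wilson's theorem: p > 1 is prime iff (p - 1)! ≡ -1 (mod p).
    if p < 2:
        return False
    fact = 1
    for k in range(2, p):
        # Reduce mod p at every step so fact stays below p and the
        # multiplication and remainder operate on small integers only.
        fact = fact * k % p
    return (fact + 1) % p == 0

primes = [p for p in range(2, 50) if is_prime_wilson(p)]
```

This avoids the ever-growing numerator entirely, at the price of recomputing the factorial residues per candidate rather than sharing one running product.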
