I am trying to use a global variable in a program, but I am getting unexpected results. Example code is shown below; I tried to reduce the problem to its simplest components.
File a.py:
class Foo:
    def __init__(self, s1: str, s2: str):
        self.s1 = s1
        self.s2 = s2

    def __str__(self):
        return f'Foo(s1="{self.s1}", s2="{self.s2}")'

foo = Foo('first', 'second')

def set_foo(new_foo: Foo):
    global foo
    foo = new_foo
    print('Setting foo', foo)
File b.py:
from a import foo, Foo, set_foo

def tfunc():
    print('Old foo', foo)
    new_foo = Foo('third', 'fourth')
    set_foo(new_foo)
    print('New foo', foo)
    print(foo == new_foo)

tfunc()
When I run the program I get these results:
>python b.py
Old foo Foo(s1="first", s2="second")
Setting foo Foo(s1="third", s2="fourth")
New foo Foo(s1="first", s2="second")
False
Why is foo not getting set to the new value in the function tfunc?
The rebinding does happen, but in the wrong namespace: from a import foo creates a separate name foo in b.py bound to the same object, so when set_foo rebinds foo inside module a, the name foo in b.py still refers to the original object. That is why "New foo" prints the old value. (If you really want a shared global, b.py would have to do import a and read a.foo each time.)

A cleaner option: why not have tfunc() explicitly take a foo: Foo parameter, return the new Foo object, and eliminate the global variable entirely? This gets rid of a whole class of mysterious bugs from side effects you weren't expecting, resolves issues where tests fail because of a hidden dependency, and improves future-you's understanding of the code when you come back to look at it later.
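A minimal sketch of what that refactor could look like for b.py (the call-site lines at the bottom stand in for however you actually obtain the starting Foo):

from a import Foo

def tfunc(foo: Foo) -> Foo:
    # Work with the Foo that was passed in instead of a module-level global
    print('Old foo', foo)
    new_foo = Foo('third', 'fourth')
    print('New foo', new_foo)
    return new_foo

foo = Foo('first', 'second')
foo = tfunc(foo)   # rebind explicitly at the call site
print(foo)         # Foo(s1="third", s2="fourth")

Now the caller decides what to do with the returned object, and nothing outside tfunc changes behind your back.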