I'm creating a class that emulates numeric types so that I can use basic arithmetic operators such as + and - on its instances. However, I want to handle each operation differently depending on the types of the operands. For instance, if I define a class foo_c with an __add__() method, I want to be able to handle addition where one operand is of type foo_c and the other is an int, a float, a numpy.ndarray, or another foo_c.
The solution I want to implement is a collection of 'adder' functions, switched between based on the operand's type. The different functions are stored in a dictionary, like so:
class foo_c:
    def __init__(self):
        ...
        self.addOps = { int:   self.addScalar,
                        float: self.addScalar,
                        foo_c: self.addFoo }
        ...

    def addScalar(self, sclr):
        ...

    def addFoo(self, foo):
        ...

    def __add__(self, operand):
        return self.addOps[type(operand)](operand)
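For concreteness, here is a minimal runnable sketch of what I'm aiming for. The val attribute and the handler bodies are placeholders of my own, just so the dispatch can be demonstrated end to end:

```python
class foo_c:
    """Sketch of the dispatch-dictionary idea; `val` is a placeholder attribute."""

    def __init__(self, val=0):
        self.val = val
        # Map each supported operand type to the method that handles it.
        self.addOps = {int:   self.addScalar,
                       float: self.addScalar,
                       foo_c: self.addFoo}

    def addScalar(self, sclr):
        # Handle foo_c + int and foo_c + float.
        return foo_c(self.val + sclr)

    def addFoo(self, foo):
        # Handle foo_c + foo_c.
        return foo_c(self.val + foo.val)

    def __add__(self, operand):
        # Look up the handler by the operand's type and call it.
        return self.addOps[type(operand)](operand)
```

With this sketch, foo_c(1) + 2 dispatches to addScalar and foo_c(1) + foo_c(4) dispatches to addFoo, which is exactly the behavior I want.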
The problem I'm having is that I can't get the type() function to return a value that matches the dictionary's keys. After creating an instance of the class with foo = foo_c(), the built-in function type(foo) returns instance instead of foo_c. I assume this is because foo is an instance of the class rather than the class itself. I've also tried foo.__class__, but that returns __main__.foo_c, which isn't right either...
I don't want to have to use lines of isinstance() checks, so is there a way to get type() to return the class as desired?