I've just started using PyTorch and I am trying out a simple multi-layer perceptron. My ReLU activation function is the following:
def ReLU_activation_func(outputs):
    print(type(outputs))
    result = torch.where(outputs > 0, outputs, 0.)
    result = float(result)
    return result
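To make the intended behaviour concrete, here is a toy illustration (a small example tensor, not part of my actual code); it is basically the same thing the built-in ReLU does:

import torch

# toy input, just to show the behaviour I am after
x = torch.tensor([-1.5, 0.0, 2.0, -0.3])
print(torch.relu(x))  # tensor([0., 0., 2., 0.])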
So I am trying to keep the values that are greater than 0 and replace the values that are smaller than 0 with 0. This is the part of the main code where I use the ReLU function (and where I get the error):
def forward_pass(train_loader):
    for batch_idx, (image, label) in enumerate(train_loader):
        print(image.size())
        x = image.view(-1, 28 * 28)
        print(x.size())
        input_node_num = 28 * 28
        hidden_node_num = 100
        output_node_num = 10
        W_ih = torch.rand(input_node_num, hidden_node_num)
        W_ho = torch.rand(hidden_node_num, output_node_num)
        final_output_n = ReLU_activation_func(torch.matmul(x, W_ih))
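For reference, these are the shapes I expect at each step (the batch size of 64 is just an assumption for illustration; my loader may use a different one):

import torch

# fake batch shaped like my 28x28 images, batch size assumed to be 64
image = torch.rand(64, 1, 28, 28)
x = image.view(-1, 28 * 28)            # torch.Size([64, 784])
W_ih = torch.rand(28 * 28, 100)        # torch.Size([784, 100])
print(torch.matmul(x, W_ih).shape)     # torch.Size([64, 100])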
When I run the code, I get the following error:
RuntimeError
----> 1 forward_pass(train_loader)

in forward_pass(train_loader)
     14         W_ih = torch.rand(input_node_num, hidden_node_num)
     15         W_ho = torch.rand(hidden_node_num, output_node_num)
---> 16         final_output_n = ReLU_activation_func(torch.matmul(x, W_ih))

in ReLU_activation_func(outputs)
     10     print(type(outputs))
---> 11     result = torch.where(outputs > 0, outputs, 0.)
     12     result = float(result)
     13     return result

RuntimeError: expected scalar type float but found double
Any help?