I would like to use a feedforward neural network to classify samples into 8 classes. Since I'm working with gene expression data, I was thinking of implementing something like PASNet. The authors use several ML techniques, but I'm only interested in the permanent pruning of weights between the input layer and the first hidden layer. To be more specific, I want to implement a feedforward neural network with 2 hidden layers, where specified weights between the input and first hidden layer are deleted according to a given binary (N, M) matrix, where N is the number of nodes in the first hidden layer and M is the number of input nodes (e.g. the cell matrix[10][3] tells us whether the weight between the third input node and the tenth node of the first hidden layer should exist (1) or not (0)). All other layers should be fully connected. I drew it in case I explained it poorly: My drawing
The authors of the paper published their code (they used PyTorch), but since it is mixed up with other ML techniques, I would like to implement it myself. It looks to me like they created a fully connected NN and multiplied (element-wise) the weights between the input and first hidden layer with the binary matrix before every feedforward and backpropagation iteration. If that is the way to go, I will look into it further myself, but maybe there is a package/framework that does the job faster and more easily. I usually work with sklearn and would like to use it if possible, but I couldn't find a way. Big plus if the suggested approach is easily extendable with methods for minimizing overfitting (e.g. regularization, dropout).
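For reference, here is a minimal sketch of the masking idea described above in PyTorch (not the authors' actual code; the layer sizes are made-up examples). Applying the mask to the weights inside `forward` also zeroes the gradients of the masked entries via the chain rule, so no separate step before backpropagation is needed:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Linear layer whose weight matrix is element-wise multiplied by a
    fixed binary mask of shape (out_features, in_features)."""
    def __init__(self, in_features, out_features, mask):
        super().__init__(in_features, out_features)
        # register_buffer stores the mask with the module (moves with
        # .to(device), is saved in state_dict) without making it trainable
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        # Masked weights contribute nothing to the output, and their
        # gradients are scaled by the same zero mask during backprop.
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

# Hypothetical sizes: 3 input genes, 10 first-hidden nodes, 8 classes.
# mask[j][i] == 1 keeps the weight between input i and hidden node j.
mask = torch.randint(0, 2, (10, 3))
model = nn.Sequential(
    MaskedLinear(3, 10, mask),  # sparsely connected first layer
    nn.ReLU(),
    nn.Dropout(p=0.5),          # dropout slots in naturally here
    nn.Linear(10, 16),          # remaining layers fully connected
    nn.ReLU(),
    nn.Linear(16, 8),           # 8 output classes
)
```

This also extends easily to regularization (e.g. weight decay via the optimizer's `weight_decay` argument). PyTorch additionally ships `torch.nn.utils.prune` with `prune.custom_from_mask`, which applies a user-supplied mask to an existing layer, if you prefer a built-in over a custom module.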
Thank you!