I am struggling with a challenge in TensorFlow / Keras; it would be great if someone could help me.
I have built a neural net in Keras with input_dim=3, a hidden layer of 10 neurons, and 1 output.
The input is a 3D vector of floats; the output should be a single float value.
My problem is that I don't know how the floats should be formatted (>1? from 0 to 1? etc.) and which loss function could work for this task (nothing binary, I guess). I want the neural net to compute a single float value from the 3D vector, but it never works out because my outputs are always nearly the same.
If I have forgotten something, please let me know; if you have any ideas, that would be great!
Greetings
Edit: I'm aware of the fact that I need an introduction to the whole topic of machine learning, which I am working through right now. In the meantime I would like to know how to use Keras to verify/practically apply machine learning. I am sorry for asking 'stupid' questions, but I hope that maybe someone can help me.
Input: I think the input might be 'wrongly' formatted (it's not normalized, etc.), but I transformed the values I get into the interval mentioned below.
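For reference, a minimal sketch of the kind of min-max scaling meant here, done with plain NumPy (the raw values are made up for illustration; in practice the array would come from the .csv file):

```python
import numpy as np

# Hypothetical unscaled inputs; stand-in for the real data.
X_raw = np.array([[10.0, 200.0, 3.0],
                  [20.0, 400.0, 6.0],
                  [15.0, 300.0, 4.5]])

# Min-max scaling: map each column independently into [0, 1].
col_min = X_raw.min(axis=0)
col_max = X_raw.max(axis=0)
X_scaled = (X_raw - col_min) / (col_max - col_min)

print(X_scaled)  # every entry now lies in [0, 1]
```

The same transformation must be remembered (col_min, col_max) and reapplied to any new inputs at prediction time.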
This is my simple model:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(10, input_dim=3, kernel_initializer='normal', activation='sigmoid'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
model.fit(X_Train, Y_Train, epochs=100, batch_size=32, verbose=1)
X_Train and Y_Train are values extracted from a .csv file. For example, each row is [a, b, c, d], where 0 < a, b, c < 1 and -1 < d < 1 (d is the output).
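A minimal sketch of how such a .csv could be split into X_Train and Y_Train with NumPy (the in-memory text is a stand-in for the real file, whose name is not given above):

```python
import io
import numpy as np

# Stand-in for the real file; replace with open('your_file.csv').
csv_text = io.StringIO("0.1,0.2,0.3,-0.5\n"
                       "0.4,0.5,0.6,0.7\n")

data = np.loadtxt(csv_text, delimiter=',')  # each row: a, b, c, d
X_Train = data[:, :3]   # first three columns: the 3D input vector
Y_Train = data[:, 3]    # last column: the target float d
```

Note that with targets d in (-1, 1), the target range and the output activation's range need to match for the network to be able to fit them.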
Output:
Epoch 500/500
32/32 [==============================] - 0s - loss: 0.0813 - acc: 0.0000e+00
Example (randomly generated input values); all outputs are nearly the same, around 0.43:
[ 0.97650245 0.30383579 0.74829968] [[ 0.43473071]]
[ 0.94985165 0.75347051 0.72609185] [[ 0.43473399]]
[ 0.18072594 0.18540003 0.20763266] [[ 0.43947196]]