I am trying to use an already trained neural network but keep running into this error. My input layer has a size of 70 numeric elements, and accordingly the input data xData read from "last.csv" has the same size:
[0.36 0.44 0.7 0.82 0.72 0.06 0.08 0.32 0.84 0.62 0.08 0.42 0.12 0.08
0.6 0.48 0.52 0.08 0.28 0.2 0.18 0.4 0.68 0.98 0.32 0.06 0.2 0.04
0.76 0.62 0.48 0.8 0.38 0.2 0.14 0.5 0.06 0.64 0.2 0.86 0.06 0.02
0.98 0.7 0.12 0.78 0.24 0.18 0.08 0.04 0.18 0.72 0.94 0.46 0.18 0.04
0.48 0.7 0.56 0.96 0.5 0.16 0.08 0.12 0.9 0.94 0.76 0.58 0.04 0.06]
My program itself looks like this:
from keras.models import load_model
from numpy import loadtxt
import numpy as np

# load the 70 input values and the trained model
xData = loadtxt('../input/last.csv')
print(xData)
model = load_model("model.dat")

# run the prediction on the loaded values
prediction = model.predict(np.array(xData))
print(prediction)
So... any idea why it thinks the input size is 1 and not 70?
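In case it helps, this is what I would try next, though I am only guessing that the problem is a missing batch dimension and that the model's first layer was built with input_shape=(70,):

from keras.models import load_model
from numpy import loadtxt

xData = loadtxt('../input/last.csv')   # shape (70,)
model = load_model("model.dat")

# my guess: predict() wants a batch of samples, i.e. shape (1, 70) instead of (70,)
xBatch = xData.reshape(1, -1)
prediction = model.predict(xBatch)
print(prediction)

Is that reshape the right way to feed a single 70-value sample, or is the real problem somewhere else?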