
I followed a tutorial on YouTube and I accidentally didn't add model.add(Dense(6, activation='relu')) in Keras, and I got 36% accuracy. After I added this line, it rose to 86%. Why did this happen?

This is the code:

from sklearn.model_selection import train_test_split
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

np.random.seed(3)
classifications = 3
dataset = np.loadtxt('wine.csv', delimiter=",")
X = dataset[:, 1:14]
Y = dataset[:, 0:1]
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.66,
                                                    random_state=5)
y_train = keras.utils.to_categorical(y_train - 1, classifications)
y_test = keras.utils.to_categorical(y_test - 1, classifications)
model = Sequential()
model.add(Dense(10, input_dim=13, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(6, activation='relu'))  # This is the line I missed
model.add(Dense(6, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(classifications, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=15, epochs=2500,
          validation_data=(x_test, y_test))
  • @ArtemisFowl No, this doesn't ask for a review but for an explanation of why something works the way it does. That's Stack Overflow territory, not Code Review. Commented Apr 14, 2018 at 15:38
  • What made you decide to add that code in the first place? What tipped you off? Commented Apr 14, 2018 at 15:39
  • I saw the tutorial again and found out this was missing Commented Apr 14, 2018 at 16:13

2 Answers


The number of layers is a hyperparameter, just like the learning rate or the number of neurons per layer. These hyperparameters play an important role in determining accuracy. So in your case,

model.add(Dense(6, activation='relu'))

is the layer that played the key role. We cannot easily understand what each of these layers is actually doing internally. The best we can do is hyperparameter tuning to find the combination of hyperparameters that works best.
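To make the tuning idea concrete, here is a minimal sketch of searching over hidden-layer stacks. It uses scikit-learn's MLPClassifier and its bundled wine data instead of Keras (an assumption: the question's wine.csv is the standard 178-row UCI wine dataset), but the loop structure is the same either way:

```python
# Minimal sketch: treat the hidden-layer stack as a hyperparameter
# and compare candidate architectures on held-out data.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)  # scaling helps relu nets converge
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=5)

# Two candidate stacks: with and without the Dense(6) layer from the question
candidates = [(10, 8, 4, 2), (10, 8, 6, 6, 4, 2)]

best_score, best_layers = 0.0, None
for layers in candidates:
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000,
                        random_state=3).fit(x_train, y_train)
    score = clf.score(x_test, y_test)
    if score > best_score:
        best_score, best_layers = score, layers

print(best_layers, best_score)
```

In practice you would search over more candidates (and other hyperparameters such as the learning rate) rather than just these two.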



In my opinion, it may be the ratio of your training set to your test set. With test_size=0.66, 66% of your data goes to the test set, so training this model is likely to underfit, and removing one Dense layer therefore produces a larger swing in accuracy. Set test_size=0.2 and check again how much the missing layer changes the accuracy.
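To see how much data the 0.66 split actually leaves for training, here is a small sketch using dummy data with the same shape as the wine dataset (an assumption: wine.csv is the 178-row, 13-feature UCI wine data):

```python
from sklearn.model_selection import train_test_split
import numpy as np

# Dummy arrays standing in for the wine data
# (assumption: 178 samples, 13 features, as in the UCI wine dataset)
X = np.zeros((178, 13))
y = np.zeros(178)

sizes = {}
for test_size in (0.66, 0.2):
    x_train, x_test, _, _ = train_test_split(X, y, test_size=test_size,
                                             random_state=5)
    sizes[test_size] = (len(x_train), len(x_test))

print(sizes)  # (train, test) counts for each split
```

With test_size=0.66 the model trains on well under half of the samples, while test_size=0.2 leaves the large majority for training, which is why the split ratio can dominate the effect of a single layer.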

2 Comments

I don't get it :((
What he meant was: the drastic change in accuracy from adding a single new layer is because of the ratio of train to test data. It's always preferred to have a higher percentage of training data than test data; as he mentioned, the preferred ratio is 0.8/0.2. This means you need to provide enough data for the model to learn from, and test on a smaller data set.
