
I am trying to perform binary classification using TensorFlow as part of an assignment for CS20SI. The task itself is pretty straightforward, but I am writing the code from scratch to learn the finer details, such as setting up the data pipeline and maintaining checkpoints. With my training and testing code I cannot get accuracy above 12%, whereas sklearn reaches 78% with the same model, so I assume the issue is in my TensorFlow code. The data is taken from here and the Jupyter notebook I work in can be seen here. I have posted the variable setup, training and testing code below. I cannot figure out why the loss always stays around 4000.

VARIABLE SETUP

# Step 2: create placeholders for input X (Features) and label Y (binary result)
X = tf.placeholder(tf.float32, shape=[None, 9], name="X")
Y = tf.placeholder(tf.float32, shape=[None,2], name="Y")

# Step 3: create weight and bias; weights drawn from a truncated normal, bias initialized to 0
w = tf.Variable(tf.truncated_normal([9, 2]), name="weights")
b = tf.Variable(tf.zeros([1,2]), name="bias")

# Step 4: logistic multinomial regression / softmax
score = tf.matmul(X, w) + b

# Step 5: define loss function
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=score, labels=Y, name="entropy")

regularizer = tf.nn.l2_loss(w)
loss = tf.reduce_mean(entropy + BETA * regularizer, name="loss")

# Step 6: using gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE).minimize(loss)

# Step 7: Prediction
Y_predicted = tf.nn.softmax(tf.matmul(X, w) + b)
correct_prediction = tf.equal(tf.argmax(Y_predicted,1), tf.argmax(Y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

TRAINING

import glob, os
for f in glob.glob("/tmp/model.ckpt*"):
    os.remove(f)

saver = tf.train.Saver([w,b])
EPOCHS = 1000

with tf.Session() as sess:
    # Step 7: initialize the necessary variables, in this case, w and b
    sess.run(tf.global_variables_initializer())

    # Step 8: train the model
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    n_batches = int(n_train_data/BATCH_SIZE)
    for epoch in tqdm(range(EPOCHS)): # run epochs
        avg_loss = 0

        for _ in range(n_batches):
            x_batch, y_batch = sess.run([data1_feature_batch, data1_label_batch])
            # Session runs train_op to minimize loss
            feed_dict={X: x_batch, Y:y_batch}
            _, loss_batch = sess.run([optimizer, loss], feed_dict=feed_dict)
            avg_loss += loss_batch/n_batches

        if (epoch+1) % 100 == 0:
            print "avg_loss",avg_loss

    coord.request_stop()
    coord.join(threads)

    # Step 9: saving the values of w and b
    print "weights",w.eval()
    print "bias",b.eval()

    # Save the current values of w and b to a checkpoint.
    save_path = saver.save(sess, "/tmp/logit_reg_tf_model.ckpt")

TESTING

# Step 10: predict
# test the model

saver = tf.train.import_meta_graph("/tmp/logit_reg_tf_model.ckpt.meta")
with tf.Session() as sess:
    # Initialize the necessary variables, in this case, w and b
    sess.run(tf.global_variables_initializer())
    # Restore the saved values of w and b
    saver.restore(sess, "/tmp/logit_reg_tf_model.ckpt")
    print "weights",w.eval()
    print "bias",b.eval()

    total_correct_preds = 0
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    try:
        for i in range(20):
            x_batch, y_batch = sess.run([test_data1_feature_batch, test_data1_label_batch])
            total_correct_preds += sess.run(accuracy, feed_dict={X: x_batch, Y:y_batch})

    except tf.errors.OutOfRangeError:
        print('Done testing ...')
    coord.request_stop()
    coord.join(threads)

    print 'Accuracy {0}'.format(total_correct_preds/n_test_data)

1 Answer

Normalize your inputs, for example with sklearn's StandardScaler(). The learning rate is also large; reduce it to, say, 0.01 and try again. The weight regularization is very strong as well, so remove it and add it back later if needed.
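For illustration, here is a minimal sketch of those changes. The feature arrays (train_features / test_features) are hypothetical stand-ins for however the raw data is actually loaded before it goes into the input pipeline:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical raw feature arrays; replace with the real loaded data.
train_features = np.random.rand(100, 9).astype(np.float32)
test_features = np.random.rand(20, 9).astype(np.float32)

# Fit the scaler on the training set only, then apply the same transform to the test set.
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
test_features = scaler.transform(test_features)

# Smaller learning rate, and no L2 penalty for now; reintroduce BETA later if needed.
LEARNING_RATE = 0.01
BETA = 0.0

The scaled arrays then get fed through the same placeholders as before, with the rest of the graph unchanged.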
