I am currently learning to build a machine learning model in Node.js, and I chose the TensorFlow.js API for this. The goal of the model is to take an input of 14 values and return a single number computed from those 14 values. (I cannot describe the context any further because I am in a traineeship, and I don't know whether I am allowed to talk about it.) But the model always predicts wrong values, and I don't know why. I tried different loss and optimizer functions, different layer configurations, different layer activations... but the model always gives me a float value below 1.

I tried replacing the input/output values with 0.3; the prediction then returned a value between 0.1 and 0.3 (tested 3 times), and the loss value decreased during training, so that seemed to work better.

I also tried increasing the training epochs to 1000, with no results :/

First of all, I created a function to build the model. The model takes an input of 14 values into a first dense layer of 13 units, followed by 2 hidden layers of 5 units and an output layer with a single unit. (All layers are dense and use the 'sigmoid' activation.)

// (tf is the TensorFlow.js module, e.g. require('@tensorflow/tfjs-node') in Node.js)
const get_model = async () => {
    const model = tf.sequential();

    // first dense layer: 13 units, expecting 14 input values
    const input_layer = tf.layers.dense({
        units: 13,
        inputShape: [14],
        activation: 'sigmoid',
    });
    model.add(input_layer);

    // two hidden layers of 5 units each
    let left = 3;
    while (left >= 2) {
        const step_layer = tf.layers.dense({
            units: 5,
            activation: 'sigmoid',
        });
        model.add(step_layer);

        left--;
    }

    // output layer with a single unit
    const output = tf.layers.dense({
        units: 1,
        activation: 'sigmoid',
    });
    model.add(output);

    model.compile({
        optimizer: tf.train.sgd(0.01),
        loss: tf.losses.absoluteDifference,
        metrics: 'accuracy',
    });

    return model;
};
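
For reference, the resulting architecture can be inspected with model.summary() (just a sanity-check sketch, not part of the training code):

get_model().then((model) => {
    // prints each layer's output shape and parameter count to the console
    model.summary();
});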

To test the model during training, I always give a list of 14 numbers (all values are 100), and I always give 100 as the expected output.

const get_output = () => {
    return 100;
}
const get_input = () => {
    return [
        100,
        100,
        100,
        100,
        100,
        100,
        100,
        100,
        100,
        100,
        100,
        100,
        100,
        100,
    ];
}

I have two helper functions to convert the values into tensors.

const get_input_tensor = (value) => {
    // value is already an array of 14 numbers; wrap it into a [1, 14] tensor
    return tf.tensor([value], [1, 14]);
}
const get_output_tensor = (value) => {
    // single expected value as a [1, 1] tensor
    return tf.tensor(
        [Math.floor(value)],
        [1, 1]
    );
}

Then I get the model, train it, and try a prediction.

(async () => {
    const model = await get_model();

    // train 21 times on the same single example, 10 epochs each time
    let left = 20;
    while (left >= 0) {
        const input = get_input();
        const output = get_output();

        await model.fit(get_input_tensor(input), get_output_tensor(output), {
            batchSize: 30,
            epochs: 10,
            shuffle: true,
        });

        left--;
    }

    // predict on the same input that was used for training
    const input = get_input();
    const output = model.predict(get_input_tensor(input));

    output.print();
})();

During training, the loss value stays close to 100. Since the loss is the absolute difference between the target (100) and the prediction, a loss of 99.14 means the model is always returning a value close to 1 (roughly 100 − 99.14 ≈ 0.86).

This is my console during the training:

Epoch 8 / 10
eta=0.0 ====================================================================> 
11ms 10943us/step - loss=99.14 
Epoch 9 / 10
eta=0.0 ====================================================================> 
10ms 10351us/step - loss=99.14 
Epoch 10 / 10
eta=0.0 ====================================================================> 
12ms 12482us/step - loss=99.14

Then when I try the prediction, the model returns a value close to 1.

This is the printed prediction tensor:

Tensor
     [[0.8586583],]

Can you help me? I don't know what is going wrong. Is it possible to get a prediction greater than 1?

2 Answers

Here is a simple model that will predict 100 from an input of 14 values. It is common to scale the input values to be between 0 and 1, because it improves the convergence of gradient descent algorithms (a small scaling sketch follows the snippet below).

As for the reason why the model is predicting wrong values, there are general answers here.

(async () => {
  const model = tf.sequential({
    // a single dense unit; with kernelInitializer 'ones' and an all-ones input,
    // the initial output is already 14, so training only has to push it up to 100
    layers: [tf.layers.dense({units: 1, inputShape: [14], activation: 'relu', kernelInitializer: 'ones'})]
  });
  model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
  await model.fit(tf.ones([1, 14]), tf.tensor([100], [1, 1]), {epochs: 100});
  model.predict(tf.ones([1, 14])).print();
})();
<html>
  <head>
    <!-- Load TensorFlow.js -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest"> </script>
  </head>

  <body>
  </body>
</html>
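
As an illustration of that scaling, here is a minimal min-max scaling sketch (the MIN and MAX bounds are made-up placeholders, not values from the question); the target value would normally be scaled the same way and un-scaled after prediction:

const MIN = 0;    // assumed minimum of the raw feature values
const MAX = 200;  // assumed maximum of the raw feature values

const scale_input = (values) =>
  tf.tensor(values, [1, values.length])
    .sub(MIN)
    .div(MAX - MIN); // every value now lies in [0, 1]

// example: the question's all-100 input becomes all 0.5
scale_input(new Array(14).fill(100)).print();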


I finally solved the problem!

My layers used the 'sigmoid' activation. Sigmoid is a function whose output is always between 0 and 1, which is why I kept getting values in that range. (The 'relu' activation is not really what I expected either.)

I set the activation to 'linear', but that activation made the loss become NaN during training, so I also switched the optimizer to adam, and this solved the problem :)
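
For reference, a minimal sketch of that change (not my exact code, just the idea: the same layer sizes as in the question, 'linear' activations so the output is no longer squashed into (0, 1), and adam instead of plain SGD):

const get_fixed_model = () => {
    const model = tf.sequential();

    // 'linear' activations: the output can now reach values such as 100
    model.add(tf.layers.dense({units: 13, inputShape: [14], activation: 'linear'}));
    model.add(tf.layers.dense({units: 5, activation: 'linear'}));
    model.add(tf.layers.dense({units: 5, activation: 'linear'}));
    model.add(tf.layers.dense({units: 1, activation: 'linear'}));

    model.compile({
        // adam keeps training stable where plain SGD produced NaN losses here
        optimizer: tf.train.adam(),
        loss: tf.losses.absoluteDifference,
    });

    return model;
};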
