In recent times, Deep Learning, a subset of Artificial Intelligence, has become the cutting-edge technology. When it comes to the ecosystem, TensorFlow is undoubtedly leading, with over 100K stars on GitHub.

I started exploring Machine Learning with text- and image-based recommendation engines, and came to understand how tech giants like Amazon and Flipkart show us recommendations when we search for a product.

Recently I started exploring TensorFlow.js, which does not require any complex environment setup, installation of third-party dependencies, and so on. It runs directly in the browser, which makes it portable. This is what attracted me to start digging into TensorFlow.js, since I can run my applications on a mobile device as well.

In this article, I am going to explain how we can calculate the coefficients of a polynomial function:

- Generating the data to plot the points in the coordinate system.
- Building the model.
- Stochastic Gradient Descent Optimizer.
- Training the model to calculate the coefficients.

A Tensor is the central unit of data in TensorFlow.js; it can be used to hold data with anywhere from one to n dimensions.

Suppose that our polynomial function is:

```
f(x) = ax³ + bx² + cx + d,  where a = -0.8, b = -0.2, c = 0.9, d = 0.5
```

We need to calculate the coefficients a, b, c & d that will match our data.

**Generating data:**

TensorFlow.js provides an API to create random data, which can be used to train our model. Using the API below, you can create 100 x-coordinates; the actual y-coordinates can then be calculated by substituting them into f(x).

```javascript
const xs = tf.randomUniform([100], -1, 1);
```

```javascript
const three = tf.scalar(3, 'int32');
const ys = a.mul(xs.pow(three))
  .add(b.mul(xs.square()))
  .add(c.mul(xs))
  .add(d)
  // Add random noise to the generated data
  // to make the problem a bit more interesting
  .add(tf.randomNormal([numPoints], 0, 0.04));
```

We have generated our data. Now let us build the model that has to be trained using it.

**Building the model:**

The advantage of using the tidy API is that once the function passed to it as an argument has executed, it disposes of all the intermediate tensors that were created inside it. This is the reason behind the general convention of writing all tensor operations inside tidy.

```javascript
function predict(x) {
  // y = a * x^3 + b * x^2 + c * x + d
  return tf.tidy(() => {
    return a.mul(x.pow(tf.scalar(3, 'int32')))
      .add(b.mul(x.square()))
      .add(c.mul(x))
      .add(d);
  });
}
```

**Loss Function:**

Before training our model, we need to define the loss function: a measure of how far the network's output is from the expected output. For this regression problem, the most suitable loss function is the mean squared error, which averages the squared differences between the predicted and expected values.

```
MSE (Mean Squared Error) = Σ (predictedValue - expectedValue)² / n
```

```javascript
function loss(prediction, expectation) {
  const error = prediction.sub(expectation).square().mean();
  return error;
}
```
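As a sanity check, the same formula can be written in plain JavaScript over ordinary arrays (the numbers below are made up purely for illustration):

```javascript
// Plain-JS mean squared error over two small arrays
function mse(predicted, expected) {
  const n = predicted.length;
  let sum = 0;
  for (let i = 0; i < n; i++) {
    const diff = predicted[i] - expected[i];
    sum += diff * diff; // squared error for one point
  }
  return sum / n; // average over all points
}

console.log(mse([0.5, 1.0], [0.0, 1.0])); // 0.125
```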

**Stochastic Gradient Descent Optimizer:**

Before we start training the model, let us assign random values to the coefficients. When training begins with the generated data, the model predicts y-values for each x-coordinate using these randomly initialized coefficients.

Now the model needs some sort of intelligence to decide whether to increase or decrease the value of each coefficient, based on the error returned by our loss function, until the error gets as close to zero as possible.

This sort of intelligence can be provided to the model through an optimizer. Though there are various flavours of optimizers available, the one we will use for this regression problem is stochastic gradient descent.

```javascript
const learningRate = 0.5;
const optimizer = tf.train.sgd(learningRate);
```

**Backpropagation:**

You might be wondering how this optimizer does its work internally.

The algorithm behind it is backpropagation. Let us treat the coefficients as weights.

The weights should be updated after comparing the predicted output with the expected output. The optimizer randomly selects a training point and calculates the gradient of the error with respect to the weights. The gradient is nothing but the slope of the error curve at that particular point.

If the calculated gradient is positive, an increase in the weights will lead to an increase in the error, so we need to decrease the weights to minimize the error.

If the calculated gradient is negative, an increase in the weights will lead to a decrease in the error, which is what we want, so we need to increase the weights.

Below is the update formula that is used to calculate the new weights.

```
newWeight = oldWeight - learningRate * ∇error
```
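To make the update rule concrete, here is a hand-rolled gradient-descent loop for a single weight in plain JavaScript (a toy model y = w·x with made-up numbers, not the article's cubic):

```javascript
// One gradient-descent update for a single weight w in y = w * x,
// with loss = (w*x - target)^2 at one data point.
// d(loss)/dw = 2 * (w*x - target) * x, so:
//   newWeight = oldWeight - learningRate * gradient
function updateWeight(w, x, target, learningRate) {
  const gradient = 2 * (w * x - target) * x;
  return w - learningRate * gradient;
}

let w = 0;               // initial (random) weight
const x = 1, target = 2; // a single training example
for (let i = 0; i < 20; i++) {
  w = updateWeight(w, x, target, 0.1);
}
console.log(w.toFixed(3)); // 1.977 — w has moved close to the ideal value 2
```

Each step moves w against the gradient, so the error shrinks toward zero; the optimizer does exactly this, but for all four coefficients at once.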

**Training the model:**

Now let us use our optimizer & loss functions that we have created to train our model.

```javascript
function train(xs, ys) {
  for (let iter = 0; iter < 100; iter++) {
    optimizer.minimize(() => {
      // Feed the examples into the model
      const pred = predict(xs);
      return loss(pred, ys);
    });
  }
}
```

The more iterations, the better the prediction, but training also takes longer. One more optimization we can make is to send the data in batches instead of sequentially, so that we can train the model more efficiently.
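A minimal sketch of the batching idea in plain JavaScript (assuming the data lives in an ordinary array before being turned into tensors; the batch size of 32 is an arbitrary choice):

```javascript
// Split a dataset into mini-batches of a given size
// (the last batch may be smaller than the rest)
function toBatches(data, batchSize) {
  const batches = [];
  for (let i = 0; i < data.length; i += batchSize) {
    batches.push(data.slice(i, i + batchSize));
  }
  return batches;
}

const xs = Array.from({ length: 100 }, (_, i) => i); // stand-in for 100 x-coordinates
const batches = toBatches(xs, 32);
console.log(batches.length);    // 4 batches: 32 + 32 + 32 + 4
console.log(batches[3].length); // 4
```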

You can use a data visualization framework such as **Vega** to display the data in a coordinate system and see how the fitted curve changes after each iteration of training.

I would like to thank the references below, which helped me gain knowledge of deep learning using TensorFlow.js.

**References:**

Thank you for reading. I hope this article helped you perform polynomial regression using TensorFlow.js.

If you found this article useful, please click the clap button a few times to help others find it and to show your support. If you are interested, you can follow me to get notified about my upcoming articles on TensorFlow.js.