
I'm new to machine learning and I have been learning the gradient descent algorithm. I believe this code uses a simultaneous update, even though it looks like a sequential update. Since the partial derivatives are calculated before updating either w or b, that is, from the original w and b, the update applied to each of w and b is based on the original values. Am I wrong?


dj_dw = ((w*x[i] + b - y[i]) * x[i]) / m   # partial derivative w.r.t. w, computed from the current w and b
dj_db = (w*x[i] + b - y[i]) / m            # partial derivative w.r.t. b, also from the current w and b
w = w - a*dj_dw                            # w is updated only after both derivatives are computed
b = b - a*dj_db                            # b uses dj_db, which was computed before w changed

The language is Python 3.

x and y are the training set.

w and b are the parameters the algorithm is updating. I'm using the gradient descent algorithm for linear regression.

dj_dw is the partial derivative of the mean squared error cost function with respect to w, and dj_db is the same with respect to b.
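For reference, here is a minimal sketch of one full-batch gradient descent step for this setup, assuming x and y are Python lists of length m and a is the learning rate (names taken from the snippet above); the gradient_step helper is only illustrative, not the original code. Both partial derivatives are accumulated from the current w and b before either parameter changes, which is what makes the update simultaneous.

def gradient_step(x, y, w, b, a):
    """One batch gradient descent step for f(x) = w*x + b with an MSE-style cost (sketch)."""
    m = len(x)
    dj_dw = 0.0
    dj_db = 0.0
    # Accumulate both partial derivatives using the *current* (original) w and b.
    for i in range(m):
        err = w * x[i] + b - y[i]
        dj_dw += err * x[i] / m
        dj_db += err / m
    # Only now are the parameters changed, so neither update sees the other's new value.
    w = w - a * dj_dw
    b = b - a * dj_db
    return w, b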

Apologies for any errors, I'm new.

I tried cross-checking with Gemini and ChatGPT, and they said it was sequential, hence the confusion.

  • I don't think this is answerable with so little code; you did not even mention what programming language this is. Commented Jun 5, 2024 at 15:49
  • Still, it is too vague. I tend to say it is parallel, but it all depends on what the variables mean and how they were computed. Commented Jun 5, 2024 at 18:32

1 Answer


It's a simultaneous update: dj_dw and dj_db are both computed from the original w and b before either parameter is changed, so neither update uses the other's new value. Both w and b then hold their updated values by the time the next iteration differentiates again.
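To illustrate the difference, here is a minimal sketch, where compute_dj_dw and compute_dj_db are hypothetical stand-ins for the gradient code in the question; in a sequential update, dj_db would be recomputed from the already-updated w.

# Simultaneous update: both gradients use the original w and b.
dj_dw = compute_dj_dw(w, b)   # hypothetical helper
dj_db = compute_dj_db(w, b)   # hypothetical helper
w = w - a * dj_dw
b = b - a * dj_db

# Sequential update: dj_db is recomputed after w has already changed.
dj_dw = compute_dj_dw(w, b)
w = w - a * dj_dw
dj_db = compute_dj_db(w, b)   # uses the new w, not the original one
b = b - a * dj_db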
