If I understand what you're asking, I think you're looking for some kind of adaptive learning rate like you might see applied to the gradient descent method in an ANN, as well as a way to stop if things are no longer improving.
The basic idea is to slowly decrease the amount you perturb your values when you don't see a change in your absolute error, while keeping the overall process stable. If you've reduced your learning rate and you're still not seeing any improvement, you're done (or at least at some kind of local minimum). This method can be a little slower, and there are different ways of calculating the helper variables I've used (e.g., sigErrChange; one possible approach is sketched after the example below), so you'll have to toy around with it a bit. There are some other caveats I can't think of off the top of my head, but hopefully this gets the general idea across.
e.g.,
lR = 1.0
updatedLR = False     # Have we updated the learning rate this iteration?
sigErrChange = True   # Has there been a significant improvement in the error?
                      # (Probably shouldn't just use a single iteration for this...)
error = C - T         # c, C, T and calcC/calcT are assumed to come from your existing code

while abs(error) > 50 and (sigErrChange or updatedLR):
    updatedLR = False
    # Are we adding or subtracting?
    if C > T:
        sign = 1.
    else:
        sign = -1.
    # Should we update the learning rate?
    if not sigErrChange:
        updatedLR = True
        lR = .95 * lR
    # Calculate our values
    prevError = error
    c = c + sign * lR * .001
    C = calcC(c, C, T)
    T = calcT(c, C, T)
    error = C - T
    # Did this iteration noticeably improve the absolute error?
    sigErrChange = abs(error) < abs(prevError)
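As one purely illustrative way to handle the "probably shouldn't just use a single iteration" caveat, you could keep a short history of recent errors and only call the change significant when the newest error beats the best of that window by some tolerance. The names and values here (errHistory, windowSize, minImprovement, significantErrChange) are placeholders I made up, not anything from your code, so tune them for your problem:

from collections import deque

windowSize = 5          # How many recent errors to compare against
minImprovement = 1.0    # How much smaller the error must get to "count"
errHistory = deque(maxlen=windowSize)

def significantErrChange(newError, errHistory, minImprovement):
    # True if the new error improves on the best recent error by at least minImprovement
    if not errHistory:
        return True  # Nothing to compare against yet
    return abs(newError) < min(abs(e) for e in errHistory) - minImprovement

# Inside the loop, after error = C - T:
#     sigErrChange = significantErrChange(error, errHistory, minImprovement)
#     errHistory.append(error)

Comparing against the best of a small window smooths out iteration-to-iteration noise, so a single flat step doesn't immediately trigger a learning-rate cut.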
error is NOT changing inside the while loop. Did you mean to indent error = C - T four more spaces to the right? C = c? Or does the casing matter? Never mind, I skipped over your comments in the code.