$\begingroup$

I have a costly calculation method that behaves like this:

Input -> Output
0 -> 8
8 -> 9
9 -> 9.5
9.5 -> 9.6
9.6 -> 9.65
9.65 -> 9.654

I use the output as the next input, so I'm trying to find the point where the output exactly matches the input, or is only a tiny fraction smaller. The goal of the algorithm is to find the smallest X with X = Y, or X > Y where (X − Y) < 10^(-10).

However, since the calculation method is costly, I'm looking for an efficient way to reduce the number of times I call it, while still retaining an accuracy of, say, 10 fractional digits.

Some additional caveats: the output does not always increase as the input increases; it can hop around the limit from both sides. Also, increasing the input too drastically can change the value that the output approaches, which is not the desired outcome.
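For illustration (this is not from the question itself), the feedback loop described above might be sketched like this in Python, with `math.cos` as a cheap stand-in for the costly method:

```python
import math

def find_fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Naive fixed-point iteration: feed the output back in as the
    next input until input and output agree to within tol."""
    x = x0
    for _ in range(max_iter):
        y = f(x)                # the costly call
        if abs(y - x) < tol:    # input and output match to ~10 digits
            return y
        x = y
    raise RuntimeError("did not converge within max_iter iterations")

# math.cos is a cheap stand-in for the costly method
result = find_fixed_point(math.cos, 0.0)
```

This naive loop calls `f` once per step, and for slowly converging functions that can mean dozens or hundreds of calls; the answers below address how to reduce that count.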

Since I'm posting here for the first time, I'd be glad if someone tagged this question with the proper tags; honestly, I don't know which tags to use.

$\endgroup$
  • $\begingroup$ It sounds like you want a (bounded) fixed point for your function. There are a number of ways to encode "find the fixed point of this function", but the best of them depend on being able to use functions as values. I have a Scala example here; you could modify the check to use your error bounds. $\endgroup$ Commented Jan 25, 2020 at 15:03
  • $\begingroup$ What can be done will probably depend on the nature of the function. Is the function known, or is it a black box you can only evaluate? Do you know anything about the structure or properties of the function? In the general case there is nothing you can do, so you'll need to make some assumptions about the function to find a better solution. Please edit the question to tell us what is known or can be assumed about the function. Thank you! $\endgroup$ Commented Jan 25, 2020 at 17:25

2 Answers

$\begingroup$

This is a form of fixed-point iteration (you are computing $x_{n + 1} = f(x_n)$ and looking for $x^* = f(x^*)$). If you know that the convergence is linear (i.e., $e_{n + 1} \approx C e_n$, where $e_n$ is the $n$-th error), you can accelerate the convergence with Aitken's $\Delta^2$ method. Say you have $x_0$, $x_1$, and $x_2$. A (hopefully much better) approximation is given by:

$\begin{align*} x^+ &= x_2 - \frac{(\Delta x_1)^2}{\Delta^2 x_0} \\ &= x_2 - \frac{(x_2 - x_1)^2}{x_2 - 2 x_1 + x_0} \end{align*}$

Start with $x_0 = x^+$ and compute new values for $x_1, x_2$. Repeat until satisfactory precision.
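As a minimal sketch (not part of the original answer), the repeated-Aitken scheme above is known as Steffensen's method; here `math.cos` stands in for the costly function:

```python
import math

def aitken_fixed_point(f, x0, tol=1e-10, max_iter=100):
    """Fixed-point iteration accelerated with Aitken's Delta^2
    (Steffensen's method): each round spends two evaluations of f
    and extrapolates, typically giving quadratic convergence."""
    x = x0
    for _ in range(max_iter):
        x1 = f(x)
        x2 = f(x1)
        denom = x2 - 2.0 * x1 + x          # Delta^2 x_0
        if abs(denom) < 1e-30:             # Delta^2 ~ 0: x2 is already good
            return x2
        x_acc = x2 - (x2 - x1) ** 2 / denom  # Aitken extrapolation x^+
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc                          # restart with x_0 = x^+
    raise RuntimeError("did not converge within max_iter rounds")

# math.cos is a cheap stand-in for the costly method
root = aitken_fixed_point(math.cos, 0.0)   # ~ 0.7390851332151607
```

On this example the accelerated loop reaches 10-digit agreement in roughly ten evaluations of `f`, versus dozens for the plain iteration.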

$\endgroup$
$\begingroup$

You appear to be looking for a fixed point of a function $f(x)$. Let $g(x) = f(x) - x$. Then $x$ is a fixed point of $f$ if and only if it is a root of $g$, so you can use any standard root-finding algorithm on $g$. For instance, if you can compute the derivative of $f$, you can use Newton's method (note $g'(x) = f'(x) - 1$); if you cannot, you can use the secant method, bisection, or Brent's method. Those are just examples; there is a vast literature on root-finding.
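A minimal sketch of the secant variant (not part of the original answer), assuming only that you can evaluate $f$; `math.cos` stands in for the costly function:

```python
import math

def secant_fixed_point(f, x0, x1, tol=1e-10, max_iter=100):
    """Find a root of g(x) = f(x) - x with the secant method;
    a root of g is a fixed point of f. Needs no derivatives,
    only one evaluation of f per iteration."""
    g0 = f(x0) - x0
    g1 = f(x1) - x1
    for _ in range(max_iter):
        if abs(g1 - g0) < 1e-30:           # flat secant: cannot improve
            break
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)   # secant update
        if abs(x2 - x1) < tol:
            return x2
        x0, g0 = x1, g1
        x1, g1 = x2, f(x2) - x2
    return x1

# math.cos is a cheap stand-in for the costly method
root = secant_fixed_point(math.cos, 0.0, 1.0)  # ~ 0.7390851332151607
```

The secant method converges superlinearly, so for a well-behaved $g$ it reaches 10-digit accuracy in a handful of evaluations; Brent's method adds bracketing for robustness if you can find two inputs where $g$ changes sign.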

$\endgroup$
