I am writing a Mandelbrot calculator with the help of gpu.js, and so far everything works fine. The only issue I am facing is that the GPU only computes 32-bit floats, or at least that is what the official docs tell me.
But the same calculation with Python and numba, which also runs on the same GPU, is much more precise when rendering the Mandelbrot fractal.
With Python I can zoom in to around 1e-15, whereas in JavaScript the image becomes blurry at around 1e-7.
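Those two cutoffs match the granularity of the two float formats: a 32-bit float resolves roughly 7 significant decimal digits (the spacing near 1 is 2^-23 ≈ 1.2e-7), while a 64-bit double resolves roughly 15-16 (2^-52 ≈ 2.2e-16). You can see the cutoff in any JS console with Math.fround, which rounds a double to single precision:

// A double still distinguishes 1 + 1e-8 from 1:
console.log(1 + 1e-8 > 1);               // true
// Rounded to 32-bit precision, the offset is lost:
console.log(Math.fround(1 + 1e-8) > 1);  // false
// Machine epsilon of a double, for comparison:
console.log(Number.EPSILON);             // 2.220446049250313e-16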
Python Kernel:
from numba import cuda

@cuda.jit(device=True)
def mandel(x, y, max_iters):
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        # Escape check: |z|^2 >= 4, i.e. |z| >= 2
        if (z.real * z.real + z.imag * z.imag) >= 4:
            return i
    return max_iters
Javascript Kernel:
const recalculateMandelbrot = gpu.createKernel(function(x_start, x_end, y_start, y_end, iters) {
    // Map the thread coordinates onto the requested window of the complex plane
    let c_re = x_start + (x_end - x_start) * this.thread.x / 1024;
    let c_im = y_start + (y_end - y_start) * this.thread.y / 1024;
    let z_re = 0, z_im = 0;
    let z_re_prev = 0;
    for (let i = 0; i < iters; i++) {
        // z = z^2 + c, expanded into real arithmetic:
        // (a + bi)^2 = (a^2 - b^2) + 2abi
        z_re_prev = z_re;
        z_re = z_re * z_re - z_im * z_im + c_re;
        z_im = z_re_prev * z_im + z_re_prev * z_im + c_im;
        if ((z_re * z_re + z_im * z_im) >= 4) {
            return i;
        }
    }
    return iters;
}).setOutput([1024, 1024]).setPrecision('single');
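For context, this is roughly how the kernel is invoked (the coordinates here are just example values for the classic full view):

const { GPU } = require('gpu.js');
const gpu = new GPU();

// ... kernel definition from above ...

// Render the full set with 256 iterations; the result is a
// 1024x1024 array of escape counts, indexed as counts[y][x].
const counts = recalculateMandelbrot(-2.0, 1.0, -1.5, 1.5, 256);
console.log(counts[512][512]);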
The algorithms are equivalent, except that in Python I can use the built-in complex type.
So I thought about using an arbitrary-precision decimal library such as big.js, so I could zoom in as far as I want, but I do not know how to add this to my gpu-kernel.
Update:
The reason Python is more precise is that its complex type consists of two 64-bit floats, and numba keeps that double precision on the GPU. Plain JavaScript numbers are 64-bit doubles too, but gpu.js compiles the kernel into a shader, where the values are limited to 32-bit floats.
So my question now focuses on: how do I add big.js to my gpu-kernel?
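For what it's worth, the iteration itself is easy to express with big.js on the CPU. Below is a minimal sketch (assuming big.js is installed via npm; the function name mandelBig is just an illustration). The difficulty is that gpu.js transpiles the kernel function into shader code and only supports a small numeric subset of JavaScript, so Big objects cannot be used inside the kernel as-is:

const Big = require('big.js');

// One escape-time computation for the point (x, y), done entirely in
// arbitrary-precision decimal arithmetic. Runs on the CPU only.
function mandelBig(x, y, maxIters) {
    const cRe = new Big(x);
    const cIm = new Big(y);
    let zRe = new Big(0);
    let zIm = new Big(0);
    const four = new Big(4);
    for (let i = 0; i < maxIters; i++) {
        // z = z^2 + c in real arithmetic, as in the kernel above
        const nextRe = zRe.times(zRe).minus(zIm.times(zIm)).plus(cRe);
        zIm = zRe.times(zIm).times(2).plus(cIm);
        zRe = nextRe;
        if (zRe.times(zRe).plus(zIm.times(zIm)).gte(four)) {
            return i;
        }
    }
    return maxIters;
}

console.log(mandelBig('-0.7453', '0.1127', 500));

Since plus, minus and times are exact in big.js (only div and sqrt round), the precision is limited only by the number of digits passed in, but this of course runs per pixel on the CPU rather than on the GPU.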