
I came across this quote on the Outerra blog:

using log2 instead of log: in shaders, log function is implemented using the log2 instruction, so it's better to use log2 directly, avoiding an extra multiply

The quote is from https://outerra.blogspot.com/2013/07/logarithmic-depth-buffer-optimizations.html

Can anyone confirm whether this statement still holds for WebGL (GLSL) and WebGPU (WGSL)?

  • The set of hardware instructions implemented in the shader processor is the same no matter which high-level language the programmer uses. Commented Oct 4, 2024 at 19:07
  • For as far as I know, the OpenGL (and OpenGL-related) specifications state what should happen, and not how it should happen. The implementation (the GPU, its driver or a software renderer) is free to implement log however it wishes. While it may be true that all/most GPUs implemented log using log2 in 2013 (!), there is no guarantee. In general, GLSL applications should be profiled on various targeted hardware to see what the actual performance is (and whether these sort of substitutes are actually faster). Commented Oct 5, 2024 at 7:44
  • Why do you care? Implementations are free to change at any time. It's doubtful that one multiply is going to be the thing slowing down your application. Commented Oct 7, 2024 at 0:54
