I was trying to figure out how the perspective projection matrix works, and somehow I ended up down the rabbit hole of depth buffer precision.

There seems to be some interest in a depth buffer where precision is evenly distributed across the visible range, so people started using 32-bit floating-point depth buffers over [0, 1] with the range reversed, where 0 is the farthest and 1 is the closest, as described in this article.
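The reason the reversed range helps can be seen from how floating-point spacing behaves: floats are much denser near 0 than near 1, which compensates for the hyperbolic depth mapping piling most values toward the near plane. A minimal sketch (using Python's `math.ulp` on 64-bit floats for illustration; 32-bit floats behave analogously, just with coarser spacing):

```python
import math

# Spacing between adjacent representable floats (one ULP) at various
# depth values. The spacing shrinks dramatically as the value approaches
# zero, which is why mapping the far plane to 0 recovers precision there.
for d in (1.0, 0.5, 0.01, 1e-4, 1e-6):
    print(f"depth {d:>8}: float spacing {math.ulp(d):.3e}")
```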

Some old sources also mention the W-buffer, which spreads the visible range evenly across a 16- or 24-bit integer buffer, but they also say it is inefficient and lacks hardware support.

But with modern OpenGL and hardware, what prevents me from writing to gl_FragDepth some value that, when divided by W, yields a linear distribution of the visible range across a 32-bit integer depth buffer? And if that works, why is it worse than using a float?
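To make the contrast concrete, here is a sketch of the two mappings in question: the standard hyperbolic depth the projection matrix produces (linear in 1/z), versus the linear remapping one could write to gl_FragDepth instead. The `near`/`far` values are illustrative, not from the question:

```python
# Illustrative clip planes (assumed values, not from the question).
near, far = 0.1, 1000.0

def hyperbolic_depth(z_eye):
    # What the projection matrix + perspective divide store:
    # linear in 1/z, mapping [near, far] -> [0, 1].
    return (1.0 / z_eye - 1.0 / near) / (1.0 / far - 1.0 / near)

def linear_depth(z_eye):
    # A value one could write to gl_FragDepth instead:
    # linear in z, mapping [near, far] -> [0, 1].
    return (z_eye - near) / (far - near)

for z in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"z={z:7.1f}  hyperbolic={hyperbolic_depth(z):.6f}  "
          f"linear={linear_depth(z):.6f}")
```

Note how the hyperbolic mapping already reaches ~0.9 at z = 1.0, spending almost the entire [0, 1] range on the first metre of a kilometre-deep frustum; the linear mapping spreads it evenly, which is the distribution the question is after.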

  • Writing depth from your shader can prevent some GPU optimizations that allow the depth test to run before the fragment shader. This can have a significant performance impact. Commented Apr 21, 2020 at 21:03
  • Compared to a classical integer depth buffer, I understand, but what about compared to a floating-point depth buffer? Commented Apr 21, 2020 at 21:07
  • Doesn't matter: it's the act of writing depth that causes it, not the format. Commented Apr 22, 2020 at 1:07
