I was trying to figure out how the perspective projection matrix works, and somehow I ended up down the rabbit hole of depth buffer precision.
There seems to be some interest in a depth buffer whose precision is evenly distributed across the visible range, so people started using 32-bit floating-point depth buffers over [0, 1] with the mapping reversed (reversed-Z), where 0 is the far plane and 1 is the near plane, as described in this article.
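To see why the reversed float mapping helps, here is a small sketch (my own illustration, not from the article; `f32_next` is a hypothetical helper name) that computes the spacing between adjacent representable float32 values. Float32 spacing shrinks toward 0, which is exactly where reversed-Z puts the far plane, counteracting the 1/z compression of perspective projection:

```python
import struct

def f32_next(x):
    """Next representable float32 strictly above x, via bit manipulation.
    (Hypothetical helper for illustration; assumes x is positive and finite.)"""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits + 1))[0]

# Spacing of float32 near 1.0 (the near plane in reversed-Z):
print(f32_next(1.0) - 1.0)    # 2^-23, about 1.19e-7

# Spacing of float32 near 0.5 is already twice as fine:
print(f32_next(0.5) - 0.5)    # 2^-24, about 5.96e-8

# Near 0.0 (the far plane in reversed-Z) the spacing is finer still,
# so depth resolution is densest exactly where 1/z spreads values thin.
print(f32_next(1e-6) - 1e-6)  # far smaller than 2^-24
```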
Some older sources also mention the W-buffer, which spreads the visible range evenly across a 16- or 24-bit integer buffer, but they also say it is inefficient and lacks hardware support.
But with modern OpenGL and hardware, what prevents me from using a 32-bit integer depth buffer and writing to gl_FragDepth some value that, once divided by W, yields a linear distribution of the visible range across the integer depth buffer? And if that works, why is it worse than using float?
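For concreteness, the linear mapping I have in mind looks roughly like this CPU-side sketch (my own illustration; `NEAR`, `FAR`, and `linear_depth_u32` are made-up names, and the shader-side details of interpolation and the cost of writing gl_FragDepth are deliberately left out):

```python
# Hypothetical linear mapping of eye-space depth onto a 32-bit unsigned
# integer depth buffer. Every depth code covers the same slice of world
# space, unlike the hyperbolic z/w distribution of the standard pipeline.

NEAR, FAR = 0.1, 1000.0       # example clip planes (assumed values)
U32_MAX = 2**32 - 1

def linear_depth_u32(z_eye):
    """Map eye-space depth in [NEAR, FAR] linearly to [0, 2^32 - 1]."""
    t = (z_eye - NEAR) / (FAR - NEAR)   # linear in eye space, in [0, 1]
    return round(t * U32_MAX)

# Uniform resolution everywhere: world-space size of one depth code.
step = (FAR - NEAR) / U32_MAX           # ~2.3e-7 units per code
```

With these example planes, every one of the ~4.3 billion codes covers the same ~0.23 micrometres of depth, which is the "evenly distributed precision" the question is after.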