
I would like to implement texture filtering (minification and magnification) myself in a GLSL shader (I want to use image load/store rather than a sampler, and I also want to handle undefined pixels in a special way), and I'm looking for an article or similar that discusses the filtering process.

I can recall how to implement texture filtering from a ray tracer I once wrote: I think you need to take the UV gradients at each pixel into consideration and, depending on their length, somehow scale your sampling raster. But there I used a 49-sample circular kernel, and I think that's too heavy.

I think I can get the UV gradients using dFdx() and dFdy(), but I'd like to know what kind of kernel OpenGL uses for the filtering, so I can implement it the same way in my GLSL shader.

Also: how do I distinguish whether a minification or a magnification filter needs to be applied? And is the filtering process different in each case?

Last but not least: if it's minification filtering, how do I choose the appropriate mipmap level based on the UV gradients?
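For concreteness, this is the kind of computation I have in mind — a sketch only, assuming the texture's dimensions are passed in as `texSize`:

```glsl
// Sketch: level-of-detail selection from the UV gradients, roughly
// what the hardware does per the OpenGL spec's minification rules.
// 'texSize' (texture dimensions in texels) is an assumed input.
float computeLod(vec2 uv, vec2 texSize)
{
    // Scale the screen-space UV derivatives into texel space.
    vec2 dx = dFdx(uv) * texSize;
    vec2 dy = dFdy(uv) * texSize;
    // rho = the larger of the two gradient lengths.
    float rho = max(length(dx), length(dy));
    // lambda = log2(rho): <= 0 means magnification,
    // > 0 means minification at roughly mip level lambda.
    return log2(rho);
}
```

If I understand correctly, the sign of this value would also answer the min-vs-mag question: a non-positive LOD means the mag filter applies, a positive one means minification.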

3 Comments

  • Out of curiosity, why are you doing that? Commented Mar 21, 2012 at 18:34
  • Typically, texture filtering and sampling are performed in hardware. This will definitely be hard, not to mention performance-intensive, if implemented in a shader. Commented Mar 21, 2012 at 20:07
  • It's graphics research, so we are simulating features that may be supported by hardware in the future. I do it for two reasons: first, I need to use image load/store (I actually don't know whether I can use the same texture as a sampler that I also use as an imageStore target?). Second, I want to handle pixels with the value NaN in a special way (regular filtering will just make the whole pixel NaN if one of the texels is NaN). Commented Mar 22, 2012 at 10:28

1 Answer

There's an article over at CodeProject which implements different filters (among them bilinear) in GLSL: link


5 Comments

This is nice already, thanks for that. However, it only discusses rectangular upscaling (which is rather trivial), so it doesn't consider UV gradients and mipmapping...
Mipmaps are normally calculated automatically with a function similar to 0.5*log2(fwidth()) (it's a bit more complicated, there is for example a configurable bias). If you draw a smaller quad, the hardware will normally automatically choose a smaller mipmap, and there is no "special filtering" that you will otherwise need to do. Mipmaps are usually just box-filtered, too. While there exist some sophisticated minification filters, people who are much better at this than you and me seem to agree that there is hardly any visible improvement (but quite possibly ringing).
Hey, that's good info. So to do it properly in the shader, I will get the mipmap levels floor(0.5*log2(fwidth())) and ceil(0.5*log2(fwidth())), fetch the 4 closest texels in each one and interpolate them bilinearly, and then blend between the two values according to 0.5*log2(fwidth()) - floor(0.5*log2(fwidth()))?
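That scheme can be sketched in GLSL like this — using texelFetch for brevity (with image load/store, each mip level would instead be bound as a separate image and read with imageLoad):

```glsl
// Sketch: manual bilinear fetch from one mip level of 'tex'.
// texelFetch reads raw, unfiltered texels.
vec4 bilinear(sampler2D tex, vec2 uv, int level)
{
    vec2 size = vec2(textureSize(tex, level));
    // Texel-space position; the -0.5 centers samples on texel centers.
    vec2 pos = uv * size - 0.5;
    ivec2 i  = ivec2(floor(pos));
    vec2 f   = fract(pos);
    vec4 t00 = texelFetch(tex, i,               level);
    vec4 t10 = texelFetch(tex, i + ivec2(1, 0), level);
    vec4 t01 = texelFetch(tex, i + ivec2(0, 1), level);
    vec4 t11 = texelFetch(tex, i + ivec2(1, 1), level);
    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}

// Sketch: manual LINEAR_MIPMAP_LINEAR — bilinear in two adjacent
// mip levels, then linear blend by the fractional LOD.
// (Clamping lod / l0+1 to the last mip level is omitted for brevity.)
vec4 trilinear(sampler2D tex, vec2 uv, float lod)
{
    float l0 = floor(lod);
    vec4 a = bilinear(tex, uv, int(l0));
    vec4 b = bilinear(tex, uv, int(l0) + 1);
    return mix(a, b, lod - l0);
}
```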
That's more or less what the hardware would do if you filter with LINEAR_MIP_LINEAR, give or take a constant or two. But it's really somewhat pointless, because this already works automatically. The only place where filtering in the shader may look better is really in magnification. Though to me, the shader linear mag filter looks 100% identical, and although cubic is somewhat nicer, it still isn't so much better than the built-into-hardware filter as to justify the huge overhead.
It's not pointless, because I'm using images, not samplers (and images don't support filtering), and I also handle NaN values in a specific way.
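The NaN handling I have in mind, roughly: a bilinear fetch that skips NaN texels by renormalizing the weights over the valid samples. This is only a sketch of one possible policy — the behavior for an all-NaN neighborhood is an arbitrary choice:

```glsl
// Sketch: bilinear filtering that ignores NaN texels by
// renormalizing the weights over the non-NaN samples.
// isnan() requires GLSL 1.30 or later.
vec4 bilinearSkipNaN(sampler2D tex, vec2 uv, int level)
{
    vec2 size = vec2(textureSize(tex, level));
    vec2 pos  = uv * size - 0.5;
    ivec2 i   = ivec2(floor(pos));
    vec2 f    = fract(pos);
    // Standard bilinear weights for the four neighbors.
    float w[4] = float[4]((1.0 - f.x) * (1.0 - f.y),
                          f.x         * (1.0 - f.y),
                          (1.0 - f.x) * f.y,
                          f.x         * f.y);
    ivec2 o[4] = ivec2[4](ivec2(0, 0), ivec2(1, 0),
                          ivec2(0, 1), ivec2(1, 1));
    vec4  sum  = vec4(0.0);
    float wsum = 0.0;
    for (int k = 0; k < 4; ++k) {
        vec4 t = texelFetch(tex, i + o[k], level);
        if (!any(isnan(t))) {   // skip undefined texels
            sum  += w[k] * t;
            wsum += w[k];
        }
    }
    // All four texels NaN: propagate NaN (arbitrary policy).
    return wsum > 0.0 ? sum / wsum : vec4(0.0 / 0.0);
}
```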
