
I know that Cesium offers several different interpolation methods, including linear (or bilinear in 2D), Hermite, and Lagrange. One can use these methods to resample sets of points and/or create curves that approximate sampled points, etc.
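For reference, Lagrange resampling is just the classic Lagrange polynomial fit. A toy sketch of the underlying formula in plain JavaScript (this is the textbook math, not Cesium's actual LagrangePolynomialApproximation code) looks like this:

```javascript
// Lagrange polynomial interpolation through sampled points (xs, ys).
// The polynomial through n points has degree n - 1, so it exactly
// reproduces any polynomial of degree <= n - 1.
function lagrangeInterpolate(xs, ys, x) {
  let result = 0;
  for (let i = 0; i < xs.length; i++) {
    let basis = 1;
    for (let j = 0; j < xs.length; j++) {
      if (j !== i) {
        basis *= (x - xs[j]) / (xs[i] - xs[j]);
      }
    }
    result += ys[i] * basis;
  }
  return result;
}

// Samples of y = x^2 are reproduced exactly between the sample points.
const xs = [0, 1, 2, 3];
const ys = xs.map((x) => x * x);
console.log(lagrangeInterpolate(xs, ys, 1.5)); // 2.25
```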

However, the question I have is what method Cesium uses internally when it is rendering a 3D scene and the user is zooming/panning all over the place. This is not a case where the programmer has access to the raster, etc., so one can't just get in the middle of it all and call the interpolation functions directly. Cesium is doing its own thing as quickly as it can in response to user control.

My hunch is that the default is bilinear, but I don't know that nor can I find any documentation that explicitly says what is used. Further, is there a way I can force Cesium to use a specific resampling method during these activities, such as Lagrange resampling? That, in fact, is what I need to do: force Cesium to employ Lagrange resampling during scene rendering. Any suggestions would be appreciated.

EDIT: Here's a more detailed description of the problem…

Suppose I use Cesium to set up a 3-D model of the Earth including a greyscale image chip at its proper location on the model Earth's surface, and then I display the results in a Cesium window. If the view point is far enough from the Earth's surface, then the number of pixels displayed in the image chip part of the window will be fewer than the actual number of pixels that are available in the image chip source. Some downsampling will occur. Likewise, if the user zooms in repeatedly, there will come a point at which there are more pixels displayed across the image chip than the actual number of pixels in the image chip source. Some upsampling will occur.

In general, every time Cesium draws a frame that includes a pixel data source there is resampling happening. It could be nearest neighbor (doubt it), linear (probably), cubic, Lagrange, Hermite, or any one of a number of different resampling techniques.

At my company, we are using Cesium as part of a large government program which requires the use of Lagrange resampling to ensure image quality. (The NGA has deemed that best for its programs and analyst tools, and they have made it a compliance requirement. So we have no choice.)
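To illustrate what I mean by resampling, here's a toy sketch (not Cesium code) of the two simplest per-pixel filters, applied to a single row of grayscale pixel values when a sample coordinate falls between source pixels:

```javascript
// Nearest neighbor: snap to the closest source pixel.
function sampleNearest(row, x) {
  return row[Math.min(row.length - 1, Math.round(x))];
}

// Linear: blend the two neighboring source pixels by distance.
function sampleLinear(row, x) {
  const i = Math.floor(x);
  const t = x - i;
  const a = row[Math.min(row.length - 1, i)];
  const b = row[Math.min(row.length - 1, i + 1)];
  return a + (b - a) * t;
}

const row = [0, 100, 200];
console.log(sampleNearest(row, 0.5)); // 100
console.log(sampleLinear(row, 0.5)); // 50
```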

So here's the problem: while the user is interacting with the model, for instance zooming in, the drawing process is not in the programmer's control. The resampling is either happening in the Cesium layer itself (hopefully) or in even still lower layers (for instance, the WebGL functions that Cesium may be relying on). So I have no clue which technique is used for this resampling. Worse, if that technique is not Lagrange, then I don't have any clue how to change it.

So the question(s) would be this: is Cesium doing the resampling explicitly? If so, then what technique is it using? If not, then what drawing packages and functions are Cesium relying on to render an image file onto the map? (I can try to dig down and determine what techniques those layers may be using, and/or have available.)

1 Answer


UPDATE: Wow, my original answer was a total misunderstanding of your question, so I've rewritten it from scratch.

With the new edits, it's clear your question is about how images are resampled for the screen while rendering. These images are texture maps in WebGL, and the process of getting them to the screen quickly is implemented in hardware, on the graphics card itself. Software on the CPU is not performant enough to map individual pixels to the screen one at a time, which is why we have hardware-accelerated 3D cards.

Now for the bad news: This hardware supports nearest neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier interpolation, as it needs to be done in a fraction of a second to keep frame rate as high as possible.

Mipmapping is described well by @gman in his article WebGL 3D Textures. It's a long article but search for the word "mipmap" and skip ahead to his description of that. Basically a single image is reduced into smaller images prior to rendering, so an appropriately-sized starting point can be chosen at render time. But there will always be a final mapping to the screen, and as you can see, the choices are NEAREST or LINEAR.
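As an illustration of that reduction step, here's a sketch of building a mip chain for a single row of grayscale texels. (Real mipmapping averages 2×2 blocks in 2D and happens in the driver/GPU; this is only the idea, and it assumes a power-of-two size.)

```javascript
// Each mip level averages pairs of texels from the level above,
// halving the size, until a single texel remains.
function buildMipChain(level0) {
  const mips = [level0];
  let current = level0;
  while (current.length > 1) {
    const next = [];
    for (let i = 0; i < current.length; i += 2) {
      next.push((current[i] + current[i + 1]) / 2);
    }
    current = next;
    mips.push(current);
  }
  return mips;
}

console.log(buildMipChain([10, 30, 50, 70]));
// [ [ 10, 30, 50, 70 ], [ 20, 60 ], [ 40 ] ]
```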

Quoting @gman's article here:

You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes

  • NEAREST = choose 1 pixel from the biggest mip
  • LINEAR = choose 4 pixels from the biggest mip and blend them
  • NEAREST_MIPMAP_NEAREST = choose the best mip, then pick one pixel from that mip
  • LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
  • NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
  • LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them

I guess the best news I can give you is that Cesium uses the best of those, LINEAR_MIPMAP_LINEAR, to do its own rendering. If you have a strict requirement for more time-consuming imagery interpolation, that means you have a requirement to not use a realtime 3D hardware-accelerated graphics card, as there is no way to do Lagrange image interpolation during a realtime render.
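To see why LINEAR_MIPMAP_LINEAR is the best of the six, here's a sketch of its selection-and-blend logic, assuming an idealized 1D mip chain where level 0 is full resolution and each level halves it. (Real GPUs do this per fragment in hardware; this is only the idea.)

```javascript
function lerp(a, b, t) { return a + (b - a) * t; }

// LINEAR within one mip level: blend the two texels around x.
function sampleRowLinear(row, x) {
  if (row.length === 1) return row[0];
  const i = Math.max(0, Math.min(row.length - 2, Math.floor(x)));
  return lerp(row[i], row[i + 1], x - i);
}

// LINEAR_MIPMAP_LINEAR: pick the two mip levels closest to the
// level of detail, sample each with LINEAR, then blend the results.
function trilinearSample(mips, u, texelsPerPixel) {
  // Level of detail: log2 of how many source texels land in one screen pixel.
  const lod = Math.max(0, Math.min(mips.length - 1, Math.log2(texelsPerPixel)));
  const lo = Math.floor(lod);
  const hi = Math.min(mips.length - 1, lo + 1);
  const a = sampleRowLinear(mips[lo], u * (mips[lo].length - 1));
  const b = sampleRowLinear(mips[hi], u * (mips[hi].length - 1));
  return lerp(a, b, lod - lo);
}
```

Because the blend weight varies continuously with the zoom level, this mode avoids the visible "pop" you get when the renderer switches abruptly between mip levels.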


5 Comments

No, I'm not concerned about object motion or anything dynamic like that. The question is much more fundamental about how a solitary frame is rendered. I will edit the question, because this comment field is too short...
Thanks for the information, emackey. It's what I feared but kind of expected.
It seems unfathomable to me that the NGA would rule out all realtime rendering apps on these grounds. I can imagine strict requirements on source imagery being fed into rendering systems, but texture rasterization is just a fact of modern graphics hardware. Are you sure the requirements you read specifically include 3D rasterization as within scope? If so, you're stuck using software-only rendering, which is completely absurd.
I over generalized the "requirement" a bit when I said "The NGA has deemed that best for its programs and analyst tools". This requirement is for a specific proposal with specific sets of tools and use cases, none of which include "realtime" analysis. Envision image analysts scrutinizing the content of images, poring over pixels. Separate from those tools, we use Cesium to display multiple data sources together, for added context. I figure that as long as we offer Lagrange for the 2D analysis tools then it probably won't matter what interpolation we use for the 3D Cesium multi-int stuff.
That's reassuring. Thanks!
