This is working for non-VR, for those who need this in a compute shader:
Script:
computeShader.SetTexture(_Kernel, "_DepthTexture", depthRenderTexture);
computeShader.SetFloat("_CamFOV", camFOV);
computeShader.SetFloat("_CamAspect", camAspect);
computeShader.SetFloat("_CamNear", camNear);
computeShader.SetFloat("_CamFar", camFar);
computeShader.SetFloat("_ScreenWidth", Camera.main.pixelWidth);
computeShader.SetFloat("_ScreenHeight", Camera.main.pixelHeight);
computeShader.SetVector("_CamWorldMatrix0", camWorldMatrix.GetRow(0));
computeShader.SetVector("_CamWorldMatrix1", camWorldMatrix.GetRow(1));
computeShader.SetVector("_CamWorldMatrix2", camWorldMatrix.GetRow(2));
computeShader.SetVector("_CamWorldMatrix3", camWorldMatrix.GetRow(3));
// round up, otherwise the last partial thread group (and its pixels) is dropped
computeShader.Dispatch(_Kernel, Mathf.CeilToInt(Camera.main.pixelWidth * Camera.main.pixelHeight / 128f), 1, 1);
ComputeShader:
[numthreads(128, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
if (id.x >= (uint)(_ScreenWidth*_ScreenHeight))
return;
int y = id.x / int(_ScreenWidth);
int x = id.x % int(_ScreenWidth);
// the depth information; _DepthTexture is written by a pixel shader
float4 depthInfo = _DepthTexture[uint2(x, y)];
float depthValue = depthInfo.a * _CamFar; // linear 0..1 depth; use whichever channel your pixel shader writes it to
// world X and Y components of our target vector
float tanFov = tan(radians(_CamFOV / 2));
float screenDimY = tanFov * _CamNear;
float screenDimX = screenDimY * _CamAspect;
// depthInfo.xy holds the 0..1 screen position written by the pixel shader; remap to range -1..1
float4 normPos = depthInfo * 2 - 1;
float screenPosX = screenDimX * normPos.x;
float screenPosY = screenDimY * normPos.y;
float screenPosZ = -_CamNear;
float4 objInEyeSpaceVector;
objInEyeSpaceVector.xyz = float3(screenPosX, screenPosY, screenPosZ) * depthValue / _CamNear;
objInEyeSpaceVector.w = 1;
float4x4 camWorldMat = float4x4(_CamWorldMatrix0, _CamWorldMatrix1, _CamWorldMatrix2, _CamWorldMatrix3);
float4 objInWorldSpace = mul(camWorldMat, objInEyeSpaceVector); // final pixel position in world space
}
...but this has to work now in VR too.

In VR, though, I get the wrong positions, because a VR frustum is slightly skewed (http://doc-ok.org/?p=77). So now we don't use the variables above to determine the position; instead we should be using the projection matrix. Unity's "OnRenderImage()" renders the left and right eye of the VR device alternately, so we can "take" the projection matrix of each eye. The FOV and the near and far planes should also be correct. We also have the screen resolution, which is exactly 540x600.

The big question here: how do we get that pixel position with the help of those values?
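One possible direction, as a sketch rather than a tested solution: instead of rebuilding the ray from FOV/aspect (which assumes a symmetric frustum), pass each eye's inverse projection matrix to the compute shader and unproject through it, so the off-axis skew stays intact. The method name `SetEyeMatrices` and the shader property names `_InvProj` / `_CamToWorld` are illustrative, not from the code above; the Unity calls themselves (`GetStereoProjectionMatrix`, `GL.GetGPUProjectionMatrix`, `cameraToWorldMatrix`) are real API.

```csharp
// Sketch: upload the per-eye inverse projection matrix for the compute shader.
// Intended to run once per eye during stereo rendering; eye selection and the
// property names are assumptions for illustration.
void SetEyeMatrices(Camera cam, ComputeShader computeShader, bool leftEye)
{
    Matrix4x4 proj = cam.GetStereoProjectionMatrix(
        leftEye ? Camera.StereoscopicEye.Left : Camera.StereoscopicEye.Right);
    // Convert to the GPU's depth-range convention, then invert.
    Matrix4x4 invProj = GL.GetGPUProjectionMatrix(proj, false).inverse;
    computeShader.SetMatrix("_InvProj", invProj);
    // cameraToWorldMatrix is already per-eye while a stereo eye is rendering.
    computeShader.SetMatrix("_CamToWorld", cam.cameraToWorldMatrix);
}
```

On the shader side the reconstruction would then be roughly: build a clip-space point from the -1..1 screen position and the depth, unproject it with `mul(_InvProj, clipPos)`, divide by `w`, and transform the result with `mul(_CamToWorld, eyePos)`. Since the eye asymmetry lives inside the projection matrix itself, no FOV/aspect math is needed at all.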