Poor precision in depth texture

Sage
Posts: 1,199
Joined: 2004.10
Post: #1
I'm trying to perform, in GLSL, an opacity calculation by comparing the value in a depth texture with the depth value of the current fragment. My plan is to lerp between a captured color texture ( the scene ) and the color of my water volume by how "thick" the water is for that pixel.

Here's my GLSL -- all it does is translate the value in the depth texture into world coordinates and then subtract the z value of the incoming fragment from that to determine the thickness. Since I'm just messing around here, it's nothing complex.

My fragment shader:
Code:
uniform sampler2DRect reflection;
uniform sampler2DRect refraction;
uniform sampler2D normalmap;
uniform sampler2DRect depthmap;
uniform float nearPlane;
uniform float farPlane;

// convert a window-space depth value in [0..1] to a positive eye-space distance in [near..far]
float convertZ( in float near, in float far, in float depthBufferValue )
{
    float clipZ = ( depthBufferValue - 0.5 ) * 2.0;
    return -(2.0 * far * near) / ( clipZ * ( far - near ) - ( far + near ));
}


void main(void)
{
    const float waterOpacityDepth = 4.0;
    const vec4 waterColor = vec4( 0.3, 0.45, 0.4, 1.0 );
    
    float sceneDepth = texture2DRect( depthmap, gl_FragCoord.st ).z;  // depth textures replicate depth across channels, so .z == .r
    float thickness = convertZ( nearPlane, farPlane, sceneDepth ) - gl_FragCoord.z;

    vec4 sceneColor = texture2DRect( refraction, gl_FragCoord.st );
    vec4 color = mix( sceneColor, waterColor, smoothstep( 0.0, waterOpacityDepth, thickness ));
    color.w = 1.0;

    gl_FragColor = color;  
}

Now, here's how I set up my depth texture ( and color texture ):
Code:
void createRefractionAndDepthTextures( void )
{
    if ( refractionTexture )
    {
        glDeleteTextures( 1, &refractionTexture );
        refractionTexture = 0;
    }

    if ( depthTexture )
    {
        glDeleteTextures( 1, &depthTexture );
        depthTexture = 0;
    }
    
    glGenTextures( 1, &refractionTexture );
    glBindTexture( GL_TEXTURE_RECTANGLE_EXT, refractionTexture );
    glTexParameteri( GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, screenWidth, screenHeight,
                  0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );

    glGenTextures( 1, &depthTexture );
    glBindTexture( GL_TEXTURE_RECTANGLE_EXT, depthTexture );
    glTexParameteri( GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, 0, GL_DEPTH_COMPONENT, screenWidth, screenHeight,
                  0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );

    glError();
}

And here's how I grab them:

Code:
void getRefractionAndDepthTextures( void )
{
    if ( !refractionTexture || !depthTexture ) createRefractionAndDepthTextures();

    glBindTexture( GL_TEXTURE_RECTANGLE_EXT, depthTexture );
    glCopyTexSubImage2D( GL_TEXTURE_RECTANGLE_EXT, 0,
                         0,0, // xoffset, yoffset
                         0,0, // x,y
                         screenWidth, screenHeight );

    glBindTexture( GL_TEXTURE_RECTANGLE_EXT, refractionTexture );
    glCopyTexSubImage2D( GL_TEXTURE_RECTANGLE_EXT, 0,
                         0,0, // xoffset, yoffset
                         0,0, // x,y
                         screenWidth, screenHeight );


    glBindTexture( GL_TEXTURE_RECTANGLE_EXT, 0 );    
    
    glError();
}

What I see when I run the app suggests that the precision of the depth buffer isn't up to the task. I see solid green, as if the thickness everywhere passes the threshold, but if I bring the camera very close to an edge I can see a smooth interpolation across depth, just as I expect.

I know that the depth buffer is non-linear -- precision is distributed roughly as 1/z, reserved for near fragments rather than far ones. What I don't know is why it's failing so badly -- I'd expect *some* transition, even if it's not accurate.
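For a concrete sense of how uneven that distribution is: with a standard perspective projection, window-space depth relates to eye-space distance z by d = ( f / ( f - n )) * ( 1 - n / z ), which is just the inverse of my convertZ above. A quick standalone sketch with placeholder near/far values -- note that half of the entire depth range is spent on the first unit past the near plane:

Code:
#include <stdio.h>

// d = ( f / ( f - n )) * ( 1 - n / z ): window depth for eye-space distance z
int main( void )
{
    const float n = 1.0f, f = 1000.0f;  // placeholder near/far planes
    const float zs[] = { 1.0f, 2.0f, 10.0f, 100.0f, 1000.0f };

    for ( int i = 0; i < 5; i++ )
    {
        float d = ( f / ( f - n )) * ( 1.0f - n / zs[i] );
        printf( "z = %7.1f  ->  d = %f\n", zs[i], d );  // z = 2 already yields d ~ 0.5
    }
    return 0;
}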

Here are a couple of screenshots:

Looking from a distance:
[Image: far.png]

And looking close ( where it sort of works like I'd expect )
[Image: near.png]

Any idea how I can increase the precision of the depth buffer? Or, failing that, how I can work around it? Perhaps ( in fact, almost certainly ) my math is wrong.

Finally, I asked GLUT to give me a 32-bit depth buffer:

Code:
    glutInitDisplayString("depth=32 double rgb");
Member
Posts: 30
Joined: 2006.04
Post: #2
I'm not sure if this will help you. I haven't tried doing this in OpenGL, but this is Cg code that works on the PS3's RSX and should in theory work in OpenGL. The actual code is slightly different, as we obtain the depth value from the texture in a different way; I just didn't want to overcomplicate things.

I actually pass in the values for a and b so I don't have to do the calculations per fragment; I've just shown the calcs here for simplicity.

Code:
float a = zFar / ( zFar - zNear );
float b = zFar * zNear / ( zNear - zFar );

float depth = tex2D( depthTexture, in_texcoord );
float dist = b / ( depth - a );

We use this code in our depth-of-field effect, to blend between the back buffer and a blurred version of it based on the world-space distance of the fragment from the camera. So it should work in your situation too.
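On the CPU side, the precomputation I mean looks roughly like this in OpenGL (a sketch -- the program handle and the uniform names depthA/depthB are placeholders):

Code:
float a = zFar / ( zFar - zNear );
float b = zFar * zNear / ( zNear - zFar );

// assumes the shader program is currently bound via glUseProgram
glUniform1f( glGetUniformLocation( program, "depthA" ), a );
glUniform1f( glGetUniformLocation( program, "depthB" ), b );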
Sage
Posts: 1,199
Joined: 2004.10
Post: #3
I'll have to give that a shot.

But, that said, I had no idea the PS3 uses Cg... that's cool. Does the PS3 use OpenGL then?
Sage
Posts: 1,199
Joined: 2004.10
Post: #4
Just tested it -- it produces essentially the same result. I think this may be an issue with precision in my depth buffer, but I'm not certain.
Sage
Posts: 1,232
Joined: 2002.10
Post: #5
glGetIntegerv(GL_DEPTH_BITS, &foo) on your framebuffer and glGetTexLevelParameteriv(...GL_TEXTURE_DEPTH_SIZE...) on your texture.
Most likely your texture was created as 16 bit and you are dropping bits during the copy.
Try explicitly requesting a sized internal depth format, like GL_DEPTH_COMPONENT24.

If you use an FBO instead of copying, you can just query GL_DEPTH_BITS after binding the FBO, it updates the framebuffer-dependent state.
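In concrete terms (a sketch, reusing the depthTexture object from the earlier posts):

Code:
GLint fbDepthBits = 0, texDepthBits = 0;

glGetIntegerv( GL_DEPTH_BITS, &fbDepthBits );  // depth bits of the current framebuffer

glBindTexture( GL_TEXTURE_RECTANGLE_EXT, depthTexture );
glGetTexLevelParameteriv( GL_TEXTURE_RECTANGLE_EXT, 0,
                          GL_TEXTURE_DEPTH_SIZE, &texDepthBits );

printf( "framebuffer: %d bits, depth texture: %d bits\n", fbDepthBits, texDepthBits );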
Member
Posts: 30
Joined: 2006.04
Post: #6
Ulp! I thought for a second there I had inadvertently violated my NDA. But that information is quite public now.

Yes, we use Cg, and there is a version of OpenGL ES for the PS3, but I have not used it.

Sorry the code didn't help. I have a few ideas, but I don't want to send you on a wild goose chase, so if I get the chance later I'll look into it -- I want to support similar functionality myself on the Mac.

You could check that GL_TEXTURE_COMPARE_MODE is set to GL_NONE for the depth texture, just in case you are implicitly getting a depth-compare lookup. See the GL_ARB_shadow spec for more info on that.
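The check itself is a one-liner (a sketch -- GL_NONE is the default, so this just rules the compare mode out):

Code:
glBindTexture( GL_TEXTURE_RECTANGLE_EXT, depthTexture );
glTexParameteri( GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_COMPARE_MODE, GL_NONE );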
Sage
Posts: 1,199
Joined: 2004.10
Post: #7
arekkusu Wrote:glGetIntegerv(GL_DEPTH_BITS, &foo) on your framebuffer and glGetTexLevelParameteriv(...GL_TEXTURE_DEPTH_SIZE...) on your texture.
Most likely your texture was created as 16 bit and you are dropping bits during the copy.
Try explicitly requesting a sized internal depth format, like GL_DEPTH_COMPONENT24.

If you use an FBO instead of copying, you can just query GL_DEPTH_BITS after binding the FBO, it updates the framebuffer-dependent state.

To request a sized internal format, would I call something like so?

EDIT: I tried this, and got errors from GL. GL seems only to accept the 'internalFormat' param as GL_DEPTH_COMPONENT. The 'format' param as GL_DEPTH_COMPONENT24 works, but produces no difference.

Code:
glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, 0, GL_DEPTH_COMPONENT24,
     screenWidth, screenHeight, 0, GL_DEPTH_COMPONENT24, GL_FLOAT, NULL );

What about the GL_FLOAT in there? Should I ask for some other data type? That said, I'd rather stay away from using FBOs to render a depth pass, just because I'm already performing an extra render ( into FBO ) for the reflection. I'd be happiest to just be able to grab the values already in the ( I assume ) 24 or 32-bit depth buffer.

iklefrelp Wrote:Ulp! I thought for a second there I had inadvertently violated my NDA. But that information is quite public now.

I'm sorry! Rasp

And don't worry about wild goose chases -- I can use any pointers you've got.

One thing I managed to realize ( since I'm reading the Orange Book as I go ) is that gl_FragCoord.z is already in depth-buffer space, so I need to convert it with my convertZ ( or your conversion ) method as well. That doesn't fix it, though.
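In shader terms, that means bringing both values into the same space before subtracting -- a sketch using the convertZ from my first post:

Code:
float sceneZ = convertZ( nearPlane, farPlane, texture2DRect( depthmap, gl_FragCoord.st ).r );
float fragZ  = convertZ( nearPlane, farPlane, gl_FragCoord.z );
float thickness = sceneZ - fragZ;  // both in eye-space units now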
Sage
Posts: 1,232
Joined: 2002.10
Post: #8
TomorrowPlusX Wrote:To request a sized internal format, would I call something like so?
From the spec:
Code:
Accepted by the <internalFormat> parameter of TexImage1D, TexImage2D,
    CopyTexImage1D and CopyTexImage2D:

    DEPTH_COMPONENT
    DEPTH_COMPONENT16_ARB    0x81A5    (same as DEPTH_COMPONENT16_SGIX)
    DEPTH_COMPONENT24_ARB    0x81A6    (same as DEPTH_COMPONENT24_SGIX)
    DEPTH_COMPONENT32_ARB    0x81A7    (same as DEPTH_COMPONENT32_SGIX)

    Accepted by the <format> parameter of GetTexImage, TexImage1D,
    TexImage2D, TexSubImage1D, and TexSubImage2D:

    DEPTH_COMPONENT

Quote:What about the GL_FLOAT in there? Should I ask for some other data type?
Depth data is defined to be in the range [0..1], but is typically stored internally in an integer format. There isn't really a good 24 bit int format you can request (except via EXT_packed_depth_stencil) so FLOAT is as good as you can do. It doesn't really matter since you aren't providing any data here; no format conversion cost.
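So, per the spec excerpt above, the allocation would look something like this (a sketch -- the sized format goes in <internalFormat> only, while <format> stays plain DEPTH_COMPONENT):

Code:
glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, 0, GL_DEPTH_COMPONENT24,
              screenWidth, screenHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );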



Quote:That said, I'd rather stay away from using FBOs to render a depth pass, just because I'm already performing an extra render ( into FBO ) for the reflection. I'd be happiest to just be able to grab the values already in the ( I assume ) 24 or 32-bit depth buffer.
I see. Yes, unfortunately there is no way to share the window's depth buffer with an FBO, so you need to copy. Just make sure you aren't dropping bits in the process Wink
Sage
Posts: 1,199
Joined: 2004.10
Post: #9
It looks to me like my depth texture is losing precision. You can see the banding here in these screenshots:

Here the camera's near an intersection with the water plane -- it looks OK, aside from banding due to aliasing. The water is completely transparent where its depth == the depth-texture depth for the fragment. That's what I want:
[Image: near2.png]

And here it's a bit farther away. It looks horrid, due to loss of precision in the depth texture ( the depth texture appears to be 8-bit. WTF? ):
[Image: far2.png]

Here's the GLSL. Note: the code is hacky -- I'm trying to figure things out as I go -- so I have three different methods for converting depth values to world z, and another method, DepthRange, that linearly maps those back to 0-1.

Code:
uniform sampler2DRect reflection;
uniform sampler2DRect refraction;
uniform sampler2D normalmap;
uniform sampler2DRect depthmap;
uniform float nearPlane;
uniform float farPlane;

// Note: all three conversions below reduce algebraically to
// ( nearPlane * farPlane ) / ( farPlane - d * ( farPlane - nearPlane )),
// so they produce identical results.
float ConvertDepth1( float d )
{
    float clipZ = ( d - 0.5 ) * 2.0;
    return -(2.0 * farPlane * nearPlane ) / ( clipZ * ( farPlane - nearPlane ) - ( farPlane + nearPlane ));
}

float ConvertDepth2( float d )
{
    float a = farPlane / ( farPlane - nearPlane );
    float b = ( farPlane * nearPlane ) / ( nearPlane - farPlane );
    return b / ( d - a );
}

float ConvertDepth3(float d)
{
    return (nearPlane*farPlane)/(farPlane-d*(farPlane-nearPlane));
}

// transform range in world-z to 0-1 for near-far
float DepthRange( float d )
{
    return ( d - nearPlane ) / ( farPlane - nearPlane );
}

void main(void)
{
    const float threshold = 0.01;
    const vec4 waterColor = vec4( 0.3, 0.45, 0.4, 1.0 );

    float a = DepthRange( ConvertDepth3( texture2DRect( depthmap, gl_FragCoord.st ).r ));
    float b = DepthRange( ConvertDepth3( gl_FragCoord.z ) );
    
    vec4 sceneColor = texture2DRect( refraction, gl_FragCoord.st );
    vec4 color = mix( sceneColor, waterColor, smoothstep( 0.0, threshold, a - b ));
    color.w = 1.0;

    gl_FragColor = color;       
}

Now, for the interesting part. I wrote a simple shader to write out black-to-white based on the reported depth of the fragment, and of the depth texture.

It looks like so:
Code:
void main(void)
{
    float a = DepthRange( ConvertDepth3( texture2DRect( depthmap, gl_FragCoord.st ).r ));
    float b = DepthRange( ConvertDepth3( gl_FragCoord.z ) );

    //gl_FragColor = vec4( a,a,a, 1.0 );
    gl_FragColor = vec4( b,b,b, 1.0 );
}

The commented-out line toggles whether I'm drawing the value from the depth texture or the incoming fragment's depth.

Here's the depth texture drawn in greyscale:
[Image: depth_tex.png]


And here's the fragment depth in greyscale:
[Image: frag_depth.png]

Can you spot the difference? Wacko

This makes it painfully clear that I'm losing my depth precision in the depth texture. What I don't understand is why.

My depth texture is being created as such:
Code:
    glGenTextures( 1, &depthTexture );
    glBindTexture( GL_TEXTURE_RECTANGLE_EXT, depthTexture );
    glTexParameteri( GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, 0, GL_DEPTH_COMPONENT32, screenWidth, screenHeight,
                  0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );

And is grabbed this way:
Code:
    glBindTexture( GL_TEXTURE_RECTANGLE_EXT, depthTexture );
    glCopyTexSubImage2D( GL_TEXTURE_RECTANGLE_EXT, 0,
                         0,0, // xoffset, yoffset
                         0,0, // x,y
                         screenWidth, screenHeight );

I don't really know what I can do to maintain depth precision...

Cry
Moderator
Posts: 1,140
Joined: 2005.07
Post: #10
What do you have for your near and far planes? Those will determine how precise your depth buffer will be. But seriously, if you want fog, instead of using GLSL I would recommend making sure the floor is subdivided, then using per-vertex fog. For most scenes it won't make much of a difference in visual quality, but it will save you a lot of problems, including this one, and some speed.
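For reference, the fixed-function per-vertex fog setup I mean is only a few calls (a sketch; the distances and color are placeholders):

Code:
glEnable( GL_FOG );
glFogi( GL_FOG_MODE, GL_LINEAR );  // evaluated per vertex by the fixed-function pipeline
glFogf( GL_FOG_START, 10.0f );     // placeholder start/end distances
glFogf( GL_FOG_END, 100.0f );

GLfloat fogColor[4] = { 0.3f, 0.45f, 0.4f, 1.0f };  // placeholder water color
glFogfv( GL_FOG_COLOR, fogColor );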

BTW, I don't think ATI supports 32-bit depth buffers, and for FBOs they don't support above 16 bits. Of course, you can always use plain old GL_DEPTH_COMPONENT to be safe.
Sage
Posts: 1,199
Joined: 2004.10
Post: #11
I'm aware of how near and far affect precision! That said, I'm not actually doing fog -- I'm doing something like fog, but based on the thickness of the water, comparing the solid geometry's depth ( from the depth texture ) to the incoming fragment's depth.

The thing is, I'm certain that 16-bit precision would be enough; my problem is that what I'm getting looks like 8!
Oldtimer
Posts: 832
Joined: 2002.09
Post: #12
Isn't it just that requesting the non-existent 32-bit depth causes you to fall back to eight? It sounds stupid, but http://www.beyond3d.com/forum/showthread.php?t=21773 suggests requesting 16-bit. This is way out of my depth (pun intended), so please do disregard me if I'm talking rubbish.
Member
Posts: 30
Joined: 2006.04
Post: #13
So far I have had no luck with this either. I'm creating a 24-bit depth buffer, but when I create the depth texture, for some reason I lose precision and only get a 16-bit depth texture Sad I would, however, have thought that 16 bits of precision would be fine, especially with the scene depth ranges I am using, so there could be another problem with how the shader accesses the depth texture.
Moderator
Posts: 1,140
Joined: 2005.07
Post: #14
Sorry, but since it wasn't mentioned -- I know how easy it is to forget the simplest things -- what happens if you use different data types, such as GL_UNSIGNED_INT or GL_UNSIGNED_SHORT? It could be a problem where, even though it's using GL_FLOAT, it's still clipping the values as if it were GL_UNSIGNED_BYTE. It may end up doing that for every type, not just float, but AFAIK support for that type is rather new, so it would probably have the most problems. It's just a guess, but it would certainly explain the loss of precision.
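For example (a sketch -- only the type parameter changes; in theory it shouldn't matter since no client data is supplied, which is exactly what makes it worth testing):

Code:
glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, 0, GL_DEPTH_COMPONENT24, screenWidth, screenHeight,
              0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );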
Sage
Posts: 1,199
Joined: 2004.10
Post: #15
I gave that a stab, but all it did was slow the app down from 20fps to 6 -- profiling showed the time going to glCopyTexSubImage, implying to me that a conversion was occurring. So it seems that GL_DEPTH_COMPONENT24 and GL_FLOAT result in no conversion.

So then why am I seeing poor precision?

And, when I sample the depth texture, are r,g,b and a all the same value? Am I sampling the incorrect one?

I know that I'm supposed to use shadow2D samplers for GL_DEPTH_COMPONENT ( at least according to the Orange Book ), but there's no Rect variant.

And a final thought... what if I used a different internal format than GL_DEPTH_COMPONENT... what if I used a luminance texture or something? Is that an option? Would that blow up? Obviously, I'll just try and see, but I'm curious if anybody has any actual suggestions for me.
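From my reading of the spec, glCopyTexSubImage2D reads from the color buffer unless the texture has a depth internal format, so the luminance route would mean reading the depth buffer back and re-uploading it -- something like this (a sketch; luminanceDepthTexture is a hypothetical texture object, ARB_texture_float is assumed, and the readback will likely be slow):

Code:
// read the depth buffer back to client memory (likely slow)
GLfloat *depth = (GLfloat *) malloc( screenWidth * screenHeight * sizeof( GLfloat ));
glReadPixels( 0, 0, screenWidth, screenHeight, GL_DEPTH_COMPONENT, GL_FLOAT, depth );

// re-upload as a float luminance texture ( GL_LUMINANCE32F_ARB is from ARB_texture_float )
glBindTexture( GL_TEXTURE_RECTANGLE_EXT, luminanceDepthTexture );
glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, 0, GL_LUMINANCE32F_ARB, screenWidth, screenHeight,
              0, GL_LUMINANCE, GL_FLOAT, depth );
free( depth );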