Poor precision in depth texture

Sage
Posts: 1,199
Joined: 2004.10
Post: #31
icklefrelp Wrote:The black area would be the focal point, and white at and beyond the near and far depth values. If I had a larger scene to play with I could show it over a larger range. However, as I am not rendering the depth value directly, I am possibly hiding the evidence of any banding. This is running on a Mobility Radeon 9600, which slows down quite a lot with this shader, probably due to the branching. I have not yet tried it on my GeForce 6600.

How are you generating your depthmap? Is it render-to-texture, or glCopyTexSubImage?
Sage
Posts: 1,199
Joined: 2004.10
Post: #32
PowerMacX Wrote:MacBook results:
...

You ran this on a MacBook? I thought the GMA 950 didn't support FBOs?
Could you post a stack trace of the debug build? ( you might need to manually set the build architecture in Xcode to x86, since I think on my machine it defaults to PPC ).

I'm using GLEW to bind OpenGL functions, and since this is a test ( not serious work ) I'm not actually testing for the presence of various methods. If you post a stack trace from the debug build, I could see which GL call causes the crash ( if it is a GL call ).
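
For reference, a minimal guard with GLEW's extension flags might look something like this ( just a sketch of what a real check could do; the particular extensions listed here are my assumptions about what the test needs ):
Code:
#include <GL/glew.h>
#include <cstdio>

// Sketch: after glewInit() (called once a GL context exists), GLEW exposes
// a boolean flag per extension. Checking these before touching FBO, depth
// texture, or GLSL entry points avoids calling functions the renderer
// doesn't provide.
bool checkRequiredExtensions( void )
{
    if ( glewInit() != GLEW_OK )
    {
        std::fprintf( stderr, "glewInit failed\n" );
        return false;
    }

    if ( !GLEW_EXT_framebuffer_object )
    {
        std::fprintf( stderr, "EXT_framebuffer_object not available\n" );
        return false;
    }

    if ( !GLEW_ARB_depth_texture || !GLEW_ARB_shader_objects )
    {
        std::fprintf( stderr, "depth textures or GLSL not available\n" );
        return false;
    }

    return true;
}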
Member
Posts: 30
Joined: 2006.04
Post: #33
TomorrowPlusX Wrote:How are you generating your depthmap? Is it render-to-texture, or glCopyTexSubImage?

Currently I'm using glCopyTexSubImage, the same as you, and I am creating the texture in the same way too. I was surprised; I was expecting to see banding, as I'm not doing anything differently. If it's not my shader, could it be down to different graphics cards?
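
For anyone following along, the copy-based setup we are both describing looks roughly like this ( a sketch only; the explicit 24-bit internal format and the nearest filtering are assumptions, not necessarily what either of us actually uses ):
Code:
#include <OpenGL/gl.h>   // Mac OS X OpenGL header

// Sketch: create a depth texture once, then after rendering the scene
// copy the current depth buffer into it each frame.
static GLuint depthTex = 0;

void createDepthTexture( int width, int height )
{
    glGenTextures( 1, &depthTex );
    glBindTexture( GL_TEXTURE_2D, depthTex );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

    // Ask for a 24-bit depth format explicitly rather than plain
    // GL_DEPTH_COMPONENT, to give the driver less room to pick 16 bits.
    glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
                  0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0 );
}

void copyDepthFromFramebuffer( int width, int height )
{
    glBindTexture( GL_TEXTURE_2D, depthTex );
    glCopyTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height );
}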
Member
Posts: 30
Joined: 2006.04
Post: #34
I'm willing to bet the banding problem you're seeing only shows up when running on an NVIDIA graphics card! Am I right?

I tried my depth-of-field code on my G5 at home with a GeForce 6600, and I am now seeing banding. So it's looking like a problem, or "feature", that only NVIDIA cards exhibit. Wonderful Sad
Moderator
Posts: 1,140
Joined: 2005.07
Post: #35
Damn, is there no end to the differences between video cards? You get banding on some, and sometimes the same thing works one time and not another, even on the same card! Such as your depth texture working but not TomorrowPlusX's, or the way my FBOs don't work correctly on NVIDIA cards. For my problem, I'll try giving my two FBOs separate depth renderbuffers next, to see if sharing one is why it doesn't like clearing, but seriously, it's getting downright annoying.
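
For concreteness, giving each FBO its own depth renderbuffer with EXT_framebuffer_object would look something like this ( a sketch under my assumptions about sizes and the 24-bit depth format, not a confirmed fix for my clearing problem ):
Code:
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

// Sketch: build an FBO with a dedicated depth renderbuffer, so two FBOs
// no longer share a single depth attachment between them.
void createFBOWithOwnDepth( GLuint colorTex, int width, int height,
                            GLuint *fboOut, GLuint *depthRbOut )
{
    glGenFramebuffersEXT( 1, fboOut );
    glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, *fboOut );

    glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                               GL_TEXTURE_2D, colorTex, 0 );

    glGenRenderbuffersEXT( 1, depthRbOut );
    glBindRenderbufferEXT( GL_RENDERBUFFER_EXT, *depthRbOut );
    glRenderbufferStorageEXT( GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                              width, height );
    glFramebufferRenderbufferEXT( GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                  GL_RENDERBUFFER_EXT, *depthRbOut );

    if ( glCheckFramebufferStatusEXT( GL_FRAMEBUFFER_EXT ) !=
         GL_FRAMEBUFFER_COMPLETE_EXT )
    {
        // framebuffer incomplete; handle the error here
    }

    glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
}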
Sage
Posts: 1,199
Joined: 2004.10
Post: #36
I think we can all agree -- this sucks. What we need to do is see if we can't figure out how to make this work on NVIDIA, and make it easy for the code to run one path or the other.

Actually, that raises the question -- I know how to check for the presence of a particular extension, but how would my code figure out it's running on an ATI card, as opposed to an NVIDIA one?

And, actually, now I'm wondering why the default build I linked to earlier showed a white stripe on the right instead of the depth texture...

Criminy.
Luminary
Posts: 5,143
Joined: 2002.04
Post: #37
glGetString(GL_VENDOR)

Be wary of tying things to vendor, though; for example, ATI's depth texture capabilities changed radically with the X1600 and X1900 from what they were on the X1300 and X1800...
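
If you do want to branch on it, the check might look like this ( a sketch; the substring matching is an assumption, since vendor strings vary from driver to driver ):
Code:
#include <OpenGL/gl.h>
#include <cstring>
#include <cstdio>

// Sketch: query the vendor/renderer strings of the current context and
// branch on them. Substring matching is the usual, imperfect approach.
bool runningOnNVIDIA( void )
{
    const char *vendor   = (const char *) glGetString( GL_VENDOR );
    const char *renderer = (const char *) glGetString( GL_RENDERER );

    std::printf( "GL_VENDOR: %s\nGL_RENDERER: %s\n",
                 vendor ? vendor : "(null)",
                 renderer ? renderer : "(null)" );

    return vendor && std::strstr( vendor, "NVIDIA" ) != NULL;
}

As above, keying off the renderer string ( or better, actual capabilities ) is safer than keying off the vendor alone.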
Sage
Posts: 1,232
Joined: 2002.10
Post: #38
TomorrowPlusX Wrote:You ran this on a MacBook? I thought the GMA 950 didn't support FBOs?
GMA950 does support FBO.
Moderator
Posts: 770
Joined: 2003.04
Post: #39
TomorrowPlusX Wrote:Could you post a stack trace of the debug build? ( you might need to manually set the build architecture in Xcode to x86, since I think on my machine it defaults to PPC ).

Well, the architecture in the debug build is set to $(NATIVE_ARCH) and it is producing an Intel binary. The stack trace is the same as I posted above, but running it in the debugger shows this line in main.cpp [294]:
Code:
glActiveTexture( i );

in this function:
Code:
void renderTexturedUnitCube( Texture *texture, int repeat )
{
    for ( int i = GL_TEXTURE1; i < GL_TEXTURE8; i++ )
    {
        glActiveTexture( i );
        glDisable( GL_TEXTURE_2D );
    }
    ...
(called from renderScene(), line 556, which is in turn called from display(), line 647) to be the exact point of the crash.
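
If the crash really is inside that loop, one purely speculative change would be to walk only the texture units the renderer actually reports, in case the GMA 950 exposes fewer than eight ( this is an assumption about the cause, not a confirmed fix ):
Code:
// Sketch: query the fixed-function texture unit count before looping;
// that is the limit that applies to glDisable( GL_TEXTURE_2D ).
GLint maxUnits = 1;
glGetIntegerv( GL_MAX_TEXTURE_UNITS, &maxUnits );

for ( int i = 1; i < maxUnits; i++ )
{
    glActiveTexture( GL_TEXTURE0 + i );    // the GL_TEXTUREi enums are consecutive
    glDisable( GL_TEXTURE_2D );
}
glActiveTexture( GL_TEXTURE0 );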
Sage
Posts: 1,199
Joined: 2004.10
Post: #40
So... is there any resolution on this? Between here and the mac-opengl mailing list I've seen no actual response from anybody who knows the internals to tell me whether NVIDIA cards ( or mine ) downsample GL_DEPTH_COMPONENT textures to 8 bits.

In principle, my water rendering looks good enough ( now that I've done some more refactoring and fixing up of normal handling ) but I'd sure like to add this feature.
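
In the meantime, one way to at least see what the driver actually stored for the depth texture is to query its depth size ( a sketch; it reports the texture's bit depth, though it can't rule out precision being lost elsewhere in the pipeline ):
Code:
// Sketch: ask how many depth bits the bound depth texture level really has.
// Run this after the glTexImage2D / glCopyTexSubImage2D calls.
GLint depthBits = 0;
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE, &depthBits );
printf( "depth texture stores %d bits per texel\n", depthBits );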
Luminary
Posts: 5,143
Joined: 2002.04
Post: #41
No, they don't downsample depth textures. Plenty of games use depth textures successfully (Halo, Myst V), so I'm sure there's nothing wrong with the basic implementation. Whether they work with GLSL is, of course, a slightly different question.
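
One driver-sensitive detail when sampling a depth texture from GLSL is the texture's compare and depth-texture modes. For reading raw depth through an ordinary sampler2D, the settings would be roughly these ( a sketch; depthTex is a placeholder handle, and these particular modes are assumptions rather than anything confirmed here ):
Code:
// Sketch: with ARB_depth_texture / ARB_shadow, turn the compare mode off so
// the shader sees depth values rather than shadow-compare results, and pick
// how the single depth channel is expanded when sampled.
glBindTexture( GL_TEXTURE_2D, depthTex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE );
glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE );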
Sage
Posts: 1,199
Joined: 2004.10
Post: #42
Is that to imply that if I used old-style ARB shaders instead of GLSL, it would work?
Luminary
Posts: 5,143
Joined: 2002.04
Post: #43
No, it's just to imply that I don't know.

ARB_fragment_program_shadow isn't supported on Mac OS X, so it may not be possible with ARB shaders, even.
Sage
Posts: 1,199
Joined: 2004.10
Post: #44
Bummer.

My water's looking OK now, but I wish I could do per-fragment depth lookups without having to render to a dedicated depth texture.

That said, while I was on vacation I read through the orange book, and re-implemented my water adaptation. Primarily, the improvements are a better approach to the normalmap ( making for better wave interference ) and little bits like scaling the reflection/refraction distortion by distance from the camera.

[Image: better_water.png]

It's time for me to start integrating this into my game.

... well... once I figure out why my lighting is inverted in my reflection Rasp
Member
Posts: 30
Joined: 2006.04
Post: #45
TomorrowPlusX Wrote:So... is there any resolution on this? Between here and the mac-opengl mailing list I've seen no actual response from anybody who knows the internals to tell me whether NVIDIA cards ( or mine ) downsample GL_DEPTH_COMPONENT textures to 8 bits.

My current plan is to try it on a PC, to see if I get the same results on an NVIDIA card there as I do on the GeForce 6600 in my G5. However, my home PC is in bits and I was off work last week, so I didn't get a chance to start porting my engine to the PC again. It shouldn't take me that long; most of the code is portable, I just need to redo the PC-specific parts.

The results should indicate where any possible problem lies: whether I can get it working on the PC in GLSL or Cg, or whether I just get the same results as on the Mac.