z-fighting when using a GLSL shader

Sage
Posts: 1,199
Joined: 2004.10
Post: #1
I've implemented simple multipass light rendering: I render a single ambient pass, then render successive lighting passes with my blend mode set to additive and ambient lighting turned off. I've got my depth test set to GL_LEQUAL, and glDepthMask( false ) set when rendering lit passes.
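For reference, the pass structure described above looks roughly like this (a hedged sketch; drawScene(), setupLight(), and numLights are placeholder names, not the poster's actual code):

```c
/* Sketch of the multipass setup described above.
   drawScene(), setupLight(), numLights are hypothetical placeholders. */
glDepthFunc(GL_LEQUAL);

/* Pass 1: ambient only, writing depth */
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawScene(AMBIENT_ONLY);

/* Passes 2..N: one per light, additive blend, depth writes off */
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);  /* additive */
for (int i = 0; i < numLights; ++i) {
    setupLight(i);
    drawScene(LIT_PASS);
}
```

The GL_LEQUAL depth test is what lets the later passes pass the depth test only where they land on exactly the same depth values the ambient pass wrote, which is why any precision mismatch between passes shows up as z-fighting.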

In fixed function rendering, this works beautifully.

[Image: happy.png]

BUT: With no changes but having a GLSL shader bound when rendering, I get this:
[Image: sad.png]

Why would having a GLSL shader bound affect the depth testing and cause all this z-fighting?

The only thing I can guess is that since I'm using fixed-function rendering to render my ambient pass, the shader might be running at a different precision. But... in my vertex shader I'm calling:
Code:
gl_Position = ftransform();

which, according to various sources, is supposed to produce exactly what the fixed-function pipeline produces.

Is the answer here to write an ambient lighting shader so I can "guarantee" that my fragments have the same depth? Or am I barking up the completely wrong tree?
Moderator
Posts: 1,140
Joined: 2005.07
Post: #2
I use a similar method in my Monkey3D demo for the shadows, and I have a shader for the ambient draw (which actually is a low diffuse), and the depth buffer seems to line up correctly. It uses assembly shaders, but the results should be the same.
Sage
Posts: 1,199
Joined: 2004.10
Post: #3
I tried writing an ambient shader, and lo, it fixed the problem. I guess the moral of the story is that ftransform() doesn't really produce the same result as fixed function rasterization.
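An ambient shader for this purpose can be very small. A hedged sketch of what such a pair might look like under GLSL 1.10 conventions (not the poster's actual shaders):

```glsl
// --- Vertex shader: the same ftransform() used by the lit passes ---
void main()
{
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
}

// --- Fragment shader: flat ambient term only ---
void main()
{
    gl_FragColor = gl_LightModel.ambient * gl_Color;
}
```

Because both the ambient pass and the lit passes now go through the same vertex transform in the same shader hardware, the depths match and the GL_LEQUAL test behaves as intended.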

Rasp
Luminary
Posts: 5,143
Joined: 2002.04
Post: #4
File a bug report; ftransform() is supposed to ensure the same results as the fixed-function pipeline.

I assume you're on an NVidia card?
Sage
Posts: 1,234
Joined: 2002.10
Post: #5
ftransform() should ensure that the GLSL vertex transform matches bit-for-bit with the fixed function vertex transform*.


*Assuming both passes are being done by the same renderer (i.e. in hardware.) If you hit software fallback for one of the passes, then it is practically impossible to ensure bit-for-bit identical results (how can the G3/G4/G5/CD/C2D cpu match the R200/R300/R400/R500/NV20/NV25/NV34/NV40/G70 gpu in every single case?!?)
Sage
Posts: 1,199
Joined: 2004.10
Post: #6
OneSadCookie Wrote:File a bug report; ftransform() is supposed to ensure the same results as the fixed-function pipeline.

I assume you're on an NVidia card?

Actually -- I'm on an x1600 on a MacBook Pro. I will file a bug report -- it was suggested on the mac-opengl list as well.

Also, I don't think I was hitting a software fallback, because I was getting 60 fps even with multisampling. The important part to me is that I found a workaround ( which isn't expensive either; the ambient shader is pretty simple ).
Moderator
Posts: 1,140
Joined: 2005.07
Post: #7
The ambient shader may even be faster, since it should be doing a lot less than the fixed-function equivalent.