Reading the depth buffer? Plus gluUnProject

Member
Posts: 269
Joined: 2005.04
Post: #1
So I "borrowed" some code from Mesa to replace gluUnProject. I tried out the Mesa code in an old project on OS X that used gluUnProject and it worked fine, so I know that bit of code is okay. Normally for converting a mouse click to world coordinates I use this pseudo-code (just ignore all the float/double/int issues):

Code:
// Read the depth under the cursor, then unproject it back to world coordinates.
glReadPixels(windowX, windowY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &pixelDepth);

// Grab the current viewport and matrices to feed gluUnProject.
glGetFloatv(GL_VIEWPORT, viewport);
glGetFloatv(GL_MODELVIEW_MATRIX, mvmatrix);
glGetFloatv(GL_PROJECTION_MATRIX, projmatrix);

// Unproject the window-space point (with the depth we just read) into world space.
gluUnProject(windowX, windowY, pixelDepth, mvmatrix, projmatrix, viewport, &x, &y, &z);

The problem is that GL_DEPTH_COMPONENT doesn't exist on the iPhone. I tried GL_DEPTH_COMPONENT16_OES, but that doesn't work either: I get 0.0 every time, and I'm willing to bet I'm getting an OGL error there that I'm forgetting to check for. Skimming through the OGL ES spec, it looks like it doesn't support reading the depth buffer this way.
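For reference, the error check I keep forgetting would just be something like this, right after the glReadPixels call:

Code:
GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    // Likely GL_INVALID_ENUM here, since ES doesn't accept depth formats for glReadPixels.
    printf("glReadPixels error: 0x%x\n", err);
}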

Is there some other way to read the depth buffer to get the window Z value for gluUnProject? Or is there a different way of doing this entirely? All the code I've ever seen for gluUnProject (even the FAQ at opengl.org) uses glReadPixels.
Moderator
Posts: 3,579
Joined: 2003.06
Post: #2
Do you have a depth buffer attached? You can check with glGetIntegerv(GL_DEPTH_BITS, &bits); ... it's funny we just discussed finding out if the depth buffer is attached like yesterday...
Member
Posts: 269
Joined: 2005.04
Post: #3
AnotherJake Wrote:Do you have a depth buffer attached? You can check with glGetIntegerv(GL_DEPTH_BITS, &bits); ... it's funny we just discussed finding out if the depth buffer is attached like yesterday...

Yup, it's attached. I'm using the standard Apple sample code to set up OGL, although the test returns 32 bits and I'd expect either 16 or 24 based on the constant names (GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES). /shrug
Sage
Posts: 1,234
Joined: 2002.10
Post: #4
ES explicitly forbids reading back the depth buffer.

This is because some embedded GPUs use different rendering techniques than a traditional desktop GPU. Tile-based deferred rendering theoretically eliminates the need for a depth buffer, because the GPU sorts geometry per tile.

Of course you can still generate Window Z per pixel (albeit at 8-bit precision) and read it back, if that's really what you want to do... but why? Why do you want to use gluUnProject?
Member
Posts: 269
Joined: 2005.04
Post: #5
arekkusu Wrote:ES explicitly forbids reading back the depth buffer.

This is because some embedded GPUs use different rendering techniques than a traditional desktop GPU. Tile-based deferred rendering theoretically eliminates the need for a depth buffer, because the GPU sorts geometry per tile.

Of course you can still generate Window Z per pixel (albeit at 8-bit precision) and read it back, if that's really what you want to do... but why? Why do you want to use gluUnProject?

The user is dragging objects around in a 3D space (camera's been rotated/translated/etc), so I need to know where the finger is pointing so that I can place the object properly. A close analogue would be placing buildings in Warcraft 3. You need to set the building where the mouse is pointing, but the ground isn't flat and the camera isn't static relative to the ground.

I can probably design my way around it, or do the math manually, but gluUnProject is simple so I was seeing if I could get that working first. What do you mean by generating the window z per pixel? Run through all the drawing on a single pixel to get the depth value? Why would it be limited to 8-bit precision?
Sage
Posts: 1,234
Joined: 2002.10
Post: #6
Bachus Wrote:What do you mean by generating the window z per pixel?

I'm not suggesting that you do this for your situation-- but for the sake of explanation:

Window Z is normally calculated as MVP * vertex, followed by the perspective divide, then scaled and biased by glDepthRange to produce values in [0, 1]. Typically this value is written to the depth buffer, but you can't get at that in ES.
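Written out on the CPU, that calculation is just a few lines. A sketch, assuming mvp[16] already holds projection * modelview in OpenGL's column-major layout and the default glDepthRange of [0, 1]:

Code:
// Window Z for one object-space vertex (vx, vy, vz).
static float windowZForVertex(const float mvp[16], float vx, float vy, float vz)
{
    float clipZ = mvp[2]*vx + mvp[6]*vy + mvp[10]*vz + mvp[14];   // Z row of MVP
    float clipW = mvp[3]*vx + mvp[7]*vy + mvp[11]*vz + mvp[15];   // W row of MVP
    float ndcZ  = clipZ / clipW;          // perspective divide -> [-1, 1]
    return ndcZ * 0.5f + 0.5f;            // depth-range remap  -> [0, 1]
}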

But, you can do the same calculation yourself using the texture matrix, and produce a color that you can get at.

For each object, take the Z column of the MVP and put it into the X column of the texture matrix. Pre-scale and translate it from [-1, 1] to [0, 1] to compensate for glDepthRange.

Then, when you draw an object, enable texturing and point glTexCoordPointer at your vertex position data.

This is basically equivalent to using glTexGen(OBJECT_LINEAR) on the desktop-- you're converting object position values into texture coordinates, in this case the S coordinate will end up having the window Z value.

Now that you have Window Z per vertex, you can use it to look up into a texture-- like a simple passthrough ramp from 0 to 1. Write the results into the framebuffer, and you have Window Z, per pixel. You can read that 8-bit value back to the CPU, if you really want to. Or use it as a texture, for nefarious purposes. Ninja
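Roughly, that setup could look like this in ES 1.1. It's just a sketch: mvp[16] is assumed to already hold projection * modelview in column-major order, rampTex is a made-up 256x1 luminance ramp texture (0 at S=0 up to 255 at S=1, clamped to edge), and vertices/vertexCount are placeholders. The [-1, 1] to [0, 1] bias is folded into the S row so it still holds after the projective divide by Q:

Code:
GLfloat texmat[16] = { 0 };

// S row = 0.5 * (Z row of MVP) + 0.5 * (W row of MVP), so that after the
// projective divide, S/Q = 0.5 * clipZ/clipW + 0.5 = window Z.
texmat[0]  = 0.5f * mvp[2]  + 0.5f * mvp[3];
texmat[4]  = 0.5f * mvp[6]  + 0.5f * mvp[7];
texmat[8]  = 0.5f * mvp[10] + 0.5f * mvp[11];
texmat[12] = 0.5f * mvp[14] + 0.5f * mvp[15];

// Q row = W row of MVP.
texmat[3]  = mvp[3];
texmat[7]  = mvp[7];
texmat[11] = mvp[11];
texmat[15] = mvp[15];

glMatrixMode(GL_TEXTURE);
glLoadMatrixf(texmat);
glMatrixMode(GL_MODELVIEW);

// Feed the object-space positions in as texture coordinates too.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, rampTex);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(3, GL_FLOAT, 0, vertices);   // same pointer as the positions
glDrawArrays(GL_TRIANGLES, 0, vertexCount);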

There are a couple other ways to achieve the same result in ES-- an even simpler way is to just use fog to blend between black and white based on Z. But the texture matrix lets you put arbitrary transforms in there, not limited to the current camera position.
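A bare-bones version of the fog variant, assuming zNear/zFar match your projection; everything is drawn plain white against a black fog color, so brightness encodes depth (strictly, eye-space distance rather than true window Z):

Code:
GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };

glEnable(GL_FOG);
glFogf(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_START, zNear);
glFogf(GL_FOG_END, zFar);

glDisable(GL_TEXTURE_2D);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   // near fragments stay white, far ones fade to black
// ... draw the scene ...
glDisable(GL_FOG);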




For coarse object picking based on the user's finger position, I would probably do something entirely different. Like keep a bounding sphere per object, and transform each sphere position in software TCL (again-- MVP * vertex, scale and bias by viewport.) The one closest in X,Y to the finger position, and closest to the camera in Z taking into account the projected sphere diameters is the one being touched.
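Roughly like this, simplified to use a fixed touch radius in pixels instead of the projected sphere diameter (mvp is projection * modelview again, column-major; the other names are made up for illustration):

Code:
typedef struct { float x, y, z; } Vec3;   // hypothetical per-object bound center

// Transform a point by a column-major 4x4 MVP and apply the viewport
// scale/bias: out = { window x, window y (pixels), window z in [0, 1] }.
static void projectPoint(const float mvp[16], const int viewport[4],
                         Vec3 p, float out[3])
{
    float cx = mvp[0]*p.x + mvp[4]*p.y + mvp[8]*p.z  + mvp[12];
    float cy = mvp[1]*p.x + mvp[5]*p.y + mvp[9]*p.z  + mvp[13];
    float cz = mvp[2]*p.x + mvp[6]*p.y + mvp[10]*p.z + mvp[14];
    float cw = mvp[3]*p.x + mvp[7]*p.y + mvp[11]*p.z + mvp[15];

    out[0] = viewport[0] + (cx / cw * 0.5f + 0.5f) * viewport[2];
    out[1] = viewport[1] + (cy / cw * 0.5f + 0.5f) * viewport[3];
    out[2] = cz / cw * 0.5f + 0.5f;
}

// Pick the object whose projected center lies within touchRadius pixels of
// the touch and is nearest the camera. (Remember the touch Y usually needs
// flipping, e.g. viewport[3] - touchY, to match GL window coordinates.)
static int pickObject(const float mvp[16], const int viewport[4],
                      const Vec3 *centers, int count,
                      float touchX, float touchY, float touchRadius)
{
    int best = -1;
    float bestZ = 1.0f;
    for (int i = 0; i < count; i++) {
        float win[3];
        projectPoint(mvp, viewport, centers[i], win);
        float dx = win[0] - touchX;
        float dy = win[1] - touchY;
        if (dx*dx + dy*dy <= touchRadius*touchRadius && win[2] < bestZ) {
            bestZ = win[2];
            best = i;
        }
    }
    return best;
}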
Member
Posts: 269
Joined: 2005.04
Post: #7
That's pretty interesting, though yea, might not be the best idea in this situation.

Quote:For coarse object picking based on the user's finger position, I would probably do something entirely different. Like keep a bounding sphere per object, and transform each sphere position in software TCL (again-- MVP * vertex, scale and bias by viewport.) The one closest in X,Y to the finger position, and closest to the camera in Z taking into account the projected sphere diameters is the one being touched.


This would probably work. I was able to get gluUnProject partially working by just approximating the z value manually, but that's obviously just a band-aid. The final product will likely be snapped to a grid, so it should be fairly forgiving on inexact values.

For the visual types:

[Image: screenshot2.jpg]

Forgive the done-in-30-seconds programmer art. Sneaky
Nibbie
Posts: 1
Joined: 2009.03
Post: #8
I'm also trying to get a working gluUnProject on the iPhone. I found the Mesa code and switched the vars to GLfloats, but it still appears to be giving bad results.

I'm testing with this code:

Code:
gluUnProject(point.x, viewport[3] - point.y, -10.0f,
             model_view, projection, viewport,
             &pos3D_x, &pos3D_y, &pos3D_z);

which should simply be giving me the object coords for a click at point x,y 10 units into the screen.

Instead I'm getting values very close to dead-center, depth 0:
x: 0.000863030436
y: 19.9929943 (my camera is set at 20)
z: -0.00579830538

As I tap I see the values change, but they're always very small differences. Is there anything else you changed in the gluUnProject method and the couple of sub-methods it calls to get it working correctly?

BTW, my view is set up like so:

Code:
GLfloat xmin, xmax, ymin, ymax;
GLfloat aspect = (GLfloat) backingWidth / backingHeight;
GLfloat zNear = 0.1f;
GLfloat zFar = 100.0f;
ymax = zNear * tan(45.0f * M_PI / 360.0);
ymin = -ymax;
xmin = ymin * aspect;
xmax = ymax * aspect;
glFrustumf(xmin, xmax, ymin, ymax, zNear, zFar);