Alpha Blending - Printable Version
+- iDevGames Forums (http://www.idevgames.com/forums)
+-- Forum: Development Zone (/forum-3.html)
+--- Forum: Graphics & Audio Programming (/forum-9.html)
+--- Thread: Alpha Blending (/thread-2654.html)
Alpha Blending - lbtori - May 1, 2008 03:07 AM
I'm making a little UI where the user can open an image and then draw over it by clicking and dragging. When the image is opened it's displayed in the background (it's made into a texture and applied to a quad). A (not very accurate) depth map is available for the image, and the aim is to make the brush strokes (created by the user 'drawing') appear like they follow the shape of the object. In essence: open a picture and paint a new material onto the depicted object.
In this case I'm trying to simulate fur. Each hair is drawn as 3 line segments and there's one hair per pixel covered by the user's brush stroke. Each hair starts at 0.6 alpha and ends with 0 alpha. A bit of randomness is also involved when generating each hair to make it look more realistic, as real fur/hair isn't perfect.
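For concreteness, a minimal sketch of such a hair generator, assuming screen-space line strips that just grow upward in y for simplicity (makeHair, the vertex layout and the jitter scale are illustrative, not from the post):

```cpp
#include <array>
#include <cstdlib>
#include <cassert>

// One "hair" as 3 line segments (4 vertices), alpha fading from 0.6
// at the root to 0.0 at the tip, with a small random jitter per joint.
struct Vertex { float x, y, alpha; };

std::array<Vertex, 4> makeHair(float rootX, float rootY, float length)
{
    std::array<Vertex, 4> v{};
    for (int i = 0; i < 4; ++i) {
        float t = i / 3.0f;                                // 0 at root, 1 at tip
        float jitter = (std::rand() % 100 - 50) / 500.0f;  // roughly +/- 0.1 wobble
        v[i] = { rootX + jitter,
                 rootY + t * length,
                 0.6f * (1.0f - t) };                      // alpha: 0.6 -> 0.0
    }
    return v;
}
```

The four vertices would then be submitted as a GL_LINE_STRIP; the jitter is what keeps neighbouring hairs from looking like a uniform comb.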
Currently I'm initialising with
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_ALWAYS); // The Type Of Depth Testing To Do
However, as the depth test is set to GL_ALWAYS, new hairs cover large parts of the previous ones (apart from the root), so in the middle of each brush stroke only the root sections are visible and it just looks like random pixels. The effect I'm trying to achieve is the way it looks at the end of each brush stroke, i.e. furry and fluffy.
If I set the depth test to GL_LEQUAL it goes all horrible because (from what I understand at least) even if things under the transparent parts of a hair should be visible, they're not rendered because they fail the depth test.
The only ways I can think of so far are rendering just the fur into a separate buffer and blending the new with the old so that the new fragment is weighted by 1 - alpha_old, or something along those lines, so that new hairs overlapping the old are covered by the old.
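That weighting-by-(1 - alpha_old) idea is essentially "under" compositing against the destination alpha. A minimal sketch of the arithmetic, assuming premultiplied-alpha sources (blendUnder and RGBA are illustrative names, not from the thread):

```cpp
#include <cassert>
#include <cmath>

// "Under" compositing: each new fragment is attenuated by what has
// already been laid down (1 - dst.a), so older hairs stay on top.
// In GL this corresponds roughly to
//   glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_ONE,
//                       GL_ONE_MINUS_DST_ALPHA, GL_ONE)
// with premultiplied-alpha source colors (an assumption, not something
// the post specifies).
struct RGBA { float r, g, b, a; };

RGBA blendUnder(RGBA dst, RGBA src)    // src premultiplied by its alpha
{
    float k = 1.0f - dst.a;            // coverage left under existing hairs
    return { dst.r + k * src.r, dst.g + k * src.g,
             dst.b + k * src.b, dst.a + k * src.a };
}
```

With this operator the fur buffer can be drawn in arrival order (newest last) and still composite as if oldest-on-top, which is exactly the "new hairs are covered by the old" behaviour described above.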
Or keeping a tree with all the hairs sorted by depth in memory, and when a new one is drawn, redrawing the area it affects in the correct (depth-based) order. That sounds sensible, but in most cases line segments of the old and the new hair would intersect, and I'm not sure how I should handle that.
Any suggestions on how I could start with either of the above approaches, or any other possible solutions/hacks, would be greatly appreciated. The end result doesn't have to be accurate as long as it looks right...
Apologies for the long post
Alpha Blending - TomorrowPlusX - May 1, 2008 01:35 PM
I've been hesitant to reply since this is more complex than it initially sounds.
My gut instinct is that you need to depth sort your "hairs". You're drawing an astonishing number of hairs, so if I were doing this, I'd put them in a std::set with a comparator function which sorts them by depth. That way, hairs are inserted at the correct spot for the distance from the camera, and you can iterate and draw from back to front.
That being said, you mention that individual hairs may (will) still intersect. I'm willing to bet that if you sort per-hair, the error factor here will be small enough that it may not be visible on casual inspection.
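A minimal sketch of that depth-sorted container, using std::multiset rather than std::set so two hairs at exactly the same depth aren't silently dropped as duplicates (Hair and FartherFirst are illustrative names):

```cpp
#include <set>
#include <vector>
#include <cassert>

struct Hair { float depth; int id; };   // id stands in for real hair data

// Comparator: larger depth (farther from the camera) sorts first, so a
// plain begin-to-end walk of the container is a back-to-front draw order.
struct FartherFirst {
    bool operator()(const Hair& a, const Hair& b) const {
        return a.depth > b.depth;
    }
};

using HairSet = std::multiset<Hair, FartherFirst>;
```

Insertion is O(log n) per hair as strokes are painted, and drawing back-to-front is a single in-order walk of the container each frame.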
If it turns out to be trouble, there's a trick I learned on this forum for drawing grass without depth sorting. You draw it in two passes. The first pass is drawn with depth buffer writes enabled, and with the alpha func set to GL_GREATER and the threshold at some pivot value like 0.5. Then the second pass is drawn with depth writes disabled and the alpha func set to GL_LEQUAL.
What happens is that the first pass draws the opaque fragments with depth testing and writes, and the second pass draws the fringe ( the antialiased bits ). The second pass draws very few pixels, and while there is error, the overall effect is pretty decent looking.
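As a sanity check on the two-pass idea, here is a tiny single-pixel software model of it, assuming a 0.5 alpha threshold, an opaque pass with depth writes and a fringe pass without (all names are illustrative; real code would set this state with glAlphaFunc, glDepthMask and glBlendFunc):

```cpp
#include <cassert>
#include <cmath>

struct Pixel { float color; float depth; };         // one framebuffer texel
struct Frag  { float color; float alpha; float depth; };

// Pass 1: opaque-enough fragments, depth tested AND written.
void pass1(Pixel& p, const Frag& f) {
    if (f.alpha > 0.5f && f.depth < p.depth) {
        p.color = f.color;
        p.depth = f.depth;                          // depth write enabled
    }
}

// Pass 2: fringe fragments, depth tested but NOT written, alpha blended.
void pass2(Pixel& p, const Frag& f) {
    if (f.alpha <= 0.5f && f.depth < p.depth) {
        p.color = f.alpha * f.color + (1.0f - f.alpha) * p.color;
        // no depth write, so later fringe fragments aren't occluded by it
    }
}
```

The model shows why the error stays small: only the low-alpha fringe is drawn order-dependently, and it contributes little to the final color.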
It's hard to say if this approach will be of any use to you, since it looks like all your hairs are at partial opacity in the first place, so there are no opaque pixels to fill in and block out the depth buffer.
Anyway, this is one of the fundamental problems with raster 3d -- there's no single good algorithm for general purpose non-convex transparency rendering. There are only a bunch of approaches which have different applicability in context.
One final thing you might want to look at is "Depth Peeling", where the transparent bits are rendered in N passes: the near and far planes are set up to constrict rasterization to a narrow slab parallel to the camera's near plane, and the scene is rendered N times with that slab shifted forward each pass. It's worth googling.
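As described above (fixed slabs pushed forward each pass, rather than peeling against the previous pass's depth buffer), the slab setup could be sketched like this (makeSlabs and Slab are hypothetical names; each zNear/zFar pair would feed something like glFrustum before its pass):

```cpp
#include <vector>
#include <cassert>

struct Slab { float zNear, zFar; };

// Divide the view depth range [zNear, zFar] into n equal slabs,
// one per rendering pass, back to front or front to back as needed.
std::vector<Slab> makeSlabs(float zNear, float zFar, int n)
{
    std::vector<Slab> slabs;
    float step = (zFar - zNear) / n;
    for (int i = 0; i < n; ++i)
        slabs.push_back({ zNear + i * step, zNear + (i + 1) * step });
    return slabs;
}
```

Each pass then rasterizes only the geometry falling inside its slab, so within a slab the blending order error is bounded by the slab thickness.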