Blending and transparent overlays

Feanor
Unregistered
 
Post: #1
In the application I am working on (not a game, but whatever), I display images using textures mapped onto polygons in the usual way. Now I need to highlight certain irregular pixels (called segments), and I want to do this with a semi-transparent, coloured overlay. A secondary goal is to allow dynamic alteration in the colour of the overlay.

The red book is a bit sparse on techniques, so I'm looking for advice. I'm thinking that I could do a two-pass rendering process. I want to use a bitmap to define the overlay, but I am not sure how to do this. I can draw the initial polygon in the colour I want the overlay to be, and then blend the image texture over it with the usual source and destination blending functions reversed, so that it looks like the overlay was placed on top. How do I define the bitmap so that only the segment region is coloured?

I was thinking that I could use the stencil buffer, but you have to draw into the colour buffer first, apparently (or use glDrawPixels). If I use the bitmap to create a texture, what internal format should I use? I am thinking I should use only an alpha channel. Can I just use GL_ALPHA? I want the most efficient storage, since this program stores a lot of texture data already.

Given a texture made of the bitmap, I should be able to set the colour I want (with full opacity), then draw the target polygon using the bitmap texture. One problem is that the texture may not be correctly sized to fit the polygon, because the image it is overlaying can be different scales depending on user preference. The bitmap overlays will be packed together into the actual texture objects.

The program actually displays deep field images which contain thousands of galaxies. A custom view shows the various galaxies in separate "stamp" views. It is over these cropped sections of the original image that the overlays will go. The bitmaps for the overlays are created using another "image" file containing object index values in place of pixel intensities.

Instead of the light blue rectangles, pixels identified as belonging to the galaxy will be highlighted. Since every galaxy is a different size, the stamp images are all different sizes. I'm going to settle on 32x32, 64x64 and 128x128 sizes for the bitmap/overlays and then pack them together into some large textures. In order to allow for large borders, I will ensure that each galaxy's segment data is stored in a much larger area than necessary. Here are some screen shots:
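To make the packing concrete, here's a sketch of the slot arithmetic I have in mind for uniform-size stamps in a square atlas texture (the sizes and the helper's name are just illustration, not final code):

```c
#include <assert.h>

/* Illustrative layout helper: place uniform-size stamps left-to-right,
   top-to-bottom in a square atlas texture. Writes the pixel offset of
   stamp slot `index` into *x and *y. */
static void stamp_offset(int index, int stamp_size, int atlas_size,
                         int *x, int *y)
{
    int per_row = atlas_size / stamp_size;   /* stamps that fit across one row */
    *x = (index % per_row) * stamp_size;
    *y = (index / per_row) * stamp_size;
}
```

So in a 1024x1024 atlas of 64x64 stamps, slot 17 lands at (64, 64): 16 stamps per row, so it is the second slot of the second row.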

[Image: SCatalogueSources.jpg]

[Image: SCatalogueOverview.jpg]
Sage
Posts: 1,232
Joined: 2002.10
Post: #2
Why not just use an 8 bit alpha texture as your second pass? Once you figure out which pixels are "in the segment", render them to the alpha texture in your favorite way (e.g. create a new texture from the view, or DMA glTexSubImage2D with client storage, rectangle textures, etc.). Then use the alpha texture as a mask for your colored selection quad, overlaying the entire regular texture.

You can probably do it in one pass with multitexturing, a constant color, and the right combine modes.
Feanor
Unregistered
 
Post: #3
It sounds perfect, but I have never tried single pass multi-texturing. The red book does not seem to discuss it, but maybe I missed it. Are there some resources you know of?

I am also unfamiliar with the specifics of an alpha texture, although the concept is fairly intuitive, and is basically what I am doing. I mean, I don't have a "favourite way", because I've never done it before! :ohmy:
Luminary
Posts: 5,143
Joined: 2002.04
Post: #4
Multitexturing stuff is in the 1.2.1 spec, so it's probably in the hardcopy red book.

http://oss.sgi.com/projects/ogl-sample/r...ombine.txt

I second Arekkusu's suggestion: whatever you do, use a texture, and probably multitexture.
Sage
Posts: 1,232
Joined: 2002.10
Post: #5
For multitexturing, Apple's "TexCombine Lab" sample code is instructive, showing the various modes between two texture units.

[Edit: you should consider TexCombine Lab a starting point, because for your application you probably need to look at the separate RGB and A combine modes, and it doesn't show that.]

You can create 8 bit textures by specifying GL_ALPHA instead of GL_RGB in glTexImage2D. See also GL_RGBA, GL_LUMINANCE, etc.

The generic way of creating/updating the texture is fine; just malloc a chunk of ram, draw your selection pixels into it and pass to GL. But, if you find yourself updating the alpha texture over and over and performance becomes a problem, then you should look at the fast methods to update textures. See Apple's "TextureRange" sample code.
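A sketch of that mask-filling step (this assumes your segment file really is a per-pixel map of object index values, as you described earlier; the names here are made up):

```c
#include <stdlib.h>

/* Illustrative sketch: turn a stamp-sized map of object index values into
   an 8-bit alpha mask suitable for glTexImage2D(..., GL_ALPHA, ...).
   Pixels whose index matches the target galaxy become opaque; everything
   else stays transparent. */
static unsigned char *build_alpha_mask(const int *index_map,
                                       int w, int h, int target_index)
{
    unsigned char *mask = malloc(w * h);
    if (!mask) return NULL;
    for (int i = 0; i < w * h; i++)
        mask[i] = (index_map[i] == target_index) ? 255 : 0;
    return mask;   /* caller frees; upload as GL_ALPHA / GL_UNSIGNED_BYTE */
}
```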
Feanor
Unregistered
 
Post: #6
OK, so far I'm not doing it right.

Code:
// create the texture data
    glGenTextures(1, &(texInfo->textureObject));
    glBindTexture(GL_TEXTURE_2D, texInfo->textureObject);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE , GL_STORAGE_CACHED_APPLE);
    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, 1);
    glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE8, myFrame.size.width, myFrame.size.height,
                  0, GL_LUMINANCE, GL_MAP_TYPE, (GLvoid *)texInfo->texels );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );

I am allocating sixteen textures, 1024x1024, and *hopefully* at 8 bpp, not 32 -- it's black and white data, so I'm just going for luminance. But I don't know if it's actually being stored that way, or if that ruins the fast path, or what.

If it's not 8bpp, I am screwed, because my card only has 32MB (stock GF4MX) and a third of that is being used by Quartz Extreme already. What I notice in running the app is that the texture creation is fast, but when I actually go to display the view with the texture data, there is an enormous pause -- I'm guessing it's while the texture data is uploaded to the card, meaning the texture hint is being ignored.
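Doing the arithmetic on that worry (just a back-of-envelope helper, nothing fancy):

```c
/* Back-of-envelope check of the VRAM concern: sixteen 1024x1024 textures
   at 1 byte per texel come to 16 MB, but at 4 bytes per texel they balloon
   to 64 MB -- twice what a 32 MB card has in total. */
static long texture_bytes(int count, int w, int h, int bytes_per_texel)
{
    return (long)count * w * h * bytes_per_texel;
}
```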

Maybe I should take this problem to the GL mailing list. I may as well do that anyway.
Luminary
Posts: 5,143
Joined: 2002.04
Post: #7
What's GL_MAP_TYPE? What's the alignment on your data pointer? Why are you using TEXTURE_2D rather than TEXTURE_RECTANGLE_EXT? Why are you using TexImage2D rather than TexSubImage2D (or is this only the initial upload)?
Sage
Posts: 1,232
Joined: 2002.10
Post: #8
Ok, here is some code that may help. It is one NSOpenGLView subclass.

Code:
/* MyOpenGLView */

#import <Cocoa/Cocoa.h>
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>
#import <OpenGL/glu.h>
#import <OpenGL/glext.h>

@interface MyOpenGLView : NSOpenGLView {
    NSOpenGLContext     *ctx;
    GLuint        texture[2];    // image and overlay (GLuint, since glGenTextures takes GLuint*)
    unsigned char    *selection;    // 128x128 buffer
}

@end
Code:
#import "MyOpenGLView.h"

@implementation MyOpenGLView

- (id)initWithFrame:(NSRect) frameRect {
    int x, y;
    float color[4] = {0.0, 0.5, 1.0, 0.5};    // the selection color
    NSOpenGLPixelFormatAttribute attr[] = {
        NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAAccelerated,
    NSOpenGLPFAColorSize, 24,        // don't need destination alpha for this
        0
    };

    NSOpenGLPixelFormat *nsglFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
    if (self = [super initWithFrame:frameRect pixelFormat:nsglFormat]) {
        ctx = [self openGLContext];
        // do any extra setup here... in this case, make two (rectangle) textures.

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glGenTextures(2, &texture[0]);

        // for base texture, I'll just use the app icon...
        glActiveTexture(GL_TEXTURE0);
        NSImage *icon = [[NSApplication sharedApplication] applicationIconImage];
        NSBitmapImageRep *icon_rep = [NSBitmapImageRep imageRepWithData:[icon TIFFRepresentation]];
        // which is 32 bit RGBA...
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture[0]);
        glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA,
                [icon_rep pixelsWide], [icon_rep pixelsHigh], 0,
                GL_RGBA, GL_UNSIGNED_BYTE, [icon_rep bitmapData]);
                
        // for the selection texture, I'll just make an 8 bit checkerboard...
        glActiveTexture(GL_TEXTURE1);
        selection = malloc(128*128);
        for (x = 0; x < 128; x++) {
            for (y = 0; y < 128; y++) {
                *(selection + x + y*128) = ((x%16)<8?127:255)*((y%16)<8?0:1);
            }
        }
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture[1]);
        glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_ALPHA,
                128, 128, 0, GL_ALPHA, GL_UNSIGNED_BYTE, selection);

        // and here I set up two texture units to use the selection texture as a mask of
        // the GL constant color, on top of the base texture...
        glActiveTexture(GL_TEXTURE0);
            glEnable(GL_TEXTURE_RECTANGLE_EXT);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,  GL_MODULATE);    // normal texturing
        glActiveTexture(GL_TEXTURE1);
            glEnable(GL_TEXTURE_RECTANGLE_EXT);
            glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, color);        // selection color
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,  GL_COMBINE);
            glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,   GL_PREVIOUS);        // tex unit 0 output
            glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,   GL_INTERPOLATE);    // mask with
            glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB,   GL_CONSTANT);        // selection color
            glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB,   GL_TEXTURE);        // using tex as RGB mask
            glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_PREVIOUS);        //
            glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);        // keep base alpha

        // let the window background show through the GL surface, around the app icon
        long opaque = 0;
    [ctx setValues:&opaque forParameter:NSOpenGLCPSurfaceOpacity];    
    }
    [nsglFormat release];

    return self;
}


- (void)dealloc {
    free(selection);
    [super dealloc];
}


- (void)reshape {
    NSRect bounds = [self bounds];
    [ctx makeCurrentContext];
    glViewport(0, 0, bounds.size.width, bounds.size.height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, 128, 128, 0);
}


- (void)drawRect:(NSRect)rect {
    [ctx makeCurrentContext];

    // clear Quartz and GL back buffers, so zero alpha shows the window background
    // ...if your texture will always be opaque, no need to do this...
    [[NSColor windowBackgroundColor] set];
    NSRectFill([self bounds]);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    
    glColor4f(1.0, 1.0, 1.0, 1.0);    // set primary tint
    glBegin(GL_QUADS);            // draw one quad with two textures
        glMultiTexCoord2f(GL_TEXTURE0, 0,   0);
        glMultiTexCoord2f(GL_TEXTURE1, 0,   0);
        glVertex2f                (0,   0);
        glMultiTexCoord2f(GL_TEXTURE0, 0,   128);
        glMultiTexCoord2f(GL_TEXTURE1, 0,   128);
        glVertex2f                (0,   128);
        glMultiTexCoord2f(GL_TEXTURE0, 128, 128);
        glMultiTexCoord2f(GL_TEXTURE1, 128, 128);
        glVertex2f                (128, 128);
        glMultiTexCoord2f(GL_TEXTURE0, 128, 0);
        glMultiTexCoord2f(GL_TEXTURE1, 128, 0);
        glVertex2f                (128, 0);
    glEnd();

    [ctx flushBuffer];
}

@end


This will draw the selection color over the base texture using the selection texture as a mask, in one pass.

Unfortunately, this does not meet your requirement of having a translucent selection color... I don't see a way to do that in one pass without going to three texture units, which would work fine on my Radeon but not on your GF4MX. :/

Maybe OSC or someone else sees a more clever way to do it in two units?

[Edit:] of course, this will make translucent selection colors if you make the mask texture values grey. But you would have to recalc the mask texture if you care about changing the selection alpha dynamically.

If you use mask texture values of 0, 127, and 255 then you get something like this:

[Image: multitexture_mask.jpg]
Feanor
Unregistered
 
Post: #9
Quote:Originally posted by OneSadCookie
What's GL_MAP_TYPE? What's the alignment on your data pointer? Why are you using TEXTURE_2D rather than TEXTURE_RECTANGLE_EXT? Why are you using TexImage2D rather than TexSubImage2D (or is this only the initial upload)?

GL_MAP_TYPE will be either GL_UNSIGNED_INT or GL_FLOAT, depending on a compiler flag -- currently the data is floating point format.

Why should I use TEXTURE_RECTANGLE_EXT? I have never figured out why it is supposed to be better. All of my textures are sized in powers of 2...

This is the initial upload, yes. My pointer is not aligned--should it be on a 16-byte paragraph boundary? I have no information that it should be.
Feanor
Unregistered
 
Post: #10
arekkusu, I think you meant that the selection colour can't be changed, not that it can't be translucent, right? I mean, you've got alpha and all. Smile

If I have to re-build the textures for the overlays in order to update the colour, I guess that's not terrible. Probably the user might fiddle with it once and never again. But what if the overlay texture is white, and the colour set for drawing is some other colour? Oh, wait, but that would still need two passes because I'd have to draw the overlay first with the colour set and then draw the image texture with the colour as white. Hmm.

Thanks very much for the code.
Sage
Posts: 1,232
Joined: 2002.10
Post: #11
Quote:Originally posted by Feanor
arekkusu, I think you meant that the selection colour can't be changed, not that it can't be translucent, right? I mean, you've got alpha and all. Smile


No, to change the color just change the GL constant color in the color[] variable and redraw the view. No need to fiddle with the texture data.

But to change the alpha with this texture unit setup, you do need to change the texture. The more flexible way (for pulsating selections, etc) would need one additional multiply in the texture stage. A third texture unit with another constant would do it.

Quote:Thanks very much for the code.


No problem. "Share and enjoy."
Sage
Posts: 1,232
Joined: 2002.10
Post: #12
Quote:Originally posted by Feanor
Why should I use TEXTURE_RECTANGLE_EXT?

For optimal texture updating performance, you need to do all the things bulleted in the texture range demo:

* use TEXTURE_RECTANGLE_EXT
* use client storage
* use BGRA format

Additionally, I've found during testing that the target texture has to have rowbytes of a multiple of 16 (4 RGBA pixels.) But that might just be the Radeon, it isn't documented.
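In code, the rowbytes rounding I mean is just this (a sketch):

```c
/* Round a row's byte count up to the next multiple of 16 (4 RGBA pixels),
   per the rowbytes restriction I observed on the Radeon. */
static int padded_rowbytes(int width_pixels, int bytes_per_pixel)
{
    int rowbytes = width_pixels * bytes_per_pixel;
    return (rowbytes + 15) & ~15;   /* round up to a multiple of 16 */
}
```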

Now, since your source data is float, you're immediately on the slow path because the CPU will have to do the conversion. And your target format is 8 bit... I'm not sure, but that probably also puts you on the slow path. It'd be nice to know exactly which formats can be accelerated (YUV??)

So, because of the float conversion you might as well forget about rectangle textures. They have some dumb limitations (no repeating or mipmaps) that POT textures don't suffer from, anyway.

If your UI is fairly static (i.e., view changes are user driven and not 60fps timer animation!) then the texture updating is probably not a huge bottleneck anyway (especially compared to any AGP swap involved when you have huge datasets...) On the other hand, for a game copying between offscreen render-to-texture contexts, it is a big deal. Smile
Feanor
Unregistered
 
Post: #13
Well for the overlays, texture updating does not matter at all unless the colour/transparency are being fiddled with interactively by the user. Probably the easy out in that case (and the other problem, below) is to use preview window and then update the real data at the end.

I would like fast texture updating for the images, because there are more than 256 levels of grey in the original CCD data -- the user can change the range of values which are visible, so they can "zoom" in and out on the energy scale of the images, and also "scroll" along the energy scale. This requires re-building all of the textures.

Basically, what you're saying is that I should try setting the internal format to BGRA and to start with integer texel data, and the texture creation will be a lot faster. Hopefully. Well that should be easy to test, actually, because I've written the image processing code with both an integer and a floating point path -- I thought, from reading the red book's discussion of the rendering pipeline, that floating point would be faster. It says that everything gets converted to floating point. How annoying.
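For the record, the energy-scale remapping I described is basically this (a sketch with made-up names, on the integer path):

```c
/* Map CCD intensities inside a user-chosen [lo, hi] window to 0..255,
   clamping everything outside the window. "Zooming" on the energy scale
   narrows [lo, hi]; "scrolling" slides it. */
static unsigned char window_level(float value, float lo, float hi)
{
    if (value <= lo) return 0;
    if (value >= hi) return 255;
    return (unsigned char)((value - lo) / (hi - lo) * 255.0f);
}
```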
Luminary
Posts: 5,143
Joined: 2002.04
Post: #14
Rectangular textures are supposedly faster than power-of-two textures. I certainly got a good speedup from switching to them for rendering QuickTime movies.

Align your image pointer to 32 bytes (G4) or 128 bytes (G5). 32-byte alignment gave me a huge performance improvement doing movie stuff.

Use an 8-bit-per-channel pixel format, client storage and the AGP hint. Experiment with the format; GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGBA, GL_BGRA/GL_UNSIGNED_INT_8_8_8_8_REV are worth a shot. My guess is that only the last three will be fast, and the last one will be fastest.

If you have a recent graphics processor, you may be able to do funky stuff on the card to increase your precision beyond eight bits.
Feanor
Unregistered
 
Post: #15
I have found a number of papers today about how to get high precision dynamic range further down the pipeline. You can get the links from the Mac-GL mailing list. You can make images look dramatically better on a computer display with some of the techniques that are being worked on. This is very current research, too.

Now, the concern I have with the formats you list is that those are not the internal format, but are the data format that you provide, no?

I start with a CCD image with floating point values, and convert it to either float or integer intensity values. Turns out that, yes, integers are noticeably faster. Assuming I can get the dynamic range ops into hardware, I'll be saving a load of time. What I'm still butting my head against is the sheer texture memory load. The source view shown above references portions of every texture. The overview references whatever fits in the view. If I get around to implementing mipmapping, I will ease the memory saturation in the overview, but not the source view.

Are you suggesting that if I convert my data to GL_LUMINANCE_ALPHA (how many bits is that? how do I find out?) the conversion will likely be faster?

Why is client storage killing my performance? It seems that the textures are created faster, but then take much longer to upload to the card -- I have to wait up to five seconds for the view to render the first time!!!! It's horrendous.

What is the best way to align my pointers? Can I force malloc to align itself, or do I have to allocate a slightly larger memory region and then round the pointer value up?
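I'm guessing the round-up approach looks something like this (an untested sketch on my part -- over-allocate, round the pointer up, and stash the original pointer so it can still be freed):

```c
#include <stdlib.h>
#include <stdint.h>

/* Over-allocate, round the pointer up to the alignment boundary (align must
   be a power of 2), and stash the original pointer just below the aligned
   block so aligned_free() can recover it. */
static void *aligned_malloc(size_t size, size_t align)
{
    void *raw = malloc(size + align + sizeof(void *));
    if (!raw) return NULL;
    uintptr_t p = ((uintptr_t)raw + sizeof(void *) + align - 1)
                  & ~(uintptr_t)(align - 1);
    ((void **)p)[-1] = raw;          /* remember what to free */
    return (void *)p;
}

static void aligned_free(void *p)
{
    if (p) free(((void **)p)[-1]);
}
```

Though I gather malloc on Mac OS X already hands back 16-byte-aligned blocks, so this would only matter for the larger 32- or 128-byte alignments.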