Problems with transparency/blending, .pngs, and SDL_image

Sage
Posts: 1,066
Joined: 2004.07
Post: #1
Here is my attempt at loading the bits for a PNG with transparency. Now I'm trying to figure out whether the lack of transparency is due to this code or to my PNGs not actually having an alpha channel. If anyone knows of some error-checking code to see whether this is actually working, I'd appreciate it. Until then I'm going to assume it's my PNG files and work on making them with the transparency.

Code:
SDL_GetRGBA( theTextureSurfacePixels[ position ],
                             theTextureSurface->format,
                             &r,
                             &g,
                             &b,
                             &a );
                rPercent = ((( double )r / 255.0 ) * 100.0 );
                gPercent = ((( double )g / 255.0 ) * 100.0 );
                bPercent = ((( double )b / 255.0 ) * 100.0 );
                aPercent = ((( double )a / 255.0 ) * 100.0 );
                r = ( Uint8 )( rPercent * .31 );
                g = ( Uint8 )( gPercent * .31 );
                b = ( Uint8 )( bPercent * .31 );
                a = ( Uint8 )( aPercent * .31 );
                position = ( x + ( theTextureWidth * y2 ));
                theTextureImage[ position ] = 0;
                theTextureImage[ position ] |= (( r << 11 )
                                                | ( g << 6 )
                                                | ( b << 1 )
                                                | ( a << 16 ));
Moderator
Posts: 699
Joined: 2002.04
Post: #2
Because you haven't enabled blending? Because you haven't specified a blending method?

Mark Bishop
--
Student and freelance OS X & iOS developer
Moderator
Posts: 699
Joined: 2002.04
Post: #3
Code:
theTextureImage[ position ] |= (( r << 11 )
                            | ( g << 6 )
                            | ( b << 1 )
                            | ( a << 16 ));
Intriguing, but that code will do nothing like what you expect or want. Assuming you want four bits per component, you should change the packed pixel format from GL_UNSIGNED_SHORT_5_5_5_1 to GL_UNSIGNED_SHORT_4_4_4_4, and change:

Code:
[i]component bits[/i] = ( Uint8 )( [i]component[/i]Percent * .31 );
to
Code:
[i]component bits[/i] = ( Uint8 )( [i]component[/i]Percent * .15 );

Code:
theTextureImage[ position ] |= (( r << 11 )
                            | ( g << 6 )
                            | ( b << 1 )
                            | ( a << 16 ));
(which will give you a packed pixel with the content ???RRRRGGGGBBBB?, with the bits of the alpha falling off the end...)
to
Code:
theTextureImage[ position ] |= (( r << 12 )
                            | ( g << 8 )
                            | ( b << 4 )
                            | ( a ));
(which will give you a packed pixel with the content RRRRGGGGBBBBAAAA.)

Mark Bishop
--
Student and freelance OS X & iOS developer
Sage
Posts: 1,232
Joined: 2002.10
Post: #4
I already wrote about this, but let's go over it in more detail.

This code doesn't work because it throws away your alpha channel. It's that simple. Step through the code and watch what happens to your data:

* you pass in 32bpp RGBA. Let's make a concrete example and say the first pixel is a translucent orange: 0xFFCC0044 (full red, mostly green, no blue, some alpha.)
* then you convert each channel into a floating point percentage. In this case, R:100% G:80% B:0% A:26%
* then you scale each percentage by 0.31, to convert from 8 bits per channel (0-255) to 5 bits per channel (0-31). The floating point values are truncated when cast to 8-bit unsigned ints, so we have R:0x1F G:0x18 B:0x00 A:0x08.
* then, in the original code "(r<<11) | (g<<6) | (b<<1)", you bit-shift and OR the RGB channels into a packed 5551 pixel: 0xfe00. Note that the alpha bit is zero; it was simply thrown away.
* or, in your new code here, "(r<<11) | (g<<6) | (b<<1) | (a<<16)", you're putting the alpha channel into the high 16 bits of a 32-bit word: 0x0008fe00. But then you store it into an array of GLushorts, so it's thrown away again. And that's not a valid pixel format anyway.

So, that should answer the question "why is there no transparency".

Now for the question "how do I get transparency". Again, the solution is easy. If you have a PNG file with transparency (which you can test by just opening it in Photoshop) then the alpha channel always has 8 bits. You need to keep all 8 bits when you give it to OpenGL as a texture. So use a pixel format that has 8 bits for alpha, like RGBA8. Don't bother with this 32bpp to 16bpp conversion at all, just throw all that code away.



For sealfin, in the rare case that you need to do 32bpp to 16bpp conversion yourself, this code is very inefficient. You've got eight casts, eight muls, four divides, four shifts, and three ors per pixel. The casts and divides in particular are slow, so let's simplify a bit:
Code:
Uint32 input = 0xFFCC00AA;
Uint32 red   = input & 0xF8000000; // high 5 bits of RGB
Uint32 green = input & 0x00F80000;
Uint32 blue  = input & 0x0000F800;
Uint32 alpha = input & 0x00000080; // high 1 bit of A
Uint16 output= ((red>>16) | (green>>13) | (blue>>10) | (alpha>>7));
This is straightforward scalar code, which gives the same result using four ands, four shifts, and three ors.

But as always the fastest way to do anything is DON'T DO IT. If you're converting texture depth for the purposes of uploading a texture to OpenGL just let the OS handle it with glTexImage. It will do the same conversion, and chances are that somebody smart at Apple has spent time to write this conversion in vectorized Altivec code, which will be much faster than the scalar version I give here.

[Edit: changed code sample to explicitly show we're only keeping 1 bit of alpha.]
Moderator
Posts: 699
Joined: 2002.04
Post: #5
Quote:For sealfin, in the rare case that you need to do 32bpp to 16bpp conversion yourself, this code is very inefficient.

As I posted in the previous thread where you tore into my coding, I'm fully aware of how inefficient that code is now. Happily, it isn't indicative of my current coding. Before you ask "then why did you give it out?": it's still a reasonable example of getting OpenGL texturing to work (i.e. it works, but not much more.)

Mark Bishop
--
Student and freelance OS X & iOS developer
Sage
Posts: 1,232
Joined: 2002.10
Post: #6
Humans aren't perfect; writing bad code and making mistakes is part of life. So is learning!
I'm more worried about people that copy code without understanding it-- you won't learn unless you step through it to see how it works.
Sage
Posts: 1,066
Joined: 2004.07
Post: #7
arekkusu, could you explain your code so that I know how it works and can implement it here? I'd appreciate it. Here is the original code from sealfin, before I messed it up with all my different attempts at mipmapping and alpha channels:

Code:
signed int CreateTexture2D( GLuint &theTextureId,
                            Uint16 &theTextureWidth,
                            Uint16 &theTextureHeight,
                            GLushort *&theTextureImage,
                            const char *theTextureImagePath )
{
    SDL_Surface *theTextureSurface;
    Uint32      *theTextureSurfacePixels;
    
    if(( theTextureSurface = IMG_Load( theTextureImagePath )) == NULL )
    {
        fprintf( stderr,IMG_GetError());
        return -1;
    }
    
    {
        SDL_Surface *temporarySurface = SDL_DisplayFormat( theTextureSurface );
        if( temporarySurface == NULL )
        {
            fprintf( stderr,
                     IMG_GetError());
            return -1;
        }
        SDL_FreeSurface( theTextureSurface );
        theTextureSurface = temporarySurface;
    }
    
    SDL_LockSurface( theTextureSurface );
    
    theTextureWidth = theTextureSurface->w;
    theTextureHeight = theTextureSurface->h;
    theTextureSurfacePixels = ( Uint32* )theTextureSurface->pixels;
    theTextureImage = ( GLushort* )malloc(( sizeof( GLushort ) * ( theTextureWidth * theTextureHeight )));
    
    {
        Uint16 x,
        y2;
        Sint16 y;
        Uint32 position;
        Uint8 r,
            g,
            b,
            a;
        double rPercent,
            gPercent,
            bPercent,
            aPercent;
        
        
        
        for( y = ( theTextureHeight - 1 ),
             y2 = 0;
            
             y >= 0;
            
             y --,
             y2 ++ )
        {
            for( x = 0;
                 x < theTextureWidth;
                 x ++ )
            {
                position = ( x + ( theTextureWidth * y ));
                SDL_GetRGBA( theTextureSurfacePixels[ position ],
                             theTextureSurface->format,
                             &r,
                             &g,
                             &b,
                             &a );
                rPercent = ((( double )r / 255.0 ) * 100.0 );
                gPercent = ((( double )g / 255.0 ) * 100.0 );
                bPercent = ((( double )b / 255.0 ) * 100.0 );
                aPercent = ((( double )a / 255.0 ) * 100.0 );
                r = ( Uint8 )( rPercent * .31 );
                g = ( Uint8 )( gPercent * .31 );
                b = ( Uint8 )( bPercent * .31 );
                a = ( Uint8 )( aPercent * .31 );
                position = ( x + ( theTextureWidth * y2 ));
                theTextureImage[ position ] = 0;
                theTextureImage[ position ] |= (( r << 11 )
                                                | ( g << 6 )
                                                | ( b << 1 ));
            }
        }
    }
    
    SDL_UnlockSurface( theTextureSurface );
    SDL_FreeSurface( theTextureSurface );
    
    glGenTextures( 1, &theTextureId );
    glBindTexture( GL_TEXTURE_2D, theTextureId );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexImage2D( GL_TEXTURE_2D, 0, 4,
                  theTextureWidth, theTextureHeight,
                  0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1,
                  theTextureImage );
    
    return 0;
}
Sage
Posts: 1,066
Joined: 2004.07
Post: #8
I'm going to take a guess, but does that simply replace the
Code:
SDL_GetRGBA( theTextureSurfacePixels[ position ],
             theTextureSurface->format,
             &r,
             &g,
             &b,
             &a );
rPercent = ((( double )r / 255.0 ) * 100.0 );
gPercent = ((( double )g / 255.0 ) * 100.0 );
bPercent = ((( double )b / 255.0 ) * 100.0 );
aPercent = ((( double )a / 255.0 ) * 100.0 );
r = ( Uint8 )( rPercent * .31 );
g = ( Uint8 )( gPercent * .31 );
b = ( Uint8 )( bPercent * .31 );
a = ( Uint8 )( aPercent * .31 );
position = ( x + ( theTextureWidth * y2 ));
theTextureImage[ position ] = 0;
theTextureImage[ position ] |= (( r << 11 )
                                | ( g << 6 )
                                | ( b << 1 ));
}
part? I read your post a few times, and I think I understand what's going on, but I'm not sure where to place it just yet.
Sage
Posts: 1,232
Joined: 2002.10
Post: #9
Sure, there are basically three steps:
* load the image data from disk
* set the texture parameters
* upload the texture

The latter two are plain OpenGL, but the first step is OS-dependent. For example, in Cocoa you might do this:
Code:
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[NSData dataWithContentsOfFile:@"path/to/myfile.jpg"]];
What's important is that you can get a pointer to the pixel data, and some information about the pixel format, like width, height, bitdepth, etc, which you need to pass to OpenGL.

You're using SDL_Image, so let's modify your existing code:

Code:
int CreateTexture2D( GLuint &theTextureId,
                     Uint16 &theTextureWidth,
                     Uint16 &theTextureHeight,
                     const char *theTextureImagePath ) {
                    
        // load the image from disk (decompressing, etc)
        SDL_Surface *theTextureSurface;        
        if(( theTextureSurface = IMG_Load( theTextureImagePath )) == NULL ) {
                fprintf( stderr, "%s\n", IMG_GetError());
                return -1;
        }
        
        // at this point, theTextureSurface contains some type of pixel data.
        // SDL requires us to lock the surface before using the pixel data:
        SDL_LockSurface( theTextureSurface );

        // SDL's surface struct has the width, height, and data pointer:
        theTextureWidth = theTextureSurface->w;
        theTextureHeight = theTextureSurface->h;
        GLvoid *theTextureImage = theTextureSurface->pixels;

        // we could sanity check those values, to make sure they aren't zero,
        // and the dimensions are powers of two, if needed.

        // check the pixel format, since it could depend on the file format:
        GLenum internal_format;
        GLenum img_format, img_type;
        switch (theTextureSurface->format->BitsPerPixel) {
            case 32: img_format = GL_RGBA;          img_type = GL_UNSIGNED_BYTE;
                     internal_format = GL_RGBA8;    break;
            case 24: img_format = GL_RGB;           img_type = GL_UNSIGNED_BYTE;
                     internal_format = GL_RGB8;     break;
            case 16: img_format = GL_RGBA;          img_type = GL_UNSIGNED_SHORT_5_5_5_1;
                     internal_format = GL_RGB5_A1;  break;
            default: img_format = GL_LUMINANCE;     img_type = GL_UNSIGNED_BYTE;
                     internal_format=GL_LUMINANCE8; break;
        }

        // now create a new texture ID and bind to it.
        // note that this ID should be deleted when it is no longer needed.
        glGenTextures( 1, &theTextureId );
        glBindTexture( GL_TEXTURE_2D, theTextureId );

        // set some parameters for this texture.
        // these settings depend entirely on how you intend to use the texture!
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

        // upload the texture data, letting OpenGL do any required conversion.
        glTexImage2D( GL_TEXTURE_2D, 0, internal_format,
                      theTextureWidth, theTextureHeight, 0,
                      img_format, img_type, theTextureImage );

        // unlock and clean up the texture surface;
        // OpenGL has its own copy of the pixel data now
        SDL_UnlockSurface( theTextureSurface );
        SDL_FreeSurface( theTextureSurface );

        // catch any GL errors for debugging purposes
        glError();
        return 0;
}

You'll notice that I changed the signature of the function by removing the "GLushort *&theTextureImage" parameter. That was bogus, because it assumed the image data was 16bpp and that the pointer remained valid after the texture function returns, neither of which is safe.

If you do need to keep the pixel data pointer around, for texture animation or whatever, then you probably should break this function into two parts-- loading the image, and making a texture from it, so that you can better handle the memory management.

Likewise, there are a few assumptions I made that might not be right for you:
* I assume non-mipmapped POT textures. It's easy to add mipmaps, a bit more complicated to handle NPOT.
* I assume image bitdepths are 8, 16, 24, or 32. This code won't handle your 48bpp Photoshop files.
* I assume 8bpp images are to be used as greyscale (i.e. detail) textures. You might want to use them as masks though; use GL_ALPHA and GL_ALPHA8 in that case.
* I assume you're going to be scaling and rotating the texture; if not you could use different parameters.

You can see that you'll also need to change your function arguments around if you want to allow any of those assumptions to be variable-- maybe you need both alpha and detail textures, for example. I don't know.

Also, I should stress that I don't have the SDL SDK installed-- I just wrote this code in Safari. So test it Wink

[Edit: cleaned up code formatting]
Sage
Posts: 1,232
Joined: 2002.10
Post: #10
And this is a handy glError() macro:
Code:
#define glError() { \
    GLenum err = glGetError(); \
    while (err != GL_NO_ERROR) { \
        printf("glError: %s caught at %s:%u\n", (char *)gluErrorString(err), __FILE__, __LINE__); \
        err = glGetError(); \
    } \
}

You should really only define it for debug targets.
Sage
Posts: 1,066
Joined: 2004.07
Post: #11
Ok. I finally tried out this code. All I get is white boxes. There are no textures at all. I'm using 1 TGA file and 4 JPG files. I'm going to change the TGA into a PNG but that is beside the point. I'm not sure what's going wrong but I'll work on it.
Sage
Posts: 1,066
Joined: 2004.07
Post: #12
Can somebody help me get this code (or the other code with transparency) working? I've tried various things with no successes yet.
Member
Posts: 184
Joined: 2004.07
Post: #13
Nick, I think it would help for you to do a little more debugging on your own; simply saying that a piece of code is not working isn't really helpful, especially when we only have a snippet of your entire codebase. Have you checked the buffer that you pass to glTexImage2D() to make sure the image data is correct? If it is, have you checked that the GL state is correct, i.e. that you have done all the right glEnables and so on, so that your code matches the texturing code from an OpenGL sample?
Sage
Posts: 1,066
Joined: 2004.07
Post: #14
I know the problem lies in arekkusu's code (he did just write it in Safari, after all), because if I replace it with the original code I get textures again. All the code outside of this function works fine. I'll work on it, but I was hoping arekkusu might be able to help a little.
Sage
Posts: 1,232
Joined: 2002.10
Post: #15
<obi>Use the debugger, Nick</obi>

Seriously, it will be to your benefit to step through the code line by line and watch what happens. Put glError() everywhere.

For reference, this code works perfectly (if inefficiently) for POT textures:
Code:
    // texture init
    glGenTextures(2, tex);
    GLenum internal_format;
    GLenum img_format, img_type;    
    
    // image one
    glBindTexture(GL_TEXTURE_2D, tex[0]);
    NSImage *icon = [NSImage imageNamed:@"MyTexture.png"];
    NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[icon TIFFRepresentation]];
    switch ([rep bitsPerPixel]) {
        case 32: img_format = GL_RGBA;      img_type = GL_UNSIGNED_BYTE;          internal_format = GL_RGBA8;      break;
        case 24: img_format = GL_RGB;       img_type = GL_UNSIGNED_BYTE;          internal_format = GL_RGB8;       break;
        case 16: img_format = GL_RGBA;      img_type = GL_UNSIGNED_SHORT_5_5_5_1; internal_format = GL_RGB5_A1;    break;
        default: img_format = GL_LUMINANCE; img_type = GL_UNSIGNED_BYTE;          internal_format = GL_LUMINANCE8; break;
    }
    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, [rep pixelsWide], [rep pixelsHigh], 0, img_format, img_type, [rep bitmapData]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

This uses Cocoa to load the image (could be png, tga, jpg, tiff, gif, whatever). The code I posted above is the same thing, trying to shoehorn it into your SDL stuff.