Texture Color is wrong

Post: #1
Greetings -

I am porting my Linux/Windows game demo to my Mac. Everything is working great (OpenGL/SDL/GLEW are cross-platform beauties :-) ). Except.....
My texture colors are wrong...
The greens are OK but the reds are blue. My hunch is that it is an endianness issue. I am using SDL_image, and I optimistically thought it handled that. After some googling I discovered SDL_Endian.h, but I do not understand how to use it to convert my textures. How do you use SDL's endian facilities to convert surfaces?

Thanks in advance for any help/tips/advice....
Posts: 834
Joined: 2002.09
Post: #2
It's very unlikely to be endianness. Most file formats define pixels as being single bytes in a long row. Instead, I'd try replacing GL_RGB in glTexImage2D with GL_BGR. :)
Posts: 5,143
Joined: 2002.04
Post: #3
If they're "ARGB" you'll need something like:

#if defined(__BIG_ENDIAN__)

Remember, Macs come in both endiannesses these days ;)
Posts: 834
Joined: 2002.09
Post: #4
Hey Keith, just because you made the jump doesn't mean we need to respect the way you eat your eggs. ;)
DesertPenguin, word of advice: when posts contradict each other on this site, go with whatever OneSadCookie says. ;)
Post: #6
Thank you all for your help...I am making progress.

I want to flip my surface before rendering (it is upside down). I have crafted the following. It is simple code that loops through each pixel in a surface and copies it to an identical surface while inverting the y-values.

It works and I am happy, but I do not understand why the >> 2 is needed. (The code is based on similar code from the net.)

Uint32* srcbuff32;
  Uint32* destbuff32;
  Uint32 x, y;

  if (srcimage->format->BytesPerPixel == 4) {
      for (x = 0; x < (Uint32)srcimage->w; x++) {
          for (y = 0; y < (Uint32)srcimage->h; y++) {
              srcbuff32 = (Uint32 *) srcimage->pixels;
              srcbuff32 += ((y * srcimage->pitch) >> 2) + x;
              destbuff32 = (Uint32 *) destimage->pixels;
              Uint32 hh = static_cast<Uint32>(destimage->h - 1);
              destbuff32 += (((hh - y) * destimage->pitch) >> 2) + x;
              *destbuff32 = *srcbuff32; // copy the pixel into the flipped row
          }
      }
  }

Thanks again for any feedback.
Posts: 776
Joined: 2003.04
Post: #7
"->pitch" is width measured in bytes, not pixels, and since you are using 32 bit pixels aka 4 bytes, you need to divide by 4.

(">> 2" is equivalent to "/ 4")
Post: #8
Ahhh...that makes complete sense. I knew it was dividing...but couldn't think of why...

Thanks again for your help.