death by glGenTextures(), with TGA

Moderator
Posts: 698
Joined: 2002.04
Post: #1
I've been slowly working through the GLUT ports of NeHe's tutorials, recoding/extending them in Carbon to better understand the mechanics behind them, but I've hit a wall with tutorial 6: a recoding of the TGA texture loader originally written by Anthony Parker kills the app every time the glGenTextures() statement is reached (or on any statement past it if glGenTextures() is commented out). Any suggestions would be appreciated! thanks!


p.s. the relevant bit is right at the bottom - sorry!


Code:
typedef struct sSkin
{
  GLuint    x,        /* width/height can exceed 255, so GLuint rather than GLubyte */
            y;
  GLubyte   depth,
            *skin;
  GLuint    id;
} tSkin;


...



//
//
signed int ISkinFile_tga(tSkin *outSkin,
                         char *srcFilePath)
{
  GLubyte        TGAHeader[12],
                TGAComparisonHeader[12] = {0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0},
                TGAFileHeader[6],
                bytesPerPixel;
  GLuint        skinMemorySize,
                swap,
                depthType = GL_RGBA;
  FILE            *file;
  unsigned int    loop;


  if((file = fopen(srcFilePath, "rb")) == NULL)
    return kErr_FileOpenFailed;
  if(fread(TGAHeader, 1, sizeof(TGAHeader), file) != sizeof(TGAHeader))
    return kErr_TGAHeaderSizeWrong;
  if(memcmp(TGAHeader, TGAComparisonHeader, sizeof(TGAHeader))  != 0)
    return kErr_TGAHeaderContentWrong;
  if(fread(TGAFileHeader, 1, sizeof(TGAFileHeader), file) != sizeof(TGAFileHeader))
    return kErr_TGAFileHeaderSizeWrong;
  if((outSkin->x = TGAFileHeader[1]*256+TGAFileHeader[0]) <= 0)
    return kErr_SkinXZeroOrNegative;
  if((outSkin->y = TGAFileHeader[3]*256+TGAFileHeader[2]) <= 0)
    return kErr_SkinYZeroOrNegative;
  if(((outSkin->depth = TGAFileHeader[4]) != 24) && (outSkin->depth != 32))
    return kErr_SkinWrongDepth;
  bytesPerPixel = outSkin->depth/8;
  skinMemorySize = outSkin->x*outSkin->y*bytesPerPixel;
  outSkin->skin = (GLubyte *)malloc(skinMemorySize);
  if(fread(outSkin->skin, 1, skinMemorySize, file) != skinMemorySize)
    return kErr_SkinMemorySizeDidNotEqualReservedMemorySize;
  for(loop = 0;
      loop < skinMemorySize;
      loop += bytesPerPixel)
  {
    swap = outSkin->skin[loop];
    outSkin->skin[loop] = outSkin->skin[loop+2];
    outSkin->skin[loop+2] = swap;
  }
  fclose(file);
  

    //    dies here (or at any other statement onwards if I comment this out) : I've tried _lots_ of variations on this statement, with no difference.
  glGenTextures(1, &outSkin[0].id);
  glBindTexture(GL_TEXTURE_2D, outSkin[0].id);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  if(outSkin[0].depth == 24)
    depthType = GL_RGB;
  glTexImage2D(GL_TEXTURE_2D, 0, depthType, outSkin[0].x, outSkin[0].y, 0, depthType, GL_UNSIGNED_BYTE, outSkin[0].skin);
  return 0;
}

Mark Bishop
--
Student and freelance OS X & iOS developer
Member
Posts: 304
Joined: 2002.04
Post: #2
I don't think it's endian issues, since you're dealing entirely with GLubytes.

What I would do is printf or NSLog (or use the debugger) to look at all the variables right before you get to:

outSkin->skin = (GLubyte *)malloc(skinMemorySize);

So print out skinMemorySize, bytesPerPixel, outSkin->depth, outSkin->x, outSkin->y, etc.

I'm guessing that something is screwing up skinMemorySize, so you aren't allocating enough memory, and then when you go swapping bytes you're actually trampling memory outside the buffer rather than the picture data.
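
Something like this, right before the malloc (just a sketch; the %u casts assume the GLuint/GLubyte fields from the tSkin struct above):

Code:
/* Hypothetical debug logging, placed immediately before the malloc in
   ISkinFile_tga(): dump everything that feeds into skinMemorySize so a
   bad header read shows up straight away in the log. */
printf("TGA: x=%u y=%u depth=%u bytesPerPixel=%u skinMemorySize=%u\n",
       (unsigned)outSkin->x, (unsigned)outSkin->y, (unsigned)outSkin->depth,
       (unsigned)bytesPerPixel, (unsigned)skinMemorySize);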
Luminary
Posts: 5,143
Joined: 2002.04
Post: #3
It could also be that this function is being called before a valid OpenGL context has been created. Windows is far more relaxed about such issues than the Mac.
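
For example, a rough sketch of a guard at the top of the texture-upload section, assuming AGL/Carbon as in the code above (the kErr_NoGLContext name is a placeholder, not one of the original error codes):

Code:
/* Sketch: refuse to touch GL if no context is current.
   aglGetCurrentContext() is declared in <AGL/agl.h> and returns NULL
   when nothing has been made current yet. */
if(aglGetCurrentContext() == NULL)
  return kErr_NoGLContext;    /* placeholder error code */

glGenTextures(1, &outSkin[0].id);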
Moderator
Posts: 698
Joined: 2002.04
Post: #4
thanks for the suggestion codemattic, but all those error returns get written to a log file, so I know it isn't any of them (plus I've tried commenting out each 'if() return' error pair one at a time).

um, I've just moved a few lines around, with the result that I now get a white screen but no crash; I'm certain what I've changed should have made no difference whatsoever, it just made the code more readable... too late to do any more right now.

Mark Bishop
--
Student and freelance OS X & iOS developer
kainsin
Unregistered
 
Post: #5
I've had this problem before as well. Just as OneSadCookie said, make sure you have an OpenGL context all set up before calling glGenTextures. There are also a couple other functions that will crash the application if no context is created.
Moderator
Posts: 698
Joined: 2002.04
Post: #6
problem is, I've already created an AGLContext (and as I've posted above, I've got past the 'crash at glGenTextures()' problem, and I'm now at the 'textures are all white' problem instead Rolleyes )

Mark Bishop
--
Student and freelance OS X & iOS developer
Luminary
Posts: 5,143
Joined: 2002.04
Post: #7
glEnable(GL_TEXTURE_2D) usually fixes that one for me Wink

If not, it's most likely to be some kind of VRAM issue -- what sort of context are you creating (color size? depth size?), what size textures are you creating, and what video card are you running on?
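
For reference, a minimal sketch of where that call usually lives: in the one-time GL setup, once the context has been made current (nothing here is specific to the loader above):

Code:
/* One-time GL state setup, run after the AGL context is current.
   Without GL_TEXTURE_2D enabled, textured geometry draws untextured,
   which usually shows up as solid white. */
glEnable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);  /* with the default GL_MODULATE texture env,
                                 a non-white current color tints the texture */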
Moderator
Posts: 698
Joined: 2002.04
Post: #8
(lots of self-derisive laughter...)

yes, I'd forgotten to glEnable(); thanks OneSadCookie! (still can't believe I overlooked that...Rolleyes )

Mark Bishop
--
Student and freelance OS X & iOS developer