Is a 24-bit texture really 24-bit on the GPU?

Sage
Posts: 1,199
Joined: 2004.10
Post: #1
Simple question. I'm trying to be efficient about how much I store on the GPU and how much I can stream (when dealing with geometry). Recently it occurred to me that my JPEG loading code was actually uploading textures as 32-bit RGBA, so I made some changes and now upload them as 24-bit RGB. Of course, my PNGs (with alpha) still upload as 32-bit.
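Roughly, the change was from the first call to the second (just a sketch; width, height, and the pixel pointers are placeholders, not my actual loader code):

    /* Old path: JPEG data padded out to 32-bit RGBA before upload */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);

    /* New path: hand GL the JPEG's 24-bit RGB data directly */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);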

Now, when checking it out in GL Profiler, I see that the internal format is RGB, which is great, but it says the source format is RGBA. So I'm wondering: does GL simply convert all 24-bit texture data to 32-bit on upload?
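(For reference, the in-code equivalent of that check looks something like the following; this is a sketch, assuming the texture in question is currently bound to GL_TEXTURE_2D.)

    GLint fmt = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
    /* fmt comes back as an enum such as GL_RGB, GL_RGB8, or GL_RGBA8;
       for an unsized GL_RGB request the driver may just echo the request
       back rather than reveal how it actually stores the texels. */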

I can certainly see why that would make sense from a performance standpoint, and if that's the case I might as well go back to my old code, which uploads everything as 32-bit, so I can skip the conversion hit.

Anybody know? Or is this one of those things that differs by hardware/driver, where the programmer has no control?
Sage
Posts: 1,234
Joined: 2002.10
Post: #2
This differs by hardware and you have no control over it.

However, it is a good bet that internally the hardware wants pixels on native data-size boundaries (so 8 bits for A, I, and L textures, 16 bits for LA, and 32 bits for RGB or RGBA).

As one concrete example, check out table 4.6 in the NVIDIA GPU Programming Guide. You can see that they don't natively support RGB8, but they do support XRGB8, where "X" means "unused".
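If you want to hand the driver something close to that native layout yourself, the commonly recommended 32-bit upload path looks something like this (a sketch only; bgraPixels, width, and height are placeholders, and you should verify what your particular hardware/driver prefers):

    /* Request a sized 32-bit internal format and supply the data in a
       layout the card commonly stores natively; the fourth byte is just
       padding if the image has no alpha. The BGRA + 8_8_8_8_REV combo is
       the usual fast-path recommendation, not a guarantee. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, bgraPixels);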
Sage
Posts: 1,199
Joined: 2004.10
Post: #3
Good to know, thanks. I'm going to drop my 24-bit conversion, then, since it was manually stripping the alpha channel and struck me as a likely candidate for endianness mistakes.

I need to buy an MBP.
Luminary
Posts: 5,143
Joined: 2002.04
Post: #4
It's also a good candidate for going *really slow*, since chances are the OpenGL framework is re-adding a fourth channel before sending the data to the GPU.