Display massively large image data

Feanor
Unregistered
 
Post: #1
I want to use OpenGL to display a very large image, with dimensions of 4096x4096. It is not for a game, but for viewing images of space.

I will probably just break it up into sixteen 1024x1024 tiles and live with that, but if anyone has a more efficient or slicker solution, I'd be interested. The viewing feature of the program allows scrolling and zooming around the image and highlights "sources" -- galaxies and stars.
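The tile arithmetic being described is simple enough to sketch. This is only an illustration of the sixteen-tile split, with names (`IMAGE_SIZE`, `TILE_SIZE`, `tile_origin`) that are mine, not from any real API:

```c
/* Sketch of splitting a 4096x4096 image into 1024x1024 tiles.
 * All names here are illustrative. */
#define IMAGE_SIZE 4096
#define TILE_SIZE  1024
#define TILES_PER_SIDE (IMAGE_SIZE / TILE_SIZE)  /* 4 per side, 16 total */

/* Pixel offset of tile (tx, ty) within the full image. */
static void tile_origin(int tx, int ty, int *x, int *y)
{
    *x = tx * TILE_SIZE;
    *y = ty * TILE_SIZE;
}
```

Each tile then becomes its own texture, drawn as a quad at that offset.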

I already have something working with limited interactivity, but without the scrolling or free zooming, so I'm concerned the simple solution might bog down. I'll be working on it and check back.

One more thing: the texel data is grayscale, generated raw by me from a FITS image, a format used by astronomers. What I might need is a means of compressing the data. It all has to be done in the app -- texels are regenerated from the original FITS data each time the application launches. (FITS data is all floating point, and the energy levels vary by as much as a factor of 64k -- straight from the CCD on the Hubble, if you're curious.)
Member
Posts: 469
Joined: 2002.10
Post: #2
CocoaBlitz's texture tiling handles very large images by slicing up anything bigger than the max texture size. On my 450MHz G3 (Rage 128) I don't get great performance, but it works pretty well. Rectangular images also work splendidly.

---Kelvin--
15.4" MacBook Pro revA
1.83GHz/2GB/250GB
Frank C.
Member
Posts: 446
Joined: 2002.09
Post: #3
Grab this sample code: http://developer.apple.com/samplecode/Sa..._Image.htm

It can tile large images, scale/zoom/rotate etc...
Feanor
Unregistered
 
Post: #4
Quote:Originally posted by Frank C.
Grab this sample code: http://developer.apple.com/samplecode/Sa..._Image.htm

It can tile large images, scale/zoom/rotate etc...

Now I remember exactly why I like Cocoa so much better than Carbon (and Obj-C over C). This code is gross. Thanks for the link, though. I'll try to see through the implementation to the underlying algorithms.

For the record, a JPEG conversion of the image data I'm starting with renders in this sample at 9fps during rotation on my dual-867MHz G4. Fast enough, but not as fast as I was hoping for from an example of "how to do it". Oh well, I can't expect them to write my code for me.
OneSadCookie
Luminary
Posts: 5,143
Joined: 2002.04
Post: #5
* Use GL_LUMINANCE as the internal texture format, since your images are grayscale. That should halve or quarter the VRAM usage of the textures (assuming the Mac drivers work the same as the PC ones).

* Tile the image into parts small enough that paging one part isn't a huge performance hit (probably 512x512 or 256x256).

* Use mipmaps, so that when you're zoomed out you use a lower res image.

* Test using frustum culling to avoid drawing some of the tiles.

* Investigate whether APPLE_texture_range and/or APPLE_client_storage provide performance benefits.
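The rough memory arithmetic behind the luminance and mipmap tips above can be sketched like this (my numbers and names, not from the thread; the 1/3 mipmap overhead is the usual approximation for a full pyramid):

```c
/* Approximate VRAM cost of a texture: width x height x bytes per
 * texel, with a complete mipmap chain adding roughly one third more.
 * Illustrative only. */
static long texture_bytes(int w, int h, int bytes_per_texel, int mipmapped)
{
    long base = (long)w * h * bytes_per_texel;
    return mipmapped ? base + base / 3 : base;
}
```

For 4096x4096 that works out to 16MB as luminance versus 64MB as RGBA, which is why the internal format choice matters so much here.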
Member
Posts: 114
Joined: 2002.08
Post: #6
Dr. Light puts on a helmet
You could make a virtual screen in METAL, then fiddle with pieces of the image rather easily. Not good enough if you're using a special library to handle special data pieces, though.

"Most nutritionists say that Twinkies are bad. But they're not, they're very very good."
OneSadCookie
Luminary
Posts: 5,143
Joined: 2002.04
Post: #7
OneSadCookie pummels Dr. Light until he realises that if Fëanor can only get 9FPS using OpenGL, METAL is only going to get one frame every few seconds.

Dr. Light's helmet shatters into a million tiny pieces
DoG
Moderator
Posts: 869
Joined: 2003.01
Post: #8
That would be 16MB of grayscale data. Hmm. I think the fact that you have high-dynamic-range images means you must first apply some aggressive gamma correction to shrink it to 256 grayscale levels.

What is your setup anyhow? Just reading those 16MB of texels at 10fps means 160MB/s with grayscale (luminance) textures and 640MB/s with 32-bit textures. Add the overhead on top and that is quite some memory bandwidth on its own. If it doesn't fit in video memory, AGP transfer speeds will bog it down hugely.

If you have it in RGBA, I suggest using luminance; you might see quite a speedup. Also, what filtering do you use?

I am thinking you should have a lot of smaller textures, in the range of 128x128 or 256x256, and really only put on the screen what can be seen.
Feanor
Unregistered
 
Post: #9
I wrote a custom class to filter the original data and map the floating-point values onto 0->255, with the option to use varying ranges (90-100% or a specified max/min). It includes a square-root scaling function, using a hybrid of inio's and other fast square-root assembly from the thread on that topic. The data filter works fine.

To make the grayscale useful in OpenGL I do this:

Code:
glPixelTransferf( GL_RED_SCALE,   16777216 );
glPixelTransferf( GL_GREEN_SCALE, 16777216 );
glPixelTransferf( GL_BLUE_SCALE,  16777216 );

which seems to do the trick.

Now for contrast/brightness correction, I am not there yet -- I intend to come up with something, but for now I want to get the image overview working. Because this is scientific data, the filter mostly has to be uniform -- no histograms needed, although I might add one for completeness if I get time. Maybe I can wire a brightness controller to the pixel transfer function and rebuild the textures.
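Wiring a brightness control into the pixel-transfer path could be as simple as folding a user factor into the scale already being used (the 16777216 = 2^24 from the snippet above). A hedged sketch; the function is mine, and it just computes the value you would pass to glPixelTransferf(GL_RED_SCALE, ...) and friends before re-uploading the tiles:

```c
/* Combine the base pixel-transfer scale with a user brightness
 * factor (roughly 0.5 .. 2.0).  Illustrative only. */
#define BASE_SCALE 16777216.0f   /* 2^24, as in the snippet above */

static float transfer_scale(float brightness)
{
    return BASE_SCALE * brightness;
}
```

Each slider change would then mean re-running the texture uploads with the new scale, which is why it only suits occasional adjustment.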

I am using luminance internally although I've switched it back and forth to RGB in a test program (granted not using the full image) and saw no memory/performance variations.

OSC, are you suggesting that OpenGL won't frustum cull for me anyway? I mean, how long does it take to specify a few vertices? If they are invisible, I trust the driver not to download the textures at all -- I suppose that is naïve.

Keep in mind that this is an image viewer, not a game or anything. Even 5fps is probably fast enough, but obviously I don't want to saturate the bus/memory if I don't have to, especially as the primary user has a Cinema Display and might have a pile of other apps open, some of which might also have big images, giving Quartz Extreme quite a headache.

The overview window which displays the entire image is a secondary window. The main window will show postage stamps from all over the image, so probably there will still be a fair bit of memory traffic if the user scrolls the stamps view a lot.

Edit: Oh, my "setup" meaning my rig: dual-867MHz G4. The end user has a dual 1.42GHz. We both have the default graphics (a GeForce4 for me, and I think a Radeon 9000 for the end user).
Member
Posts: 114
Joined: 2002.08
Post: #10
16 MB, Holy Cow!
Where may I attain one such image? May not matter if QuickTime can't open it. Sad

"Most nutritionists say that Twinkies are bad. But they're not, they're very very good."
DoG
Moderator
Posts: 869
Joined: 2003.01
Post: #11
By filtering I meant bilinear/trilinear/none? I would assume bilinear filtering on a mipmapped texture would be fastest, or mipmapped with no filtering at all. And check with OpenGL Profiler whether the textures are kept in VRAM; if not, it might explain the bad performance.
henryj
Unregistered
 
Post: #12
What's the problem here? 4096x4096 isn't a big image. QuickTime and OpenGL should handle this without problems. Your video hardware might not be able to handle it in one piece, so tile it. The Apple sample code does this.

All Apple sample code is ugly. It's not entirely Carbon's fault.

glPixelTransferf

This is a killer. It will kick you off the fast path. Try not to use it. It also implies you are using glDrawPixels??? Don't. Use textures. They are MUCH faster.
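One way to stay on the fast path, following this advice, is to do the scaling on the CPU once at load time instead of through glPixelTransferf, so the texture upload needs no transfer function at all. A sketch under my own assumptions (the 16-bit target format and the function name are mine):

```c
/* Expand an 8-bit luminance value to 16 bits by replication:
 * v * 257 maps 0x00 -> 0x0000 and 0xFF -> 0xFFFF exactly, so the
 * full range is preserved without any pixel-transfer scaling.
 * Illustrative only. */
static unsigned short expand_8_to_16(unsigned char v)
{
    return (unsigned short)(v * 257);
}
```

Run over the whole tile buffer once when regenerating from FITS, the per-upload cost disappears and the driver sees plain, unscaled texel data.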

If you use linear filtering you may be able to get a speed increase for showing thumbnails. Some testing seems to indicate that GL won't copy lines it doesn't need.

Check out the texture range sample code also.
Feanor
Unregistered
 
Post: #13
Quote:Originally posted by DoG
By filtering I meant bilinear/trilinear/none? I would assume bilinear filtering on a mipmapped texture would be the fastest, or mipmapped without filtering at all. And check with the OpenGL profiler if the textures are kept in the VRAM, if not, it might explain the bad performance.

We use "nearest" for minification -- this is a scientific app, so the pixels must look accurate. For magnification, it doesn't matter. Aesthetics not matter, accuracy matter. Pardon my grammar sarcasm.

I'm not using anything but textures. henryj, you pre-judge me. Further, there is no "problem"; I asked for suggestions. What's annoying currently is that the textures take time to load -- perhaps that's the transfer function?

Anyway, I see right away that you make a good point. My data filter could convert directly to the full integer range instead of first going 0->255 and then rescaling to 0->2^32. I will fix that right now and see if it helps. Thanks.

EDIT: Oh well, no difference.
Member
Posts: 469
Joined: 2002.10
Post: #14
Well, all this talk of huge images prompted me to go and do some giant-image optimizations for CocoaBlitz.

My current build has no problem moving around 4096x4096 images at 60fps, granted your window/viewport is small enough (640x480 runs at 60fps on my 450MHz G3 with a 16MB Rage 128). I haven't done any mipmap optimizations yet, though.

If you want to try to use this early build send me an email. I won't be posting it because I'm currently revamping the asset management objects.

---Kelvin--
15.4" MacBook Pro revA
1.83GHz/2GB/250GB