GLUT_ACCUM slowing down application 10x! - Printable Version
+- iDevGames Forums (http://www.idevgames.com/forums)
+--- Thread: GLUT_ACCUM slowing down application 10x! (/thread-5989.html)
GLUT_ACCUM slowing down application 10x! - WhatMeWorry - Jan 6, 2005 12:17 AM
I've got an OpenGL (guess that's obvious) program with about
50 objects rotating, moving, etc. Fairly zippy.
All I do is add GLUT_ACCUM to the glutInitDisplayMode() flags.
Nothing else. I don't even enable the accumulation buffer.
And then the program moves like molasses. Painfully slow.
I've heard some disparaging remarks regarding GLUT. And I've
got an old iMac DV running OS 9.
This whole thing started because I wanted to do scene antialiasing
with the accumulation buffer. But this issue is sort of a
non-starter. Short of buying a new G5 when Tiger comes out, I'm not sure what my options are.
GLUT_ACCUM slowing down application 10x! - arekkusu - Jan 6, 2005 04:13 AM
The accumulation buffer uses 64 bits per pixel. Therefore, it cannot be supported in hardware on anything older than a Radeon 9600 or GeForce 5200. On your iMac, every time you accumulate, the framebuffer has to be read back across the bus and accumulated by the CPU. This is very slow.
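To make that cost concrete, here is a rough CPU-side sketch (hypothetical, not the actual driver code) of what a software glAccum(GL_ACCUM, value) pass has to do once the framebuffer has been read back: widen every 8-bit channel and multiply-add it into the accumulation buffer, pixel by pixel.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a software glAccum(GL_ACCUM, value) fallback:
 * every pixel of the read-back framebuffer is accumulated on the CPU.
 * (Real accumulation buffers store 16-bit fixed point per channel --
 * 64 bits per RGBA pixel; floats keep the sketch simple.) */
static void software_accum(float *accum, const uint8_t *framebuffer,
                           size_t num_channels, float value)
{
    for (size_t i = 0; i < num_channels; i++)
        accum[i] += (framebuffer[i] / 255.0f) * value;  /* per-pixel MAD */
}
```

Averaging N jittered frames with glAccum(GL_ACCUM, 1.0/N) means N full read-backs over the bus plus N of these loops over every pixel, which is why an accumulation-based technique crawls on hardware that can't do it on the card.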
Buying a G5 won't help much either, since even on the new cards the accumulation functions are not accelerated yet by the drivers (but you can achieve the same results using float textures, which are accelerated.)
You don't really have a lot of options on a Rage 128 iMac... multisampling isn't supported. One thing you could do is render the scene 4x larger than you want (keeping in mind the 1024x1024 viewport limit) and then scale the result down with bilinear filtering. That is probably the best way to do general 3D scene antialiasing on Rage128 hardware. There may be better ways in special cases (like 2D) such as drawing antialiased 1 pixel wide lines around your model edges.
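The render-big-then-shrink idea above amounts to supersampling: with a render 2x oversized in each dimension, bilinear minification at an exact 2:1 scale reduces to averaging each 2x2 block of source pixels. A CPU sketch of that filter for a single channel (function name hypothetical; on real hardware the scaled-down textured blit does this for you):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of 2x downsampling a single-channel image:
 * each output pixel is the rounded average of a 2x2 block of the
 * oversized render -- equivalent to bilinear filtering at exactly 2:1. */
static void downsample_2x2(const uint8_t *src, int src_w, int src_h,
                           uint8_t *dst)  /* dst is (src_w/2) x (src_h/2) */
{
    int dst_w = src_w / 2, dst_h = src_h / 2;
    for (int y = 0; y < dst_h; y++) {
        for (int x = 0; x < dst_w; x++) {
            int sum = src[(2 * y)     * src_w + 2 * x]
                    + src[(2 * y)     * src_w + 2 * x + 1]
                    + src[(2 * y + 1) * src_w + 2 * x]
                    + src[(2 * y + 1) * src_w + 2 * x + 1];
            dst[y * dst_w + x] = (uint8_t)((sum + 2) / 4);  /* rounded avg */
        }
    }
}
```

Each jaggy edge pixel ends up blended from four source samples, which is exactly the antialiasing effect being bought with the 4x fill cost.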
GLUT_ACCUM slowing down application 10x! - WhatMeWorry - Jan 6, 2005 08:28 PM
I came across this quote on the Inside Mac Games web site today:

"The Radeon X800 XT's memory is about 50% faster than the Radeon 9800 Pro's, which will help when full scene anti-aliasing (FSAA) and anisotropic filtering (AF) are enabled."
I did some googling on FSAA, and am I correct in assuming that FSAA is done
automatically by the card? So there is really no programming associated with it?
Also, am I correct in assuming that the accFrustum() and accPerspective() functions
in Chapter 10 of the Red Book will soon be just a historical teaching example?
As an aside, since I'm just going through the Red Book teaching myself a smattering
of OpenGL (going for breadth and not depth, here), I'm wondering what will become
obsolete the quickest in OpenGL? When I read about video cards with 256MB, I'm
thinking that color-index display mode might be on the way out, if not already.
But then, I'm just a serious hobbyist with no real world knowledge of the game industry.
Have meds. Will stop rambling.
GLUT_ACCUM slowing down application 10x! - arekkusu - Jan 7, 2005 02:53 AM
The trend is definitely towards FSAA. I don't think any games rely on accumulation, because consumer hardware hasn't accelerated it until very recently (the story is a bit different on million dollar SGI Reality Engines.)
FSAA is a pixel format attribute, so the only programming involved is to ask for it, and to enable multisampling. Very easy. On nvidia hardware there is also a filtering hint. The downside of FSAA is the resulting quality (typically 2 subpixel bits, 4 shades of alpha) is very poor compared to 2D or GL_SMOOTH antialiasing (up to 8 subpixel bits, 256 shades.) But it works with general 3D scenes.
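Those shade counts fall straight out of the subpixel precision: n bits of subpixel resolution give 2^n distinguishable coverage levels along an edge. A trivial check of the figures above:

```c
/* Number of distinct edge-coverage levels (antialiasing "shades")
 * that n bits of subpixel precision can represent: 2^n. */
static int coverage_shades(int subpixel_bits)
{
    return 1 << subpixel_bits;
}
```

So 2 subpixel bits yield 4 shades (typical early consumer multisampling), while 8 bits yield the 256 shades of GL_SMOOTH-style antialiasing.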
Indexed color mode is not supported at all on OS X. nvidia hardware used to support indexed textures, but they are deprecated now. ATI hardware never supported them.