GLSL dilemma (in my Objective-C application)

Apprentice
Posts: 18
Joined: 2006.06
Post: #16
arekkusu Wrote: Try running your shader in GLSLEditorSample, which comes with Xcode 2.3. It will tell you in the preview window if you fall back to SW while you interactively edit the code.

That's another GREAT tip! Thank you so much :) I'll try it tomorrow; it's been a long day today...
Luminary
Posts: 5,143
Joined: 2002.04
Post: #17
Perhaps the GLSL Showpiece still forces software rendering? It certainly used to.

The GLSLEditorSample just uses the same calls I posted earlier in this thread to determine whether the shader is being accelerated. I'm personally not 100% convinced that those calls don't lie; I've seen them return true for simple shaders running at single-digit frame rates before now.
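For reference, the check is roughly the CGL GPU-processing query; here's a minimal sketch from memory (the function name is just illustrative), assuming that's the call in question:

Code:
#include <stdio.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

/* Ask CGL whether the current context runs vertex and fragment
   processing on the GPU; a value of 0 means the software renderer
   has taken over for that stage. */
void reportGPUProcessing(void)
{
    GLint vertexGPU = 0, fragmentGPU = 0;
    CGLContextObj ctx = CGLGetCurrentContext();
    CGLGetParameter(ctx, kCGLCPGPUVertexProcessing, &vertexGPU);
    CGLGetParameter(ctx, kCGLCPGPUFragmentProcessing, &fragmentGPU);
    printf("vertex on GPU: %d, fragment on GPU: %d\n", vertexGPU, fragmentGPU);
}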

You're using a loop in GLSL, which is something to be careful with: many cards don't support looping in GLSL at all, so the loop has to be completely unrolled, resulting in a very large shader. The X1600 does support looping, but that could easily cause a large slowdown in execution (though not a spike in CPU activity...). Anyway, I'd make the loop limit a constant, declare it outside main(), and see if that helps any.
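Something like this is what I mean; NUM_SAMPLES and the loop body are placeholders, not your actual shader:

Code:
// Loop limit as a compile-time constant at global scope, so the
// compiler can unroll the loop on hardware that can't branch.
const int NUM_SAMPLES = 8;

uniform sampler2D tex;

void main()
{
    vec4 sum = vec4(0.0);
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        // placeholder body: accumulate something each iteration
        sum += texture2D(tex, gl_TexCoord[0].st) / float(NUM_SAMPLES);
    }
    gl_FragColor = sum;
}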
Sage
Posts: 1,234
Joined: 2002.10
Post: #18
No, GLSLShowpiece no longer forces SW rendering. There are some shaders that will fall back to SW depending on your HW, though (for example, VertexNoise will always fall back since it uses the noise function).
Apprentice
Posts: 18
Joined: 2006.06
Post: #19
Thank you, both arekkusu and OneSadCookie, for your help so far. The shader now runs much faster and doesn't use as much CPU as before. I swapped the for-loop for a while-loop and, swoosh, CPU usage went from 50% to 10% (which is the normal level on my system when nothing in particular is running).
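For the curious, the change was roughly this sketch (doWork() just stands in for whatever the real loop body computes, and the variable names are made up):

Code:
// Before: for-loop
for (int i = 0; i < count; i++)
    result += doWork(i);

// After: the same thing as a while-loop
int i = 0;
while (i < count)
{
    result += doWork(i);
    i++;
}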

What is strange is that it's still slow compared to how it used to run on a PowerMac G5 with an NVIDIA 6800 GT... and that card isn't as "good" as the X1600 in my MBP, from what I understand of the specs?
Luminary
Posts: 5,143
Joined: 2002.04
Post: #20
The 6800 should be faster at most things than an X1600. The exception would be dynamic branching and looping in a fragment shader, where the X1600 should win handily. You're probably not dynamically branching or looping any more, so...
Apprentice
Posts: 18
Joined: 2006.06
Post: #21
Ok... I wonder what specs I looked at?

To make a long story short: the shader above was originally used in a simple GLUT application, and there we got around 40-50 fps on a PowerMac G5 with the above-mentioned NVIDIA 6800 GT. Now, rewritten in Objective-C, the same application runs at 10 fps... which is sooo strange; it's the same shader, so I guess I have to conclude that something else is "wrong" with the code.

As a matter of fact, the application runs at 10 fps on both the MBP and the G5 right now, so something is weird...
Apprentice
Posts: 18
Joined: 2006.06
Post: #22
Maybe some of you guys have a simple Objective-C application that creates an OpenGL context and then runs a shader on it? I'm tempted to say that the error in my application lies outside the shaders themselves; more probably, I'm doing something stupid when creating my context, etc...

So, if you do have an (ever so) simple Objective-C / GLSL setup, I'd love to see it and test my shader on it.
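To show the kind of bare-bones skeleton I'm after (class name and pixel-format attributes are just illustrative, not my actual code):

Code:
#import <Cocoa/Cocoa.h>
#import <OpenGL/gl.h>

// Minimal NSOpenGLView subclass: hardware-accelerated pixel format,
// no software fallback, and a drawRect: ready for a shader.
@interface ShaderTestView : NSOpenGLView
@end

@implementation ShaderTestView

- (id)initWithFrame:(NSRect)frame
{
    NSOpenGLPixelFormatAttribute attrs[] = {
        NSOpenGLPFAAccelerated,    // require a hardware renderer
        NSOpenGLPFANoRecovery,     // don't silently fall back to software
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFADepthSize, 24,
        0
    };
    NSOpenGLPixelFormat *fmt = [[[NSOpenGLPixelFormat alloc]
                                  initWithAttributes:attrs] autorelease];
    return [super initWithFrame:frame pixelFormat:fmt];
}

- (void)drawRect:(NSRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... bind the GLSL program and draw geometry here ...
    [[self openGLContext] flushBuffer];
}

@end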