iDevGames Forums
Decoupling rendering loop from simulation - Printable Version




Decoupling rendering loop from simulation - sau_ers - Nov 23, 2011 06:39 PM

I am totally new to Cocoa programming, so my understanding here might be completely wrong.

I have been looking at implementing a game loop in Cocoa (OS X) with the following properties:
  1. The graphics are rendered (updated) at some fixed rate (e.g. 60 Hz)
  2. The simulation (high-frequency physics etc.) is run at some higher rate

I've come to the conclusion that for [1] I should be looking at a CVDisplayLink-based system. Would I then be able to achieve [2] by setting up an additional NSTimer (on the main thread) to fire off game updates at an independent rate, provided of course that the two threads are appropriately synchronised and don't touch the same data at the same time (presumably the physics loop needs to account for occasionally waiting on a lock)? My reasoning is that the CVDisplayLink would allow the graphics to be rendered nearly independently of the simulation being carried out. If so, is this a sensible design? How else could it be done?
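
To make the idea concrete, here's an untested sketch of what I have in mind. Everything in it is a placeholder (MyView, stepSimulation:, the displayLink/simulationTimer/stateLock ivars), not working code:

Code:
    // Rendering: the CVDisplayLink fires this on its own thread,
    // once per display refresh.
    static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                        const CVTimeStamp *now,
                                        const CVTimeStamp *outputTime,
                                        CVOptionFlags flagsIn,
                                        CVOptionFlags *flagsOut,
                                        void *context)
    {
        MyView *view = (MyView *)context;   // non-ARC cast
        [view drawFrame];                   // draw the latest simulation state
        return kCVReturnSuccess;
    }

    - (void)startLoops
    {
        // [1] Rendering, driven by the display's refresh rate.
        CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
        CVDisplayLinkSetOutputCallback(displayLink, &DisplayLinkCallback, self);
        CVDisplayLinkStart(displayLink);

        // [2] Simulation: an NSTimer on the main thread at a higher rate.
        simulationTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 120.0
                                                           target:self
                                                         selector:@selector(stepSimulation:)
                                                         userInfo:nil
                                                          repeats:YES];
    }

    - (void)stepSimulation:(NSTimer *)timer
    {
        [stateLock lock];     // don't touch state the renderer is reading
        // ... advance the physics by one fixed step ...
        [stateLock unlock];
    }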

Bonus Question: If I am rendering my scene in the CVDisplayLink callback function, I need not override/call drawRect:, correct?

Thanks for any help!


RE: Decoupling rendering loop from simulation - OneSadCookie - Nov 23, 2011 06:48 PM

I recommend avoiding CVDisplayLink. It doesn't work well in multimonitor situations.

You can easily spin off a second thread of your own to render in your GL context, or keep your GL on the main thread and use a second thread for the simulation.
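
For the second option, detaching the simulation thread is a one-liner. Sketch (simulationThreadMain is a placeholder; pre-ARC, the thread body needs its own autorelease pool and its own timing loop):

Code:
    [NSThread detachNewThreadSelector:@selector(simulationThreadMain)
                             toTarget:self
                           withObject:nil];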

If you're dealing with GL and threads, beware that each OpenGL context may only be used by one thread at a time, and it's up to you to manage whatever mutual-exclusion scheme you use to achieve this. Also beware that NSOpenGLView calls CGLUpdateContext on the main thread when the window resizes. You can use CGLLockContext() to help manage the mutual exclusion if you like.
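
For example, a render method running on your own thread might bracket its drawing like this (untested sketch, assuming an NSOpenGLView subclass with double buffering):

Code:
    - (void)drawFrame
    {
        NSOpenGLContext *ctx = [self openGLContext];
        CGLContextObj cglCtx = (CGLContextObj)[ctx CGLContextObj];

        CGLLockContext(cglCtx);         // keep out resize updates etc.
        [ctx makeCurrentContext];

        // ... issue GL calls, render the scene ...

        [ctx flushBuffer];              // swap buffers
        CGLUnlockContext(cglCtx);
    }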


RE: Decoupling rendering loop from simulation - imranhabib - Jul 9, 2012 09:55 AM

(Nov 23, 2011 06:39 PM)sau_ers Wrote:  I am totally new to Cocoa programming, so my understanding here might be completely wrong. [...]
Could you kindly give me an idea of what literature I should read if I have to decouple simulation from visualization? First I want to understand the concept, and then I want to decouple the simulation from the visualization. I tried stopping the rendering while the simulation runs at the back end, but I don't know if that is correct. Can you guide me, please?


RE: Decoupling rendering loop from simulation - Blacktiger - Aug 2, 2012 07:02 AM

Also, generally you want the simulation to run at a constant rate and the framerate to vary (although perhaps your game is different). If the simulation rate varies, you can run into odd bugs where the simulation behaves differently depending on how fast it happens to run on different hardware.

For example, imagine an object moving in a straight line along a path that clips the corner of a wall. Depending on when you calculate the object's position, you may miss the fact that it should have collided with the corner, and the object will just continue on its path instead of "bouncing". If the simulation rate varies, this could play out one way under one set of circumstances and another way under another.
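
The usual fix is a fixed-timestep loop with an accumulator (Glenn Fiedler's "Fix Your Timestep!" article covers this well): render as often as you like, but always advance the simulation in fixed increments. Rough sketch (names are illustrative):

Code:
    static const double kSimulationStep = 1.0 / 120.0;   // fixed physics step

    - (void)tickWithFrameTime:(double)frameTime
    {
        accumulator += frameTime;

        // Advance the simulation in fixed increments, so its behaviour
        // never depends on the rendering frame rate.
        while (accumulator >= kSimulationStep) {
            [self stepSimulation:kSimulationStep];
            accumulator -= kSimulationStep;
        }

        // Render once per frame; optionally interpolate between the last
        // two simulation states by accumulator / kSimulationStep.
        [self renderFrame];
    }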