iDevGames Forums
OpenGL, core video and live video - Printable Version




OpenGL, core video and live video - kensuguro - Oct 2, 2005 02:01 PM

I'm still a total newbie, so I'm not trying to change the world quite yet, but I want to know the big picture so I can maintain focus as I figure out the basics. My goal right now is to get a full-screen output of live video. So, it's just a full-screen preview. After that, I'd like to make use of the new Core Image filters and see if I can do some real-time effects using the hardware-accelerated filters.

Here's my wild guess:
I'm guessing that the video is going to come in through the Sequence Grabber component, become an OpenGL texture at some point, get projected onto a plane in OpenGL, and voilà, I see it on screen. Is this the basic picture? I've read a bunch of tutorials and documentation, but they're too detailed and I can't put together the "big" picture.
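
For the "project it onto a plane" part, I picture the per-frame draw being something like this. Just a sketch of my guess: it assumes Core Video has already handed back a rectangle texture (so the texture coordinates are in pixels), and the function name and arguments are made up for illustration. The texture name and target would presumably come from CVOpenGLTextureGetName() and CVOpenGLTextureGetTarget().

Code:
#include <OpenGL/gl.h>

/* Guess at the per-frame draw: bind the frame's texture and draw it on a
   screen-filling quad. Assumes an orthographic projection already set up
   with glOrtho(0, viewWidth, 0, viewHeight, -1, 1), and a rectangle
   texture (GL_TEXTURE_RECTANGLE_EXT), so texture coords are in pixels. */
static void drawVideoFrame(GLuint textureName, GLenum textureTarget,
                           float frameWidth, float frameHeight,
                           float viewWidth, float viewHeight)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glEnable(textureTarget);
    glBindTexture(textureTarget, textureName);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f,       0.0f);        glVertex2f(0.0f,      0.0f);
        glTexCoord2f(frameWidth, 0.0f);        glVertex2f(viewWidth, 0.0f);
        glTexCoord2f(frameWidth, frameHeight); glVertex2f(viewWidth, viewHeight);
        glTexCoord2f(0.0f,       frameHeight); glVertex2f(0.0f,      viewHeight);
    glEnd();

    glDisable(textureTarget);
}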

I'm not at all clear on how I'd apply the Core Image filters. Once the incoming video frames are OpenGL textures, do I just feed them to a set of filtering methods? I'm not really sure how all this stuff integrates together.
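
Is the per-frame filtering step roughly something like this? Just a guess pieced together from the docs: it assumes each frame arrives as a CVImageBufferRef (which Core Image can apparently wrap directly, so maybe I never touch the GL texture myself), and that ciContext was created once against the same OpenGL context the view draws with.

Code:
#import <QuartzCore/QuartzCore.h>
#import <CoreVideo/CoreVideo.h>

/* Guess at the per-frame path: CVImageBufferRef -> CIImage -> CIFilter ->
   CIContext draw. ciContext is assumed to have been created up front from
   the view's OpenGL context, so the frame should stay on the GPU. */
static void filterAndDrawFrame(CIContext *ciContext, CIFilter *filter,
                               CVImageBufferRef frame, CGRect viewBounds)
{
    CIImage *input = [CIImage imageWithCVImageBuffer:frame];

    [filter setValue:input forKey:@"inputImage"];
    CIImage *output = [filter valueForKey:@"outputImage"];

    [ciContext drawImage:output
                  inRect:viewBounds
                fromRect:[output extent]];
}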

So never mind the details; can anyone show me the big picture?


OpenGL, core video and live video - OneSadCookie - Oct 2, 2005 02:18 PM

Apple has sample code that applies Core Image filters to QuickTime video; I suggest you start there.


OpenGL, core video and live video - kensuguro - Oct 28, 2005 06:51 AM

To get this straight... forget live video input for now. I'm trying to start a project that loads a bitmap, does raster pixel-crunching type effects, and then throws the result on the screen.

So, how would a knowledgeable Cocoa programmer take this apart? I'm still having trouble understanding which part is best for what. OK, from what I know, OpenGL is good for putting things on screen, so everything will happen under OpenGL. To get access to individual pixels, I use a Core Image context, yes? So, my basic understanding is that I do all the pixel calculations and effects within Core Image, and then send the result to OpenGL to display or buffer the output. Does that sound about right? Or is a Core Image context just something I need in order to get access to the Core Image filters?
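
To make my guess concrete, I imagine the static-image case looking something like this. It's only a sketch: the class and file path are made up, "CIGaussianBlur" is just a stand-in for whatever effect, and I'm assuming the 10.4-style +contextWithCGLContext:pixelFormat:options: constructor is the right way to tie the CIContext to the view's OpenGL context.

Code:
#import <Cocoa/Cocoa.h>
#import <OpenGL/OpenGL.h>
#import <QuartzCore/QuartzCore.h>

/* Hypothetical NSOpenGLView subclass: load a bitmap, push it through a
   Core Image filter on the GPU, and draw the result into the view's
   OpenGL context. */
@interface FilterView : NSOpenGLView {
    CIContext *ciContext;   /* created lazily, tied to this view's GL context */
}
@end

@implementation FilterView

- (void)drawRect:(NSRect)dirtyRect
{
    [[self openGLContext] makeCurrentContext];

    if (ciContext == nil) {
        /* A CIContext that renders straight into this view's OpenGL
           context, so the filtered image never leaves the GPU. */
        ciContext = [[CIContext contextWithCGLContext:CGLGetCurrentContext()
                                          pixelFormat:[[self pixelFormat] CGLPixelFormatObj]
                                              options:nil] retain];
    }

    /* "CIGaussianBlur" is only a stand-in for whatever effect chain I build. */
    CIImage  *source = [CIImage imageWithContentsOfURL:
                           [NSURL fileURLWithPath:@"/path/to/image.png"]];
    CIFilter *blur   = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:source forKey:@"inputImage"];
    [blur setValue:[NSNumber numberWithFloat:4.0] forKey:@"inputRadius"];
    CIImage  *result = [blur valueForKey:@"outputImage"];

    [ciContext drawImage:result
                  inRect:CGRectMake(0, 0, NSWidth([self bounds]), NSHeight([self bounds]))
                fromRect:[result extent]];

    [[self openGLContext] flushBuffer];
}

@end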