Need help with pixel-oriented framebuffer

Apprentice
Posts: 13
Joined: 2011.02
Post: #1
Hello. My first time here. Let me introduce myself. I have done some PSP programming, which was easy because of the free homebrew compiler and how simple it was to get a framebuffer to plot your pixels (and poke your sounds). With Apple iPhone development, however, there is this huge layer of API between you and the framebuffer.

Can ANYONE point me to open-source, bare-bones source code that compiles right away and lets you plot ONE pixel where you tap? It needs to be very efficient because the whole game is going to be pixel oriented. It can be an extremely small sample. Just: register where the finger is, plot a pixel on screen.

If someone can provide that, I would appreciate it very much. The lower the level, the better. I know direct framebuffer access may be asking too much, but there must be an API that provides something close.

I heard that the lowest layer you can reach is OpenGL, and everything else sits on top of it. Is that true? If so, the lowest-level approach would be an array of pixel points sent to the OpenGL engine.

Or maybe there is something lower, like a bitmap you can plot to in memory and then hand to an OpenGL API to render at a 1:1 pixel-to-screen-pixel ratio. But that assumes OpenGL is the lowest layer.

So here is the request:

Bare-bones Xcode source code that plots a pixel and reads a finger. Anyone willing to contribute? I am sure this would help the developer community a lot.
Member
Posts: 142
Joined: 2002.11
Post: #2
(Feb 22, 2011 02:26 AM)edepot Wrote:  Hello. My first time here. ...

On iPhone there is no direct access to the framebuffer, at least not if you want to publish the application in the App Store (it is a private framework and hence forbidden to use). If you want efficient drawing on iPhone you need to use Core Animation or OpenGL ES.

Sending a bitmap to OpenGL is painfully slow and you'll find it is not an option (you would get at most a few frames per second this way). Sending individual points to OpenGL may be an option if there are only a few thousand of them, tens of thousands at the very most.

Consider not fighting the platform and using OpenGL ES or Core Animation instead.
Apprentice
Posts: 13
Joined: 2011.02
Post: #3
Thanks, but you did not specify which is lower (faster): OpenGL ES or Core Animation? Is Core Animation built on top of Core Graphics, which is in turn built on OpenGL ES? Or are they below OpenGL? I am interested in the lowest, fastest layer for just plotting pixels.

From your answer above, you are saying the best way is to send individual points via OpenGL. But by which method? Drawing tiny triangles to represent each pixel? How do you set it up so that you can plot individual pixels? I thought OpenGL made everything 3D.

Does Core Animation require that you make an animation? What if you just want to draw a screen, erase it, and then draw another?

Also, are there any free samples that show simple plotting onto the screen "pixel by pixel"? The sample in the devkit plots a large circle, but I am interested in per-pixel plotting.
Luminary
Posts: 5,143
Joined: 2002.04
Post: #4
To plot pixels, make a memory buffer, fill it directly, use glTexSubImage2D to send it to OpenGL ES, and draw the texture full-screen.

Pixel-by-pixel plotting is pretty silly on any remotely modern hardware. Unless you desperately need it for an effect (maybe screensaver-style image decay?) you shouldn't be contemplating it.

OpenGL ES is the lowest level of graphics on iOS. Core Animation sits atop it.
Member
Posts: 166
Joined: 2009.04
Post: #5
(Feb 22, 2011 12:22 PM)edepot Wrote:  Thanks, but you did not specify which is lower (faster). ...


Depending what you are doing , you may be able to utilize pixel shaders which essentially give you direct access to individual pixels.
Apprentice
Posts: 13
Joined: 2011.02
Post: #6
OK, from the replies above, I gather there is NO WAY POSSIBLE to go below OpenGL ES without jailbreaking your iPhone 4. Also, there are only two ways to do pixels directly in OpenGL (without going through an even slower layer on top like Core Animation).

1) Make a memory buffer of 960x640x4 (RGBA) bytes
2) Make two triangles that form a rectangle
3) Make the view show the rectangle exactly
4) Use glTexSubImage2D to transfer the memory to the rectangle


The second way: do the same as above, but replace step 4 with a pixel shader function that shades individual pixels of the rectangle.

Am I right in assuming a pixel shader is a function that pulls individual pixels from the memory buffer?

Am I in over my head? Or is what I stated above correct? Also, which samples from the devkit provide a good starting point for each of the above?
Luminary
Posts: 5,143
Joined: 2002.04
Post: #7
Correct. The pixel shader may not work for you, since you have to work backward (what color is the pixel at (x,y)?) rather than forward (set the pixel at (x,y) to a color).

I think there was an OpenGL ES sample with a spinning square or something. Start there.
Apprentice
Posts: 13
Joined: 2011.02
Post: #8
Thanks a lot. I hope glTexSubImage2D() is fast enough. Can you get 30 frames per second with it? Or is there a ceiling?

Now for the final question to make this thread complete... What is the exact hidden API that jailbroken people get to use to access the framebuffer? How do you go about using it?
Moderator
Posts: 3,574
Joined: 2003.06
Post: #9
(Feb 24, 2011 10:01 AM)edepot Wrote:  Thanks a lot. I hope glTexSubImage2D() is fast enough. Can you get 30 frames per second with it? Or is there a ceiling?

There is no "ceiling". You can update the texture up to the display refresh rate of ~60 Hz, just like anything else. The bottleneck will be how fast you can render your "pixel buffer" (aka the texture) in RAM before you send it to the GL.

(Feb 24, 2011 10:01 AM)edepot Wrote:  Now for the final question to make this thread complete... What is the exact hidden API that jailbroken people get to use to access the framebuffer? How do you go about using it?

We don't talk about hidden APIs here, sorry.

That said, I don't know what the internals of the hardware are, but one way or another the pixels have to be pushed to it, so I imagine doing it via an OpenGL texture should be just about as fast as any other method, if there even is one. Of course, if you could render directly into the output video buffer, that'd be faster, but I've never heard of doing that on iOS.
Member
Posts: 166
Joined: 2009.04
Post: #10
(Feb 24, 2011 11:38 AM)AnotherJake Wrote:  That said, I don't know what the internals of the hardware are, but one way or another the pixels have to be pushed to it...

GLES textures need to be swizzled (rearranged internally for better caching behavior) before they are ready for the GPU, which is the main reason uploading textures tends to be slow.

In any case, direct pixel/framebuffer access is impossible if the app is supposed to end up in the App Store.
Moderator
Posts: 3,574
Joined: 2003.06
Post: #11
(Feb 24, 2011 01:01 PM)warmi Wrote:  GLES textures need to be swizzled...

That's right, I had forgotten about that.
Apprentice
Posts: 13
Joined: 2011.02
Post: #12
Wait, this changes everything. If textures need to be swizzled, wouldn't it be faster to send them pixel by pixel (shader instructions or triangles) instead of a whole screenful of texture? I am assuming this swizzle operation on a whole screenful of data at each refresh will slow it down to a crawl. I am assuming it won't reach 30fps.

As for a pixel shader, will that be faster? Can you make a pixel shader that simply ignores the current color (not blending with the existing value, just overwriting it) and pokes it straight from memory? Would this be faster? It would bypass the swizzle operation, right, thereby saving the per-pixel swizzle of the whole screen.

How about getting a copy of the already-swizzled texture? Can you hand it to OpenGL without re-swizzling by declaring it already swizzled? You could then use that as the background, and then plot individual pixels/triangles on top.

Also, what is the exact name of that "spinning square" sample that contains a working glTexSubImage2D()? Maybe someone can help out and point to the source, or even a simple sample showing how to do it.
Moderator
Posts: 3,574
Joined: 2003.06
Post: #13
(Feb 25, 2011 12:15 AM)edepot Wrote:  I am assuming it won't reach 30fps.

One thing I've learned after all these years of graphics/game development is not to assume anything when it comes to performance. Your imagination is often your worst enemy.

In this particular case, the glTexSubImage2D route is so simple that you should have already tried test cases to know whether it'll give you the performance you really need. Start simple and move to the more complex routes, like shaders, later. Wink
Apprentice
Posts: 13
Joined: 2011.02
Post: #14
Thanks for your help. I'll take a look at glTexSubImage2D(). A related question: what is the difference between OpenGL ES 1.1 and OpenGL ES 2.0 with regard to this function? Does using glTexSubImage2D() tie you to a particular version of OpenGL? Or is there an OpenGL ES 2.0 version with a different name? Does using one version of OpenGL mean you can't use features from a higher version? Or can you mix and match?
Member
Posts: 166
Joined: 2009.04
Post: #15
(Feb 26, 2011 12:29 AM)edepot Wrote:  Thanks for your help. I'll take a look at glTexSubImage2D()...

Well, there are plenty of changes, but they are mostly related to shaders.

Here is the man page for this function (2.0)
http://www.khronos.org/opengles/sdk/2.0/...mage2D.xml

And here is the same thing for 1.0:
http://www.khronos.org/opengles/document...age2D.html