## Move in direction of camera

Member
Posts: 65
Joined: 2009.03
Post: #1
I've set up a simple camera that has a position x, y, z and a 4x4 rotation matrix that I apply so that I can look around my 3D scene. What I now want to do is move in the direction of the camera.

I can't work out how to create a vector from my rotation matrix and then transform the position of the camera along that vector.

My vector and matrix maths are very rusty, and I've not been able to make sense of the resources I've been finding on Google. I'm hoping that one of the clever guys on the forum can talk me through this. Any sample code on offer would also greatly help my understanding.

Thanks

MikeD

iPhone Game Development Blog - 71Squared
Member
Posts: 166
Joined: 2009.04
Post: #2
You need the camera's initial facing vector, i.e. the direction the camera faces when you start out (say, if you start out facing -Z, your vector would be Vector3(0, 0, -1); it needs to be normalized).

Once you have that, transform this vector by your matrix4 and use the result as the translation vector for the camera.
This only works if your matrix4 holds nothing but rotation (in which case you really only need a 3x3 matrix anyway).
Moderator
Posts: 1,563
Joined: 2003.10
Post: #3
I've written some tutorials that may be of some help to you:
http://sacredsoftware.net/tutorials/Vect...tors.xhtml
http://sacredsoftware.net/tutorials/Quat...ions.xhtml
http://sacredsoftware.net/tutorials/Matr...ices.xhtml

If your camera has a 4x4 matrix, you could actually do without the separate xyz values and just express them as a translation in the matrix. Alternatively, you could use xyz and a quaternion instead of a matrix for the camera. In either case, as said above, you'd multiply an identity orientation vector by your camera's orientation and offset your camera's position by the result, multiplied by the distance you want to move in that direction.
Member
Posts: 65
Joined: 2009.03
Post: #4
Thanks for the responses, guys. I've got the camera working, but I can't say I understand why. My current render method is:
Code:
```
- (void)render {
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);

    static GLfloat transY = 0;

    // Set the clear color and clear the screen
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Reset the modelview matrix with the identity matrix
    glLoadIdentity();

    // Grab the current attitude from core motion
    attitude = motionManager.deviceMotion.attitude;
    CMRotationMatrix rm = attitude.rotationMatrix;

    // Create a 4x4 matrix that rotates by the inverse (= transpose) of attitude.rotationMatrix
    GLfloat rotMatrix[] =
    {
        rm.m11, rm.m21, rm.m31, 0,
        rm.m12, rm.m22, rm.m32, 0,
        rm.m13, rm.m23, rm.m33, 0,
        0,      0,      0,      1
    };

    // Apply the matrix to the modelview
    glMultMatrixf(rotMatrix);

    // To correctly orient the view, rotate the view 90 degrees about X
    glRotatef(90, 1, 0, 0);

    // If the player is moving, grab the current modelview matrix that has been
    // calculated based on the rotation matrix applied.
    if (movingForward || movingBackwards)
    {
        glGetFloatv(GL_MODELVIEW_MATRIX, modelViewMatrix);
    }

    // Using the modelViewMatrix, check the direction being travelled and adjust
    // the position of the camera based on the modelview matrix
    if (movingForward) {
        cameraPos.x += 0.0 * modelViewMatrix[0] + 0.0 * modelViewMatrix[1] + 0.2 * modelViewMatrix[2];
        cameraPos.y += 0.0 * modelViewMatrix[4] + 0.0 * modelViewMatrix[5] + 0.2 * modelViewMatrix[6];
        cameraPos.z += 0.0 * modelViewMatrix[8] + 0.0 * modelViewMatrix[9] + 0.2 * modelViewMatrix[10];
    }
    if (movingBackwards) {
        cameraPos.x += 0.0 * modelViewMatrix[0] + 0.0 * modelViewMatrix[1] - 0.2 * modelViewMatrix[2];
        cameraPos.y += 0.0 * modelViewMatrix[4] + 0.0 * modelViewMatrix[5] - 0.2 * modelViewMatrix[6];
        cameraPos.z += 0.0 * modelViewMatrix[8] + 0.0 * modelViewMatrix[9] - 0.2 * modelViewMatrix[10];
    }

    // Move to the cameras position
    glTranslatef(cameraPos.x, cameraPos.y, cameraPos.z);

    // Render the scene
    int n = 4;

    switch (modelMode) {
        case 0:
            for (int x = -n; x <= n; x += 2)
            {
                for (int y = -n; y <= n; y += 2)
                {
                    for (int z = -n; z < n; z += 2)
                    {
                        if (x != 0 || y != 0 || z != 0)
                        {
                            glTranslatef(x, y, z);
                            [cube render];
                            glTranslatef(-x, -y, -z);
                        }
                    }
                }
            }
            break;
        case 1:
            transY += 0.075f;
            GLfloat y = (GLfloat)(sinf(transY));
            glTranslatef(0, y, -4);
            glRotatef(45, 1, 1, 0);
            [cube render];
            glRotatef(-45, 1, 1, 0);
            glTranslatef(0, -y, 4);
            break;
        default:
            break;
    }

    // Render the floor
    [floor render];

    // Bind to the correct buffer and then present it to the screen
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
```

At the moment I'm grabbing the rotationMatrix from Core Motion, transposing it, and then multiplying it with the modelview matrix. I then rotate 90 degrees around X so that the orientation on screen matches what I expect, i.e. the floor I have created lies flat rather than standing like a wall behind the camera's start position (0, 0, 0).

I've played with all sorts of different methods, and the only one I can get to work is shown above, where I take the modelview matrix from OpenGL and adjust the camera's position using that. I'm not sure why I have to grab the modelview matrix rather than just use the rotationMatrix I got from Core Motion: apart from the 90-degree rotation I do around X, the rotationMatrix should match the modelview matrix!

This works fine, but I'm trying to understand why it works, and why I have to grab the modelview matrix, since that readback has the potential to kill performance.

ThemsAllTook, thanks for the tutorials. They are the first ones that have actually made sense to me, especially the one about Quaternions. CoreMotion provides a Quaternion as well as a rotationMatrix, but I've not tried to use that yet.

Confused!

Mike

iPhone Game Development Blog - 71Squared