Disfigurement When Rotating Objects
To start off, I'm not using glRotate(), nor do I want to. That said, I'm trying to make the gun in my game move along with the player, and I've been moderately successful so far. The gun moves around and stays 'stuck' to the lower-right side of the screen, but it does not rotate with the camera, i.e. it won't always point towards the crosshair. I have a function that rotates vectors (which is how my vertices are stored), so I tried using that. It works to some extent: the gun rotates along with the camera, but the object tends to shrink on the Y axis until it becomes a plane. If I rotate faster, I get even weirder results. Is this a result of my code being bad, inefficient, both, or neither?
Here are the functions in use (Mesh::RotateAround(), Vector::RotateVAround(), and Mesh::Update()). The way I track the mesh is VERY messy, and that could be the problem as well. I'm storing the center plus a direction and distance from the center for each vertex and recreating the vertices from those, but that's not keeping them from shrinking in.
I've also posted the "broken" program up on my site as SimEngine Problem if you'd like to see what I mean.
Code:
Mesh crate[6];
....
//this code is called to rotate the mesh (not the gun, but it has the same problem and simpler code)
crate[5].RotateAround(crate[5].GetCenter(), 5*FrameInterval, 0, 1, 0);
void Mesh::RotateAround(Vector theCenter, float angle, float a, float b, float c)
{
    center.RotateAround(theCenter,angle,a,b,c);
    changeCenter = true;
    for(int i=0; i<numVertices; i++)
    {
        v[i].RotateVAround(theCenter,angle,a,b,c);
        vectorFromCenter[i].RotateVAround(Vector(0,0,0),angle,a,b,c);
    }
    update = true;
    updateNormals = true;
    Update();
}
void Vector::RotateVAround(Vector Center, float angle, float a, float b, float c)
{
    Vector newPosition; //the vector that will hold our rotated position
    Vector tempPosition;
    tempPosition.SetX(x - Center.GetX());
    tempPosition.SetY(y - Center.GetY());
    tempPosition.SetZ(z - Center.GetZ());
    float cosTheta = (float)cos(angle); //cosine of the angle
    float sinTheta = (float)sin(angle); //sine of the angle
    //find the rotated x coordinate
    newPosition.SetX((cosTheta + (1 - cosTheta) * a * a) * tempPosition.GetX());
    newPosition.AddX(((1 - cosTheta) * a * b - c * sinTheta) * tempPosition.GetY());
    newPosition.AddX(((1 - cosTheta) * a * c + b * sinTheta) * tempPosition.GetZ());
    //find the rotated y coordinate
    newPosition.SetY(((1 - cosTheta) * a * b + c * sinTheta) * tempPosition.GetX());
    newPosition.AddY((cosTheta + (1 - cosTheta) * y * y) * tempPosition.GetY());
    newPosition.AddY(((1 - cosTheta) * b * c - a * sinTheta) * tempPosition.GetZ());
    //find the rotated z coordinate
    newPosition.SetZ(((1 - cosTheta) * a * c - b * sinTheta) * tempPosition.GetX());
    newPosition.AddZ(((1 - cosTheta) * b * c + a * sinTheta) * tempPosition.GetY());
    newPosition.AddZ((cosTheta + (1 - cosTheta) * c * c) * tempPosition.GetZ());
    x = Center.GetX() + newPosition.GetX();
    y = Center.GetY() + newPosition.GetY();
    z = Center.GetZ() + newPosition.GetZ();
}
void Mesh::Update()
{
    Vector faceNormal(0,0,0);
    if(update && filename!=NULL)
    {
        if(title!=filename) { cout << "Updating " << title << "..." << endl; }
        if(changeCenter)
        {
            for(int i=0; i<numVertices; i++) { v[i] = center + (vectorFromCenter[i] * distanceFromCenter[i]); }
            changeCenter = false;
        }
        for(int i=0; i<numFaces; i++)
        {
            for(int j=0; j<f[i].GetNV(); j++)
            {
                f[i].SetV(j,v[f[i].GetVI(j)]);
                f[i].SetTC(j,tc[f[i].GetTCI(j)]);
                f[i].SetVN(j,vn[f[i].GetVNI(j)]);
                faceNormal = faceNormal + vn[f[i].GetVNI(j)];
            }
            faceNormal = faceNormal / (double)f[i].GetNV();
            f[i].SetFN(faceNormal);
            faceNormal.Init(0,0,0);
        }
        Vector bCenter(0,0,0);
        for(int i=0; i<numVertices; i++)
        {
            bCenter.AddX(v[i].GetX());
            bCenter.AddY(v[i].GetY());
            bCenter.AddZ(v[i].GetZ());
        }
        bCenter = bCenter / (double)numVertices;
        bounds.SetX(bCenter.GetX());
        bounds.SetY(bCenter.GetY());
        bounds.SetZ(bCenter.GetZ());
        center = bCenter;
        float distanceX=0, distanceY=0, distanceZ=0; //must be initialized, or the comparisons below read garbage
        for(int i=0; i<numVertices; i++)
        {
            distanceFromCenter[i] = Distance(center,v[i]);
            vectorFromCenter[i] = v[i] - center;
            vectorFromCenter[i].Normalize();
            if(fabs(bounds.GetX()-v[i].GetX()) > distanceX) { distanceX = fabs(bounds.GetX()-v[i].GetX()); }
            if(fabs(bounds.GetY()-v[i].GetY()) > distanceY) { distanceY = fabs(bounds.GetY()-v[i].GetY()); }
            if(fabs(bounds.GetZ()-v[i].GetZ()) > distanceZ) { distanceZ = fabs(bounds.GetZ()-v[i].GetZ()); }
        }
        bounds.SetW(distanceX);
        bounds.SetH(distanceY);
        bounds.SetD(distanceZ);
        bounds.Update();
        glNewList(list,GL_COMPILE);
        for(int i=0; i<numFaces; i++)
        {
            f[i].Draw();
        }
        glEndList();
        update=false;
    }
}
You're not using the same 'method' that I use, so I can't really help you directly, but this might be a nice alternative:
rz is the rotation about the z axis, ry the rotation about the y axis, and rx the rotation about the x axis.
x, y, z are the corresponding components of the point you're rotating.
Code:
//Z
xPrime = (x * cos(rz)) - (y * sin(rz));
yPrime = (x * sin(rz)) + (y * cos(rz));
zPrime = z;
x = xPrime; y = yPrime; z = zPrime;
//Y
xPrime = (x * cos(ry)) + (z * sin(ry));
yPrime = y;
zPrime = -(x * sin(ry)) + (z * cos(ry));
x = xPrime; y = yPrime; z = zPrime;
//X
xPrime = x;
yPrime = (y * cos(rx)) - (z * sin(rx));
zPrime = (y * sin(rx)) + (z * cos(rx));
x = xPrime; y = yPrime; z = zPrime;
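The same snippet as a self-contained function, with the minus signs restored (the Point struct and function name are mine):

```cpp
#include <cmath>

struct Point { float x, y, z; };

// Apply the three rotations in the order given above: first about z,
// then about y, then about x. Angles are in radians.
Point RotateZYX(Point p, float rz, float ry, float rx)
{
    // Z
    Point q;
    q.x = p.x * std::cos(rz) - p.y * std::sin(rz);
    q.y = p.x * std::sin(rz) + p.y * std::cos(rz);
    q.z = p.z;
    // Y
    Point r;
    r.x = q.x * std::cos(ry) + q.z * std::sin(ry);
    r.y = q.y;
    r.z = -q.x * std::sin(ry) + q.z * std::cos(ry);
    // X
    Point s;
    s.x = r.x;
    s.y = r.y * std::cos(rx) - r.z * std::sin(rx);
    s.z = r.y * std::sin(rx) + r.z * std::cos(rx);
    return s;
}
```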
There was a long silence...
'I claim them all,' said the Savage at last.
I'll give that code a try and see how it fares. Thanks.
It looks like your code is directly modifying the mesh's vertex positions. Due to numerical inaccuracy, your object will eventually lose its shape if you do this, which is why each vertex is typically transformed each frame by some sort of matrix while keeping its original object-space position. You need to do this for both your physics and graphics engines (OpenGL makes the graphics part easier with its matrix operations, but your physics engine will need to do this too).
phydeaux Wrote:It looks like your code is directly modifying the mesh's vertex positions. Due to numerical inaccuracy, your object will eventually lose its shape if you do this, which is why each vertex is typically transformed each frame by some sort of matrix while keeping its original object-space position. You need to do this for both your physics and graphics engines.
I know this is off-topic, but: why would you perform the exact same operations twice? If you have to know where your mesh is going to be for physics purposes, why not just pass that information to OpenGL?
hangt5 Wrote:I know this is off topic but:
Why would you perform the exact same operations twice? If you have to know where your mesh is going to be for physics purposes, why not just pass that information to openGL?
Sure, if the information is not changing, you might as well only compute it once. But if you transform the vertices in place every frame, the overall shape of the object will eventually collapse due to numerical inaccuracy.
Also, for a given object, it's more expensive to send every vertex position to the graphics card each frame than to keep that information on the graphics card and simply supply a different matrix transformation every frame.
How could I use a matrix to represent the transformation?
OK, so if I'm cumulatively transforming the object, it will deform. But why can't I just keep track of what transformations I do and start from scratch every frame? Is it just cheaper to let OpenGL do it after I do it?
hangt5 Wrote:OK, so if I'm cumulatively transforming the object, it will deform. But why can't I just keep track of what transformations I do and start from scratch every frame? Is it just cheaper to let OpenGL do it after I do it?
I'm not sure what you mean by this. You can still cumulatively keep track of what transformation is being performed, but you keep that information in a matrix instead of in the vertex positions of your model. The model keeps the same (object-space) positions for all time, and you maintain a transformation matrix (usually the same as the modelview matrix in OpenGL) that you apply to the object-space positions every physics step to get their world-space positions (after which you can perform collision detection). With this collision-detection step, the CPU and the graphics card do perform redundant work, but it's still faster than sending each vertex to the graphics card, since that redundant work is essentially free: the graphics card will transform each vertex by a matrix every frame anyway.
Nick Wrote:How could I use a matrix to represent the transformation?
If you don't feel like writing a complete matrix transformation class, you can actually have OpenGL do quite a bit of the work for you. If you have set up a modelview matrix that moves your objects where you want them, you can call glGetFloatv(GL_MODELVIEW_MATRIX, m) to read the OpenGL matrix back, then multiply it by each of your vertices to get their world-space positions. Again, you need to do this each physics step and leave the original positions intact, or else the model will eventually lose its shape due to numerical inaccuracy.
I guess I phrased the question poorly. Say I have a cube centered at (0,0,0) with each vertex at +1 or -1 along each axis (i.e. the eight sign combinations (±1, ±1, ±1)). I have the center and each vertex stored as a vector containing the X, Y, and Z components. What would the matrix look like that...
1) Moved the cube along an axis?
2) Rotated the cube?
3) Scaled the cube?
Thanks for the help.
And also, how would I send that matrix to OpenGL?
Translation Matrix:
1 0 0 dx
0 1 0 dy
0 0 1 dz
0 0 0 1
Scaling Matrix:
sx 0 0 0
0 sy 0 0
0 0 sz 0
0 0 0 1
You simply loop through your points and multiply the matrix by the point: (matrix * point), NOT (point * matrix).
PS: I'll post the rotation matrices I have; they're kinda big and I don't have time right now.
EDIT: Rotation matrices about the X, Y, and Z axes:
RX
1 0 0 0
0 cos -sin 0
0 sin cos 0
0 0 0 1
RY
cos 0 sin 0
0 1 0 0
-sin 0 cos 0
0 0 0 1
RZ
cos -sin 0 0
sin cos 0 0
0 0 1 0
0 0 0 1
cos = cos(rotation about the axis), sin = sin(rotation about the axis)
You have to do:
resultant = RZ * point
resultant = RY * resultant
resultant = RX * resultant
The original rotation code I posted is the expanded version of these rotation matrices.
This is an example of how to pass the orientation matrix of the camera to OpenGL. The matrix is a 4x4 array, but bear in mind that C++ uses row-major matrices and OpenGL uses column-major matrices, so you need to store your array items in the proper order, or transpose the matrix before you pass it to OpenGL. OpenGL sees the array as a linear array of 16 items. The camera orientation matrix must be inverted before you pass it to OpenGL. If you are working with quaternions, take the conjugate of the camera quaternion (w, -x, -y, -z) to calculate the orientation matrix that you will pass to OpenGL.
Code:
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glMultMatrixf( (GLfloat*) test_camera.orientation_matrix );
glTranslatef( -test_camera.position.x, -test_camera.position.y, -test_camera.position.z );
After you load the camera matrix, then proceed to load the matrix for each object:
Code:
glPushMatrix();
glTranslatef( test_object.position.x, test_object.position.y, test_object.position.z );
glMultMatrixf( (GLfloat*) test_object.orientation_matrix );
//Render the object model here...
glPopMatrix();
I think I understand all of that. Let me ask one more question before I go ripping into my code to convert a whole bunch of stuff from absolute coordinates to relative coordinates:
Say I have this crate with the same points as above, and the vertices are simply stored as an array of vectors V[8] within the mesh. If the mesh needs to be rotated 45 degrees on the y axis and moved along the x axis at the same time, how do I find a combined matrix? Or do I simply use two?
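One common way to combine the two (sketched here as an illustration, not quoted from a reply): build each matrix separately, multiply them together once, and apply the product to every vertex. Order matters; T * R rotates first and then translates:

```cpp
#include <cmath>

// Multiply two 4x4 row-major matrices: out = a * b.
void MatMul(const float a[4][4], const float b[4][4], float out[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
        {
            out[i][j] = 0;
            for (int k = 0; k < 4; ++k) out[i][j] += a[i][k] * b[k][j];
        }
}

// Build T * R for a rotation of `angle` radians about the y axis
// followed by a translation of (dx, 0, 0), as in the crate question.
void BuildYRotateThenTranslate(float angle, float dx, float out[4][4])
{
    float c = std::cos(angle), s = std::sin(angle);
    float R[4][4] = {{c,0,s,0},{0,1,0,0},{-s,0,c,0},{0,0,0,1}};
    float T[4][4] = {{1,0,0,dx},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
    MatMul(T, R, out); // rotate first, then translate
}
```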