Would this really be that much slower?

Jones
Unregistered
 
Post: #1
I'm using arrays to hold vertex information in version 5 of my *almost* done collision system. First I fill them like this:

Code:
    int col_sect_surf1_vtxdat[8];                                                //    Four (x, y) pairs, indices 0-7.
    col_sect_surf1_vtxdat[0] = col_sect_rect_surf1.x;                            //    Top left.
    col_sect_surf1_vtxdat[1] = col_sect_rect_surf1.y;
    col_sect_surf1_vtxdat[2] = col_sect_rect_surf1.x + col_sect_surf1->w - 1;    //    Top right.
    col_sect_surf1_vtxdat[3] = col_sect_rect_surf1.y;
    col_sect_surf1_vtxdat[4] = col_sect_rect_surf1.x + col_sect_surf1->w - 1;    //    Bottom right.
    col_sect_surf1_vtxdat[5] = col_sect_rect_surf1.y + col_sect_surf1->h - 1;
    col_sect_surf1_vtxdat[6] = col_sect_rect_surf1.x;                            //    Bottom left.
    col_sect_surf1_vtxdat[7] = col_sect_rect_surf1.y + col_sect_surf1->h - 1;

I then compare surface one's vertices to surface two's, but when I do that test I add the widths to the x's and the heights to the y's again, just because it's easier for me to debug and wrap my head around. Would it really be that much slower to do so? I want my code to be super efficient and fast, as that is necessary for collision, but I can't program very well if I have to stop and think for a minute about every aspect of my code. Wacko
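In case it helps, here's roughly what that comparison looks like (just a sketch; `col_sect_rect_surf2` and `col_sect_surf2` are stand-in names for the second surface's rect and surface, by analogy with the first):

Code:
    /*  Recomputing the edges from x/y/w/h instead of reading them
        back out of the vertex array -- the same adds, done again.  */
    int s1_right  = col_sect_rect_surf1.x + col_sect_surf1->w - 1;
    int s1_bottom = col_sect_rect_surf1.y + col_sect_surf1->h - 1;
    int s2_right  = col_sect_rect_surf2.x + col_sect_surf2->w - 1;
    int s2_bottom = col_sect_rect_surf2.y + col_sect_surf2->h - 1;

    if (col_sect_rect_surf1.x <= s2_right  && s1_right  >= col_sect_rect_surf2.x &&
        col_sect_rect_surf1.y <= s2_bottom && s1_bottom >= col_sect_rect_surf2.y)
    {
        /*  The two rectangles overlap.  */
    }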
Moderator
Posts: 1,562
Joined: 2003.10
Post: #2
A general rule that some of us follow is to get code working first, then worry about optimization. Don't start worrying if a couple of add operations are going to affect the performance of your code unless A) you know really well what you're doing, B) it would be difficult to change later, and C) you've profiled your code with Shark and found it to be too slow. Premature optimization is often regarded as an anti-pattern.

Once your system is working 100% (however slowly), certainly do run a Shark profile on it and tighten it up wherever you can. If you optimize before the system works, though, it's likely to make your code more difficult to understand and less malleable, which is not usually what you want while you're still in the middle of implementing a system.
Member
Posts: 131
Joined: 2004.10
Post: #3
Nothing really more to add, but in general the mantra is...

1/ Design
2/ Code
3/ Optimize

Putting Optimize in any other position is bad. Optimizing is just profiling to find where your program is slow. Most often in first-pass code this means refactoring the underlying algorithm, not fussing over using too many ifs or adds, etc. So after finding out where things are slow, you go back to step 1.

Ignore step 3 when things are fast enough.
Jones
Unregistered
 
Post: #4
Very well, I shall finish the code first. I'm just worried that I'm putting a structure in place that will be hard to optimize later.
Member
Posts: 131
Joined: 2004.10
Post: #5
I would say unlikely.

1/ You are adding memory moves, which may actually add overhead (more main-memory accesses, potential cache misses).
2/ Accessing cells in an array is probably just as slow as accessing variables in a structure. What would be better is to hard-code with register variables instead of an array of the same values, so the compares happen on registers instead of on an array out in main memory or the cache (see the sketch below). There is no guarantee that the compiler will actually make those variables registers, though.
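To illustrate point 2, a quick sketch (`register` is only a hint, and the variable names are just for illustration):

Code:
    /*  Pull the hot values into locals; the compiler MAY keep them
        in registers, but nothing forces it to.  */
    register int left   = col_sect_rect_surf1.x;
    register int top    = col_sect_rect_surf1.y;
    register int right  = left + col_sect_surf1->w - 1;
    register int bottom = top  + col_sect_surf1->h - 1;

    /*  ... then do the compares on left/top/right/bottom instead of
        col_sect_surf1_vtxdat[0], [1], [2], [3] ...  */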

If there is a difference in 2, I would say it's marginal. If you are hitting the case where it is a performance hit, then you should already have finished all your other, higher-level optimizations, meaning the algorithmic ones. I would also concentrate on optimizing function calls and other expensive jumps, which are much, much more expensive than this. But even then you may not get that much in speed gains. The largest gains will always come from picking better algorithms, if there are any. Plus, the compiler may do a decent job of optimization and make the structure access insignificant in the grand scheme of things. So run your code in release/production mode to see its effect.

When optimizing, I would look at where the program is spending the most time and which function(s) are being called the most. Optimizing can be difficult, as it may not be clear where to optimize in a complex system.

For functions with long runtimes, see if you can change the algorithm to speed up whatever it is that function is trying to do, or to minimize the calls it makes to other time-expensive functions.

For functions that are called oodles of times, see if you can change the algorithm to reduce how often that function gets called. A simple optimization here is to inline or macro the function so that the function call doesn't happen. Function calls are expensive if you are looking at low-level optimization, more so than memory accesses and math operations.
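For example, a tiny, frequently-called test like this is a good candidate (a sketch; `point_in_rect` is a made-up helper, and how `inline` is honored varies by compiler):

Code:
    /*  Declared static inline so the compiler can drop the call
        overhead entirely at each call site.  */
    static inline int point_in_rect(int px, int py,
                                    int rx, int ry, int rw, int rh)
    {
        return px >= rx && px <= rx + rw - 1 &&
               py >= ry && py <= ry + rh - 1;
    }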

But I draw the line when it will affect the debuggability and maintainability of the code. Up to you where to draw the line.