## Signed distance fields

Member
Posts: 161
Joined: 2005.07
Post: #1
Hey all, hopefully this will be a quick and simple thread.

I've read up on a few ways of handling mesh-mesh collision detection, and the signed distance field sounds like the best way of handling things. What I am wondering, though, is how are you supposed to use the SDF for the detection?

My best guess right now is that you loop through the vertices of the first mesh and, for each vertex inside the SDF's bounds, trilinearly interpolate the eight surrounding distance values stored in the SDF grid. Then keep track of the most negative value to find the vertex that penetrates the second mesh the deepest, and repeat the algorithm for the second mesh. If the recorded value is less than zero, the meshes intersect at the saved point, with the saved value as the interpenetration depth.
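A sketch of that query in Python, assuming the field is stored as a dense nested-list grid with a known origin and cell size (all names here are made up for illustration):

```python
import math

def sample_sdf(grid, origin, cell, p):
    """Trilinearly interpolate a dense SDF grid at world-space point p.

    grid[i][j][k] holds signed distances, origin is the world position of
    grid[0][0][0], and cell is the grid spacing.
    """
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    # continuous grid coordinates
    gx = (p[0] - origin[0]) / cell
    gy = (p[1] - origin[1]) / cell
    gz = (p[2] - origin[2]) / cell
    # lower corner of the surrounding cell, clamped to the grid
    i = min(max(int(math.floor(gx)), 0), nx - 2)
    j = min(max(int(math.floor(gy)), 0), ny - 2)
    k = min(max(int(math.floor(gz)), 0), nz - 2)
    fx = min(max(gx - i, 0.0), 1.0)
    fy = min(max(gy - j, 0.0), 1.0)
    fz = min(max(gz - k, 0.0), 1.0)
    # interpolate along x, then y, then z
    c00 = grid[i][j][k] * (1 - fx) + grid[i + 1][j][k] * fx
    c10 = grid[i][j + 1][k] * (1 - fx) + grid[i + 1][j + 1][k] * fx
    c01 = grid[i][j][k + 1] * (1 - fx) + grid[i + 1][j][k + 1] * fx
    c11 = grid[i][j + 1][k + 1] * (1 - fx) + grid[i + 1][j + 1][k + 1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

def deepest_vertex(vertices, grid, origin, cell):
    """Return (vertex, depth) for the most deeply penetrating vertex."""
    deepest, depth = None, 0.0
    for v in vertices:
        d = sample_sdf(grid, origin, cell, v)
        if d < depth:
            deepest, depth = v, d
    return deepest, depth
```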

Is that the correct way to handle it? If not, how do I use the SDF? And if so, are there any optimizations I can add?

Sage
Posts: 1,403
Joined: 2005.07
Post: #2
It might be best to start with, where did you hear the term "signed distance field"?

Sir, e^iπ + 1 = 0, hence God exists; reply!
Member
Posts: 161
Joined: 2005.07
Post: #3
It was referenced in a lot of places, but here's one paper that stands out:

http://www2.imm.dtu.dk/pubdb/views/publi...hp?id=1289

From what I understand, it's just a 3D grid of numbers, where each number represents the smallest distance between the current point in the grid and the surface of the mesh. I just need to figure out how this set of data is supposed to be used for the collision detection.
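As a concrete (hypothetical) example of what the grid holds, here is a sketch that fills a grid with the exact signed distance to a sphere, negative inside the surface and positive outside:

```python
import math

def build_sphere_sdf(n, cell, center, radius):
    # Fill an n*n*n grid with the exact signed distance to a sphere:
    # negative inside the surface, zero on it, positive outside.
    grid = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                p = (i * cell, j * cell, k * cell)
                grid[i][j][k] = math.dist(p, center) - radius
    return grid
```

For an arbitrary mesh you would instead compute the distance from each grid point to the nearest triangle (and flip the sign inside), which is the expensive precomputation step.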
Moderator
Posts: 373
Joined: 2006.08
Post: #4
For my mesh collision detection, I plan on using bounding ellipses; it's a quick and simple way to do things, and looks like it would be a bit faster than yours, too...it just depends on how accurate you need your information to be. The basic theory behind bounding ellipses is that you record the max/min values for X, Y, and Z for each object, then construct a 3D ellipse out of that information. You then test to see if the two ellipses intersect anywhere, and if they do, you know that you have a collision. This method is not as accurate as the one you were talking about, because there is usually a bit of a difference between the max points and the min points in your mesh; if this is a problem, though, you can simply add more bounding ellipses for more accuracy.
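A rough sketch of that kind of bounds test, using spheres instead of ellipses for simplicity (all names here are made up):

```python
import math

def bounding_sphere(points):
    # Build a sphere from the min/max extents: center at the midpoint,
    # radius reaching the far corner (loose, but quick to compute).
    lo = [min(p[axis] for p in points) for axis in range(3)]
    hi = [max(p[axis] for p in points) for axis in range(3)]
    center = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    radius = math.dist(center, hi)
    return center, radius

def spheres_overlap(a, b):
    # Two spheres overlap when their centers are closer than the radii sum.
    (ca, ra), (cb, rb) = a, b
    return math.dist(ca, cb) <= ra + rb
```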
Probably not the answer you were looking for, but I thought it might help anyway
-wyrmmage

Worlds at War (Current Project) - http://www.awkward-games.com/forum/
Member
Posts: 161
Joined: 2005.07
Post: #5
Sorry, I should have specified - I was looking for the fastest mesh-mesh collision detection system that works well with a rigid body dynamics system, so the ellipse method wouldn't work too well. The SDF apparently gives a very fast and close approximation of the points of collision and interpenetration distances between the meshes (close enough to result in plausible physics). The only problem is I'm not 100% sure how it's supposed to be used.
Sage
Posts: 1,487
Joined: 2002.09
Post: #6
I think you have the general idea. The part I could never figure out though is how to get useful normals for the collision points. You could store the indexes of the closest features as well, but that still isn't enough in a lot of cases.

Scott Lembcke - Howling Moon Software
Author of Chipmunk Physics - A fast and simple rigid body physics library in C.
Apprentice
Posts: 19
Joined: 2007.03
Post: #7
I think what he might have been referring to was a hierarchical bounding volume tree, where each ellipse (I use spheres) represents the minimum bounding volume for a set of triangles. Each ellipse has sub-ellipses which bound smaller sets of these triangles.

When testing for collisions, one traverses the tree of ellipses until you reach a leaf, where you then do only the few needed triangle-triangle tests. This way you get much improved speed and only have to do a few triangle tests.
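A minimal sketch of that traversal, assuming sphere bounds and treating triangles as opaque handles; the actual triangle-triangle test is left out and the candidate pairs are just collected (all names are illustrative):

```python
import math

class BVNode:
    def __init__(self, center, radius, children=(), tris=()):
        self.center, self.radius = center, radius
        self.children = list(children)   # internal nodes have children
        self.tris = list(tris)           # leaves hold triangle handles

def spheres_overlap(a, b):
    return math.dist(a.center, b.center) <= a.radius + b.radius

def collect_pairs(a, b, out):
    # Descend both hierarchies, pruning whole subtrees whose bounding
    # spheres don't overlap; emit candidate triangle pairs at the leaves.
    if not spheres_overlap(a, b):
        return
    if not a.children and not b.children:
        out.extend((ta, tb) for ta in a.tris for tb in b.tris)
    elif a.children and (not b.children or a.radius >= b.radius):
        for child in a.children:
            collect_pairs(child, b, out)
    else:
        for child in b.children:
            collect_pairs(a, child, out)
```

The pairs left in `out` are the only ones that need an exact triangle-triangle test.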
Moderator
Posts: 373
Joined: 2006.08
Post: #8
heh, yes, thanks...that was exactly what I was trying to say
-wyrmmage

Worlds at War (Current Project) - http://www.awkward-games.com/forum/
Member
Posts: 161
Joined: 2005.07
Post: #9
Skorche Wrote:I think you have the general idea.
Okay cool, thanks. Once I understood what the SDF actually represented, that seemed like the most logical way to use the data. I guess I'll just have to implement it and see how it works.

Quote:The part I could never figure out though is how to get useful normals for the collision points. You could store the indexes of the closest features as well, but that still isn't enough in a lot of cases.
Why not?

That seemed like a really clever idea when I read it, but then I read the second half of the sentence. I mean... I guess there are a few cases of when the closest point is an edge or a vertex, but it seems like you could then either store the interpolated normal or just store the index of one of the polygons that shares that edge/vertex.
Member
Posts: 161
Joined: 2005.07
Post: #10
Actually, now I'm reading up on how the Havok engine breaks nonconvex meshes into a series of smaller connected convex shapes, then uses a Minkowski difference approximation (although I don't really know what that is yet...). Apparently the SDF - although fast - is a huge memory hog. I'm going to continue reading up on this stuff.
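For what it's worth, the Minkowski difference idea is easiest to see with axis-aligned boxes: the difference of two AABBs is itself an AABB, and the shapes overlap exactly when that box contains the origin. A sketch of just that special case (this is not Havok's method, only an illustration of the concept):

```python
def minkowski_diff_aabb(a_min, a_max, b_min, b_max):
    # The Minkowski difference A - B of two AABBs is itself an AABB.
    lo = tuple(al - bh for al, bh in zip(a_min, b_max))
    hi = tuple(ah - bl for ah, bl in zip(a_max, b_min))
    return lo, hi

def aabbs_collide(a_min, a_max, b_min, b_max):
    # A and B overlap exactly when their Minkowski difference contains the origin.
    lo, hi = minkowski_diff_aabb(a_min, a_max, b_min, b_max)
    return all(l <= 0.0 <= h for l, h in zip(lo, hi))
```

Algorithms like GJK apply the same origin-containment idea to arbitrary convex shapes without ever building the difference explicitly.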
Sage
Posts: 1,487
Joined: 2002.09
Post: #11
imikedaman Wrote:Okay cool, thanks. Once I understood what the SDF actually represented, that seemed like the most logical way to use the data. I guess I'll just have to implement it and see how it works.

Why not?

That seemed like a really clever idea when I read it, but then I read the second half of the sentence. I mean... I guess there are a few cases of when the closest point is an edge or a vertex, but it seems like you could then either store the interpolated normal or just store the index of one of the polygons that shares that edge/vertex.

You can't always get usable collision normals from the closest feature.

Consider the following:
You have a slightly smaller cube sitting on a larger cube. When you check for collisions, the smaller cube has moved into the larger cube deeper than the difference in their widths. When you check the vertexes, you find that the bottom 4 have negative distances. So far so good. Now you find the closest feature to get the normal, but because it's penetrated too deep, none return the top face of the cube. So you have correct contact points, and pretty good penetration depth values, but all of the normals are tangent to what the collision normal should be.
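That failure mode is easy to reproduce with an axis-aligned box standing in for the larger cube (a sketch, not anyone's actual code):

```python
def nearest_face_normal(box_min, box_max, p):
    # For a point inside an axis-aligned box, return the outward normal
    # of the closest face (the "closest feature" for this simple shape).
    best_d, best_n = float("inf"), None
    for axis in range(3):
        d_lo = p[axis] - box_min[axis]        # distance to the low face
        d_hi = box_max[axis] - p[axis]        # distance to the high face
        if d_lo < best_d:
            best_d = d_lo
            best_n = tuple(-1 if a == axis else 0 for a in range(3))
        if d_hi < best_d:
            best_d = d_hi
            best_n = tuple(1 if a == axis else 0 for a in range(3))
    return best_n
```

For the box [0,10]^3 with a vertex sunk to (9.8, 5.0, 9.5), the nearest face is the +x side (0.2 away) rather than the top (0.5 away), so the reported normal is (1, 0, 0) instead of the true collision normal (0, 0, 1).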

Scott Lembcke - Howling Moon Software
Author of Chipmunk Physics - A fast and simple rigid body physics library in C.
Member
Posts: 161
Joined: 2005.07
Post: #12
Man, all these collision systems seem to have serious drawbacks. What's the best one to use? I'd need (of course) the collision points, the penetration depth of each, and the normal. My original idea was just running an intersection test on the edges of the first mesh against the triangles of the second mesh, but there has to be a faster way.
Sage
Posts: 1,487
Joined: 2002.09
Post: #13
I use SAT in Chipmunk, sampling for collision points at polygon vertexes. You can always get a collision depth and a normal for overlapping convex polygons, but you can't always get usable collision points when only sampling at the vertexes.

Overall I'm pretty pleased with it though.
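For illustration, a minimal 2D SAT sketch over convex polygons (this is not Chipmunk's actual code; it assumes counter-clockwise winding and made-up names):

```python
import math

def edges(poly):
    return [(poly[i], poly[(i + 1) % len(poly)]) for i in range(len(poly))]

def project(poly, axis):
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in poly]
    return min(dots), max(dots)

def sat_overlap(a, b):
    # Test every edge normal of both polygons: if any axis separates the
    # projections there is no collision; otherwise the axis with the least
    # overlap gives the penetration depth and (unsigned) collision normal.
    best = None
    for p1, p2 in edges(a) + edges(b):
        axis = (p2[1] - p1[1], p1[0] - p2[0])   # perpendicular to the edge
        length = math.hypot(axis[0], axis[1])
        axis = (axis[0] / length, axis[1] / length)
        lo_a, hi_a = project(a, axis)
        lo_b, hi_b = project(b, axis)
        overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
        if overlap < 0.0:
            return None                         # found a separating axis
        if best is None or overlap < best[0]:
            best = (overlap, axis)
    return best  # (depth, axis) of the minimum translation
```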

Scott Lembcke - Howling Moon Software
Author of Chipmunk Physics - A fast and simple rigid body physics library in C.
Member
Posts: 161
Joined: 2005.07
Post: #14
In 3D I can see a case where two cubes would intersect via their edges and not vertices, but is there a case in 2D where using only vertex collisions can lead to problems? This part is only for enlightenment since I have no plans for 2D anytime soon.

If I end up using the intersection test for the edges of one mesh against the polygons of another, each collision between an edge and polygon would add these points:

1. The exact point of collision between the edge and polygon (penetration depth = 0)
2. The vertex (if any) from the edge that is inside the second mesh (penetration depth = distance between above point and this point)
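The edge-versus-polygon test in step 1 can be done with the standard Möller-Trumbore ray/triangle test restricted to a segment; a self-contained sketch:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def seg_tri_intersect(p0, p1, a, b, c, eps=1e-9):
    # Moller-Trumbore ray/triangle test with t clamped to the segment p0..p1.
    # Returns the intersection point, or None.
    d = sub(p1, p0)
    e1, e2 = sub(b, a), sub(c, a)
    h = cross(d, e2)
    det = dot(e1, h)
    if abs(det) < eps:
        return None                 # segment parallel to the triangle plane
    f = 1.0 / det
    s = sub(p0, a)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    if t < 0.0 or t > 1.0:
        return None                 # hit is outside the segment
    return (p0[0] + t * d[0], p0[1] + t * d[1], p0[2] + t * d[2])
```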

Even though it seems like this algorithm might give me all the contact information I need (although I haven't tested it yet), I'm going to continue looking for a faster way of handling it. I looked at SDF and GJK and could only find ways of generating part of the required info; I'll look into SAT and EPA sometime tomorrow to see if I can put them to good use.

By the way Skorche, how do you handle large velocities in your RBD sim? Multisampling? Sweep tests? Manually changing the number of iterations per second to suit each demo? I'm referring to that one test I remember seeing in your Chipmunk demo where you shot a tiny circular object really quickly at a large stack of blocks.

One final thing I'm wondering is this: once I find the minimum distance vector required to push the two bodies apart from one another, can I just normalize that distance vector and use that as the normal for each collision point? I've only worked it out for a few test cases so far, so I don't know if it will lead to any unexpected problems.

I think it's pretty obvious I still have a lot of material left to read.
Sage
Posts: 1,487
Joined: 2002.09
Post: #15
How do I handle large velocities? I don't. The only option currently is to decrease the time step. Though I'm working with a guy on adding swept collisions.

Quote:once I find the minimum distance vector required to push the two bodies apart from one another, can I just normalize that distance vector and use that as the normal for each collision point?

That's what I do, but I only deal with convex shapes, so that is technically correct.

Scott Lembcke - Howling Moon Software
Author of Chipmunk Physics - A fast and simple rigid body physics library in C.