Just a small ramble about making collision detection a bit more efficient.
Overlay the system with a square grid (or a cubic one, if the spread of objects along the z axis is pronounced enough).
Overlay this grid with a second one of the same resolution, shifted by half a cell, so
that the midpoints of the cells of one grid sit on the vertices of the other grid.
Now sort objects into these grids based on their position.
Every object is in exactly 1 cell of each grid (so every object has 2 grid indices).
Now we only need to check for collision between objects that share one of their 2 grid indices.
That successfully reduces the number of objects we need to check for collision.
(Nothing ground-breaking, just wanted to post it here in case it's helpful.)
Post
Fri May 02, 2014 3:47 pm
#2
Re: some rambling about collision detection
This is fairly close to how it already works
Broadphase isn't really the problem though - the problem is that with 100+ ships flying very close to asteroids, there are a whole lot of narrowphase checks that need to happen, no matter how fast the broadphase is. Just needs optimization
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford
Post
Fri May 02, 2014 3:49 pm
#3
Re: some rambling about collision detection
Octree refinement of the grid, maybe?
Offload as much as possible onto the broadphase.
Post
Fri May 02, 2014 5:31 pm
#4
Re: some rambling about collision detection
The best approach would be the Dual Bounding Volume Hierarchy.
The computation time is formulated like this:
T = Nv * Cv + Np * Cp
where
T: total execution time
Nv: number of bounding volume pair overlap tests
Cv: time for testing a pair of bounding volumes
Np: number of primitive pair overlap tests
Cp: time for testing pair of primitives
This has proven to be one of the best approaches to collision detection on a massive scale.
source:
Post
Wed May 21, 2014 3:14 am
#5
Re: some rambling about collision detection
Of course this only works if your entities, or some part of their data, are in a linear array...
Code:
//n = number of entities in the array
//i & j index the pair of entities being checked against each other
//(bounds are i < n-1 and j < n; the tighter bounds n-2/n-1 would
// skip the last entity and never pair the final two)
for(i=0;i<n-1;i++)
for(j=i+1;j<n;j++)
{
//do collision check between entity i and entity j
}
Post
Sat May 31, 2014 9:09 am
#6
Re: some rambling about collision detection
Another random thought:
Is there some possibility that alternative, non-triangle-based collision detection methods could provide an advantage?
Since the ships are built from distance fields, there should be an analytic way of determining which ships are colliding and which are not.
You could then test whole sub-sections of models against each other instead of single polygons.