
[Cooperative Pathfinding] Agents get stuck at choke point

Started by July 14, 2015 11:42 AM
39 comments, last by lucky6969b 8 years, 11 months ago

Do you have the option to pre-calculate A* costs offline and save them with your map? I mean, are you creating your maps randomly?

I guess there is a better way which saves you lots of A* iterations.

As you know, A* is a best-first search, meaning it expands nodes towards the goal, taking F = G + H into account. If you set H = 0, it will expand equally to all sides.

a. make sure you set H = 0 for all nodes

b. select a source node

c. select a goal node and continue as regular A*

d. when you put a node into the closed list, the distance from that node to the source node will be the shortest

e. continue as regular A* until you put the goal node into the closed list.

f. at this point you have calculated many shortest distances from the source node to other nodes

g. select a node whose shortest distance has not been calculated yet, set it as the goal, and repeat from step c

h. when all shortest distances from the source node are calculated, pick another source and goal node and repeat from step b

It is wise to pick initial source and goal nodes far away from each other, so that many other nodes are explored in the process.

You can compare the results with a regular A* search to check that this algorithm actually finds the shortest distances.
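In effect, steps c to g amount to keeping the expansion going until the open list is empty. As a rough illustration (not the poster's code; the adjacency-list graph representation is an assumption), with H fixed to 0 the search degenerates to Dijkstra and yields the shortest distance from the source to every node it closes:

#include <queue>
#include <vector>
#include <limits>
#include <utility>
#include <functional>

struct Edge { int to; float cost; };

// A* with H fixed to 0 (i.e. Dijkstra): returns the shortest distance from
// 'source' to every reachable node. Unreachable nodes stay at infinity.
std::vector<float> AllDistancesFrom(const std::vector<std::vector<Edge>>& graph, int source)
{
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> dist(graph.size(), INF);

    // With H = 0, F = G, so the open list is ordered by G alone.
    using Entry = std::pair<float, int>; // (G, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

    dist[source] = 0.0f;
    open.push({0.0f, source});

    while (!open.empty())
    {
        auto [g, node] = open.top();
        open.pop();
        if (g > dist[node])
            continue; // stale entry; the node was already closed with a smaller G

        for (const Edge& e : graph[node])
        {
            float ng = g + e.cost;
            if (ng < dist[e.to])
            {
                dist[e.to] = ng; // found a shorter route to this neighbour
                open.push({ng, e.to});
            }
        }
    }
    return dist;
}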

Wow, that's really a time saver. Thanks for telling me about that. Let me try this method... That's really nice of you. Jack

I haven't looked at any code, but if...

Get rid of EVERY needless constructor/destructor call - the pathfinder will be used over and over, so maintain a 'free pool' of whatever node data is being used to cut out a lot of overhead. Minimizing node size (if possible) to reduce cache misses can also increase speed.
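A rough sketch of the free-pool idea (all names here are made up for illustration, not taken from the poster's code): nodes come from one preallocated block and are handed back all at once per search, so there is no per-node new/delete.

#include <vector>
#include <cstddef>

// Hypothetical node layout; kept small to help with cache misses.
struct PathNode { int x, z; float g; PathNode* parent; };

class NodePool
{
public:
    explicit NodePool(std::size_t capacity) : nodes(capacity), used(0) {}

    // Hand out the next unused node, or nullptr if the pool is exhausted.
    PathNode* Allocate()
    {
        return used < nodes.size() ? &nodes[used++] : nullptr;
    }

    // Call once at the start of each search instead of deleting nodes.
    void Reset() { used = 0; }

private:
    std::vector<PathNode> nodes;
    std::size_t used;
};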

--------------------------------------------
Ratings are Opinion, not Fact
Quote: "It is wise to pick initial source and goal nodes far away from each other, so that many other nodes are explored in the process."

A little late to the party, but if you literally want all destinations, pick a non-reachable destination (or create one, e.g. by adding a flag like "if dest_reachable and dest_pos == current_pos then <found destination>"). The pathfinder will continue to expand nodes in an effort to reach the destination, until it runs out of nodes to expand.

If you always want to compute all destinations, you can simplify the algorithm: delete the "dest reached" check and all code about H. (This gives you Dijkstra's algorithm, since A* with H == 0 is equivalent to Dijkstra.)

My focus has now turned to how the reservation is checked:


// extents of the tiles the agent currently covers
// (the min/max swap on Z is intentional; see the note below)
int minX = minTile->x;
int minZ = maxTile->z;
int maxX = maxTile->x;
int maxZ = minTile->z;

bool isIntersected = false;
for (int i = minX; i <= maxX; i++)
{
	for (int j = minZ; j <= maxZ; j++)
	{
			 
		int realX, realZ;
		realX = i;
		realZ = j;
		std::stringstream key;
		key << realX << ":" << realZ << ":" << t; 
			
		// if it is the object itself, it is valid, and can bypass
		// the global reservation table
		if (reservedBBs[unit->GetName()].find(key.str()) != reservedBBs[unit->GetName()].end())
		{
			// early out
			return false;
		}

		// if not found in local reservation tables, check global reservation table
		// slowwwww..., have to check all entries of the extents
		if (reserved.find(key.str()) != reserved.end()) {
			isIntersected = true;
			break;
			
		}			
	}
}
return isIntersected;

The minX to maxX and minZ to maxZ ranges (please don't take notice of the min/max order, it is intentional) are the extents the agent is currently covering. Say the agent covers x from 18 to 20 and z from 20 to 22; then I have to check that whole range against the global reservation table, which tends to be very slow.

The local reservation table is for when the agent is checking its own spot: it can ignore that, consider it unreserved, and early out. But even if the two agents don't overlap at all, it still takes a lot of hash-table searching.

As for the search time (and I also don't know whether TimeAStar is genuinely unable to find a path), it takes 30000 iterations to reach the destination, which results in 20 seconds to resolve. Are there any good ways I can work around this?

Thanks
Jack


You seem to be doing a lot of string operations, which is horrible for performance.

Some low-hanging fruit:

"reservedBBs[unit->GetName()]" is computed twice in each inner iteration, yet it looks constant across the loops. You can set up a reference to this result before the loops.

"realX" and "realZ" don't buy you anything by the looks of it; why not rename i -> realX and j -> realZ? (It won't help performance though, I expect.)

Some higher-hanging fruit:

unit->GetName() sounds like a string to me, but perhaps it is not. If it is a string, however, consider replacing it with an integer or an enum value (if the set is predefined). Computers are much faster at comparing numbers than at comparing strings.

"key << realX << ":" << realZ << ":" << t;" constructs a string representation of your X and Z (and of something 't'; no idea what that does, so I ignore it here). Then you go about comparing that text with other strings. It is faster to stay in the numeric domain: make a class with X and Z numbers and use it as the key for your maps. If the X and Z ranges are not too large, you can even merge both into a single integer.

E.g. if they are both in the range 0..65535 (16 bit), you can do "unsigned long v = (((unsigned long)realX) << 16) | realZ;" // the lower 16 bits of v contain Z, the upper 16 contain X

and use 'unsigned long' as key type.
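To make that concrete, here is a minimal sketch of the packed-key idea, extended to also fold in the timestep t (this is an illustration rather than the thread's code, and it assumes x and z fit in 16 bits and t in 32; adjust the widths to the real ranges):

#include <cstdint>
#include <unordered_set>

// Pack x, z and the timestep into one 64-bit key.
inline std::uint64_t MakeReservationKey(int x, int z, long t)
{
    return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(t)) << 32) |
           (static_cast<std::uint64_t>(static_cast<std::uint16_t>(x)) << 16) |
            static_cast<std::uint64_t>(static_cast<std::uint16_t>(z));
}

// The reservation tables then become sets of integers instead of strings.
std::unordered_set<std::uint64_t> reserved;

bool IsReserved(int x, int z, long t)
{
    return reserved.find(MakeReservationKey(x, z, t)) != reserved.end();
}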

G'day... Just wondering how I can compare keys of an unordered_set in two different ways. First, I'd like to check for equality; next, I want to check for range overlap, like this:


struct NodePoolKey {

	NodePoolKey(int minX, int maxX, int minZ, int maxZ, long t) {
		this->minX = minX;
		this->minZ = minZ;
		this->maxX = maxX;
		this->maxZ = maxZ;
		this->t = t;
	}

	bool operator==(const NodePoolKey& other) const 
	{		 
		return (this->minX == other.minX) &&
			(this->minZ == other.minZ) &&
			(this->maxX == other.maxX) &&
			(this->maxZ == other.maxZ) &&
			(this->t == other.t);
	}

	/*bool operator==(const NodePoolKey& other) const {
		return overlaps(*this, other);
	}*/
	 

	bool overlaps(const NodePoolKey& a, const NodePoolKey& b) const {
		if (a.maxX < b.minX) return false; // a is left of b
		if (a.minX > b.maxX) return false; // a is right of b
		if (a.maxZ < b.minZ) return false; // a is above b
		if (a.minZ > b.maxZ) return false; // a is below b

		TRACE("overlaps");
		return true;
	}


	int minX;
	int maxX;
	int minZ;
	int maxZ;
	long t;

};

I think this will reduce the cost of reserving, reclaiming and checking the reservation tables, which is currently done depth*extents times. But the problem is I can't just overload the == operator twice.

Any ideas?

Thanks Jack

Hello,
Another question.
How do I design the scale between the h score and the g score?
When the agent moves vertically (up/down) or sideways, it costs exactly 4.0f, and when it moves diagonally, it costs about 5.x; that is by design.
For the wait move I just give it 4.0f, because although the diagonal move costs more in g, it costs less in h, so the total f score could be lower.
It doesn't quite work, though: this heuristic is not consistent, and even when the path is clear I see the agent jump back to an earlier position.
The h score is based on the distance between the current node and the neighbour; however, the tiles may not be adjacent, because I skip ahead by the agent radius and move to the next tile at the original location plus the agent radius.
Also, I have to use small tiles, because otherwise some accessible tiles cannot be built by the grid-building algorithm, so I make them smaller to fit the scene.
The original setup
Sideways: 1.0
Diagonal: 1.4
wait: 1.0

The real setup (maybe wrong)
Sideways: 4.0
Diagonal: 5.6
wait: 4.0

Any better ideas for me?
Thanks
Jack

Equality:

It is generally recommended to overload operators only when they express the exact same notion. Your == implementation really does equality, so that should be fine.
Your 'overlaps' is not equality, it's overlap. There are lots of NodePoolKeys that have overlap but are not equal to each other.

You can in theory use a different operator, like <, but you're comparing in two dimensions which makes things very hazardous.
The < relation has a number of properties you must obey: for a != b, it must hold that 'a < b or b < a, but not both'; for a == b, you must have 'not a < b and not b < a'; and last but not least, a < b and b < c implies a < c.
If you break these properties, the < becomes unreliable as code using the < operator relies on them.

The general recommendation says not to overload. Instead, make it a normal function, so you can test arbitrary NodePoolKeys as desired.
I don't understand why you want to attach it to the == operator. Is there some need to do that?
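As an illustration of that recommendation, using the NodePoolKey struct posted above (the hash function here is an assumption, not something from the thread): keep operator== as true equality so the key works in an unordered_set, and make the overlap test an ordinary named function you call explicitly.

#include <unordered_set>
#include <functional>
#include <cstddef>

// Overlap is a different question than equality, so it gets its own name
// instead of hijacking an operator.
bool Overlaps(const NodePoolKey& a, const NodePoolKey& b)
{
    if (a.maxX < b.minX) return false;
    if (a.minX > b.maxX) return false;
    if (a.maxZ < b.minZ) return false;
    if (a.minZ > b.maxZ) return false;
    return true; // (you may also want to require a.t == b.t here)
}

// A hash so NodePoolKey can sit in an unordered_set, paired with the real
// operator== for equality checks.
struct NodePoolKeyHash
{
    std::size_t operator()(const NodePoolKey& k) const
    {
        std::size_t h = std::hash<int>()(k.minX);
        h = h * 31 + std::hash<int>()(k.maxX);
        h = h * 31 + std::hash<int>()(k.minZ);
        h = h * 31 + std::hash<int>()(k.maxZ);
        h = h * 31 + std::hash<long>()(k.t);
        return h;
    }
};

std::unordered_set<NodePoolKey, NodePoolKeyHash> reservations;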


Scale:

If you want the optimal path, the estimate from a point p to the destination should never return a bigger value than the real remaining distance from point p to the destination.
On the other hand, you want the estimate as close as possible to the real remaining distance to get good steering of the search.

Your next point is selected based on g + h (real traveled distance + estimate of remaining distance).
- If you make h 0 at all times, walking in the wrong direction is just as good as walking in the right direction. As a result, the search expands equally in all directions.
- If you make h exactly the real remaining distance and you take a step in the right direction, g grows a bit and h decreases by the same amount, so g + h is the same at the next point. If you move in the wrong direction, g grows a bit and h grows too, punishing 'bad moves' by twice the distance that you moved.
- If you make h too big, g + h is too big compared with reality. Now when you pick a point to expand, it must have the smallest g + h. You take a step, g grows a bit, and h decreases more than g grows, so the new point looks even better than the previous point! As a result, in the next iteration this path will again be extended, again decreasing g + h. You thus stick with the path you expanded first and keep expanding it, with little regard for its optimality.

The latter is useful if you want "a path" rather than the optimal path. It dramatically increases the search speed at the cost of not getting the optimal path.
It can be useful for very large search spaces.
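Relating this back to the 4.0 / 5.6 cost scale asked about earlier: a common admissible choice on such a grid is the octile distance built from the same per-step costs. A minimal sketch under that assumption (the function name is illustrative, not from the original code):

#include <algorithm>
#include <cstdlib>

// Octile-distance heuristic using the same per-step costs as the grid
// (4.0 for straight moves, 5.6 for diagonal moves). It equals the cost of the
// best possible unobstructed path, so it never overestimates and the search
// stays optimal.
float OctileHeuristic(int x, int z, int goalX, int goalZ)
{
    int dx = std::abs(goalX - x);
    int dz = std::abs(goalZ - z);
    int diag = std::min(dx, dz);
    int straight = std::max(dx, dz) - diag;
    return 5.6f * diag + 4.0f * straight;
}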

This topic is closed to new replies.
