Given a map, I want to find the areas that would be traveled heavily. This could be useful for finding strategic points, crossroads, dead ends, places to build roads, etc.
Here’s an example map with rooms and corridors. Yellow areas show up on more paths. You can see that the central corridor is heavily traveled. It’s the only path between the west and east sides of the map. Try adding another corridor to remove the bottleneck. Try adding more bottlenecks.
To analyze this map, I want to run pathfinding from all points to all other points. Then I’ll count how often each tile shows up in a path. There are several pathfinding algorithms that could be useful here.
- A*[1]. This is the classic pathfinding algorithm that finds a path from a point P to a point Q. We could run it once for each P and each Q to get all paths.
- Dijkstra’s Algorithm[2]. This would let us run from a single P to all Q, giving us a lot more results at once. It would run significantly faster than running A* once per Q. We could run it once for each P, each time getting paths to all Q. In the end we get paths from all P to all Q.
- Bellman-Ford[3]. This works for one P and all Q, like Dijkstra’s, but it also handles negative edge weights, and runs slower than Dijkstra’s.
- Breadth First Search[4]. This runs even faster than Dijkstra’s Algorithm but only if we have constant movement costs. We wouldn’t be able to analyze the map with slow movement through forests vs fast movement through plains.
- Johnson’s Algorithm[5]. This extends the idea of running Dijkstra’s Algorithm repeatedly, but handles negative edge weights.
- Floyd-Warshall[6]. This is a simpler, tighter loop for analyzing paths from all P to all Q. It handles negative edge weights.
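To make that last one concrete, here’s a rough sketch of what that loop looks like (not the code behind the demos on this page); the graph representation is just an assumption for illustration, and a `next_node` table is kept so individual paths can be reconstructed:

```python
# Hypothetical sketch of Floyd-Warshall with path reconstruction; not the
# code used for the demos on this page.
INF = float('inf')

def floyd_warshall(nodes, edge_cost):
    """nodes: iterable of hashable ids; edge_cost: dict (a, b) -> cost of edge a->b."""
    nodes = list(nodes)
    dist = {(a, b): (0 if a == b else edge_cost.get((a, b), INF))
            for a in nodes for b in nodes}
    next_node = {(a, b): (b if (a, b) in edge_cost else None)
                 for a in nodes for b in nodes}
    for k in nodes:              # the "simpler, tighter loop": three nested loops
        for i in nodes:
            for j in nodes:
                if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                    dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
                    next_node[(i, j)] = next_node[(i, k)]
    return dist, next_node

def reconstruct_path(p, q, next_node):
    """List the nodes on one shortest path from p to q, or [] if unreachable."""
    if p == q:
        return [p]
    if next_node[(p, q)] is None:
        return []
    path = [p]
    while p != q:
        p = next_node[(p, q)]
        path.append(p)
    return path
```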
For performance, the general rules of thumb here are:
- it’s faster to use a batch algorithm that calculates many paths at once than to use a single path algorithm repeatedly
- it’s faster to use an algorithm that’s specialized to handle fewer cases (constant costs faster than integer costs faster than non-negative costs faster than all costs)
For the demo on this page I want many paths, and I also have constant costs. The first rule of thumb suggests I should use Floyd-Warshall because calculating all paths at once should be faster. The second rule of thumb suggests I should use Breadth First Search because an algorithm specialized for constant costs should be faster than an algorithm that works for any costs.
I had originally written this page to use Floyd-Warshall, and it ran in 200–400 ms, which isn’t bad for map analysis. There are 450 tiles on this map, so there are 450×449 = 202,050 shortest paths to calculate. That’s a lot of paths! But I then implemented it with Breadth First Search, and it took 5–15 ms. Whoa! Always profile.
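Here’s a minimal sketch of the Breadth First Search version of the analysis (not the page’s actual code): one search per start tile, then walk each path backwards through the parent pointers and count every tile it touches. The `neighbors` helper is a placeholder for whatever map representation you use; it should yield the passable tiles adjacent to a given tile.

```python
from collections import Counter, deque

def bfs_parents(start, neighbors):
    """Breadth First Search; returns a parent map covering every tile reachable from start."""
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        current = frontier.popleft()
        for nxt in neighbors(current):
            if nxt not in came_from:
                came_from[nxt] = current
                frontier.append(nxt)
    return came_from

def count_path_tiles(starts, goals, neighbors):
    """Count how many of the start->goal shortest paths pass through each tile."""
    counts = Counter()
    for start in starts:
        came_from = bfs_parents(start, neighbors)
        for goal in goals:
            if goal == start or goal not in came_from:
                continue
            tile = goal
            while tile is not None:      # walk the path back to the start tile
                counts[tile] += 1
                tile = came_from[tile]
    return counts

# For the full analysis, every tile is both a start and a goal:
# counts = count_path_tiles(all_tiles, all_tiles, neighbors)
```

The heavily traveled tiles are the ones with the highest counts.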
Here’s another map to play with:
In this one you can see that inside the central room there are some diagonal paths being used when crossing from the north doorway to the south doorway, or vice versa.
For larger maps, there are some more things to try:
- Use a random subset of points P and Q. This will provide a reasonable approximation (see the sketch after this list).
- Use a non-random subset of points P and Q that represent the places your creatures actually want to go.
- Run the analysis in parallel. Floyd-Warshall seems especially well suited to the GPU (though I haven’t tried it), and Breadth First Search runs independently from each start point, so it can be parallelized relatively easily.
- Run a portion of the analysis each frame.
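For the random-subset idea in the first bullet, a sketch might look like this, reusing `count_path_tiles` from the earlier sketch; the sample sizes and seed are arbitrary:

```python
import random

def approximate_counts(all_tiles, neighbors, n_starts, n_goals, seed=0):
    """Approximate the analysis by sampling a subset of start and goal tiles."""
    rng = random.Random(seed)
    tiles = list(all_tiles)
    starts = rng.sample(tiles, min(n_starts, len(tiles)))
    goals = rng.sample(tiles, min(n_goals, len(tiles)))
    return count_path_tiles(starts, goals, neighbors)
```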
A warning for grid maps: grids have many “ties” in the paths, and the pathfinding algorithm will choose one arbitrarily. This can lead to artifacts in the map analysis. It might be better to add slight randomness to the movement costs on each run so that these artifacts get smoothed out. I haven’t tried this yet.
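I haven’t tried it, but one possible version of that idea is sketched below. Random movement costs rule out plain Breadth First Search, so this sketch uses Dijkstra’s Algorithm instead, jitters every step cost slightly, and averages the counts over a few runs; `neighbors`, `jitter`, and `runs` are all assumptions.

```python
import heapq
import random
from collections import Counter

def dijkstra_parents(start, neighbors, step_cost):
    """Dijkstra's Algorithm; returns a parent map for shortest paths from start."""
    dist = {start: 0.0}
    came_from = {start: None}
    frontier = [(0.0, start)]                  # tiles assumed comparable, e.g. (x, y) pairs
    while frontier:
        d, current = heapq.heappop(frontier)
        if d > dist[current]:
            continue                           # stale queue entry
        for nxt in neighbors(current):
            nd = d + step_cost(current, nxt)
            if nxt not in dist or nd < dist[nxt]:
                dist[nxt] = nd
                came_from[nxt] = current
                heapq.heappush(frontier, (nd, nxt))
    return came_from

def jittered_counts(tiles, neighbors, runs=5, jitter=0.01, seed=0):
    """Average tile counts over several runs with slightly randomized step costs."""
    rng = random.Random(seed)
    totals = Counter()
    tiles = list(tiles)
    for _ in range(runs):
        step_cost = lambda a, b: 1.0 + rng.uniform(0.0, jitter)
        for start in tiles:
            came_from = dijkstra_parents(start, neighbors, step_cost)
            for goal in tiles:
                if goal == start or goal not in came_from:
                    continue
                tile = goal
                while tile is not None:        # walk the path back to the start tile
                    totals[tile] += 1
                    tile = came_from[tile]
    # Divide by the number of runs so the scale matches a single analysis.
    return {tile: count / runs for tile, count in totals.items()}
```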
Sometimes there’s a more specialized algorithm for the type of map analysis you want. For example, Tarjan’s algorithm can be used to find choke points[7].
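One common reading of that is finding articulation points, tiles whose removal disconnects the walkable area, using the low-link depth-first search from Tarjan’s work. Here’s a sketch under that assumption, reusing the hypothetical `neighbors` helper from the earlier sketches:

```python
def articulation_points(tiles, neighbors):
    """Find tiles whose removal disconnects the walkable area (cut vertices)."""
    index = {}            # DFS discovery order of each tile
    low = {}              # lowest discovery index reachable from the subtree
    cut = set()
    counter = 0

    def dfs(node, parent):
        nonlocal counter
        index[node] = low[node] = counter
        counter += 1
        children = 0
        for nxt in neighbors(node):
            if nxt == parent:
                continue
            if nxt in index:
                low[node] = min(low[node], index[nxt])
            else:
                children += 1
                dfs(nxt, node)                 # may need a higher recursion limit on big maps
                low[node] = min(low[node], low[nxt])
                if parent is not None and low[nxt] >= index[node]:
                    cut.add(node)
        if parent is None and children > 1:    # a DFS root with 2+ children is a cut vertex
            cut.add(node)

    for tile in tiles:
        if tile not in index:
            dfs(tile, None)
    return cut
```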
Also see: Betweenness centrality[8] and Brandes’s Algorithm[9] [PDF].