So forgive the stream of consciousness, and read on if you wish. Any ideas, counter or otherwise, are welcome.
It's an easy assumption to make that movement and unit control are important. It's arguably the most important thing to get right in any game. I know it is one of Miyamoto's three most important things for game development (Character, Camera, Control, IIRC), so it deserves some attention. In our game, the control is somewhat more complicated, since it is neither direct nor constrained. Almost all movement is given as an intent; rather than moving the units directly, the units are told they need to move, however they can manage that given their other constraints. This is important because managing those constraints is critical to gameplay.
Anyway, what are the constraints that make this complicated?
- Stay as close to in-formation as you can, maximally deviating by a class-specified variable distance.
- Move towards the user's selected goal location (including pathfinding)
- Get to a class-specified distance from your current hostile target.
The approach so far has been to compute an 'ideal' location for each of the criteria, then use those to weight each currently-adjacent cell, thereby determining which direction to move (or not to move, if already at the ideal). Even computing the ideals is complicated, so let's ponder that first.
- Compute the current estimated formation location of the squad as a whole, and from that compute our ideal offset.
- The user's goal location is set for the formation as a whole; finding a relative position to this is not complicated. When obstacles are included and pathfinding is required, it becomes a mess, since individual units may opt to take a different path to the ideal location.
- The attack-distance ideal can be defined as the closest location to the current location that is at the specified distance from the target. This one is pretty easy, at least (a small sketch follows below).
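To make that third one concrete, here's a minimal Python sketch under the assumption of continuous 2D positions; the names (`attack_distance_ideal`, `attack_range`) are invented, and the real game would snap the result to the nearest walkable hex cell.

```python
import math

def attack_distance_ideal(unit_pos, target_pos, attack_range):
    """Closest point to the unit that sits at attack_range from the target.

    unit_pos / target_pos are (x, y) tuples in world space; a real
    implementation would snap the result to the nearest walkable hex cell.
    """
    dx = unit_pos[0] - target_pos[0]
    dy = unit_pos[1] - target_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        # Unit is standing on the target; any direction works, pick +x.
        return (target_pos[0] + attack_range, target_pos[1])
    # Walk from the target toward the unit until we are attack_range away.
    scale = attack_range / dist
    return (target_pos[0] + dx * scale, target_pos[1] + dy * scale)
```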
So given three target locations, how do we decide which to use? So far the subtleties in this decision have all resulted in problems. If 1 is too important, then the units cannot move to the target location. If 2 is too important, the units deviate from each other and the formation mechanics break down terribly once combat is engaged. (Oh, if we're not in combat then most of this is easier, so we're ignoring that condition for now.) If 3 is too important, it is impossible to retreat, since units will stay at combat range. Anyone who has played the game at various points has probably seen all three of these faults.
While we're making things sound dire, a few more complications: Pathfinding that accounts for formation offsets is complicated. That is to say, if you want the formation to move around a corner while staying in formation, the individual paths are substantially more complicated than a simple A* will find. Each step would need to determine the 'radius' it accommodates, but this would have to have some directionality to it. Units on the right edge of a formation would need no extra space around right turns, but lots of space on left turns. Can this be computed efficiently? Each cell could retain 6 'wedge distances', perhaps, which represent the distance within each hex wedge that is walkable. Currently we retain a single 'free radius', but there is no directionality to it. (This radius is used to limit occupancy by large units, and was used in some formation movement tests already.)
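For what it's worth, here's a rough sketch of what a per-cell clearance record with six wedge distances might look like. `CellClearance`, `wedge_clearance`, and the rest are invented names; the only field the current code actually has is the single free radius.

```python
from dataclasses import dataclass, field

WEDGE_COUNT = 6  # one wedge per hex direction

@dataclass
class CellClearance:
    # Existing scheme: a single omnidirectional walkable radius.
    free_radius: int = 0
    # Proposed scheme: walkable distance within each of the six hex wedges,
    # indexed by direction (0..5). A unit on the right edge of a formation
    # would consult the left-side wedges when rounding a left turn, etc.
    wedge_clearance: list[int] = field(default_factory=lambda: [0] * WEDGE_COUNT)

    def clearance_toward(self, direction: int) -> int:
        """Walkable distance in the given hex direction (0..5)."""
        return self.wedge_clearance[direction % WEDGE_COUNT]
```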
A second option for formation pathfinding is to compute the path at the squad level rather than the individual unit level. This could solve the problems with diverging paths as well. The question is how and when to compute the next step. It could be recomputed every time the estimated current formation location changes, and set to a few cells forward on the path, perhaps. If computed this way, the singular occupancy radius could be used to generate a smooth formation path, which should keep the units together.
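Sketching that out (all names hypothetical, and `find_path` standing in for whatever squad-level search ends up being used): whenever the estimated formation location changes cell, re-path from there and push the target a few cells down the path.

```python
LOOKAHEAD_CELLS = 3  # "a few cells forward"; would need tuning

def update_formation_target(squad, goal_cell, find_path):
    """Recompute the squad-level path whenever the estimated formation
    location changes cells, then advance the target a few cells along it.

    `find_path(start, goal, radius)` is assumed to return a list of cells
    (excluding the start) and to respect the formation's occupancy radius
    so that corners get rounded for the whole squad.
    """
    current_cell = squad.estimated_formation_cell()
    if current_cell == squad.last_pathed_cell:
        return squad.target_formation_cell  # no cell change, keep the target

    squad.last_pathed_cell = current_cell
    path = find_path(current_cell, goal_cell, squad.occupancy_radius)
    if not path:
        return squad.target_formation_cell  # unreachable or already there
    # Clamp the lookahead in case the goal is only a couple of cells away.
    index = min(LOOKAHEAD_CELLS, len(path)) - 1
    squad.target_formation_cell = path[index]
    return squad.target_formation_cell
```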
See? That was an idea. I hadn't thought of that before. This is useful. I like this experiment.
So I suppose now we have a coherent answer for the ideal locations. Back to resolving the decision making once the ideals are known.
It feels like, rather than a weighting function, a sequential set of deciding criteria may work (a sketch follows the list). Consider:
- If the unit is not within tolerance of the current formation ideal, move to that, otherwise:
- If the unit is not within tolerance of the target formation ideal, move to that, otherwise:
- Move to the target-distance ideal.
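The simplest possible reading of that, as a hedged Python sketch: the helper names are made up, and `step_toward` hand-waves the "pick the adjacent cell closest to the ideal" part.

```python
import math

def within_tolerance(pos, ideal, tolerance):
    """True if pos is close enough to the ideal location."""
    return math.dist(pos, ideal) <= tolerance

def step_toward(pos, ideal):
    """Placeholder: the real game would return an adjacent hex cell."""
    return ideal

def pick_move(unit):
    """Fall through the three criteria in order, one decision per tick."""
    if not within_tolerance(unit.pos, unit.current_formation_ideal, unit.leeway):
        return step_toward(unit.pos, unit.current_formation_ideal)
    if not within_tolerance(unit.pos, unit.target_formation_ideal, unit.leeway):
        return step_toward(unit.pos, unit.target_formation_ideal)
    return step_toward(unit.pos, unit.target_distance_ideal)
```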
Could something this simple just work? The current approach uses weighting functions rather than a single decided purpose for each move. Maybe it is worth a try.
It's worth noting that when I started writing that last paragraph, I assumed this approach wouldn't work, and I began writing reasons why. I failed, but isn't it interesting to note where one assumes incorrectly, without even noticing that one is confused?
Recapping the movement process:
- Component 1: Formation estimation
- The formation leader maintains an estimate of the current formation location. Whenever this location changes cells, a new path will be calculated to the user's target location, using formation-width weighting to round corners. The new target formation location will be set a few cells down this path. (Setting it to merely one cell ahead should result in some nasty stutter-stepping and un-smoothness, but given earlier mistakes, I will TEST this when implemented.)
- Component 2: Per-unit movement step
- Compute the current-formation ideal location and leeway. If the current location is outside of this leeway, move towards the ideal and finalize decision.
- Compute the target-formation ideal location and leeway. If the current location is outside of this leeway, move towards the target-ideal and finalize. NEW IDEA: If the step towards the target-ideal would violate the current-ideal leeway, STOP motion and wait one tick to re-evaluate. We don't want to cause cyclical stepping between target and current in the case that the current is 'stuck' away from the target.
- Scan neighbor locations for the cell closest to the target-distance ideal. If a better cell exists and it violates neither the current-ideal nor the target-ideal leeway, move to this cell and finalize (sketched below).
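Here's a rough Python sketch of the per-unit step as recapped above. `neighbors` and all the attribute names are hypothetical stand-ins for the real unit/map API, and the "stop and wait" rule falls out naturally when no neighboring cell satisfies the post-constraints.

```python
import math

def within(pos, ideal, leeway):
    return math.dist(pos, ideal) <= leeway

def per_unit_step(unit, neighbors):
    """One movement decision per tick. `neighbors(pos)` yields the walkable
    cells adjacent to pos."""
    # Step 1: pull back inside the current-formation leeway first.
    if not within(unit.pos, unit.current_formation_ideal, unit.current_leeway):
        return min(neighbors(unit.pos),
                   key=lambda c: math.dist(c, unit.current_formation_ideal),
                   default=unit.pos)

    # Step 2: head for the target-formation ideal, but never by breaking the
    # current-formation leeway; if no neighbor satisfies it, stop and wait a
    # tick rather than oscillate between the two ideals.
    if not within(unit.pos, unit.target_formation_ideal, unit.target_leeway):
        candidates = [c for c in neighbors(unit.pos)
                      if within(c, unit.current_formation_ideal, unit.current_leeway)]
        if not candidates:
            return unit.pos  # stop; re-evaluate next tick
        return min(candidates,
                   key=lambda c: math.dist(c, unit.target_formation_ideal))

    # Step 3: close toward the target-distance ideal, but only via cells that
    # violate neither leeway, and only if it's actually an improvement.
    candidates = [c for c in neighbors(unit.pos)
                  if within(c, unit.current_formation_ideal, unit.current_leeway)
                  and within(c, unit.target_formation_ideal, unit.target_leeway)]
    best = min(candidates,
               key=lambda c: math.dist(c, unit.target_distance_ideal),
               default=unit.pos)
    if math.dist(best, unit.target_distance_ideal) < math.dist(unit.pos, unit.target_distance_ideal):
        return best
    return unit.pos
```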
For now I'll use A* for the pathing, and I'll ignore efficiency entirely. (Premature optimization is the root of all evil, after all.)
I can do this. Let's see what happens.
-------------
Here's an image of the scenario editor showing a variable-width path. I rigged up the editor to show me the pathing calculations; easier than expected, even though it requires bringing in server code to the editor. Apparently the mapping subsystem isn't coupled too tightly to the rest.
-------------
Aftermath 1: They dance around a bit, but they at least get around, eventually. The twitchiness will need to be addressed, but I believe it is in better shape than before, at least.
Aftermath 2: After adding in the post-constraints (moving towards the target will no longer move a unit outside the current leeway), it works more smoothly. There were some code bugs associated with the timing of motion decisions that got resolved as part of this.
Aftermath 3: There's still a problem when the target and current leeways have no overlap; in that case each unit stops moving entirely. In theory this just means the unit is waiting for the formation to reform correctly, but there are stable cases where this happens; for example, a formation turning directly around. Each unit is stuck waiting on the others; kind of like deadlock. Adjusting the formation location analyzer helped this a bit, since the prior one was quantized in angle and really angle-twitchy, but the lock is still possible. Smoothly changing the rotation from start to finish may be necessary as a pathing-like step.
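If it comes to that, the rotation smoothing could be as simple as this speculative sketch (hypothetical names; angles in radians): interpolate the formation facing from its starting angle to the goal angle as a function of progress along the squad path, always taking the shorter way around.

```python
import math

def interpolated_facing(start_angle, goal_angle, path_progress):
    """Formation facing at a given fraction of the path (0.0 .. 1.0).

    Takes the shortest way around the circle, so a direct about-face is
    spread over the whole move instead of snapping at the start or end.
    """
    delta = (goal_angle - start_angle + math.pi) % (2.0 * math.pi) - math.pi
    return start_angle + delta * max(0.0, min(1.0, path_progress))
```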