Saturday, December 5, 2009

A programming aside: SRP in World's code

It's a bit different from most recent posts, but I felt like talking about good code, bad code, and some of World's code design that has paid off, in theory.

Divide-and-conquer is a pretty well known strategy, in any field. Any task which is sufficiently complicated has to be broken into pieces to be completed. Needless to say, almost any real programming task is beyond the mental capacity of an individual to solve all at once. Breaking the problem into pieces is a fundamental task in programming, and most advances in programming, historically, have been about making this division easier.

Now, there are many ways to divide things up. Or rather, there are many ways to combine the pieces together. Some of them are bad. For example, introductory object-oriented programming teaches us to split things up by classes, and further by their respective nouns (data) and verbs (functions). This is bad! Very bad. In the long run, you end up with a class with a dizzying variety of tasks, data, and code, none of which may be terribly related to each other. Furthermore, the noun/verb class abstraction often breaks down in terrible ways. They say a square is a rectangle, but try to implement a square and a rectangle class with that relationship embedded and you'll have trouble, I promise you.
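To make the square/rectangle trouble concrete, here's a minimal sketch (my own illustration, not anything from World's code) of how the 'a square is a rectangle' inheritance breaks code written against the rectangle's contract:

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def set_height(self, h):
        self.h = h

    def area(self):
        return self.w * self.h


class Square(Rectangle):
    # A square must keep its sides equal, so each setter
    # silently changes BOTH dimensions.
    def set_width(self, w):
        self.w = self.h = w

    def set_height(self, h):
        self.w = self.h = h


def stretch(rect):
    # Written against Rectangle's contract: setting the width
    # should not affect the height.
    rect.set_width(5)
    rect.set_height(4)
    return rect.area()

print(stretch(Rectangle(2, 2)))  # 20, as the caller expects
print(stretch(Square(2, 2)))     # 16 -- the square "fixed" itself
```

The caller did nothing wrong; the inheritance relationship itself is the bug.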

Anyway. We all agree that dividing the work up is a necessity. However, as soon as we've done the work, we presumably integrate it back together to form the whole. But, and here's the point, if we have to go back to it later, how do we redivide it? We broke up the work, but only once, and then we threw the division away! That seems silly. Why not retain the division somehow, so that if we have to go back to the code a year later, it is already divided into understandable components?

This has been codified into a programming principle called SRP, the Single Responsibility Principle. When following this principle, the rule is that any given piece of code must only 'care' about a single aspect of the program. Defining 'care' is tricky of course, but I define it to mean anything I can understand without reference. If the code brings two complicated systems together, and each is too complicated to keep in my brain, that's too much responsibility, and the offending code must be further divided (and conquered).

An anecdotal example. In Twilight 1, each potential action a unit could perform was a discrete class. Each individual class required a large set of knowledge to perform its job. A 'rob that building' action needed to know how to move, how to respond to danger, how to rob, and how to return with the results. Not only was each action very difficult to get working, but they inevitably had bugs. The code was too complicated, because it knew too much.

To fix this, in Twilight 2 I implemented an action sequencing system. Robbery was defined as 'Move to Building', 'Rob', and 'Return', and these were reusable pieces. How could this single element, the sequence, possibly be incorrect? It can't. There are no possible bugs. The sub-components can be wrong, but that's not the worry of this action, so it MUST work. I never had to touch the sequences again. (Actually, since I implemented my own scripting language to build these sequences, I overdid it greatly and suffered from second-system syndrome, but that's a talk for another post.)
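Here's roughly the shape of that sequencing idea, sketched in Python with made-up action names (the real Twilight 2 version was its own scripting language, remember):

```python
class Sequence:
    """A sequence is nothing but an ordered list of sub-actions.
    It has exactly one responsibility: run them in order."""

    def __init__(self, *steps):
        self.steps = steps

    def run(self, unit):
        for step in self.steps:
            step(unit)  # each step is a self-contained action

# Reusable pieces -- each knows only its own job.
def move_to_building(unit): unit.append("moved")
def rob(unit):              unit.append("robbed")
def return_home(unit):      unit.append("returned")

robbery = Sequence(move_to_building, rob, return_home)

log = []
robbery.run(log)
print(log)  # ['moved', 'robbed', 'returned']
```

If robbery misbehaves, the bug is in a step, never in the sequence itself.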

So yes, Single-Responsibility is good. It makes for a HUGE number of classes, but each one is individually understandable, debug-able, and iterate-able. Ironically, the only good way to get SRP out of traditional Object-Oriented Programming is to invert the traditional approach. Classes are no longer nouns with function verbs. Classes are now verbs that operate on classes that are nouns. This separates data storage and activity. Each potentially complicated verb is split into its own responsibility, and life is good.

World's design, entirely by accident, absolutely enforces this paradigm. I didn't even realize it until I had been working with it for a year.

When working with the game entities in World, there are only 3 concepts available for use: Attributes (just data), States (usually implemented as a State Machine), and Actions (which are just functions, but with some attached metadata). When constructing a new action, there is a set of action classes already implemented, which can be used, or a new one can be implemented. An action implementation has a known definition and limited data availability. It cannot possibly do more than an Action is supposed to do. Each action definition, therefore, is extremely easy to understand, debug, or copy.
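To illustrate (names and details invented here, going only off the description above), an Action is just a function with attached metadata, handed only the data it's allowed to see:

```python
class Action:
    """An action: a function plus some attached metadata."""

    def __init__(self, name, cooldown, func):
        self.name = name          # metadata
        self.cooldown = cooldown  # metadata
        self.func = func

    def invoke(self, attributes):
        # The implementation receives only the entity's attributes --
        # it cannot possibly reach into anything else.
        return self.func(attributes)

# A hypothetical action definition: heal 10, capped at max_hp.
heal = Action("heal", cooldown=5.0,
              func=lambda attrs: min(attrs["max_hp"], attrs["hp"] + 10))

entity = {"hp": 42, "max_hp": 50}   # Attributes: just data
entity["hp"] = heal.invoke(entity)
print(entity["hp"])  # 50
```

The point is the constraint: with so little surface area, a single action definition is trivial to understand, debug, or copy.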

States are slightly more complicated, since state machines are not a simple construct. A state machine is a set of states (just a number, really), each of which has a set of transitions (conditions for moving to a new state), and behaviors (just some operation again, like an Action). If new code is needed, the only places to hook in are at these two types. Either you make a new transition, or a new behavior. And, like actions, these consist of a very small set of functions and responsibilities. They are simple. There are a lot of them, but each one is simple.
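A sketch of that state-machine shape, with illustrative names (the real implementation surely differs):

```python
class StateMachine:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # state -> [(condition, next_state)]
        self.behaviors = {}    # state -> behavior to run

    def add_transition(self, state, condition, next_state):
        self.transitions.setdefault(state, []).append((condition, next_state))

    def tick(self, ctx):
        # Each transition is one tiny, independently testable condition.
        for condition, next_state in self.transitions.get(self.state, []):
            if condition(ctx):
                self.state = next_state
                break
        behavior = self.behaviors.get(self.state)
        if behavior:
            behavior(ctx)

sm = StateMachine("idle")
sm.add_transition("idle", lambda c: c["enemy_near"], "attack")
sm.behaviors["attack"] = lambda c: c.__setitem__("acted", True)

ctx = {"enemy_near": True}
sm.tick(ctx)
print(sm.state)      # attack
print(ctx["acted"])  # True
```

New code hooks in at exactly two places, a new condition or a new behavior, and nowhere else.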

So as it turns out, actions can only act, and transitions can only transit. But perhaps more importantly, only actions can act, and only transitions can transit. This means that nothing else can possibly care about acting or transiting, and that, in effect, simplifies everything else.

Perhaps I should redefine SRP. Single Responsibility Principle: Any given piece of code must only 'care' about a single aspect of the program, and ONLY that piece of code is allowed to 'care' about it. Divided and Conquered.

Wednesday, December 2, 2009


Some have said that laziness is a virtue of a good programmer. Perhaps this is because a lazy programmer will often do a bit of extra work now to be sure he doesn't have to work hard later. We'd rather write an automatic solution than exert manual effort.

Now, I think that's kind of a fallacy, really. A truly lazy programmer won't bother with the efficient solution, and just plods along doing whatever works. Perhaps there is a Ballmer-peak-esque point where laziness yields maximal efficiency, however. Indulge me in a brief anecdote from the engagement implementation.

In Skirmish, two tables of hostile-entity information are retained. First is engagement, which is a constant value, recalculated every 2 seconds. Second is damage-threat, which is additive, but decays at a rate of 5% per second. The only important yield from all this data is knowing which entity is on top of each table, and then combining the two tables to know which entity is the overall 'winner'. Remember, the entire system is event based, so we won't be asking who is on top and reacting. Rather, we need the system to tell us when the top entity changes. Furthermore, we apply a threshold, and define this threshold to be 10%. Only when a new entity reaches 110% of the current maximal entity will the event fire.
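A toy version of that event-driven top-tracking might look like this (all names are mine; the threshold math is the 110% rule above):

```python
class ThreatTable:
    """Tracks values and fires an event when the top entity changes.
    A challenger must reach 110% of the current top to take over."""

    def __init__(self, on_top_changed, threshold=1.10):
        self.values = {}
        self.top = None
        self.on_top_changed = on_top_changed
        self.threshold = threshold

    def update(self, entity, value):
        self.values[entity] = value
        if self.top is None:
            self.top = entity
            self.on_top_changed(entity)
            return
        challenger = max(self.values, key=self.values.get)
        if (challenger != self.top and
                self.values[challenger] >= self.threshold * self.values[self.top]):
            self.top = challenger
            self.on_top_changed(challenger)

events = []
table = ThreatTable(events.append)
table.update("A", 100)  # A becomes top, event fires
table.update("B", 105)  # 105 < 110: no event
table.update("B", 112)  # 112 >= 110: B takes over
print(table.top)  # B
print(events)     # ['A', 'B']
```

Note the system only ever speaks when the top changes; nobody polls it.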

So there are three cases. First, a newly applied engagement value trumps the current top. This is extremely easy: we are informed when engagement changes, so we can easily fire the event if the math is correct. The damage-decay is easy to manage, since at any given time we can compute the current value correctly and easily. Equally easy is when a newly applied engagement value was the previous top and is now no longer top. Very easy to implement.

Second, a newly incoming damage-threat rises above the top. Also easy, for the same reason.

The only tricky case is handling transitions due to damage-threat decay. Herein there are several cases as well. First, one damage-threat drops below another due to decay. But HA! This is easy. Each threat decays at the same exponential rate, so they are guaranteed never to cross, no matter how much time passes! Piece of cake, unless someone decides they want damage-threat to decay at different rates per class. We'll just pretend that'll never happen for now.
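Why they can never cross: both values decay as v · 0.95^t, so the ratio between any two threats is constant forever. A two-line check:

```python
# Both threats decay as v * rate**t, so their ratio never changes
# and the ordering can never flip, however much time passes.

def decayed(v, t, rate=0.95):
    return v * rate ** t

a, b = 120.0, 80.0
for t in (0, 1, 10, 100):
    assert decayed(a, t) > decayed(b, t)
    print(round(decayed(a, t) / decayed(b, t), 6))  # always 1.5
```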

And now the point. What if damage-threat decays below 90.9% of the top engagement threat, so that engagement is now 10% greater than damage-threat? Since the decay is implicit (that is, it is never calculated and assigned a new value), we would have to compute the future time where the decay reaches 90.9% of max engagement, set a timer, and wait till then. While the scheduling system World is written on makes that sort of thing easy, I think you can guess my reaction. I decided that that was too much effort, got lazy, and forgot about it.

Later I grumbled a bit and decided to polish the implementation off and get it right. But I realized a curious thing: the timer will NEVER FIRE. That's right. If I implemented the delayed decay-transit system, it would never be relevant.

A few simple facts combine to constrain the time the decay would require. First, there is a 10% threshold for the top damage-threat to be overcome. Second, damage-threat decays 5% per second. Third, engagement is recalculated every 2 seconds.
In the 2 seconds before engagement is reassigned, damage-threat can decay by at most 1 − 0.95² ≈ 9.75%, which is only marginally more than the roughly 9.1% drop needed to cross the threshold. So it'll never transit, or if it did, it'd be only minutely before the engagement ping would fire anyway. Hooray for laziness!

Actually, all that is wrong.

The 10% threshold is for overcoming the current threat leader. It could be that at the 2-second mark, entity A has 100 damage-threat, entity B has 109 engagement, and entity A is still the 'top' entity, since B has not crossed the threshold. In this case the transit would need to fire in about 0.2 seconds in order to be correct.
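If I ever did write the bloody algorithm, the timer math is just solving v · 0.95^t = engagement / 1.1 for t. A hypothetical sketch, reproducing the roughly 0.2-second figure:

```python
import math

# When does a decaying damage-threat fall below engagement / 1.1,
# so that engagement crosses the 10% threshold?
def crossing_time(damage, engagement, rate=0.95, threshold=1.10):
    target = engagement / threshold
    if damage <= target:
        return 0.0  # already crossed
    return math.log(target / damage) / math.log(rate)

t = crossing_time(100, 109)
print(round(t, 3))  # about 0.18 seconds
```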

But you know what? It doesn't matter. The end effect is that in extremely rare cases, damage-threat will not be overcome by engagement immediately, and instead will need to wait a very rare worst case of 2 seconds. If that becomes a gameplay disaster, then you can call me wrong and make me write the bloody algorithm.

Guess I'm Lazy.

Skirmish Update

  • Implemented engagement value calculations and broadcast, and nullification when unconscious.
  • Added per-class engagement data.
  • Implemented Engagement table, which tallies up incoming engagement values, and tracks which entity is on top of the list.
  • Implemented Threat-by-damage tracking, with an appropriate table and top-tracking as well.
  • Added a new type of AI state called a Behavior. The only existing one is called 'Attack by Aggro'. The main scenario has been rewritten to start this state on the hostile NPCs. As expected, this state starts idle, and invokes an Attack Target whenever the top entity in threat (taking the greater of damage and engagement threat) changes.
  • Aggro works! The game is fundamentally different to play, and works massively better.

Wednesday, November 18, 2009

Commander Skill Design

Well, as promised (or perhaps dreaded by a few?), let's talk about commanders.

As (maybe) previously mentioned, a player enters a skirmish with a single squad. A squad consists of a set of 'primary units', which use a class-based advancement system, and are led by a single commander. The commander is responsible for managing unit resources, morale, and formations, and is intended to be the director of combat. He is not necessarily a combative unit, but with a small squad the commander will inevitably become part of the action.

The commander's primary goal, from the design perspective, is to be a focal point for group management and development, and to provide an advancement and customization path for different players. The commander is the player's avatar in skirmish, and perhaps also in the exploration and development modes of the game. The commander is where the player decides how he wants to play.

So let's look at how the commander develops. That's been the primary focus of our design, since formations and the combat implications of command are not yet implemented.

Primary units are numerous, and therefore use a 'simple' development scheme, which requires only very rare user interaction. Primaries can change their current class at any time, but new ranks of the class are automatic, and new class availability is relatively rare. There is only one commander, though, and he warrants more interesting decision making. We've decided to go with a skillpoint-based system, where at set quantities of experience, a commander gains a skillpoint, to be used whenever and however the commander wishes.

Commanders purchase skills from 1 of 3 categories. The first category is called the Resource tree, and covers skills that manipulate resources, equipment, and often morale. The second is called Tactics, and covers actions and maneuvers. Tactics may grant new abilities or buffs to the commander and/or his squad. The last category is called Command, and governs the leadership of the commander. Command may grant new formation slots, new maneuvers, or introduce a bias towards defending, or perhaps improve the commander's ability in certain situations, for example, being outnumbered, or defending an outpost.

Each category is arranged into a set of skill 'Cells'. A cell is like a small skill tree, broken into 3 conceptual areas. The first row is the entry point of a cell, and there is only a single skill to be learned. Learning this skill grants access to the cell. Rows 2, 3, and 4 are other skills, and may have interdependencies, or not. Typically we expect to have 2 to 3 options per cell row. Requirements to advance down a tree are not set in stone yet (or maybe I just do not remember?). The last row, row 5, is the advancement row. Skills in row 5 match the row-1 access skills of new cells. When one of these is taken, the new cell is unlocked. Furthermore, access skills are restricted. It is intended that selecting which cell to advance into is an important decision, and maps to how the commander is developing. The intent is that a commander uses skill points to develop within a cell, then occasionally picks a new cell to advance into.
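As plain data, a cell might look something like this; every skill name here is invented, since the actual trees aren't designed yet:

```python
# A guess at the cell layout described above: row 1 is the single
# access skill, rows 2-4 are options, and row-5 skills double as
# access skills for new cells.
cell_shield_wall = {
    "access": "Shield Wall",                # row 1: entry point
    "rows": [
        ["Brace", "Lock Shields"],          # row 2
        ["Hold the Line", "Counter Push"],  # row 3
        ["Unbreakable"],                    # row 4
    ],
    "advance": ["Phalanx", "Fortify"],      # row 5: unlocks new cells
}

def skills_in_cell(cell):
    yield cell["access"]
    for row in cell["rows"]:
        yield from row
    yield from cell["advance"]

print(len(list(skills_in_cell(cell_shield_wall))))  # 8
```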

The method of restriction is still relatively undecided, though several possibilities are under consideration. A simple option is to make all row-5 skills exclusive, so only 1 in any cell may be taken. This prevents breadth-first development, and forces a 'downward' momentum, which may force specialization, depending on cell structure. The next possibility considered would be to introduce a new type of skill point, rarely granted, which may only be used to select a new access skill, but may be used on any available access cell when granted. This allows more freedom at the expense of some simplicity.

Now, the three categories of skills (Resource, Tactics, Command) are not weighted the same, nor do they become available at the same time. At the moment I'm considering having Tactics be available from the onset, introducing Command later, and adding in Resource last. Introducing new concepts to players in a controlled way has its advantages. Hmm, perhaps we should actually write down the 'Player training experience' sometime....

To implement the separation of categories, we may introduce 3 types of skillpoints. Each is granted at different XP values. Say tactics points are granted every 200 XP, starting at 200. We could introduce Command points every 400 XP, starting at 2000, and easily control the rate of skill gain.
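The staggered schedule is simple arithmetic; a sketch using the example numbers above:

```python
# A point type is granted every `interval` XP once `start` XP is
# reached. The numbers are the ones floated in the post.
def points_earned(xp, start, interval):
    if xp < start:
        return 0
    return (xp - start) // interval + 1

xp = 2400
print(points_earned(xp, start=200,  interval=200))  # 12 tactics points
print(points_earned(xp, start=2000, interval=400))  # 2 command points
```

Tuning two constants per category controls the rate of skill gain with no extra machinery.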

For reference, since it hasn't really been discussed, Kol has been in charge of Primary Unit development, and Eirikr is in charge of Commander design. We all discuss the possibilities, of course. I'm in charge of implementation, so they are both doing more of the design themselves as time goes on. Maybe I'll pester them into posting sometime too.

Wednesday, November 11, 2009

More on Engagement, Threat, and leading to Formations

After this last TOAST we came to some conclusions about the engagement and threat mechanics from last post.

First, the relationship between engagement threat and damage threat. Balancing these two against each other is a tricky thing! One we think will be simpler if the current maximum of the two is used, rather than the summation. This lets us balance 'tank' unit threat versus 'nuke' unit threat without having to account for the nuker's engagement and the tank's damage.

Second, adding a health modifier to engagement threat. Wounded or weakened targets are more desirable to attack, at least for smart enemies. This could be implemented by modifying engagement threat magnitude based on current health. However, I think this aspect needs to be kept separate from the engagement itself. A bit more detail here:

Consider a line of attackers and defenders, the defenders being equal in threat. The attackers will tend to disperse along the line evenly because of this. Say the attackers' nukers focus their efforts on one section of the wall. The defender there weakens, and the attackers then concentrate on this section, breaching the line at its weak point. A good mechanic! Also, something for the player to counter by adjusting the formations, or by using some sort of temporary boost on the weakened area.

Now, I do NOT think that this should be baked into engagement threat. I think this factor should be managed by the enemy, since it's an AI decision more than a gameplay mechanic. Doing so lets us make stupid enemies that behave more boringly and predictably. These types of AIs are crucial for teaching and for making the game fun! If the enemies are smart and the player isn't, we have a problem. Undoubtedly in the long run we'll need NPCs to behave differently. Scouts may partially ignore engagement threat, for example. These effects need to be applied in the threat-response logic, rather than the threat logic itself. Moving on!

Third, threat thresholds. Without a threshold to change current targets, ping-ponging is way too easy. It's a common practice for good reason. WoW has a 10% threshold for melee and 30% for ranged. Seems like a good place to start.

Fourth, engagement range decay must be greater than 0. There was some mathematical reason I had for this, but I can't remember the details. I remember that the gradient of the engagement threat must point outwards, even if the value is fairly small. I'll pull a Fermat here and say I'll show why later.

Fifth, morale modifications to engagement. This seems like an obvious way to tie in morale to formation effectiveness. When tank units get less happy their engagement threat goes down. This has a side effect of balancing out incoming damage, but is dangerous beyond a certain level. Let morale drop too low, and the line disintegrates immediately.

And lastly, pondering formations. Engagement encourages flanking. The more radii that cover an enemy, the slower it becomes, and its weaknesses increase as it becomes more surrounded. It just works.

I like it.
Commander design next time.

Monday, November 2, 2009

Combat speed and Threat mechanics

Two topics I want to expand upon, both to say it 'out loud', and to keep a record of thoughts. First, movement speed in relation to combat, and second, threat mechanics. Both with respect to the skirmish game design, naturally.

First, Speed.

Ever notice how in games everyone runs everywhere, all the time? This is because walking is very, very, very slow. So slow that no one ever walks in games, unless it's a social thing. Those quests where you escort some NPC who decides to walk? Very irritating. But equally irritating are escorts that move faster than you and leave you in the dust. There's a reason all the escorts in WoW move at 95% run speed: so you can catch up.

Now take the perma-run speed and bring it into combat. For a game where positioning isn't important, no problem. For a game where you control one person, no problem. But in a game where managing formations, lines, and special maneuvers is important, and where you control a plethora of peeps, run speed is just too fast. Nothing is understandable. Small breaks in the line are immediately filled, to the point where the line break isn't noticeable. For formations, motion, and knockback to be relevant, units need to SLOW DOWN. A lot.

So we slow them down. Now we have the irritating dilemma of the slow escort. Waiting for your units to get in position is intolerable. Crossing a skirmish to set up a trap is agonizing. Exploration is, well, a slide show rather than a whirlwind tour. It's meh.

We get around one problem in the Skirmish system by the nature of its design. Exploration is not done with combat units. We can have exploring units move at a run pace, or even substantially faster if we wish, with no balance concerns, since out-of-skirmish speed has no advantageous effect in combat. But this is a digression, easily solved and set aside.

The current plan is to introduce the concept of engagement. When a unit is engaged, they are considered in combat, and their movement speed slows dramatically. Now, how engagement works depends on the offenders and defenders. Shieldmen have a small engagement radius, but may slow their opponents more. Scouts may have a large engagement radius but only slow their opponents slightly. On the receiving end, a scout may reduce the amount of motion slowed. This makes scouts faster and more nimble in combat, but all units match outside of combat, when travelling.
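One possible shape for that slow calculation, with made-up numbers and a linear falloff (the real falloff function is an open question):

```python
# Each engager contributes a slow that falls off with distance,
# capped so speed never goes negative. All values are placeholders.
def combat_speed(base_speed, engagers, position):
    slow = 0.0
    for eng in engagers:
        dist = abs(eng["pos"] - position)
        if dist <= eng["radius"]:
            # full magnitude at dist 0, fading to 0 at the radius edge
            slow += eng["magnitude"] * (1 - dist / eng["radius"])
    return base_speed * max(0.0, 1.0 - slow)

shieldman = {"pos": 0.0, "radius": 2.0, "magnitude": 0.8}  # short, strong
scout     = {"pos": 0.0, "radius": 8.0, "magnitude": 0.2}  # wide, weak

print(round(combat_speed(5.0, [shieldman], position=1.0), 2))  # 3.0
print(round(combat_speed(5.0, [scout],     position=6.0), 2))  # 4.75
```

Per-class receiving modifiers (a scout shrugging off some of the slow) would just scale `slow` before applying it.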

There are lots of open questions here. Do engagements stack? Are there disadvantages to being multiply engaged? This is sort of a flanking determination. If you're engaged by 5 enemies it is certainly worse than being engaged by 1. But this does not account for support. If you are engaged by 5 archers, but surrounded by 15 shieldmen, you are still at an advantage. So perhaps we should integrate the concept of support, which is a sort of anti-engagement.

Or perhaps this is getting too complicated. The design may need to be simplified down to understandability.

So let me review the basic tenets: in combat, units move slower. This is to promote the efficacy of formations and of breaking formations, in addition to increasing the 'parsability' of combat; that is, the ability of the player to understand what is happening and make interesting combat decisions.

The second point is related: Threat mechanics. The holy trinity of MMOs is the tank/healer/dps role distinction. Tanks manage threat, Healers manage tanks, dps manage damage. Threat mechanics are imperative to this design working, even if they are quite artificial. It doesn't make much sense, really, but it makes the game fun, and makes the roles possible. Without them each role blends together. Compare class roles in Diablo 2 to WoW, for example. Diablo has no threat mechanics, and therefore you have no control over combat, except to kill things faster. In its defense, that works quite well for a brawler, but less so for a game intended to be tactical in nature.

So, threat. That is, how the AI units decide who to strike. The most basic threat mechanic is based on incoming damage. Units attack whoever hits them hardest. The next simplest is that they attack whoever is nearest. Let's consider our goals first then see what tools we need to implement to reach them.

AI behavior needs to be predictable. The user needs to be able to understand them and setup a formation and units to overcome the enemy 'strategy'. In skirmish, there are 2 seriously differing cases to consider. First, let's ponder the simple case; independent units. Secondly we'll address enemy squads, which should behave at a group level.

Certain types of player units are built to be the wall. They are the tanks of the squad. The first of these is the shieldman, naturally. We need to make sure the shieldman can do his job. His job really is to block units from getting behind him. We could enforce a sort of literal interpretation of this and eschew threat entirely, just requiring a solid wall of shieldmen. Any independent unit that decides to get through to the nukers just keeps trying to walk around until it gets there. But really, that just looks ugly. It means walls don't work at all. In reality, a shieldman would easily be able to prevent a single unit from passing, just by staying in the way. But if the AI routines and movement system cannot handle that by themselves, we'll have to cheese it using something else. One solution (clearly) is a threat system. A basic proximity system would both get incoming units to stop at the shieldwall and appear to engage.

But, well, that's boring. Everything will stick to your closest units and strategy is too simple. If there are 3 incoming units and one defender, that should be a problem, and the 3 shouldn't merely stick to the 1 defender while your 15 archers plug away.

So how about this. Earlier we introduced the concept of engagement, as part of a combat-speed mechanic. Let us reintroduce that. Give each unit type an engagement radius and an engagement magnitude. Furthermore, let's make the engagement magnitude decay (optionally) with distance. Secondly, and importantly, have the magnitude of the engagement drop off as a function of the number of units engaged. Now, what to do with the engagement magnitude calculation? Let's say every tick, we recalculate engagement, perform a weighted combat-speed adjustment as before, and introduce a threat mechanic. The more engagement, the higher the threat ticks up. An example:

Let us define a shieldman's engagement: short radius, high magnitude, but high decay. It's very strong in melee but even 10 feet away is almost nonexistent. Let us also define an archer's: wide radius, low magnitude; they don't really slow you down since they aren't in your face.

So a unit attacks your formation from the front; the shieldman is closest, and the first tick produces a lot of threat. The shieldman stays in melee with the unit and easily maintains a threat lead. Second example: 4 units attack from the front. The shieldman is closest and manages to get in melee with 2 of them. Each gets a moderate amount of threat. The decay rate on the other 2 means they go for the archer. The shieldman is able to hold a few but is overwhelmed, as intended. One more case to ponder: 4 shieldmen in a line that isn't touching but overlaps engagement radii, and many attackers. As each attacker comes in, if they engage anywhere in the center of the line, they will be slowed substantially by each defender's engagement, and will likely stick to the shieldmen. However, if enough pile in on one location, they can overrun that shieldman's engagement by splitting it among themselves, and break through the line. Or perhaps they have enough knockbacks to split up the shield line, reduce engagement count in the breach, and suddenly those units are at full speed.

But... why did they try and break through at all? Well, the nukers, of course. But, we haven't accounted for them at all! We need to make sure they're on the threat table and known to be the highest threat, and therefore the intended target. So, we introduce a damage-to-threat mechanic as well. Balancing the intensity of the two will be tricky of course. Maybe there's an alternate thought here...

Consider keeping engagement threat and damage threat distinct. Damage threat will continually accrue from damage and decay slowly, while engagement threat is only based on the current state, and has no history at all. The current target is still the greater of all the threat entries, but this gives a new desirable behavior. When the line is broken, engagement threat falls off and damage threat dominates. The line falls apart and the attackers flood the nukers. Dramatic! Or, if your wall is solid, nothing will ever escape, and your formation does its job.
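A toy combination of the two tables, taking the max per entity; the 5% decay rate and all numbers here are placeholders, since none of this is tuned yet:

```python
# Damage threat accrues and decays over time; engagement threat is
# recomputed each tick with no history. Target = max over both.
def current_target(damage_threat, engagement_threat, dt, decay=0.95):
    for k in damage_threat:                  # age the historical table
        damage_threat[k] *= decay ** dt
    combined = {}
    for k in set(damage_threat) | set(engagement_threat):
        combined[k] = max(damage_threat.get(k, 0.0),
                          engagement_threat.get(k, 0.0))
    return max(combined, key=combined.get)

damage = {"nuker": 50.0}
engagement = {"shieldman": 60.0}
print(current_target(damage, engagement, dt=1.0))  # shieldman holds

engagement = {}  # line broken: engagement threat vanishes instantly
print(current_target(damage, engagement, dt=1.0))  # nuker
```

The dramatic flood-the-nukers behavior falls straight out of engagement having no history.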

Food for thought!

World development update!

Yes, yes, things are actually getting done, isn't it odd?

  • Directory methods are implemented. The directory is responsible for keeping track of which servers are running, where they are, and what zone they are running. Servers use this to contact each other, clients use it to find their server, and the directory uses this to find prospective servers to create new zones on. Everything but that last one is implemented. This basically means that the Skirmish client can connect to any running zone without needing to know IPs or anything. It's important structurally, even though no one cares.
  • Morale partial implementation. The morale system is in place and works, although most of the morale modifiers are not running. We can set up classes to behave with respect to their morale, but the morale value itself isn't really changed. That's coming soon.
  • Unit details window. A first cut at a sub-window to show unit statistics is in place. It also lets primary units change classes, which is useful.
  • Commander design. Lots of work on the development and operational design of the commander units. Beginning implementation of this will probably start once morale is more polished up.
Going's been slow recently; I've been too busy for additional programming time. But it does continue. We're at the point where the game is playable, but it's kind of too ugly and cumbersome to want to play. I may take another development detour to actually implement some graphics and effects.

On the list now: Morale modifiers, commander skill basic implementation, client-side data, server-servers, combat-speed mechanics, threat and engagement mechanics, graphics engine integration...

Might be a bit.

Monday, June 8, 2009

Another Medley

Another Medley. This one was recorded a few weeks ago, but it came out pretty nice. There are a few glaring mistakes, but aside from the obvious ones the playing is pretty good for me.

Of course it's the same style as the rest of my stuff. It's fun to play. :)

Medley 4


Monday, June 1, 2009

Skirmish Update (Again!)

  • Client code is looking a lot better; actions usable, targets are known to clients now, multi-select works.
  • Server UI has some much needed expansion; can show entities of all flavors now in varying forms.
  • Movers are working great. Knockbacks and Dodging effects work. Movers can also now 'leave space', so corpses are no longer in the way. Most corpses despawn instantly now anyway though.
Working on showing cooldowns, using targets and showing targets in the client, retooling the AI scripts to not use every action (unless it's an NPC AI...), and then adding toggles to the AI for certain actions. Probably going to do AoE targeted effects soon too; they sound fun.

Thursday, May 21, 2009

Skirmish Update

  • Client actions working, single and multi-select.
  • Icons on units based on their class for now.
  • Zai rebuilt the main viewer; much nicer now. :)
  • Zai working on the selection information and viewer, Nick on the actions page.

Thursday, May 14, 2009

Skirmish update

  • Updated Movement system to handle the hexmap and non-intersection. Working on a priority system for uncontrolled motion (knockbacks and motion inhibitors).
  • Working on client; multiselect works, working on acting and invoking shared actions. Protocol for the client works, just need to make the data visible. Since we don't have the display engine yet, graphics are largely eschewed, using a very simple world viewer at the moment.
  • Tier 1 classes' simple abilities are working and built automatically; still adjusting numbers for balance in the sim, but that's an ongoing task.
  • Writing Aux and NPC unit autobuilder for a PvE style simple scenario.

Formations, Motion, and dynamic combat.

A big question in my mind, regarding the development of the combat engine for Skirmish, is this: What makes this combat interesting and fun? Clearly this is an important consideration; if the main component of gameplay isn't fun, the game is pointless and forgotten. In addition to this we ask ourselves; How is it different from other RTS combat?

The answer isn't set in stone. We have some ideas, and we're working on implementation. Iteration is the key; make something, then see if it's fun, and adjust or reinvent until you have something worth playing. One of our main concepts behind combat is making it dynamic, by keeping things moving and adapting to each other. I'm going to focus on that aspect and try and communicate all of our ideas about it, rather than speaking broadly about the combat engine.

Formation is important. The player will be able to create arrangements of their units, physical distributions that control how they move and act. For example, a defensively oriented player might call for their primary units to huddle up and get close to each other. At least, that's what we'd expect from historical combat. What we need to do is introduce concepts that make formations relevant, without just declaring it so.

To clarify: we could easily make a 'Defensive Formation' that was huddled together and provided a defensive buff. This Is Boring. I don't like it. Making formations relevant without the cheeseball factor will yield more progressive behaviors, emergent behaviors that the player can exploit and learn about to become a better player.

So how does a defensive formation work? It reduces the ability to be attacked by multiple enemies, for one. This will happen by the nature of being adjacent to allies. Tight formations also mean that any attacker is in range of multiple defenders, making attacks harder to execute safely. How does this come into play, though?

How about we introduce an ability on certain classes that executes when a neighbor is attacked? An attack of opportunity, to use D&D jargon. Now say this ability isn't universal. Now you have a possible intelligent structure to your defensive formation: counterers interspersed. Aha! Make your counterattackers longer range! Now you've evolved a phalanx, with no hackery involved.
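As a sketch of how simple this trigger can be, here's a toy version of the neighbor-counterattack idea in Python. Everything here (the `Unit` fields, the damage numbers, Chebyshev adjacency) is illustrative, not the World's actual code:

```python
# Hypothetical sketch of neighbor-triggered counterattacks; the class and
# field names are illustrative, not taken from the World codebase.
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    pos: tuple              # (x, y) on the grid
    hp: int = 10
    can_counter: bool = False
    counter_range: int = 1  # longer range -> phalanx-style coverage

def distance(a, b):
    # Chebyshev distance: adjacent squares (including diagonals) are 1 apart
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def attack(attacker, target, allies, damage=3, counter_damage=2):
    """Resolve an attack, then let the target's neighbors counterattack."""
    target.hp -= damage
    # Every ally with the counter ability who can reach the attacker responds
    for ally in allies:
        if ally is not target and ally.can_counter \
           and distance(ally.pos, attacker.pos) <= ally.counter_range:
            attacker.hp -= counter_damage

giant = Unit("giant", (1, 0), hp=30)
line = [Unit("shieldman", (0, 0)),
        Unit("counterer", (0, 1), can_counter=True),
        Unit("pikeman", (0, 2), can_counter=True, counter_range=2)]

attack(giant, line[0], line)
print(giant.hp)  # both counterers are in range: 30 - 2 - 2 = 26
```

Note how the phalanx behavior falls out: the longer-range counterattacker punishes the giant without any special-case formation rules.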

What else? Well, let's make our defending units able to block neighboring attacks, a kind of intercept. Now the tight formation provides more defensive power, but ONLY when the formation is maintained! Which brings us to the next topic.

If formations are important, formations need to be broken. The defensive formation above needs a weakness, one that naturally evolves from its nature, rather than from numerical values. What's the disadvantage to clustering? Area of Effect comes to mind, if we include that style of attack. What else? Well, clustered units may have a hard time moving efficiently, so they'll have a movement speed issue. Does this happen naturally? Well, that depends on how the motion system works. The movers are about half written, so we'll know soon.

But let's take a more direct approach. Let's introduce a mechanism to literally break up formations. Knockbacks. A new action that moves the target unit back. But! If the unit cannot move back, it deals additional damage, or perhaps a stun on nearby units as they get knocked over. Say your tight-formation group encounters a large creature, a giant or somesuch. He charges, smashes into your shield wall, and scatters your group. The group scrambles to recover into their formation, but each individual attack breaks the cohesiveness. The formation has a weak point! Excellent.

So we introduce local defensive effects, local offensive effects, knockbacks. But what incentive is there to 'not' cluster?

How about dodging? Let's change dodge from a simple chance to negate an attack, to a chance to literally dodge, gaining distance from the attacker. But the knockback caveat applies; you cannot dodge if you have nowhere to jump! So take a non-phalanx style group, say, a group of archer-style stealth units. They need space to act accurately. Perhaps they gain a small bonus when they have no one near them, accounting for the freedom of motion needed to manage a bow. This group is far more resilient to the giant, but for few numerical reasons. The giant cannot disrupt their attack nearly as easily as the phalanx's, and each individual ranged unit maintains their offense as the giant scrambles to smash each weakling into the ground, one at a time.
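A minimal sketch of the spatial-dodge rule, with made-up names and a square grid standing in for the hexmap:

```python
# Hypothetical sketch of spatial dodging: a dodge succeeds only if there is
# an empty cell to jump to, away from the attacker. Names are illustrative.
def try_dodge(defender_pos, attacker_pos, occupied, grid=5):
    """Return the cell the defender jumps to, or None if boxed in."""
    dx = defender_pos[0] - attacker_pos[0]
    dy = defender_pos[1] - attacker_pos[1]
    # Step one cell directly away from the attacker
    step = (defender_pos[0] + (dx > 0) - (dx < 0),
            defender_pos[1] + (dy > 0) - (dy < 0))
    in_bounds = 0 <= step[0] < grid and 0 <= step[1] < grid
    if in_bounds and step not in occupied:
        return step
    return None  # nowhere to jump: the dodge fails, as with knockbacks

# A lone archer with open ground behind it can dodge...
print(try_dodge((2, 2), (1, 2), occupied={(2, 2), (1, 2)}))  # (3, 2)
# ...but a unit packed into a tight formation cannot.
print(try_dodge((2, 2), (1, 2), occupied={(2, 2), (1, 2), (3, 2)}))  # None
```

The incentive not to cluster comes out of the geometry, not out of a numerical penalty.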

The last planned aspect of formations involves the roles of the constituents. Take a shieldman/archer style group, a few defenders and some ranged in the back. Assign the attackers a typical shoot-whatever-moves task, but task the defenders to intercept incoming units. They are there to guard, not attack. This involves a different style of AI and mover, one that can predict enemy motion and block it. The AI will want to focus on engaging foes, maybe stunning or slowing somehow.

There's an interesting word there: Engage. From my experience in simple combat, disengaging a foe you've entered melee with is risky. Again with the attacks of opportunity. Perhaps engaging a target in melee should codify that somehow. How about we introduce a simple speed penalty when in melee range of a melee enemy? That is, an aura that affects enemy targets, slowing them? This lets a single guardian cover a small area, allowing the ranged units to attack with impunity.

The combination of controlled and uncontrolled motion, formation changes on the fly, etc., we hope will make for a more interesting RTS-style combat. Since the skirmish focus isn't on development as much as combat, we need the combat to be interesting. We need the units' abilities to matter in a macro-scale kind of way.

I guess we'll see when we get there.

Friday, May 8, 2009

Crazy GPU Tricks.

I haven't really posted yet on graphics programming. But my current task at Bunkspeed has me doing some crazy shader tricks, and I ran into something that GPUs do that is simultaneously awesome and awful, so I had to mention it.

If you don't know anything about GPU graphics, reading this post will either confuse, fluster, or bore you.

So here's the deal. You have a texture and some texture coordinates. Texturing correctly happens to involve calculating the differential of the texture coordinates over the screen. In the old-style fixed function pipeline this is done for you. Well, relatively recently at least. If you remember how textures looked on the PlayStation 1, how they kind of swam and made you sick, that's what happens if you don't do this correction.

Anyway, the hardware can do this for you since it knows what the texture coordinates are, since they're just in the polygons. But when we start using custom shaders that generate texture coordinates dynamically, this changes. If you can spit out any random number, how does the GPU determine this differential? It could make the shader generate it too, but it doesn't. The shader creator doesn't have to produce differentials himself, which is a very good thing; that'd be obnoxiously difficult.

To understand the answer you have to know something about shader execution. GPUs are fast because they are massively parallel. They execute many pixels in parallel. In fact, they cluster pixels together and run them in lockstep. That is, the shader executes for every pixel in the cluster at the identical time. Each pixel does every instruction simultaneously. 

So the processor cheats. To calculate the differential across the screen, it literally grabs the value from the next pixel over, subtracts, and calls it a day. It's only a first-order approximation, but it's exact for anything that varies linearly, and it gets the job done in most cases.

Which value does it compare to? Well, when you execute a texture lookup, it looks at which register the input texture coordinate is on, and uses that register index to lookup the neighboring pixels. Brilliant! So where's the problem?
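Here's a toy model of that trick, assuming a 2x2 lockstep quad (real GPUs do execute such quads, though this sketch glosses over coarse vs. fine derivative details):

```python
# A toy model of how the GPU gets screen-space derivatives "for free":
# pixels run in lockstep 2x2 quads, so ddx/ddy are just a neighbor's
# register value minus your own. A first-order finite difference.
def shade_quad(shader, x0, y0):
    """Run `shader` on a 2x2 quad and return per-pixel (value, ddx, ddy)."""
    # Every pixel computes its texture coordinate in lockstep...
    vals = {(x, y): shader(x, y)
            for x in (x0, x0 + 1) for y in (y0, y0 + 1)}
    out = {}
    for (x, y), v in vals.items():
        # ...so the differential is just the horizontal/vertical neighbor
        # within the quad, minus this pixel's own value.
        ddx = vals[(x0 + 1, y)] - vals[(x0, y)]
        ddy = vals[(x, y0 + 1)] - vals[(x, y0)]
        out[(x, y)] = (v, ddx, ddy)
    return out

# Texture coordinate u = 0.25 * x: the derivative comes out exactly 0.25
result = shade_quad(lambda x, y: 0.25 * x, 8, 8)
print(result[(8, 8)])  # (2.0, 0.25, 0.0)
```

If one pixel of the quad skips the computation (divergent flow control), its slot in `vals` would be stale garbage, which is exactly the bug described next.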

Flow control. Say you have an if statement. Very simple. Not too common in shaders, but we'd like them to be usable. Say you compute the texture coordinate inside the if statement. Say your neighboring pixel never went into that if statement. Now what? Your register has been computed, but the neighboring pixel's has not! The differential is utterly invalid! Just because you used an if statement. Consider the following two blocks of shader code:

float3 TC = float3(R.x, R.y, -R.z);
if (dot(R, R) > .01)
    Color += texCUBE(SpecularMap, TC).xyz;

if (dot(R, R) > .01)
{
    float3 TC = float3(R.x, R.y, -R.z);
    Color += texCUBE(SpecularMap, TC).xyz;
}

Not functionally different. In fact, most C programmers would opt for the second one, because why calculate something outside the if when you never need it? BUT, the second one causes visual artifacts, while the first one does not! 

The second use of that differential calculation is to select which mip level of the texture to use. If the differential is high, that implies the texture is being scaled down, so the GPU uses a small mip level. If you have a correctly built mip chain, the worst you'll get is a small color aberration. But if you're using a rendered texture without a good mip chain, well, you get garbage data. Here's my recent example for you; guess which is which.

The white edges on the cells are pixels where the differential is invalid, picking a white pixel out of the mip-chain. 
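For the curious, the mip-selection rule being described is roughly "lod = log2 of the texel footprint per pixel." A sketch of that rule (this mirrors the textbook formula, not any particular GPU's exact implementation):

```python
# Hypothetical sketch of mip selection from the derivative; this is the
# standard lod = log2(footprint) rule, not Bunkspeed's actual code.
import math

def mip_level(ddx_uv, ddy_uv, tex_size):
    """Pick a mip level from screen-space texcoord derivatives."""
    # Texel footprint of one pixel, measured in texels
    footprint = max(abs(ddx_uv), abs(ddy_uv)) * tex_size
    # High derivative -> texture minified -> coarser (higher) mip level
    return max(0.0, math.log2(max(footprint, 1e-20)))

print(mip_level(1 / 256, 0.0, 256))   # 0.0 -> full-resolution mip
print(mip_level(1 / 32, 0.0, 256))    # 3.0 -> 8x minified, mip 3
# A garbage derivative from divergent flow control picks a garbage mip,
# deep in the tail of the chain:
print(mip_level(0.9, 0.0, 256))
```

A bogus neighbor value inflates the derivative, the footprint explodes, and the lookup lands in a tiny (or nonexistent) mip, hence the white edge pixels.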

So yes, it's a wondrous hack, because it's capable of computing complicated differentials automatically. But it's an awful hack, because it's extremely difficult to track down when something's going wonky, and because the fix looks like an arbitrary, nonsensical change to the shader code.

But It Is Awesome.

Sunday, April 26, 2009

Music! Music! Music! Smash!

Now, the introduction down there says that this here blag is for four topics: The World Project, which will inexorably get the most attention, Bunkspeed, Programming, and Music.

So let's get into that last one there. I've been playing unprofessionally for a long time. I composed a bit, though I don't put much weight on most of my compositions. For the most part this aspect of my life fell by the wayside through college. The only exceptions were a single course in composition, in which I wrote a piano duet, and then later when I finally had a room for my parents' piano, which was gifted to me.

So I began playing again, just my own tinkering as usual. Rather than writing songs I seem to write basic melodies and chord progressions, and improvise around them. The basic pattern is always very simple, but they are fun to play, and Irene seems to like listening to it.

I've yet to really pin down how to describe the style of music I play. I may be too self-conscious about it to try to define it. So I'll leave that to others. On that note, here's the first decent recording I made. It has some audio issues (especially near the beginning; it gets better as it gets louder), but stick with it and let me know what you think! It's a medley of the 6 main rhythms I've been improvising upon.

Medley 1

I'll work on an embedded player eventually, but I'm not terribly experienced at that sort of thing.


Anyway, I'll be making a point to keep recording. Let me know what you think!

Forward! For a game!

'The state of things' was a pretty dense description of what has been written, code wise. Let's get into the more recent work, actual progress on the actual game.

Rather than a list, let's just say what we have.

We have a zone that runs a scenario. Players can connect to the zone, and are then given several dialogs to select a team and commander type. AI players will be created to fill out any empty teams. The scenario specifies the team count, typically 2 but it will work with any number. When the scenario begins the commanders will move to a spawn point, summon their primary units, and they'll enter a combat state. If they enter proximity with an enemy unit they'll begin an attack sequence. The user can execute a couple different 'AI' actions, which tell the units where to go and how to act. This is all pretty simple stuff, but clearly necessary.

More time has been spent on the unit development. We've come up with a relatively detailed design for the primary unit types, how they develop, how they act, and what control the user has over them. Primaries develop with a simple class system: each class has 10 ranks, and each rank may grant new abilities and attributes. Some of these are allowed to be permanent improvements, which continue to take effect after the primary changes class. Some classes have prerequisites on other class ranks. This forms a fairly wide tree of possible primary unit types.

The intent for the primaries is that they are your group's customization. Selection of which class they are (and which they have been in the past) determines your ability selection. It also determines how you'll want to build formations and develop your commander. The commander's development methodology is not as well determined or implemented as of yet, but the plan is to use a fairly simple skill system. Commanders focus on formations, managing morale, and utilizing territory, rather than styles of attacks and defenses. This makes them a bit more complicated to design, but more interesting as well.

We started designing each primary class in the generic server editor, but quickly ran into a scalability problem. The editor was not designed for mass data entry. Even just the 4 introductory classes would require 50 states and 40 actions, almost all of which are identical but for small changes. Since we want there to be a wide variety of primary types, this had to be resolved.

So we started building a file to store all our data in, which we would parse and then construct the gamedata from. This is remarkably easy with the World as it is. Next, since I'm no longer the only one working on the system, we decided to make this file sharable and easy to collaborate on. So we uploaded it to Google Docs. Next, I decided that downloading the file, processing it, and running it was entirely too cumbersome, so we wrote some code to do all that automatically.

So now we have an online document, and the system automatically builds the classes from it. This is very cool. Trust me. We expanded the file to include action declarations, state descriptions, class data, and some commander data as well. 

So great! We've got some data. Creating a new class is wonderfully straightforward; add its abilities and states into the doc, make the class ranks, and rebuild. Takes about 10 minutes. Sure, there isn't a client to actually play with them yet, but that'll get here soon. 

About this time someone mentioned how cool it would be to have a simulator to test class balance. That sounded fun, so I wrote it. Continuing on from the previous kick, I decided that we'd use another google doc to store the sim inputs. And heck, why not just spit it back out into the spreadsheet as well? Oh, and since we're using a scheduler for all the operations, why not just fake the timing so that it runs at 100% CPU, instead of waiting for realtime to pass? Sure, why not?

So we have a single program now, that reads in the class data from online, builds the class data, actions, and states, then runs every single class rank against every other class at the same rank, 100 times, then spits the output back to a google spreadsheet. 100 fights takes about 7 seconds, so the current testing spreadsheet (about 70 comparisons, 100 iterations each) takes a couple minutes. But that's 7000 fights! You can watch the spreadsheet just fill in with data. Make a small tweak; rerun just one line, test the output, and repeat! It's addictive. I think I'll go make a new class now, just for fun.

Wait, maybe I should make the game playable.

Nah, the sim's good enough. We'll just play that. 
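For flavor, the sim's outer loop amounts to something like the following toy version. The combat model and class numbers here are stand-ins, not the real gamedata pulled from the spreadsheet:

```python
# A toy version of the balance simulator's outer loop: every class against
# every other at the same rank, N iterations each. The combat model here is
# a stand-in (alternating exchanges of blows), not the World's engine.
import random

CLASSES = {                     # illustrative numbers, not real class data
    "Soldier":    {"hp": 100, "dmg": 9},
    "Skirmisher": {"hp": 80,  "dmg": 12},
    "Guardian":   {"hp": 130, "dmg": 6},
}

def fight(a, b, rng):
    """Return the winner's name for one simulated fight."""
    hp = {a: CLASSES[a]["hp"], b: CLASSES[b]["hp"]}
    attacker, defender = (a, b) if rng.random() < 0.5 else (b, a)
    while hp[a] > 0 and hp[b] > 0:
        hp[defender] -= CLASSES[attacker]["dmg"]
        attacker, defender = defender, attacker
    return a if hp[a] > 0 else b

def run_matrix(iterations=100, seed=1):
    """Win counts for every class pairing - the spreadsheet rows."""
    rng = random.Random(seed)
    rows = {}
    names = sorted(CLASSES)
    for a in names:
        for b in names:
            if a < b:  # each unordered pair once
                wins = sum(fight(a, b, rng) == a for _ in range(iterations))
                rows[(a, b)] = wins
    return rows

for (a, b), wins in run_matrix().items():
    print(f"{a} vs {b}: {wins}/100")
```

The real thing reads its inputs from, and writes its win rates back to, the Google spreadsheet; swapping the toy `CLASSES` dict for parsed sheet rows is the only structural difference.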

An Aside: Independent software development methodology

I said last time that I sounded lazy and that it was intentional, and that I'd talk about it later. This is later. Welcome to later.

The idea is that with a side project, motivation and managing time are the most important aspects of the project.

If you're not motivated internally (since there are no external motivations like money), you just won't do it. I ran into this twice, once after writing libraries post Twilight 1, and again in the first year or so of World. Work just slows down because it's boring. So first and foremost, you have to work on the part that is interesting, even if that means that other pieces don't get written. Yet.

Secondly, if you waste your time doing menial tasks, you never get to the interesting stuff, and the motivation detractor pulls you in again, and the project stops. Getting rid of the cruft is extremely important for a long-winded independent project. To that end I made a fateful decision to write this project in C#. This has saved me countless hours on tasks that should have fallen by the wayside in the last century. Writing a kick-ass reference counter does not qualify as interesting code. And if it does, then I'm supremely jealous. In any case, any costs of C# have been utterly overshadowed by the impracticality of not using it.

So to counter the first one, I eventually picked a specific game, rather than working on a toolset, and I started work on specific aspects of the game. This of course shows exactly which parts of the generic system just don't work. The kicker is, I had to throw away or just ignore code I'd already written! Any professional programmer can tell you that this sucks but is still common. But in an indie project, where such things can sap the entire energy out of a project, it's a bit more painful.

Now, working with Bunkspeed, and specifically with a guy named Jamie Briant, taught me about Agile development. Maybe I'm drinking the Kool-Aid here, but being agile is essential. Now, Agile development has a lot of codified and formal methods associated with it. All that is cruft. The important thing is this: Don't do what you don't have to.

So remember how I wrote this complicated proxy and server/client system for the gamedata, but never used it? It's rotted by now I'm sure. I didn't 'have' to do it, but I did, and it was a waste of time.

Aha! But there is a catch! Maintaining interest is of extreme importance! At the time, the network stuff was what I wanted to do (otherwise I wouldn't have written it, yes?). So, stopping yourself from doing what you want to is another good way to kill your project.

So which takes precedence? Being interested or being efficient? It's a good question. Ideally one that doesn't have to be answered, if you're lucky with your project. In my case, if I'd started the game instead of working on the generic solution (which I've already stated I should have), the decision would have never become relevant! 

I suppose the concluding advice for this aspect is to have a goal. Know what you are aiming for and keep it in sight. If you think you're 90% there, the last 10% always becomes the most interesting. So set a goal that's 10% out, then when you get there do it again. 

The State of Things

The World Project was started in January 2007. There's been plenty of time to get moving on things, even though the end goals were not known at the time. Despite this, I was working throughout this time, and I definitely gave more attention to work throughout the process. Basically, the World got worked on whenever I had the spare time, energy, and willpower. I haven't obsessed over it like I have previous projects, but the further it gets along, the clearer the resulting vision becomes, and the greater the drive to reach it. As always it feels like the last 10%, but anyone in software knows: The first 90% of a project takes the first 90% of the time, the last 10% takes the other 90% of the time.

So here's what I wrote:

  • State Machines: Separates machine structure, machine layout (visual), and machine execution. Machines can be edited using a visual layout method. States and Transitions execute behaviors. Transitions utilize a TriggerListener, which invokes the transition via an event. Decision points can be created, with evaluators on the transitions that exit from them. Sub-machines are somewhat supported, but I ended up doing these a different way within the World, so I scrapped the original methodology. Ideally we'd have nested machines, but until I need it, I won't write it. Writing the visual editor, the generic methodology for creating behaviors/transitions/evaluators, and allowing extensibility all took some time.
  • Scheduler: The main game thread executes within a scheduler, from which any task may be invoked at any time. It's single threaded (simpler), but does the job. Priorities have been in the task system from day 1, but never used. I'll get there when I need it (noticing something, I bet, but don't mistake it for laziness).
  • Socket wrapper: Wrapped a socket based communication system for ease of use. UDP or TCP. It's designed to take a set of possible objects and optimize for transmitting the limited set. Originally it used the C# BinaryFormatter. Once I built a better data protocol the transmission size shrunk by a factor of 20 or so. In any case, it makes data send and receive very easy. It also fires an event (for the receives) for each individual type of data received. Since the data types are known ahead of time it's easy. I like being clever with C# and generics.
  • Network Scheduler: A second thread, similar to the scheduler, that handles incoming and outgoing (well, someday) traffic scheduling. Because of this and the scheduler, an idle zone sits at 0% CPU usage. It might not seem like a big deal, but it means that the code is structured well enough to know when it is idle and not waste resources. Coming from the typical client-side game process of update->draw->repeat ad nauseum, this is quite the change.
  • Directory server: A simple client/server protocol that supports looking up the location of a zone server. It's really pretty easy given the above. Not even used yet, but I wanted to get right into it.
  • Generic GameDatabase system: A gnarly bit of code that represents the storage and transmission mechanism of the game system data (the mechanics, formulae, attributes, etc.), even though this layer doesn't know what the data even is. It was probably implemented much more complicatedly than necessary; I had some incorrect ideas about the layering of this system. It also supports client/server updates of the gamedata, but my hunch (2 years later) is that that was a waste of time. Oh well. It's neat code anyway.
  • Lua scripting for embedded language: Lua can be bound into behaviors, transitions, and evaluators of state machines. The higher up system can also use lua to evaluate Effects and Actions. This was pretty much cake.
  • GameSystem implementation of the GameDatabase: Specific types to implement the World project. I also wrote a system for editing the data, using some wacky Windows Forms code. It's mostly PropertyGrid based, so it's a bit... raw..., but functional if you happen to be the writer. There's some pretty crazy behind-the-scenes stuff in there though. For example, if you assign a State Invoke to a behavior, and that state (in the database) has parameters, it will automatically set up the parameter list for you, with the provided defaults. It's amazing how much of a pain UI can be.
  • Zone implementation of the GameSystem: Yet another nested layer! This is the sort of main execution code. It takes the GameSystem and implements an instance of it: A Zone and its constituent Entities. There's a lot of stuff here that I'll leave for later.
  • Zone Client/Server Protocol: Uses the network stuff to keep a set of clients up to date and allow them to play in the server. The relevant data is all in client-space now, even though we don't have much of a client to view it.
  • Custom modules: Zones can establish a plugin module to provide additional types of behaviors, triggers, evaluators, effects, and actions. It also provides a set of options to the server UI which happen to have become extremely useful.
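As an aside, the per-type receive events mentioned for the socket wrapper can be sketched like this. This is a Python stand-in for the C#-generics version, with illustrative message types:

```python
# A sketch of per-type receive events, in the spirit of the socket wrapper:
# the set of transmissible types is known up front, and each type gets its
# own event that fires when an instance arrives. Names are illustrative.
from dataclasses import dataclass

@dataclass
class MoveOrder:
    unit_id: int
    x: float
    y: float

@dataclass
class ChatLine:
    text: str

class TypedReceiver:
    def __init__(self, message_types):
        # Knowing the type set up front lets us index handlers (and, in the
        # real system, assign each type a compact wire id instead of a full
        # serialized type name, which is where the size win comes from).
        self._handlers = {t: [] for t in message_types}

    def on(self, msg_type, handler):
        self._handlers[msg_type].append(handler)

    def dispatch(self, msg):
        for handler in self._handlers[type(msg)]:
            handler(msg)

rx = TypedReceiver([MoveOrder, ChatLine])
log = []
rx.on(MoveOrder, lambda m: log.append(f"move {m.unit_id} -> ({m.x}, {m.y})"))
rx.on(ChatLine, lambda m: log.append(f"chat: {m.text}"))

rx.dispatch(MoveOrder(7, 1.0, 2.0))
rx.dispatch(ChatLine("gg"))
print(log)  # ['move 7 -> (1.0, 2.0)', 'chat: gg']
```

In C#, generics let the `on`/`dispatch` pair be statically typed per message class, which is the cleverness being referred to.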
Code-wise, that's relatively where we stand. Of course there's more detail than that implemented. It's not a small codebase, but I wouldn't call it massive. There are 2 major points I want to touch on, since they're at the core of the design. First is the event-driven nature of the system. Second is to address the 'but I haven't done that yet' part, since I think it's one of the strengths of my development method, despite sounding like laziness. The second I'll delay for another post entirely; the first I think I'll write now.

Event driven systems have a lot of benefits, and a lot of problems. First let me define that as I see it. Event driven systems work off of knowing when things happen, and reacting, rather than poking through the system to see if something needs to happen. It's reactive rather than proactive. Why do things this way?

Well, consider the cost of poking through the system analyzing potential occurrences. What if nothing needs to happen? You just wasted time! You also forced the entire zone to cycle through memory! That's more time! You also might have missed something happening if the cycle takes too long! That's inaccurate! If the expected zone size involves crazy-large numbers of entities, you just can't afford to run naively.

Now consider the cost of figuring out when things will happen. First, you have to somehow determine, ahead of time, when things will happen. This varies dramatically from thing to thing. Right there you know this will be harder than the alternative. Each case is different. Let's consider a set of possibilities:

  1. X seconds pass: This is easy! Just tell the scheduler to fire in X seconds. Yay!
  2. An entity is moving north: Wait, when do we care? Well, the entity needs to move! Continuously! When do we move it? Every millisecond? Every second? Well, I guess in some games you could get away with infrequent moves, but not many. Chess, maybe. And the former? Well, then you're basically being proactive again; you're forcing anything that moves to be updated even when it doesn't really care. So instead, represent motion as a start point and a vector at a specific time. Then the location is a function rather than a constant. Physics complicates things, but as long as the equations are solvable you can do this. Now, this limits the game, of course. No crazy space physics, nothing hyper-non-linear. But most games don't need those things, and if you did, you wouldn't write an event-driven server.
  3. Two entities collide in the world: Typically collision is done after everything moves, then analyzing if things intersected. Well, we aren't explicitly moving. So how to do this? Well, we can intersect two lines, right? Just extrapolate the entity's location into the future and figure out the next thing it collides with. Use some aggressive space-time culling to remove unnecessary tests. And then, any time anything changes motion, you get to recalculate. All sorts of fun! This is the hairiest part of the system. If motion becomes very dense, this system may not work out as written! But we'll see. The key will be culling unneeded tests. You don't care about something that will collide with you in 3 hours and 12 seconds. You can just recalculate in 10 seconds, and ignore anything that is 'probably' more than 10 seconds away. Put a speed clamp on things and you can limit the space sampling needed pretty easily. It was tricky but this system works. Two objects moving at each other, if they care about collisions at all, will be woken up when they touch. And in between that time? 0 resources spent on collisions between them. Heck, if they were the only thing there the CPU would go idle.
  4. Do something when someone else acts: Well, we can just hook an event on 'someone else' and then do 'something' when needed. This means we have a LOT of events throughout the zone and entities. Keep in mind that state machines pretty much work off these events as well. This task is 100% natural for a state machine transition and behavior.
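To make cases 2 and 3 concrete, here's a hedged sketch of motion-as-a-function and closed-form collision prediction. The (start, velocity, t0) representation and the circle radii are illustrative, not the World's actual structures:

```python
# A sketch of motion-as-a-function and predicted collisions: positions are
# (start, velocity, t0) records evaluated on demand, and the collision time
# of two circles is solved in closed form instead of checked every frame.
import math

def position(mover, t):
    """Location at time t: start + velocity * elapsed. No per-tick updates."""
    (x, y), (vx, vy), t0 = mover
    dt = t - t0
    return (x + vx * dt, y + vy * dt)

def collision_time(a, b, radius_sum):
    """Earliest time >= both movers' t0 when they touch, or None."""
    t_ref = max(a[2], b[2])
    (ax, ay), (bx, by) = position(a, t_ref), position(b, t_ref)
    dx, dy = ax - bx, ay - by
    dvx, dvy = a[1][0] - b[1][0], a[1][1] - b[1][1]
    # |relpos + relvel * s| = radius_sum  ->  quadratic in s
    qa = dvx * dvx + dvy * dvy
    qb = 2 * (dx * dvx + dy * dvy)
    qc = dx * dx + dy * dy - radius_sum ** 2
    if qa == 0:
        return None  # no relative motion, no future collision
    disc = qb * qb - 4 * qa * qc
    if disc < 0:
        return None  # paths never come within radius_sum
    s = (-qb - math.sqrt(disc)) / (2 * qa)
    return t_ref + s if s >= 0 else None

# Two units 10 apart, closing at a combined 2 units/sec, radii summing to 1:
a = ((0.0, 0.0), (1.0, 0.0), 0.0)   # (start, velocity, t0)
b = ((10.0, 0.0), (-1.0, 0.0), 0.0)
print(collision_time(a, b, 1.0))  # 4.5 - nothing runs until then
```

Schedule a wake-up at the returned time and the two movers cost nothing in between; change either velocity and you recompute, which is exactly the recalculation burden described above.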
Since I want to be running very large, sparse zones, I decided to invest the necessary time to make the event driven system work. It's hard to know when things will happen, but the rewards have been pretty sweet. I can run zones with 100 or so entities fighting each other, using about 1% of my CPU. If I actually want to watch it, I use about 15%, due to the bad drawing algorithms I use right now. (There's some irony there, huh..)
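The fire-at-a-time scheduling underlying all of this can be sketched as a timer heap; between events nothing executes, which is where the idle-at-0% behavior comes from. A toy Python version (the real scheduler would sleep until the next event's time rather than fast-forward):

```python
# A minimal sketch of the reactive style: instead of polling every entity
# each tick, keep a heap of (time, callback) events and jump straight to
# the next one. Between events, nothing runs at all.
import heapq

class Scheduler:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so equal times never compare callbacks

    def call_at(self, t, fn):
        heapq.heappush(self._queue, (t, self._seq, fn))
        self._seq += 1

    def run(self):
        # In the real server this would sleep until the next event's time;
        # here we just fast-forward, like the faked-clock balance sim.
        while self._queue:
            self.now, _, fn = heapq.heappop(self._queue)
            fn()

sched = Scheduler()
fired = []
sched.call_at(5.0, lambda: fired.append("regen tick"))
sched.call_at(2.5, lambda: fired.append("spell lands"))
sched.run()
print(fired, sched.now)  # ['spell lands', 'regen tick'] 5.0
```

Faking the clock this way is also what makes the 100%-CPU simulation runs possible: the same event queue drains as fast as the machine allows.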

So that's the code and what it can do. Ah, but remember 'It Must Be a Game'? Where's the game! Gra! We got started on that fairly recently, and it's not just me anymore, and I think we've done some pretty neat things in the last couple months. Next-next time.

Thursday, April 23, 2009

On the World Map, Tier 2.

So the remaining piece of the gameplay is the top tier. This component has only undergone theoretical development, so it'll feel pretty rough. That's okay. First we're working on the skirmish, and when that's fun, we'll contemplate this level further.

The top tier gameplay has several important aspects, which I'll list in order of importance and likelihood of getting implemented. First, it is where the player manages his commanders and their units. Second, it is where the player finds and joins skirmishes. Third, it is where the player interacts with other players, with the exception of grouped skirmishes and global chat channels. Fourth, it is another aspect of the gameplay: the player will create settlements, set up trade, set up resource harvesting, construct new items and buildings, learn new abilities, and train new kinds of units. Fifth, it provides impetus for skirmishes; skirmish events may be created dynamically based on the environment. That is, the game (or a GM) may spawn a set of moving skirmishes that represent a force attacking a player settlement.

Perhaps more detail is in order.

Management: Commanders develop in terms of skills, learning new abilities, setting up new formations. Primary units may change class. Equipment can be shifted around. Primary units may be traded in and out of the active set for the commander. Most of these are not 'twitch'y changes, so we'll keep them disabled in skirmish mode. This way the skirmish stays focused on combat and the skirmish objectives. More options for management are provided as the player's faction's settlements expand and resources are harvested.

Skirmish selection: I think of this similarly to questing in WoW. You want something to do, you go find a skirmish to join. If you succeed your team is rewarded. Skirmishes come in several flavors. There may be permanent skirmishes located on the map, say for a type of dungeon exploration. There may be temporary skirmishes involving attacking an enemy camp. There could also be a user-generated skirmish ability, for, say, something like dueling another player or testing formation abilities in a controlled environment.

Faction, Social, Settlement, Resources: The intent is that players belong to several tiers of groups. The top level is their associated major faction, i.e. the Alliance. Second is a group of players, like a guild. As a guild, players will share resources, and should want to develop guild-centric settlements rather than solitary small settlements. Guilds may fight over resources, but we haven't worked out how we want to limit this. With user-controlled settling and exploration, you run the risk that you cannot control the development of the world and keep it within realistic limits. My tendency is to limit this not with artificial methods, but rather through realistic ones. Why don't cities just spring up right next to each other (ignoring the modern world for a moment)? Because they would not both have enough resources to survive, or maybe because they'd blow each other up, depending on what they thought of each other. This aspect's design isn't exactly finished.

Dynamic Skirmishes: Part of settling is the risk of being attacked. Why build defenses and walls if they won't be used? We want to provide the sense that, in the more dangerous areas, you will be attacked and you could theoretically lose resources. But players don't like resources being lost. They don't want to have their work undone. The current thought is that there are two levels of these NPC attacks. First, there may be a set of nested skirmishes that change as the enemy approaches. The first aspect may be a set of scouting missions, which open up further skirmishes. The second may be some hit-and-run tactical missions, or perhaps some assassinations. Deeper in could be some group-based larger army missions. Depending on the type of NPC group attack, this could all culminate in a raid-style boss encounter.

But what happens if the player isn't there to defend his stuff? Well, first, we want people to guild-up. Guilds as a whole can help defend each other's property, and this would also keep the focus on larger, fewer, guild settlements. Second, pacing. These attacks would be slow moving. The overall attack may take weeks of real-time. Third, automated defenses. Commanders of offline players are still in the game. While they are not used in a skirmish, they may be providing static defenses to the settlement. So if an NPC group reaches an attack phase, the attack will be 'simulated' against the present defenses of the town, including offline commanders. So mere presence will help defend, though in defending this way fewer (if any!) rewards are granted to the player.
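The 'simulated' defense of an offline player's settlement could be sketched as follows. This is a minimal illustration with invented names and numbers, not the actual World design: walls and every stationed commander, online or not, contribute a static defense value that the attack is resolved against.

```cpp
#include <vector>

// Hypothetical sketch: resolving an NPC attack against a settlement whose
// defenders may be offline. All names and values are illustrative.
struct Commander {
    bool online;        // offline commanders still contribute static defense
    int staticDefense;  // defensive value granted by mere presence
};

struct Settlement {
    int wallDefense;                  // built fortifications
    std::vector<Commander> garrison;  // commanders stationed here
};

// Sum the settlement's passive defense: walls plus every stationed
// commander, online or not. Offline defenders earn fewer (if any!)
// rewards, but they still count toward the simulated resolution.
int totalStaticDefense(const Settlement& s) {
    int total = s.wallDefense;
    for (const Commander& c : s.garrison)
        total += c.staticDefense;
    return total;
}

// 'Simulate' the attack phase: the raid succeeds only if its strength
// exceeds the settlement's combined static defense.
bool attackSucceeds(const Settlement& s, int attackStrength) {
    return attackStrength > totalStaticDefense(s);
}
```

The point of the sketch is just that mere presence helps defend: the offline commander's contribution is indistinguishable from the online one's at resolution time.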

We want this tier of gameplay to be able to stand on its own. We'd like players to be able to focus on one aspect of the game, if they desire to, and play it relatively exclusively. The top tier is also more natural for mobile or web-based gaming, so we'd like to manage a portal to the world map via the web.

Now that we've created years of work for ourselves, perhaps we should get started? Next time, where we stand now.

Monday, April 20, 2009

Finally, on Skirmish.

The World Project, from last post, is just an engine. A rather limited engine in terms of sophisticated capabilities, but it runs the simple stuff pretty well. As with the libraries extracted from Twilight 1, though, an engine is not enough: a game must be made.

The plan is to make a persistent-world real-time strategy game. Now, I can't think of any terribly successful games that have done this. I suspect there are reasons. We can come up with some if you like. But you realize by now that this just adds fuel to the fire. More reason to do it. Let's list a few:

  1. RTS games are traditionally about rapid build up, economic balancing, and relatively simple combat, where losses are expected and necessary.
  2. RTS games tend to be light on customization and freedom, both of which are typical in a persistent world.
  3. RTS games tend to involve large quantities of units acting in formations of some sort. This requires unit-unit boundaries/collision of some sort. Large unit count works against a massive setup, as the computational requirements are impressive. Unit-unit collision is problematic in the massive case as well, but this is of lesser concern.
So what to do about these? Well, how bout we split the RTS gameplay into two distinct, but necessarily linked, playing fields? In turn:
  1. Rapid build up will not work in a persistent game, or the increase of power over the lifetime of the player would be enormous. So, we split power gains into 2 types. One, a mid-combat gain, which is largely temporary. Second, an out-of-combat gain, similar to a typical levelling system.
  2. Make customization part of the players units. Typically in RTS games units are selected for relative power in the current scenario, strategically. Change that from 'selected' to 'developed' or 'trained' and you have a longer term customization. Regarding freedom, this is a bit trickier. RTS Player vs Environment is almost universally scripted scenarios, or random fights over set maps. Freedom to travel and choose what the player wants to do would lean towards the latter. Build a world and a set of scenarios to accomplish.
  3. Large unit count. Hmm. RTS players like armies. Typical approaches to managing army-sized groups are to cluster them into 'group' units. This is contrary to a customization point, though. With a group it is harder to justify massive retraining. So how bout we make this a sort of 'elite skirmish group', where each individual is a powerful fighter in his own right, but the workings of the group as a whole are more important? A close-knit fighting group. Let's give them a leader too; a Commander. This gives the player a troupe to customize as a whole, and as individuals. This keeps the unit count to sane, customizable levels. Say, varying from 4 to 20 as the player designs.
So now we have a sort of 2 tier game. The combat tier (which I call the Skirmish) is fought in a relatively traditional RTS manner. But there's no base development and economics. RTS games are interesting because they require managing multiple aspects simultaneously. If we remove two of them, we'll have a pretty boring game. So we either find something to replace these aspects, or increase the complexity and management of combat itself.

Economics we'll replace with unit resources and morale. Morale will become a per-Commander value that fluctuates with the user's actions, allows certain actions, and may be spent for others. Individual units may be granted abilities that require some local personal resource, typically mana, which may require some maintenance.
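The morale mechanic above could be sketched something like this. Everything here is invented for illustration (class name, thresholds, costs); the point is just that one value both gates actions and is spent by them:

```cpp
// Illustrative sketch, not the real design: morale as a per-Commander pool
// that fluctuates with the player's actions, gates some abilities, and can
// be spent on others.
class CommanderMorale {
public:
    explicit CommanderMorale(int start = 50) : morale_(start) {}

    // Fluctuates with user actions; clamped to [0, 100].
    void adjust(int delta) {
        morale_ += delta;
        if (morale_ < 0)   morale_ = 0;
        if (morale_ > 100) morale_ = 100;
    }

    // Some actions merely require a morale threshold...
    bool canRally() const { return morale_ >= 60; }

    // ...others consume morale outright.
    bool spendOnCharge(int cost) {
        if (morale_ < cost) return false;
        morale_ -= cost;
        return true;
    }

    int value() const { return morale_; }

private:
    int morale_;
};
```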

Base Development we'll retain in a temporary sense. Defenses can be constructed, or in some cases, inhabited. This makes terrain and location relevant even without a base development aspect. Let's throw some local, disposable units in there too. Now we've got a resource, extra units, that territory may grant (or revoke!).

With these aspects in place, we can build a set of types of scenarios. Scouting, ambushes, escorts, sieges, strikes. Since the combat phase is only a part of the game, and local power gains are small, let's keep these short. 10 to 15 minutes, maybe 30 for a particularly difficult scenario. Starting to sound a little more like a quest in an MMO.

That's the combat tier in a nutshell. The second tier is for managing units, guilds, cities, exploration, customizing, and construction. This tier is not played with your squad! It's for the player himself, so to speak. This way we avoid trying to explore towns and landscapes with an army in tow. Can you imagine Ironforge where every player was really 20 different individuals? It'd be madness. Oh, and really really slow.

So for our first incarnation of the top tier, we imagine a sort of world map. An abstract high level view of the world. Players travel about, hunting for a scenario to engage in. But what's the purpose? Is it just a glorified scenario selector? Is it a pretty facade for a multiplayer game lobby?

More on the World soon.

Monday, April 13, 2009

On the World project.

You know that bit in the description regarding over-ambition and general lunacy? Well, this here should explain it nicely.

Everyone who plays games has ideas to improve them. Well, maybe not, but I do. And my friends do. And we talk about them. Many ideas, most of them utterly and completely terrible. I've also spent a good deal of time (in undergraduate lectures, of course) developing a setting for tabletop roleplaying. While I haven't run a game in years, I'm still enamored of the setting I built. The 2 Twilight games were built on this setting.

The conclusion to all this, is that I really want to make this setting a virtuality. Not a reality. That'd be complicated. A virtuality; a persistent environment. Er, an MMO, if you want to be uncouth about it. The concept of starting from nothing and building even a small part of the world is... enticing.

The obvious problem is that these games are hard, and expensive. Look at the state of the industry right now. Overworked and underpaid for late games, most of which fail. And the MMO market is worse than the typical games market in those regards.

You're beginning to see the connection to "over-ambition and general lunacy"? I hope so.

After NegativeAgain became fun, and I became more used to work and schedule wrangling, it was time to start the project. The goal is to make a persistent environment. The theory was to keep the system feasible by limiting its scope. That is, the core system would only handle a small set of behaviors. This keeps the networking protocol, the server-side code, and the related programming time small. Complex behaviors need to be built up from the basic stuff. It's a pretty standard layered abstraction.

The system manages six data tables: attribute types, states, actions, entity types, event ids, and effects. Effects got added in as a method for containing complexity. These are effectively just functions that actions use to execute. Events and Attributes could be removed entirely, as they just provide a name. However, I prefer having a sort of 'strongly typed' system, so laying out the set of attributes and events explicitly makes a lot of sense to me. In the long run, attributes ended up with a bit more data anyway (default value, derivation formulae). The low level data system handles serialization, update, etc. It also has a wrapper that supports a server-client set up for the database. I never use this, though, so it's probably rotted.
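The 'strongly typed' table idea might look something like this toy sketch. Names and fields are invented, not the World's actual schema; it just shows attributes being laid out explicitly, each with a stable id and the extra data (a default value) they accumulated:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Toy sketch of one of the data tables: attribute types, registered
// explicitly by name, each carrying a default value. All names here are
// invented for illustration.
struct AttributeType {
    std::string name;
    float defaultValue;  // attributes grew extra data: defaults, formulae
};

class AttributeTable {
public:
    // Register an attribute explicitly; returns its stable id.
    int define(const std::string& name, float defaultValue) {
        int id = static_cast<int>(rows_.size());
        rows_.push_back({name, defaultValue});
        byName_[name] = id;
        return id;
    }

    const AttributeType& get(int id) const { return rows_[id]; }
    int idOf(const std::string& name) const { return byName_.at(name); }

private:
    std::vector<AttributeType> rows_;
    std::unordered_map<std::string, int> byName_;
};
```

Laying the set out explicitly, rather than letting arbitrary strings appear at runtime, is what makes the system feel 'strongly typed': a typo'd attribute name fails loudly at lookup instead of silently creating a new row.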

The next layer up takes this data and executes it. I call this the Zone level. The zone is actually quite simple. It's just a set of entities. There are no other types of objects. If it isn't an entity, it doesn't exist. Sometimes this feels weird when you're working above this level, but it's clean and simple, once you're used to it.
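A minimal sketch of the 'everything is an entity' zone, with invented names: if it isn't an entity, it doesn't exist, so items, props, and players all go through the same table.

```cpp
#include <cstddef>
#include <unordered_map>

// Illustrative sketch, not the real Zone code: a zone is just a set of
// entities, nothing else.
struct Entity {
    int typeId;  // references a row in the entity-type table
};

class Zone {
public:
    // Everything enters the world the same way, whatever it 'is'.
    int spawn(int typeId) {
        int id = nextId_++;
        entities_[id] = Entity{typeId};
        return id;
    }

    void despawn(int id) { entities_.erase(id); }
    bool exists(int id) const { return entities_.count(id) != 0; }
    std::size_t count() const { return entities_.size(); }

private:
    int nextId_ = 0;
    std::unordered_map<int, Entity> entities_;
};
```

The payoff is uniformity: an item, a player, and a wall differ only in their type row and states, so the zone never needs special cases for them.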

On top of the zone lies the server communication management layer.

Also on top of the zone is the sort of game logic. It's implemented largely within the system, using states and types constructed either in code or within the system editor. Note that this means that the game management and communications layers are completely independent.

The important thing here is that all the guts are in their appropriate places. The gamedata layer (bottom) is the only layer that cares about the gamedata storage method, serialization, interfaces, management, etc. The upper level pretty much uses 2 interfaces, aside from initialization. The zone layer manages execution of said data. The server does server things, and the game manages game things.
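The layering could be caricatured like this (interfaces invented for illustration): the server layer and the game layer each sit on top of the zone through a narrow interface, and neither knows the other exists.

```cpp
// Hypothetical sketch of the layer separation, not the actual interfaces.
struct IZone {
    virtual ~IZone() = default;
    virtual void tick() = 0;  // the zone executes the game data
};

// The server layer only replicates zone state; it knows nothing of rules.
struct ServerLayer {
    explicit ServerLayer(IZone& z) : zone(z) {}
    IZone& zone;
};

// The game-logic layer only drives the zone; it knows nothing of sockets.
struct GameLayer {
    explicit GameLayer(IZone& z) : zone(z) {}
    IZone& zone;
};

// A trivial zone for demonstration purposes.
struct SimpleZone : IZone {
    int ticks = 0;
    void tick() override { ++ticks; }
};
```

Because both layers depend only on `IZone`, either can be removed or replaced without the other noticing — which is the SRP-style division the post is getting at.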

So there's clearly a lot of things that I didn't mention. No zone-to-zone communication at all, for example. No instancing, no classes, no weapons, no items, etc. Everything is built within the system, that way everything works without updating the protocols. Items are just entities too. Classes are states. Instances? Well, that's just a new zone.

So now, to make a game that uses what we've got, and not what we've not. Next time.

Friday, April 10, 2009

Blog rules

  • More links!
  • Not allowed to go back and edit; perfectionism must be curbed. The Line Must Be Drawn.
  • More... Englishisms.

Thursday, April 9, 2009

On redTOAST (A History)

Regular, Every Day TOAST, Or, Alternatively, Simply, TOAST.

It's really quite ridiculous, but that's a pretty good description of what we do anyway, so it fits.

redTOAST was started in 2001 or so as a collaborative project between 2 of us, who realized that we were no longer missing any critical knowledge to make a game. Now of course we knew very little, but we decided to just do it. And we did it. We made a game. It was even fun. It was a StarControl style game (that is, top down space-physics ship combat). The code was disgusting. The interface frightening. BUT. It. Worked.

If nothing else, TOAST has been a vehicle of learning. We called the first project Star Control: Negative. It made us better programmers.

We followed up with Negative Two, dropping the StarControl homage. Negative Two taught us exactly what happens to every single other project. We called it second-system-syndrome, but we like alliteration, so you'll have to forgive us. It's the simple fact that, after having learned so much from Negative, we designed a much more complicated system for Two, and it imploded under its own impossibility. So we learned a lot there too.

I was in graduate school about this time, and my partner in crime ended up working elsewhere, out of range of easy collaboration. We buried Negative Two, consecrated the ground and let it fade into memory. Having learned a bit more about software architecture in graduate school, I decided to apply the combination of personal experience and lecture, and started a new project. I called this project Twilight in the City of Kaiur, a Real-time strategy/economics game set in a... well, the setting isn't important.

I obsessed over this project to an absurd degree. It was in my thoughts all the time. I bought a laptop so I could work everywhere. Step by step, line by line, I built this crazy system over the course of a year. It worked. The software did not implode under its own weight; well, not for a while at least. Knowledge only goes so far without experience. Working with this software, with the new methods learned (I found out 6 months later they're called Design Patterns), was an absolute dream. Stuff just worked. Well, the network never really did. And the game design was so miserable, it's really quite entertaining (in retrospect).

Twilight had to end though; it got too big and became unmanageable. I spent the next year splitting Twilight's codebase into a set of libraries, refining and improving as I went.

This was a mistake. 

Now, I learned a lot and had some lovely code, but it was MIND NUMBING. I stopped because I was losing interest. It had to be a game. If there wasn't a goal product, there was no point. Therefore, to such ends a new project was started.

Two new friends joined in for this one. We called it the one-week project. We took the fun parts of the Negative (one) game, and rebuilt it for multiplayer, with the new libraries and code to back it. The 3 of us cranked for a week, and 2 weeks later we played a multiplayer session with 8 people at my birthday LAN. No, it wasn't perfect, but it was only a bloody week. The game was called Negative again, but rather than being '3', we called it 'ether', both as a pun on 'e' (2.718), and as a reference to some physics adjustments we introduced. The ships sort of surfed in space, rather than using free-space physics.

But apparently we're not good at working on group projects together. The e-team didn't reconvene to continue Negative Ether in any meaningful way.

I think around this point I began to write my thesis and get busier in graduate school. It follows, then, that I started work on Twilight 2. Did I repeat second-system-syndrome? Certainly. But I was at least aware of it this time. I focused on certain elements that I wanted to have experience with. I wrote a scripting engine from scratch. I focused more on generic actions, states, and user interface. The game was substantially more data driven, and it worked, even if it wasn't elegant. The economics aspect of the game worked, but it was only about one third of the game content. So, naturally, the project had to die. It was too big, but more importantly...

I began working for Bunkspeed. Now, I love being the type of person who programs all day at work, then goes home, and continues as if nothing changed. For awhile, though, this wasn't achievable. This was my first actual job, since graduate school and related activities don't count. I mean, I spent all of graduate school on the previous paragraphs, so you know how serious I was about my 'formal' education. My actual education was quite successful, thanks to the free time afforded by being in graduate school. Something seems weird there. Must be me.

Where was I? Oh right. Twilight 2 was too complicated in conjunction with work. So I reluctantly buried it too. It felt like my game programming history was a sequence of graveyards, but each grave had yielded a flowerbed of knowledge, so I never considered it time wasted.

6 months or so later I decided I wanted to do an 'easy' project. I must've gotten over my fetish for complexity. I began touting principles like KISS, YAGNI, bla bla. I also got tired of fighting multiplayer networking. So! A simple project. This project would be YAN: Yet Another Negative. "Negative Again". Built for simplicity. Graphically? Everything was particles, no shaders, no complex features. Just lots and lots of particles. Data complexity? No states or complicated data; just property bags of ship, weapon, projectile parameters.

It was simple and fun. I still play it at work. I still tweak the numbers in the files, even though I haven't recompiled in years. I stopped active work on it 2 years ago. It's the closest thing I have to complete, and it isn't.

So in short: made some games, played one.

Monday, April 6, 2009

On Bunkspeed

Bunkspeed, for those of you not aware of their presence, is a Carlsbad (San Diego area) based company specializing in rendering software. Our 3 primary products are HyperShot, HyperMove, and HyperDrive. Shot is based on an excessively fast and gorgeous raytracing engine, and Move/Drive are based on a DirectX 9 pipeline written mostly by myself.

Now, though I said Shot's engine is a raytracer, don't confuse it with an offline renderer. HyperShot is a realtime app, dynamically updating as your camera moves and continually improving the image. Let it sit for 2 seconds and you've got a brilliant image. Peruse the Gallery over at our site and you'll see the results. Go try the demo and see it in action.

But I digress, this wasn't intended to be a sales pitch. The interest for me is in the interaction between Shot and Move/Drive.

Move/Drive uses a DirectX 9 based rendering pipeline (though I'd love to be able to use 10 or 11 these days). When I started work on this engine, my goal was to make a fast, robust, and high-quality-output engine. HyperShot did not exist at the time, and wouldn't for another year or so. So we set out to make a real time engine that focused on high quality, rather than unfailing speed or consistency. We made a lot of decisions on how to build the engine accordingly:
  1. Dynamic shader construction. This lets us build shaders specifically for each exact required surface. Shader construction time is non-trivial (but caching would later make this less relevant).
  2. Dynamic shader population. We use a robust and easy to adapt system for managing shading constants and parameters, but it made the typical tradeoff of generality vs. efficiency (i.e. ran like a dog). Now we have a solution that yields both, but it took 3 or 4 rewrites of the system to find something palatable.
  3. On-demand and recursive dependent passes. Many objects in our scenes can be tagged to use a local reflection/irradiance map. This eats up lots of VRAM and rendering time, so we do this only when needed. In a game, you'd never do this, but this isn't a game, it's a visualization tool. The more interesting part here, is that we allow recursive reflections. The first frame a recursive reflection is found, an additional 2 passes will be drawn. This gives you additional 'bounces' of reflections, but it means that first frame is EXPENSIVE! At times I'd see the first frame after load take a full 3 seconds, and somewhere around 150 rendering passes, but after that, everything's there and lovely. No bad frames.
  4. Cook-Torrance lighting instead of Phong. Cook-Torrance is more expensive to evaluate, but is more physically accurate. It also matched up to our rendering at the time.
  5. VRAM TASTY. Oh my did we use a lot of memory.
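Decision 1 above, dynamic shader construction with caching, could be sketched like this. The feature flags and emitted 'HLSL' are stand-ins invented for illustration; the real generator is far more involved:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Toy sketch of dynamic shader construction: build source for exactly the
// features a surface needs, keyed and cached so construction cost is paid
// once per unique combination. All names here are hypothetical.
struct SurfaceFeatures {
    bool normalMap = false;
    bool reflection = false;
};

class ShaderCache {
public:
    const std::string& shaderFor(const SurfaceFeatures& f) {
        // Key on the exact feature set.
        std::string key = std::string(f.normalMap ? "N" : "-")
                        + (f.reflection ? "R" : "-");
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;  // cached: no rebuild

        // Assemble only the fragments this surface actually requires.
        std::string src = "float4 main() : COLOR {\n";
        if (f.normalMap)  src += "  // sample and apply normal map\n";
        if (f.reflection) src += "  // sample local reflection map\n";
        src += "  return light();\n}\n";
        return cache_.emplace(key, src).first->second;
    }

    std::size_t built() const { return cache_.size(); }

private:
    std::unordered_map<std::string, std::string> cache_;
};
```

This is why construction time being non-trivial stopped mattering much: after warm-up, every surface hits the cache.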
So, we've got a 'slow' GPU renderer with good quality. Along comes HyperShot, a 'fast' CPU renderer with great quality. It's like the two were working toward each other from opposite ends of the spectrum. Take a game-style GPU engine, make it slow and accurate. Take a photorealistic renderer, take some shortcuts and make it fast and less accurate.

Isn't it weird? If we'd succeeded we'd have had the same product from two completely different angles. So, we changed it up. Move/Drive focuses more on being a tool for setup, animation, and presentation. We've focused more on speeding up the rendering and improving usability (though we retain the ability to have the high-quality GPU renders). HyperShot focuses on fast, crazy-good quality raytracing, backed by a photorealistic renderer of incomparable quality.

Match made in heaven. The best of both worlds.


Behold, a blog.

Herein, shall be contained the assembled rambling of one Nick Gebbie, more often known as Ventare, being composed of varying topics, including but not constrained to:
  • redTOAST Studios; being myself in conjunction with some friends, developing entirely-too-complex games in our spare time. I suspect much of this blo(a)g will be spent blathering on about our TOASTprojects.
  • Bunkspeed, for whom I work; I'm their Senior Graphics Programmer, so I'm basically in charge of writing the GPU rendering engine used in HyperMove and HyperDrive.
  • Programming in general; graphics programming more often than not. I've had the good fortune to work with a set of excellent (and varied) programmers, so perhaps there's some wisdom to be passed forward.
  • Game Design; often in the context of TOASTdevelopment, but I'd expect some WoW talk scattered about, among other games.
Let it begin.