
Computer learning in Starship Combat

robject

So the other night I had an idea for training a starship combat system.

Basically, the training is hands-on: a human performs the actions until he's confident that the computer knows his style. Here's how I think it might work.

First, we assume a real-time combat system that has 2D non-vector movement, generalized distances such as range bands, and of course weapons and defenses. We assume that all of these elements are reasonably accessible to the human player. And we assume that each tick of the clock is a training moment.

The human player plays one ship. The computer plays the opponent.
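To make the setup concrete, here's a minimal sketch of that per-tick loop. The arena object and its methods (state_of, apply, read_human_action, combat_over, tick) are all invented names, and net.act and net.train_step stand in for the forward pass and training step described below:

```python
def run_combat(arena, net):
    """One real-time engagement; every tick of the clock is a training moment."""
    while not arena.combat_over():
        human_state = arena.state_of(arena.human_ship)
        # The computer acts from its own ship's state via the network.
        arena.apply(arena.computer_ship, net.act(arena.state_of(arena.computer_ship)))
        # Whatever the human actually does this tick becomes a training example.
        human_action = arena.read_human_action()
        arena.apply(arena.human_ship, human_action)
        net.train_step(human_state, human_action)
        arena.tick()
```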

The computer observes the human's behavior. At a given tick, a player's state is represented by his ship's current maneuver rating, the current bearing of its weapons and defenses, its vector to the opponent, and the status of the opponent's ship's maneuver, weapons, and defenses.
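As a sketch, that per-tick state might pack into a flat input vector something like this. The exact field list is a guess; with vector components split out and a few more status values, it lands near the fifteen inputs estimated below:

```python
from dataclasses import dataclass

@dataclass
class ShipState:
    maneuver: float          # own ship's current maneuver rating
    weapons_bearing: float   # fraction of own weapons bearing on the opponent
    defenses: float          # own current defensive status
    range_band: float        # generalized distance to the opponent
    bearing: float           # direction to the opponent in the 2D arena
    opp_maneuver: float      # opponent's maneuver rating
    opp_weapons: float       # opponent's weapons status
    opp_defenses: float      # opponent's defense status

    def to_inputs(self):
        """Flatten into the network's input vector, normalized to roughly 0..1."""
        return [self.maneuver, self.weapons_bearing, self.defenses,
                self.range_band, self.bearing,
                self.opp_maneuver, self.opp_weapons, self.opp_defenses]
```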

These data are the inputs. They are multiplied through a series of weights into a small number of central values, which are in turn multiplied through a second series of weights to the output nodes, which represent behavior in terms of a movement vector and weapons activity. The computer plugs its own ship's state into the network to determine its behavior, and acts accordingly. At first, its behavior will be quite random.
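In other words, a small two-layer feed-forward network. A bare-bones version of that forward pass might look like this, with sigmoid activations as one plausible choice (the post doesn't commit to an activation function):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_net(n_in=15, n_hidden=4, n_out=15):
    """Random initial weights -- which is why early behavior is quite random."""
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
    return w1, w2

def forward(w1, w2, inputs):
    """Inputs -> central values (hidden layer) -> output nodes
    (movement vector components and weapons activity)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w1]
    outputs = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    return hidden, outputs
```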

At the same time, the computer plugs the human player's state into the network to predict the human's behavior. The computer compares this prediction with the human's actual behavior; the difference is used to modify the weights to the output nodes and to generate a new set of central values, which in turn are used to modify the weights from the input nodes. At the end of each training moment, the computer is better able to simulate the human's reaction to states in the game arena. The more the computer plays, the better it should learn. In addition, training against different humans ought to produce a mixture of playing styles, though not necessarily a superior one.
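What's described informally here is backpropagation: the prediction error at the output nodes is pushed back to adjust first the output-side weights and then the input-side weights. A minimal sketch, building on the forward() above:

```python
def train_step(w1, w2, inputs, actual, lr=0.1):
    """One training moment: predict the human's behavior, compare it with
    what the human actually did, and backpropagate the difference."""
    hidden, predicted = forward(w1, w2, inputs)
    # Output-node deltas: prediction error scaled by the sigmoid's derivative.
    d_out = [(a - p) * p * (1 - p) for a, p in zip(actual, predicted)]
    # Hidden-node ("central value") deltas, pushed back through the output weights.
    d_hid = [h * (1 - h) * sum(d * w2[k][j] for k, d in enumerate(d_out))
             for j, h in enumerate(hidden)]
    # Adjust the weights to the output nodes, then the weights from the inputs.
    for k, d in enumerate(d_out):
        for j, h in enumerate(hidden):
            w2[k][j] += lr * d * h
    for j, d in enumerate(d_hid):
        for i, x in enumerate(inputs):
            w1[j][i] += lr * d * x
```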

In short, the computer trains a software neural network in order to build a customized decision-making process. My theory is that the computer will learn how to behave like the trainer in ship combat.

I estimate fifteen input nodes (plus or minus), four hidden nodes, and fifteen output nodes (plus or minus). Total network size is therefore about 120 double floating-point numbers (15 × 4 = 60 input-to-hidden weights plus 4 × 15 = 60 hidden-to-output weights, at 8 bytes each) -- maybe 1k of memory.

Thoughts?
 
I think you would be better off using a neural net to identify situations and then a genetic algorithm to choose the best course of action given the situation and opponent.
From experience, training an NN is not an easy task and requires a fair amount of black magic. Also, NNs are really good at identifying things but not so much at decision making.
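For illustration, here's a minimal GA loop of the kind I mean. Each genome is a list of numbers encoding a course of action for a situation the NN has identified, and fitness() is assumed to score a genome against the opponent:

```python
import random

def evolve(population, fitness, generations=50, mut_rate=0.1):
    """Evolve a population of genomes (lists of floats) toward higher fitness."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:len(scored) // 2]              # keep the fitter half
        children = []
        while len(parents) + len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.uniform(-1, 1) if random.random() < mut_rate else g
                     for g in child]                     # per-gene mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```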

Another option is to use an A*-like search to find the optimum path to the enemy's destruction. The advantage here is that as your program wins or loses, it can easily update the weights on the decisions it made, or even add new decision paths to its database.
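A textbook A* over abstract combat states would look roughly like this; neighbors(), cost(), and heuristic() are placeholders that would have to come from your combat model:

```python
import heapq
import itertools

def a_star(start, goal, neighbors, cost, heuristic):
    """Find a least-cost path of decisions from start to goal (enemy destroyed)."""
    counter = itertools.count()   # tie-breaker so the heap never compares states
    open_set = [(heuristic(start), next(counter), 0, start, [start])]
    seen = set()
    while open_set:
        _, _, g, state, path = heapq.heappop(open_set)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for n in neighbors(state):
            if n not in seen:
                g2 = g + cost(state, n)
                heapq.heappush(open_set,
                               (g2 + heuristic(n), next(counter), g2, n, path + [n]))
    return None   # no path found
```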

However, whichever direction you go will likely take some time to code up and test.
I suggest that you get a grad student to do it for you.
They are cheap, and as long as you are paying them under a contract, you own the code and the rights.

-Will
 