Let's talk about AI - Just for fun
Posted: Sun Feb 09, 2014 6:36 am
First, a little about me, and I do mean a little. I'm not a complete stranger to game development, and I have messed with some AI programming. Quite a number of years ago I wrote a game development framework based on a modified Hopfield net. The coolest part of the thing was that it was self-organizing: a game created in the framework was not structured by the programmer; the programmer just created all the pieces and parts, and the framework took care of arranging and ordering everything into a coherent working system. Now, that might sound neat and all, but it had some significant problems, one major problem being concurrency. If the pieces were created thoughtlessly, they could easily step on each other's toes. Anyway, I just thought it would be fun to talk about AI, so here goes...
For those of you who don't know anything about STRIPS or goal planning, here are the basics of how a goal-planning AI works. Goal planning has three major parts:
Pre-existing states (Preconditions) -> Actions -> Goal states (Effects)
The preconditions represent a definition of the world, such as a boolean value for tired (true/false). The actions represent transitions from one state to another; for example, the Sleep action is a transition from the state tired(true) to the state rested(true). Goal states represent a set of desired effects that you want your AI to have on the world. In other words, your AI wants to be rested(true), so it formulates the plan Sleep to achieve that state. The way the AI knows it needs to Sleep is that it currently possesses the state tired(true) AND is NOT rested(true).
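Just to make the tired/rested example concrete, here's a minimal sketch of how those states and actions could be represented. All the names here are my own illustration, not from any particular planning library:

```python
# A world state is just a dict of boolean facts.
world = {"tired": True, "rested": False}

class Action:
    def __init__(self, name, preconditions, effects, cost=1):
        self.name = name                    # e.g. "Sleep"
        self.preconditions = preconditions  # facts that must hold to run
        self.effects = effects              # facts this action changes
        self.cost = cost

    def applicable(self, state):
        # The action can run only if every precondition matches the state.
        return all(state.get(k) == v for k, v in self.preconditions.items())

    def apply(self, state):
        # Return a new state with the effects applied (don't mutate the input).
        new_state = dict(state)
        new_state.update(self.effects)
        return new_state

sleep = Action("Sleep",
               preconditions={"tired": True},
               effects={"tired": False, "rested": True})

assert sleep.applicable(world)
after = sleep.apply(world)   # {"tired": False, "rested": True}
```

The key design point is that actions are pure data (preconditions, effects, cost), which is exactly what lets a planner reason about them without running them.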
That's very simplified, but it's basically the way goal planning works. An actual planning algorithm might look as follows:
1. Choose a goal to achieve.
1a. (Or have a goal invoked by some other action.)
2. Check preconditions (make sure it's actually possible to achieve the goal); on failure, choose a new goal (go to step 1).
3. Choose an optimal action sequence to achieve the goal states.
4. Implement the action sequence (actually perform the actions and transition across states).
5. Check effects (test your states to see if you have arrived at your goal states).
6. Goal achieved? No: replan (go to step 3). Yes: do something else (go to step 1).
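Those six steps can be sketched as a little outer loop. This is just my own illustration with a stubbed-out planner; a real version would plug in something like the A* planner described below:

```python
def satisfied(goal, state):
    # A goal is a dict of desired facts; it holds when all of them match.
    return all(state.get(k) == v for k, v in goal.items())

def run_agent(state, goals, plan):
    for goal in goals:                  # step 1: choose a goal
        if satisfied(goal, state):
            continue                    # already achieved, pick another
        actions = plan(state, goal)     # steps 2-3: check feasibility / plan
        if actions is None:
            continue                    # planning failed -> next goal
        for act in actions:             # step 4: perform the actions
            state = act(state)
        if satisfied(goal, state):      # steps 5-6: check effects
            return state, goal
    return state, None                  # no goal could be achieved

# Tiny demo: one action, one goal, and a trivial stand-in planner.
def sleep(state):
    return {**state, "tired": False, "rested": True}

def naive_plan(state, goal):
    # Stand-in planner: always proposes [sleep] when tired.
    return [sleep] if state.get("tired") else None

final, achieved = run_agent({"tired": True, "rested": False},
                            [{"rested": True}], naive_plan)
```

In the demo, `run_agent` ends with `achieved == {"rested": True}` and the final state rested. The replan branch of step 6 would slot in where the effects check fails.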
Ok so, if you don't already know about goal planning, you might be saying, "How does that work?" Well, basically it's just a graph. You graph all the world states, and that becomes the definition of your world. Then you build into the actions which states they transition across, and you assign a cost to each action. Once you have that all in place, you can use the A* algorithm to find the optimal action sequences for the desired goals.
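Here's a hedged sketch of that A* search, treating world states as graph nodes and actions as weighted edges. The names and heuristic are my own choices, not from any specific GOAP implementation:

```python
import heapq
from itertools import count

def astar_plan(start, goal, actions, heuristic):
    # State dicts aren't hashable, so key them as frozen sets of items.
    key = lambda s: frozenset(s.items())
    tie = count()  # tiebreaker so heapq never tries to compare dicts
    # Frontier entries: (f = g + h, tiebreak, g = cost so far, state, plan)
    frontier = [(heuristic(start, goal), next(tie), 0, start, [])]
    seen = set()
    while frontier:
        _, _, g, state, plan = heapq.heappop(frontier)
        if all(state.get(k) == v for k, v in goal.items()):
            return plan                    # goal states reached
        if key(state) in seen:
            continue
        seen.add(key(state))
        # Actions are (name, preconditions, effects, cost) tuples.
        for name, pre, eff, cost in actions:
            if all(state.get(k) == v for k, v in pre.items()):
                nxt = {**state, **eff}
                ng = g + cost
                heapq.heappush(frontier,
                               (ng + heuristic(nxt, goal), next(tie),
                                ng, nxt, plan + [name]))
    return None  # no action sequence achieves the goal

# Heuristic: number of goal facts not yet satisfied. This is admissible
# as long as no unit of cost can fix more than one fact.
def unmet(state, goal):
    return sum(1 for k, v in goal.items() if state.get(k) != v)

actions = [("Sleep", {"tired": True}, {"tired": False, "rested": True}, 1)]
plan = astar_plan({"tired": True, "rested": False}, {"rested": True},
                  actions, unmet)   # -> ["Sleep"]
```

With real action costs, cheaper plans win automatically, which is the whole point of running A* here instead of plain breadth-first search.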
Where this gets interesting is conjecturing how the AI chooses its goals and, perhaps, what goals the AI might have to choose from. For right now I'll leave it there. I have been at work all night, and I'm starting to get a bit too tired to think. Later, I might go into a list of all the goals that would be neat for an AI in a space sim to have, or if someone else wants to start up a list, that would be cool. Then maybe talking about how an AI could choose between those goals would be fun to chat about.
Bye