Cool. Now let's say you retraced your steps perfectly. You aren't at the station yet, and the AI hasn't decided to undock yet. He has just completed some task and has to run through his decision algorithm to figure out what he is going to do next.
Based on his attributes, whenever it comes time to make a decision, he has a preference for the types of tasks he wants to pursue:
1. Aggressive-oriented tasks - 10%
2. Economic-oriented tasks - 70%
3. Exploration-oriented tasks - 20%
The first time the scenario ran, he decided to go mining and you shot at him. Now, should there be a guarantee from the Game Master (the computer in this case, which provides the seeds for the algorithm) that things are set up in such a manner that if x happens, then y is guaranteed to happen? In other words, if the input is exactly the same, as in the player performs the exact same actions, should the result be the same?
So for the scenario, it would probably boil down to an RNG whose value is clamped to a number between 0.0 and 1.0. As in:
Code: Select all
// Called when an AI completes a task and requires a new one to strive towards.
// The random number generator it will use to determine exactly what to do next
// is passed in by reference.
void AI::generateNewTask(RandomNumberGenerator& generator)
{
    // Holder variable containing a value between 0 and a very large number,
    // possibly 18,446,744,073,709,551,615, which is the maximum decimal value
    // a 64-bit unsigned integer can hold.
    unsigned long long rawRandomValue = generator.getNextValue();

    // Scale the value down to the range 0..9999. This prevents weird precision
    // bugs that could crop up in the next calculation if we divided the huge
    // raw value directly.
    unsigned long long clampedRandomValue = rawRandomValue % 10000;

    // Divide the clamped value by the maximum a clamped value can be, which
    // effectively normalizes it: 'c' is the 0.0 to 1.0 floating point value we want.
    float c = (float)clampedRandomValue / 10000.0f;

    // Our AI is going to be more "economic" oriented since in your example, boba,
    // you said he was heading out to mine. Say the AI's attributes or personality
    // are described by a single floating point number per attribute, and the sum
    // of the values for all attributes must equal 1.0. He has the following:
    //   Aggressive  - 10%, or 0.1 as a floating point value in memory
    //   Economic    - 70%, or 0.7
    //   Exploration - 20%, or 0.2
    // These are the constants talked about above. In reality they would be
    // determined when the AI is first created and stored as instance member
    // variables of the AI class.
    float currentAIAggressiveRate  = 0.1f;
    float currentAIEconomicRate    = 0.7f;
    float currentAIExplorationRate = 0.2f;

    // The values set below are cumulative thresholds representing the chance an
    // attribute will influence the generation of the AI's new task. The key
    // takeaway here and in the branching code below is that an attribute with a
    // higher rate is more likely to be chosen, which is what the programmer would
    // want: an "aggressive" person should usually choose aggressive tasks like
    // pirating, fighting, etc.
    float aggressiveChance  = currentAIAggressiveRate;
    float economicChance    = aggressiveChance + currentAIEconomicRate;
    float explorationChance = economicChance + currentAIExplorationRate;

    if(c <= aggressiveChance)
    {
        // Now that we know the AI has decided to work towards an aggressive
        // oriented task, we have to generate one. That is an entirely different
        // function; we just take the returned task and set this specific AI's
        // task instance variable with that value.
        Task newTask = generateAggressiveTask();
        setTask(newTask);
    }
    else if(c <= economicChance)
    {
        Task newTask = generateEconomicTask();
        setTask(newTask);
    }
    else // c <= explorationChance, which is always 1.0 since the rates sum to 1.0
    {
        Task newTask = generateExplorationTask();
        setTask(newTask);
    }
}
Okay, with all that out of the way: the poll and question are about the parameter passed to this function. What seed was given to that pseudo random generator, if it is even pseudo random?
The question isn't about whether the player has the ability to perfectly repeat his actions; that is a given in this situation (or the player simply has no influence on the decisions about to be made). The question is whether, given the exact same state of the game (except for the fact that this isn't the first time this state has been reached), the saved game files should be set up to remember where the pseudo random generators left off last time. One might think the procedural nature of the game will break if the save file doesn't remember which iteration the pseudo random generators were on, but that isn't true at all.
The game will NOT break if they don't remember. However, if they don't, the universe that plays out will most likely be very different every single time, because the decisions made by the AI boil down to that 0.0 - 1.0 value. If the generators don't remember where they left off, then when the saved game is loaded again the values they pump out won't always be the same, so different RNG values are produced and the decision algorithm generates a different output. The output could be the same, yes, but not always.
I really don't think I'm explaining this very well ha-ha.
Here’s an illustration: