But I think I can do it. It would be the last real step towards an AI that literally does everything on its own. I would be little more than the creator of the game logic – simply giving the AI the rules of the gameplay – then stepping back to watch as it fits all the pieces together into a coherent fabric of intelligent behavior. No heuristic functions, no hints, no help whatsoever. Just the rules of the game and some CPU time. How great would that be!!!
Sounds awesome, doesn't it?
As humans, we don't really know all the actions we can take under all circumstances either. We can "guess" how something will work out, but we have no real way of knowing. But there's also learning from experience, and literally sleeping on it. While we sleep and dream, the problems of the previous day get processed into our memory and our entire being.
To me, as a layman, it almost seems like Josh is building this way of interacting with reality in LT. The AI guesses how a choice will work out by simulating it 'in its head', much like humans do when they're troubled: they go over likely scenarios, and which choices they consider is shaped by their personality. A violent man might seek a violent solution.
How do we know which choices we have? By seeing others make these choices, by reading about them, or just through creativity. To me, this descent algorithm sounds a lot like "sleeping on it", but in the forward sense. What if the AI would also dream about ways to make its choices better in the future? It wanted to expand too quickly and made an enemy... Shouldn't it expand more cautiously next time?
Perhaps the AI can't really know that it made an enemy when it expanded too quickly, but a possible conclusion of its own introspection could be that it should send out scouts first next time.
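As a layman's sketch of what that "simulate it in its head, then sleep on it" loop might look like, here's a toy version in Python. Everything in it is made up for illustration – the action names, the personality biases, and the imagine_outcome stand-in are my own assumptions, not anything confirmed about how LT actually works.

```python
import random

# Hypothetical sketch: a faction "imagines" how each candidate action might
# play out by running a few cheap forward simulations, picks the action whose
# imagined outcomes score best under its own personality biases, and later
# "sleeps on it", nudging those biases based on how things actually went.

def imagine_outcome(action, rng):
    """Stand-in for a cheap forward simulation of one possible future."""
    # A real game would step a simplified world model here; this just returns
    # a noisy score so the example runs.
    base = {"expand_fast": 0.6, "expand_cautiously": 0.4, "send_scouts": 0.3}
    return base[action] + rng.uniform(-0.3, 0.3)

def choose_action(actions, personality, rng, rollouts=8):
    """Pick the action with the best average imagined outcome, biased by personality."""
    def expected(action):
        avg = sum(imagine_outcome(action, rng) for _ in range(rollouts)) / rollouts
        return avg * personality.get(action, 1.0)
    return max(actions, key=expected)

def sleep_on_it(personality, action, went_badly):
    """'Dreaming': after a bad outcome, make that action less attractive next time."""
    if went_badly:
        personality[action] = personality.get(action, 1.0) * 0.8

rng = random.Random(42)
personality = {"expand_fast": 1.3}                 # an aggressive faction, hypothetically
actions = ["expand_fast", "expand_cautiously", "send_scouts"]

chosen = choose_action(actions, personality, rng)
print(chosen)                                      # likely "expand_fast"
sleep_on_it(personality, chosen, went_badly=True)  # expanding fast made an enemy
print(personality)                                 # the bias toward that choice is now lower
```

The point isn't the numbers, just the shape: guess forward, act, then let the "dream" step quietly reshape what the faction prefers next time.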
While the universe is being simulated, before the player enters it, the AI of each faction could become unique. Interactions could hone each AI's skills a bit according to its faction's preferences/personality.
Wouldn't it be magnificent to discover the enemy has a weakness against lasers, and abuse that fact in a fight? Only to discover that the AI then pursues research into better plating against lasers, and into weapons that are more damaging to your preferred hull type? Of course, it only had this weakness against lasers because it regularly has conflicts with another faction that primarily uses missiles. Yet another faction, one that has learned to rely less on research and more on teamwork, might instead call its bigger friends to help out.
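One very simple way that kind of counter-research could work (again purely my own guess, with made-up names, not anything from LT): the faction just keeps a tally of which damage types hurt it most and points its research budget at the worst offender.

```python
from collections import Counter

# Hypothetical sketch: a faction tracks what hurts it most and steers its
# research toward countering that damage type.

class FactionResearch:
    def __init__(self):
        self.damage_taken = Counter()   # damage type -> total damage received

    def record_hit(self, damage_type, amount):
        self.damage_taken[damage_type] += amount

    def next_research_topic(self):
        if not self.damage_taken:
            return "general_hull_plating"
        worst_type, _ = self.damage_taken.most_common(1)[0]
        # e.g. lots of "laser" damage -> research "anti_laser_plating"
        return f"anti_{worst_type}_plating"

faction = FactionResearch()
faction.record_hit("missile", 120)    # its usual enemy uses missiles...
faction.record_hit("laser", 400)      # ...but the player keeps hitting it with lasers
print(faction.next_research_topic())  # -> "anti_laser_plating"
```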
How would the AI start making choices? I propose giving each faction a score for each of its 'fight, flight or adapt' aspects, which it applies to the problems it encounters. Anything that causes it to influence the territory of another faction, in a way that is negative for that faction, could be considered fighting. Anything that responds to the problem itself in a peaceful, non-encroaching way could be called adapting. Everything else is flight.
As in reality, once you're established it's much harder to flee. You're pot committed and more likely to protect your assets, even if it doesn't make sense logically. The smaller the organisation (also relative to its opposition), the more likely it is to flee.
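Here's a rough sketch of how those fight/flight/adapt scores, plus the "pot committed" modifier, could be combined. The weights, the establishment value, and the example factions are all invented to illustrate the proposal; none of it comes from LT itself.

```python
# Hypothetical scoring of a faction's response to a problem.

def choose_response(problem, faction):
    # faction: dict with "fight", "flight", "adapt" scores and an
    # "establishment" value (0 = tiny newcomer, 1 = deeply entrenched).
    scores = {
        "fight": faction["fight"] * problem["encroaches_on_us"],
        "adapt": faction["adapt"] * problem["can_be_solved_peacefully"],
        # Established factions are "pot committed": fleeing gets less likely
        # the more assets they have to protect, even when it would be rational.
        "flight": faction["flight"] * (1.0 - faction["establishment"]),
    }
    return max(scores, key=scores.get)

pirates = {"fight": 0.4, "flight": 0.9, "adapt": 0.2, "establishment": 0.1}
empire  = {"fight": 0.5, "flight": 0.4, "adapt": 0.7, "establishment": 0.9}
border_dispute = {"encroaches_on_us": 1.0, "can_be_solved_peacefully": 0.4}

print(choose_response(border_dispute, pirates))  # small, loosely established: "flight"
print(choose_response(border_dispute, empire))   # entrenched empire: "fight"
```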
By letting the simulation run for a while, I'm sure the AI can figure out smart things to do by trial and error. Didn't you learn both that a stove that's turned on is hot, and that your mother doesn't lie about hot stoves? Personally, I learned that by touching a hot stove.
Beware of he who would deny you access to information, for in his heart he dreams himself your master.