Sunday, September 14, 2014
Tremendous LT App Launcher
I mentioned a few weeks ago the idea of having a single executable that simply runs an LTSL 'application' script -- a script that describes the full functionality of a program. Today I've finally seen the birth of that tool! My primary motivation behind this is to be able to maintain a bunch of different small LTSL applications that I can use to test various pieces of the game in isolation. Previously I had a few hard-coded variants of that idea: the full-blown Limit Theory (which I would often tweak in certain ways to test specific facets of the game), a single-system testbed that simulated one system and presented a top-down, UI-only map for me to poke and prod as the simulation ran (I developed most of the dynamic economy in this application), and even a console-only version of LT that simulated a universe at maximum time compression (200-300x real-time).
With a unified LTSL application launcher, I can now do all of this and more by simply writing new LTSL scripts that use specific pieces of the game. I can and will use the launcher to build simple test applications for things like benchmarking engine performance, prototyping and measuring the performance of new AI algorithms, quickly building individual UI widgets, and more! In time, Limit Theory itself will become a script that opens the main menu, builds universes, etc. I haven't quite got enough exposed to LTSL yet to make that leap, but it's actually not far off at all.
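To make the pattern concrete, here's a rough C++ sketch of what a launcher like this boils down to: one generic executable that loads whichever application script it's pointed at and hands over control. The ScriptEngine type and its methods below are hypothetical stand-ins for illustration, not the actual LT/LTSL API.

```cpp
// Minimal sketch of the launcher pattern: one generic executable whose only
// job is to load and run whatever application script it is handed.
// ScriptEngine, loadApplication, and run are hypothetical stand-ins.
#include <cstdio>

struct ScriptEngine {
    bool loadApplication(const char* path) { (void)path; /* parse & compile the script */ return true; }
    void run()                             { /* the main loop lives in the script */ }
};

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: ltlauncher <application.ltsl>\n");
        return 1;
    }

    ScriptEngine engine;
    if (!engine.loadApplication(argv[1]))   // e.g. the full game, a UI testbed, a benchmark...
        return 1;
    engine.run();                           // everything else is defined by the script
    return 0;
}
```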
I don't think I need to point out how much benefit this launcher idea has -- but just to drive the point home, I'll also say that it will allow me to always keep a working version of LT, since I won't have to screw with the real game anymore to test things. Come monthly update time, we should, in theory, see far fewer "gotta fix that!" moments once the playable LT is cemented into an LTSL script.
Currently, I'm already using the launcher to create new UI widgets, as well as to benchmark LTSL performance (in fact, I was very surprised by the results of my first few benchmarks -- LTSL executes significantly faster than I would have guessed, even in its fairly-unoptimized state!).
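For the curious, a micro-benchmark of this kind is really just a timing harness wrapped around the two implementations. The sketch below shows the general shape in C++; the "scripted" call is a placeholder where the real tool would invoke the LTSL interpreter, and all of the names are illustrative rather than actual LT code.

```cpp
// Generic timing harness for comparing a scripted implementation against a
// native baseline. scriptedWork() is a placeholder for running the LTSL
// version of the same computation.
#include <chrono>
#include <cstdio>

template <typename F>
double benchmarkMs(F&& f, int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

static volatile double sink;  // keep the optimizer from deleting the work

void nativeWork()   { double s = 0; for (int i = 0; i < 1000; ++i) s += i * 0.5; sink = s; }
void scriptedWork() { nativeWork(); /* stand-in: invoke the LTSL interpreter here */ }

int main() {
    const int iters = 10000;
    std::printf("native:   %.2f ms\n", benchmarkMs(nativeWork,   iters));
    std::printf("scripted: %.2f ms\n", benchmarkMs(scriptedWork, iters));
}
```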
AI Testbed

With the advent of the launcher, one thing that I'm looking to do very soon is set up a tool for benchmarking AI algorithms. In order to develop quality AI (dogfighting, trading, piloting, management, etc.), I'll need precise ways to measure the results. By setting up and executing simple test scenarios in LTSL, I envision a world in which I can actually reduce the efficacy of an AI algorithm to a single number! In the case of dogfighting, for example, I will create a very simple (perhaps empty) system, spawn two identical ships, load a 'baseline' dogfighting algorithm into one pilot (for example, the simplistic one that they're all using right now), then load the algorithm to be tested into the second pilot. With a nearly-empty system, I'll be able to run a full-detail dogfight at a time compression factor of 100+ -- meaning I'll know who wins almost instantly. I can run 10, 20, perhaps even 100 rounds of the same dogfight, and output a final win percentage and average damage taken for the tested algorithm. Talk about easy-to-understand feedback!
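In code, that whole testbed reduces to a very small loop. Below is a hedged C++ sketch of it: simulateDogfight is a placeholder for the real high-speed simulation (spawn two ships in an empty system, attach the two pilots, run at 100x+ compression), and the bookkeeping around it just tallies wins and damage, then reports the averages.

```cpp
// Sketch of the dogfight testbed loop: pit a candidate algorithm against a
// baseline for N rounds, then report win rate and average damage taken.
#include <cstdio>
#include <random>

struct FightResult {
    bool   candidateWon;
    double candidateDamageTaken;
};

// Placeholder: the real tool would run the full-detail dogfight at maximum
// time compression; here we just fake a result so the harness is runnable.
FightResult simulateDogfight(std::mt19937& rng) {
    std::bernoulli_distribution        win(0.5);
    std::uniform_real_distribution<double> dmg(0.0, 100.0);
    return { win(rng), dmg(rng) };
}

int main() {
    std::mt19937 rng(42);
    const int rounds = 100;
    int wins = 0;
    double totalDamage = 0.0;

    for (int i = 0; i < rounds; ++i) {
        FightResult r = simulateDogfight(rng);
        wins        += r.candidateWon ? 1 : 0;
        totalDamage += r.candidateDamageTaken;
    }

    std::printf("win rate:         %.1f%%\n", 100.0 * wins / rounds);
    std::printf("avg damage taken: %.1f\n",   totalDamage / rounds);
}
```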
Over time, I anticipate being able to continually build up algorithms of increasing skill until they're ultimately at a point where they blow away the old one. At that time, they'll become the new baseline for the next test, and I'll keep moving on up!

Breeding AI Algorithms...
That's great, right? But is it as far as we can push the idea? Surely not...
Testbeds get us automatic quality feedback. That's one piece of the puzzle. What's the other piece? Automatic exploration of the solution space. If you marry an automatic solution grader with an automatic solution generator, something magical happens: a feedback loop of self-improvement that requires no intervention. It's no coincidence that evolutionary computation has become an increasingly-popular and powerful tool. It provides a way to automatically explore a space of solutions. Again, when that exploration is automatically guided by a function that can determine the quality of a solution... magic.
What I'm getting at is this: if we can develop a mechanism for generating random AI programs, then all we need to do is plug that mechanism into the AI testbed and wait for our AI to rule the world.
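As a sketch of what that feedback loop might look like in practice: generate a population of candidate AI programs, grade each one, keep the best, mutate them, and repeat. The C++ below uses a trivial parameter-vector representation and a toy fitness function purely for illustration; a real version would score candidates with the dogfight testbed, and evolving actual LTSL programs would need a richer representation (e.g. genetic programming over script ASTs).

```cpp
// Sketch of the grader + generator loop: evaluate a population of candidate
// "algorithms", keep the elite, and refill the population with mutated copies.
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

using Candidate = std::vector<double>;   // stand-in for an AI algorithm's tunable parameters

// Hypothetical grader: in practice this would run the AI testbed and return
// something like win rate against the current baseline.
double evaluate(const Candidate& c) {
    double score = 0.0;
    for (double p : c) score -= (p - 0.7) * (p - 0.7);   // toy objective
    return score;
}

int main() {
    std::mt19937 rng(1);
    std::normal_distribution<double>       mutation(0.0, 0.05);
    std::uniform_real_distribution<double> init(0.0, 1.0);

    const int popSize = 32, elite = 8, generations = 50, numParams = 4;

    std::vector<Candidate> pop(popSize, Candidate(numParams));
    for (auto& c : pop)
        for (auto& p : c) p = init(rng);

    for (int g = 0; g < generations; ++g) {
        // Grade every candidate, best first.
        std::sort(pop.begin(), pop.end(), [](const Candidate& a, const Candidate& b) {
            return evaluate(a) > evaluate(b);
        });

        // Refill the rest of the population with mutated copies of the elite.
        for (int i = elite; i < popSize; ++i) {
            pop[i] = pop[i % elite];
            for (auto& p : pop[i]) p += mutation(rng);
        }

        std::printf("gen %2d  best fitness: %.4f\n", g, evaluate(pop[0]));
    }
}
```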
I've got a lot more ideas surrounding this one, including giving AI pilots true 'skill' levels by assigning them algorithms with different testbed performance ratings. I've also got some fun ideas about how this will ultimately form itself into a 'ranking' system in the game, allowing players (AI and human alike) to work their way up a tiered skill ladder in different areas of gameplay by comparing their performance to one another. Perhaps you'd like to become one of the few feared 'Black Diamond'-level dogfighters in the universe?
Fun times ahead!
PS ~ First time ever successfully pushing back the infamous devlog deficit? Just more proof that there is time.