

Re: The Limit Theory Email/RSS News Thread

Friday, April 6, 2018

Happy Friday o7

This log is going to be short (EDIT: modestly-sized) and underwhelming. I'm tired and lacking in the usual flair due to a long and not-so-great month. Real life has been both more time-consuming and more exhausting than usual this month, and that's all I'm going to say on the matter :monkey:

Concerning the last devlog: yeah, I got it :cry: That was pretty rough, forum, but I got the message. I haven't touched the thing in a while now. I do think some people missed the emphasis on me needing a tool to keep it all manageable; at the same time, I can't deny it, I've been known to fall prey to ShinyTechTools once or twice in the past :oops: So, regardless of who is right or wrong, I've turned back to 100% gameplay focus for a while in hopes that it'll help those who are feeling anxious about the state of LT. I know it's been five years but...let's relax a bit. Getting overly-worked-up about this game doesn't help anyone!


My focus right now is on the economy and AI. I'm working to get back to a small, functional economy where the AI is performing basic gameplay mechanics to create minimal-but-real market activity. This means: mining, navigation / pathing, trading. From there I will expand by porting more of the high-level AI, in particular, project management so that AI players will be able to choose between activities and dynamically react to economic conditions. Most of this stuff is just a matter of translating things that already exist (in C++, LTSL, or my brain) into Lua, so it's not very difficult. I've got the market mostly-working; the bulk of the remaining work is in AI porting.

Adam has burned through a lot of tasks this month, many of which have been on TODO lists for a long, long time. I can't hope to list it all out, but the man has probably touched every code file in both the engine and the game at least once in March :lol: All hail Adam \o/ On the gameplay front, he's brought over the top bar for switching between various interfaces, and we're both working to populate it with UI content. We've got a WIP command interface, to be joined shortly by a port of the scanning/exploration interface.

All-in-all, things move quickly when we're working on the game side of the game, and, as far as I can tell, we don't have any real blockers on that front at the moment, so...smooth sailing. At some point I will have to go back and commit to either finishing the last 10% of, or scrapping, the tool-which-must-not-be-named, but that doesn't have to be done right now. Lord knows we all need a nice, long ride on the gameplay train to restore some sanity :squirrel:

I'll post shinies when I have them, but right now there's not really much to look at, especially considering you all have seen this stuff before (mining, markets, etc...). Nonetheless, when I've got a bustling system of AI activity working again I'll slap some screenshots up.


Recently I've been doing more thinking (about the game). Remember when I used to do that? Think? Yes, it was fun! Since this log is short and I (regrettably) don't have enough work to talk about, I'll just talk about an idea that has been on my mind this week, old-devlog-style.

A few days ago I started thinking about the birth of cities and how it must be quite an exciting process -- imagining a settlement starting with just a few shoddy abodes, watching it sprawl out over time into a bustling metropolis as wealth pours in. SimCity, I guess. It made me sad to think that this process doesn't really occur in LT, since civilian life is largely hidden behind the black-box veil of colonies. We have space stations, of course, but those are large, discrete investments. We can try to think about the growth of a single station over time as new modules are added. But it's still boring compared to the 'organic' growth of something like a city, where the building blocks from which the whole is born are absolutely minuscule in comparative size.

That's really the key, too, isn't it? When the superstructure is made from atoms that are 'tiny' compared to the whole -- the buildings that make up a city are tiny compared to the city itself, the cells that make up living beings are microscopic compared to the whole, etc. -- that's when the growth process (and I dare say, the final result) is the most interesting. It's this granularity that makes it interesting in the first place! We can and will see such growth processes in many places in LT. But civilian life is largely absent, and it makes me a bit sad. So, what can we do about it?

As with many of my ideas, the answer may well be: nothing. And that'd be fine. But another possible answer is: 'microstations.' Or, to strip the idea of all pomp: "why don't we just do in space what we do on the ground?" Think about how we can make the equivalent of a 'building' in space. Instead of having to have monolithic stations, what if we thought more in terms of 'ship-sized' modules? What if large 'factory' modules -- the kind that scifi/space sims take for granted as being the norm -- were the exception rather than the rule? What if a small settlement could form, one household at a time, around a large, unusually-rich asteroid, in a completely granular fashion, until the population has reached a point of saturating the natural resource yield? Imagine small little 'space houses,' like organic scaffolding hugging the rock. Perhaps such houses could even be converted from ships (yes, I'm talking about trailer parks in space). Perhaps this would be the precursor to a superstructure like a station. Perhaps a (civilian) station is not built, so much as it is grown.

The idea appeals to me on many levels. It makes economic granularity vastly better, which means jump-starting the economy is easier, making sure it can sustain itself by growing and shrinking as necessary becomes easier...basically all the problems with coarse discretization go away. It also makes space feel more 'alive' and 'welcoming' to me. Home can be anywhere now, it doesn't have to just be the handful of stations/colonies nearby. Of course, I've not implemented anything like this before, nor have I played a space game with these constructs in it, so I could be imagining a false feeling...but I don't think I am. There's something to it -- walking through Ald-Ruhn/Suran/Balmora, having people cross your path, seeing their homes nearby (yes, I played some Morrowind recently, sue me. Outlander.) It feels warm, alive. I always wanted space to feel that way. Not so cold and desolate. Maybe I should continue to give some thought to spicing up the civilian side of things.


That's all for today. April should be better for us work-wise (and, by extension, devlog wise), as real life is promising to be less obtrusive than last month. The 100% gameplay commitment doesn't hurt either :)


Link to original: viewtopic.php?f=30&t=6473
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: The Limit Theory Email/RSS News Thread

Friday, April 20, 2018

Yet again I bid you a happy Friday, fellow pilots!

It's been a fun two weeks. I've concentrated my efforts entirely on the game simulation and high-level AI, in pursuit of a working economy, as per my last log. I'm pleased to report that, after two weeks, we do indeed have a small but working economy happening.

Major Additions Since Last Time:
  • System 'economy caching' ported & re-worked from old code; helps AI agents reason about job and market availability within a system or zone
  • High-level AI reasoning; forms the basis of AI players' ability to dynamically choose a profession based on profitability analysis
  • Basic colony population dynamics (helps create a time-varying economic sink/demand for basic goods, thus seeding the system economy)
  • Market mechanics now fully-implemented including escrow, on-station/on-colony storage lockers for temporary storage of bought goods or canceled sell orders, etc etc.
  • Limited implementation of Zones -- already in-use by AI for reasoning about job locations, but no zone gameplay mechanics (ownership, laws, etc.) yet
  • Happened upon a new algorithm for individual asteroid/ice/debris/whatever placement within fields, resulting in much more natural looking fields (no longer are they obviously ellipsoids :oops: )

At this point I've ported most if not all of the important simulation & high-level AI features that were previously implemented, meaning that I'm now getting to think about and solve new problems -- a welcome departure from porting! The next step for me is smoothing out the volatility of the economy and AI behavior. It's somewhat interesting that periodic/cyclic patterns always seem to emerge in my basic simulated economies when AI agents don't have access to historical data. That result is pretty obvious I guess, but still, interesting to see "those who don't remember the past are doomed to repeat it" play out so literally on-screen. The cyclic behavior can be seen as far back as Development Update #15 (March 2014!), when I introduced colony dynamics and AI job switching for the first time. I've never done a great job of smoothing over this volatility before, but I'm quite convinced that it's a pretty simple matter of factoring in historical data (EMAs mostly) + having a distribution of various AI behaviors with respect to time scale. Some AI agents should act on fast-moving EMAs, making 'short-sighted' decisions about jobs & markets, while others should act on slow/long-period averages, making 'long-term' decisions -- together, the result is a smoothing of the economy at all time scales.
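To make the fast/slow idea concrete, here's a rough Python sketch (illustrative only -- the real LT code is Lua, and all names here are made up). Fast agents track prices with a high EMA smoothing factor and react quickly; slow agents use a low factor and only react to long-term trends:

```python
# Hypothetical sketch: agents reacting to fast vs. slow EMAs of a price signal.
def ema_update(prev, sample, alpha):
    """Standard exponential moving average step; alpha in (0, 1]."""
    return prev + alpha * (sample - prev)

class Agent:
    def __init__(self, alpha):
        self.alpha = alpha  # near 1: 'short-sighted' agent; near 0: 'long-term' agent
        self.ema = None

    def observe(self, price):
        self.ema = price if self.ema is None else ema_update(self.ema, price, self.alpha)

    def wants_job(self, price_threshold):
        # Decide based on the smoothed price, not the raw (noisy) one.
        return self.ema is not None and self.ema > price_threshold
```

With a mix of alphas across the population, a sudden price spike pulls in only the fast agents at first, so the whole population no longer stampedes at once.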

Here you can see the overly-volatile economy in an 8-planet (the other six are off-screen), 50K AI agent simulation. Notice the jagged population graph as well as the obviously-visible 'flocks' of blue AI ships, which are due to market conditions changing so rapidly that thousands of AI units decide to change jobs all at once, hence the 'mass migrations.' Of course, so many units changing behavior all at once will cause yet another major shift in market conditions later on, which will, in turn, produce yet another flock of dissenters, and so on, ad infinitum :geek: With historical averages factored in, this would be a different story.

(NOTE: I know these screenshots are atrocious, but that's part of the point. When I work on the game simulation, I need to be focused 100% on behavior & dynamics, and 0% on graphics/tertiary concerns! As you can see, that is very much the case here :lol: )


Here you can see how a colony that has just recovered from a population crash (and is about to experience a large period of growth) is attracting droves of water traders due to high demand and correspondingly-high prices. Having no access to historical data, the traders are doomed to oversupply the colony, indirectly setting the economy up for the next crash.



I've spent a fair amount of time this week reading papers on market economy simulation (of our own planet, just to be clear). Never before have I really dove deeply into the colony simulation; previous iterations of colony dynamics were still quite placeholder, and really just designed to create an elastic demand for basic goods. The problem of colony simulation is important to me not only because I want the simulation dynamics of LT to create interesting, meaningful behaviors and opportunities, but also because the problem of simulating a colony is precisely the problem of performing a coarse simulation of a (sizable) economy (which is important to us for many reasons, including OOS system simulation and historical simulation at universe-creation-time). Ideally, insights uncovered in my quest to implement a decent colony simulation will bear fruit that can be applied toward the 'big daddy' of remaining problems in LT development: OOS/historical simulations.

Thus far, research has been fairly uninspiring. Many papers in this field address the elephant-in-the-room fact that the field itself has produced models of consistently-poor accuracy. It is not really surprising to me when you look at the models and equations in question :ghost: Lucky for me, I don't care about predicting what will happen to the global economy of Earth...I only care about creating interesting dynamics for fictitious universes! Since I've been having trouble finding inspiring reading on this topic, I would welcome any sources that you guys might know of -- papers, articles, books or the like that you may have stumbled across that have good insights into quantitative models/simulations of global economies/populations/anything interesting. In the end I'm sure my model will end up being simple (like everything I love)...likely just a vector of quantities and a Jacobian of their relationships; but I do like being inspired along the way, and my brain is enjoying getting to read new solutions to new problems again!


Going forward, my next steps are:
1. Recording & factoring historical data into AI reasoning
2. Capital Expenditure in AI & simulation (purchasing new ships, building a new station, warp rails, etc.)
3. Information mechanics in AI & simulation

2 and 3 are both highly-unexplored territory for me, so I'm excited to dive in. Information, in particular, is one of the few remaining 1.0 mechanics that really lacks in past or present implementation. I did have information itself implemented in LTC++, but none of the AI algorithms actually used information correctly. The ability of an AI agent to perform a job should depend on whether or not the agent actually knows about the location and/or associated object of the job. In addition, AI agents need to be able to place value on information that 'unlocks' new job/action possibilities, which strikes me as being very similar to capital expenditure in the sense that it's a one-time cost that provides continuous future benefit (it is inherently difficult to formulate a 'correct' value for such costs).
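One (very naive) way to put a number on such a one-time cost is to discount the stream of future benefit it unlocks. This is purely my own toy formulation, not LT's actual approach:

```python
# Illustrative only: valuing a one-time cost (an info purchase, a new station)
# that yields a continuous future benefit, via discounted profit-per-day.
def present_value(profit_per_day, horizon_days, daily_discount=0.99):
    """Sum of discounted daily profits over a planning horizon."""
    return sum(profit_per_day * daily_discount**t for t in range(horizon_days))

def worth_buying(price, extra_profit_per_day, horizon_days=100):
    # Buy when the discounted future benefit exceeds the up-front cost.
    return present_value(extra_profit_per_day, horizon_days) > price
```

The hard part, of course, is estimating `extra_profit_per_day` for a piece of information you haven't acted on yet -- which is exactly why it resembles the capital expenditure problem.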

I'm hoping to have all of this (economy, simulation, high-level AI) in good shape by the end of the month or perhaps in another two weeks. That's an ambitious goal, to say the least -- we're talking about a pretty massive chunk of what makes LT LT here. Still, I think it's at least possible to have the framework and general strategies for all of this done by then. Naturally I will have to tweak constants and so forth when we playtest and realize that the AI is actually too smart and is ruining the player experience ( :ghost: :P ), but having all of the algorithmic bits and general solutions in place will certainly make me feel better about remaining dev effort.

That's all for today, back to coding, see you soon o7


Link to original: viewtopic.php?f=30&t=6483

Re: The Limit Theory Email/RSS News Thread

Monday, April 30, 2018

Hi :wave:

I had so much progress last week that I felt it would be unwise to wait until this Friday to share. To that end, I started writing a log last Friday. Sadly for me, happily for you, it was too long to finish, so I finished it today! Hope you enjoy :)

Flow-Based Economic Simulation

Last time, I discussed economic volatility, and how simplistic models & AI can (and do) cause instability and constant cycles of overshooting, correction, over-correction, and on and on. Since then, I've implemented historical data tracking for market items, which allows AI agents to see and act on data aggregated over various time scales.

However, while giving more thought to colony dynamics and their relationship to LOD economy simulation (as mentioned last time, they are really the same problem), I began to see the whole problem in a new light. A day later, the economy was humming along with a stability and robustness never-before-seen in LT's history!

The insight is simple: it's easier to balance rates than absolute quantities. If you're watching the food supply of a colony, you'll need to observe it for a while to know whether there's a net surplus or deficit, especially if food is subject to lots of noisy processes happening on lots of different timescales (and almost everything interesting is). If, on the other hand, you could see all of those processes listed out with their 'average rate' (in food/day, for example), all you would need to do is sum those rates and you'd know whether, in the long run, there would be a surplus or deficit.

But it gets much better than that. Let's think of the entire economy as a flow network -- for a concrete analogy, a system of water pipes (or a circuit, whichever you prefer). At each 'node' (a colony, a station, ...), we can keep track of the flow of economic quantities. Perhaps colony A -- a densely-populated urban capital -- requires 300 tons of grain/day. So we keep track of a flow of grain, at A it has value -300. Think of this like a 'negative pressure' at A (a drain/sink). 10 different traders, each capable of moving (round-trip) 20 tons of grain/day, decide to haul grain from colony B -- a rural and primarily agricultural establishment -- which has a surplus of +200 tons grain/day, to A. Now we update net flow values: at A, we go from -300 to -100. At B, we go from +200 to 0. Obviously, this is a 'good decision' on the traders' part, because they have significantly reduced the total pressure in the economy, bringing it closer to perfect supply/demand equilibrium. Just like current flows naturally between voltage gradients, just as water flows naturally between pressure gradients, so too do economic quantities flow naturally between supply/demand pressure gradients. This whole analogy borders on common sense. And common sense tends to work well when one can find it :)
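The bookkeeping for the grain example above is almost trivially simple -- here it is as a Python sketch (names are mine; the real implementation lives in Lua):

```python
# Sketch of the flow bookkeeping from the grain example: track net flow per
# node; a job is 'good' if it reduces the total |pressure| in the economy.
def total_pressure(flows):
    return sum(abs(v) for v in flows.values())

flows = {'A': -300, 'B': +200}   # A consumes 300 t/day, B produces 200 t/day
before = total_pressure(flows)   # 500

# 10 traders, each hauling 20 t/day round-trip from B to A:
haul = 10 * 20
flows['B'] -= haul               # B's surplus is consumed: +200 -> 0
flows['A'] += haul               # A's deficit shrinks:     -300 -> -100
after = total_pressure(flows)    # 100
```

The traders cut total pressure from 500 to 100 -- a good decision, immediately visible from rates alone, with no need to watch stockpiles over time.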

The most obvious concern that we might raise about this technique is that it heavily depends on accurate estimation of the result of various economic activities. If traders compute a flow value of +100 per unit time but are only able to deliver +10 in reality, the economy will settle into a wildly-inefficient equilibrium wherein markets are constantly understocked due to what we might playfully interpret as a 'pervasive overoptimism concerning how much can be delivered in a certain amount of time' on the part of the AI. ([clairvoyance] 'Joke' about Josh being a flow-based AI with this very issue :roll: [/clairvoyance]) The solution to this is two-fold: first, use real math to estimate things. The AI in LT is already quite accurate in its ability to estimate job impacts. It will virtually never get things wrong by more than a factor of 2, much less by an order of magnitude. Second, to refine accuracy even further, we can compute a corrective term for the calculated flow value based on market data! We can split the supply/demand terms in our flow calculations and use them as follows: suppose a market has computed flow of 15 supply of death sticks, 10 demand for death sticks (per day). Then we expect, on average, for market data to show about 10 trade volume per day, and about +5 total supply volume. If market data tells us that the average trade volume is 50 sticks/day with 0 change in total supply, we can guess that our flow calculation is probably wrong and that ~50 supply / ~50 demand is a better estimate for death sticks. Also, with that many death sticks trading per day, the folks at this market clearly need to go home and rethink their lives.
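Here's a sketch of that corrective step in Python (the blend rule and names are my own simplification of the idea, not the actual LT math):

```python
# Hedged sketch: sanity-check computed flow estimates against observed market
# data, and fall back to the observed figures when they disagree strongly.
def corrected_flow(calc_supply, calc_demand, obs_trade_volume, obs_supply_change):
    # Expected observations implied by the computed flows:
    exp_trade = min(calc_supply, calc_demand)   # e.g. 15 supply / 10 demand -> ~10
    # If observed volume is wildly off from the expectation, trust reality:
    if abs(obs_trade_volume - exp_trade) > exp_trade:
        supply = obs_trade_volume + max(obs_supply_change, 0)
        demand = obs_trade_volume - min(obs_supply_change, 0)
        return supply, demand
    return calc_supply, calc_demand
```

With the death sticks numbers from above (computed 15/10, observed volume 50 with no net supply change), the correction lands at ~50 supply / ~50 demand.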

It's worth noting that, even with 100% accurate AI estimates, calculating flow corrections is still necessary since the player can have a sustained impact on the economy, but does not report this impact to market nodes like the AI does.

Anywho, that's a lot of theory talk, but...does it work? You bet! It works really, really well. In my simulations, the flow technique quickly finds optimal equilibria, even in complex systems where the optimal economic structure is quite complicated. What's more, since the AI is always thinking about how to optimize the flow / minimize the 'pressure' of the economy, we actually see some interesting dynamics play out as we change the number of assets operating in a system. Watch!

In the following simulation, 1000 ships is simply not enough to saturate the total water demand of 8 colonies (their demand is kept constant for the purposes of this simulation; in the real simulation some of those colonies would die out since the economy can't support them all). Still, the AI applies some fluid-dynamics-like reasoning to try to make sure that the colonies are each 'minimally undersupplied.' The resulting equilibrium is quite nontrivial, with some colonies being supplied exclusively through trade, while those in proximity to ice mining locations are supplied directly:


(Note: I have a colony selected, and you can see the market EMAs (exponential moving averages) for water there; notice how the price has settled nicely to the 5-6 credit range and has remained fairly stable through most of history. Just as one would expect, the stability of these flow-based economies is crushingly-superior to my previous methodology!)

Notice how the AI totally ignores two entire ice fields, which it has (correctly) determined are essentially wastes of time in this system. Of course, when we apply more factors to the simulation, like diminishing returns for overpopulated fields, piracy, AI personality, and the like, we will see more interesting dynamics.

I've also introduced variation in size, speed, and cargo capacity to the simulation. The AI correctly takes things like top speed & cargo capacity into account when computing speculative flow values for activities like mining or trading, so in some cases you can actually see interesting patterns emerge from these considerations. In fact, in this shot, you can see one such pattern! Look at the four trade hubs, and the three trade routes connecting them. The traders are basically all tiny! Look at the miners. On average, the trade ships are smaller than the mining ships. Almost all of the trade ships are the minimum size, whereas we see a large variance in miners. I did not code anything that would directly cause or even suggest this behavior. So why does it happen? Given the various constants of this simulation, the AI has reasoned that certain ship properties are more important for trading, while others are more important for mining. Mining ships must sit idly as they extract water from ice. For them, speed is less important than cargo capacity. At least, that's my spot-analysis of what's going on. Note that this isn't indicative of any objective truth -- given different 'universal constants' in the simulation, I would expect the situation to change entirely. The point, though, is that the AI has taken the specifics of the simulation and figured out how to craft optimal behavior with them. Nice.
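A toy throughput model shows why the trade-off tilts this way (my own simplification, not LT's actual math): when a big chunk of each trip is fixed time -- extraction for miners, docking for traders -- speed matters less and cargo matters more.

```python
# Illustrative round-trip throughput: tons/day given speed, cargo capacity,
# and the fixed time spent per trip (extraction, docking, etc.).
def tons_per_day(cargo, speed, distance, fixed_time):
    trip_time = 2 * distance / speed + fixed_time
    return cargo / trip_time

def speed_gain(cargo, speed, distance, fixed_time):
    """Relative throughput gain from doubling a ship's speed."""
    base = tons_per_day(cargo, speed, distance, fixed_time)
    fast = tons_per_day(cargo, 2 * speed, distance, fixed_time)
    return fast / base - 1
```

With a short fixed time (trading), doubling speed boosts throughput by ~67% in the example below; with a long fixed extraction time (mining), the same speed boost yields under 5%. Cargo, on the other hand, scales throughput linearly for both.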

Now, if we crank up to 2000 ships, the situation changes:


The colonies can now be supplied adequately, so there is less pressure on the AI to choose optimal water-supplying jobs. In fact, remember that the goal is to minimize differences in supply/demand (net flow) -- so the AI is going to (again, correctly!) select 'bad' jobs for some ships, because doing so ensures that colonies are not flooded with surpluses! This is the only reason for choosing to mine in the far ice fields, which you can see some ships are now doing. It's actually interesting to note that the AI is not applying 'rational capitalist' behavior here, but rather 'rational collectivist' behavior; some units are performing intentionally-suboptimal work in order that the whole can be optimal. Philosophical arguments aside, this works out well for our purposes of simulating a predominantly-AI-driven economy :geek:
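The 'collectivist' choice falls straight out of the objective function. A minimal sketch (hypothetical structure, mine not LT's): each ship evaluates jobs by the total pressure that would remain afterward, so dumping surplus onto a balanced node scores worse than doing a low-yield job far away -- or doing nothing at all.

```python
# Each ship picks the job that most reduces total |net flow|, even if that job
# is individually unprofitable.
def pick_job(flows, jobs):
    """flows: node -> net flow. jobs: list of (name, node -> flow delta)."""
    def pressure_after(deltas):
        nodes = set(flows) | set(deltas)
        return sum(abs(flows.get(n, 0) + deltas.get(n, 0)) for n in nodes)
    return min(jobs, key=lambda job: pressure_after(job[1]))[0]
```

Note that this greedy, one-ship-at-a-time version is just for illustration; with many ships deciding concurrently you'd want them to 'reserve' their flow contributions, which is exactly what reporting speculative flows to market nodes accomplishes.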

Finally, in a 10,000-ship simulation, the economy is completely over-saturated:


Every possible job is seeing heavy activity. Trade develops along most potential trade routes; mining is in full swing at every location. The analogy to a network of water pipes with way too much water flowing in is apt. We can see the spray of ships here saturating the bursting economy in a very literal way.

In summary: economic volatility is gone, equilibrium is here, and the AI is generally much more capable of setting up well-structured economies that take into account all of the nuances of the star system and game constants. Long live flow-based economics \o/ As I finish more game mechanics and implement the corresponding AI jobs for them, we will continue to see a richer and richer spectrum of emergent behaviors and economic configurations.

Information, Discovery, and Non-Omniscient AI

With one problem solved, I moved on to the next and began implementing information and discovery mechanics. I've already written quite a bit, so I won't go as deeply into this work, but I'm certain there will be more to come. At this point, I've implemented the fundamentals: entities can be made 'discoverable,' and if they are, a list of players that know about the entity is tracked. Furthermore, for the first time ever, the high-level AI is respecting this limitation on information, which means that an AI agent must know about a zone before it can begin mining there, must know about a market before it can trade there, must know about a wormhole before it can compute a course that uses said wormhole, etc!
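In sketch form, the bookkeeping is as simple as it sounds (hypothetical Python structure; the real implementation is engine-side):

```python
# Minimal sketch of 'discoverable' entities: track which players know about
# each entity, and filter AI job candidates down to known targets only.
class Entity:
    def __init__(self, name, discoverable=False):
        self.name = name
        self.discoverable = discoverable
        self.known_by = set()   # players that know about this entity

    def discover(self, player):
        if self.discoverable:
            self.known_by.add(player)

def known_targets(player, entities):
    """AI planning only considers targets the player actually knows about."""
    return [e for e in entities if not e.discoverable or player in e.known_by]
```

Non-discoverable entities (e.g. the system's colonies in the simulation below) are known to everyone by default; discoverable ones start hidden until someone finds them or buys their location.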

This is a very exciting step toward delivering on one of the promises that's near and dear to my heart: exploration as a real, profitable job. With AI players restricted and only able to use known information in their high-level planning, the ability to profit from discoveries becomes a natural (even essential) game mechanic.

When discussing information mechanics in LT, I am frequently asked the question: "so that means that if you discover a new wormhole and sell the location to AI, the major trade routes, maybe even the entire economy, could change completely?" and of course my answer is: "You bet" :) Indeed, we're so deprived of this lovely dynamic behavior in single player games that something as simple as AI responding to new information seems downright magical. Like so many things, it is, in truth, far simpler to implement than people imagine :nerd:

So let's have some proof of that answer, yes? Here we have a 1500 agent simulation in the same system shown above. This time, however, restricted information is turned on. AI players are initially given information of all colonies, but only one ice field. Obviously, this makes the initial equilibrium very, very different from the one shown above, in which the AI knew about all fields.


As expected, we see a completely new structure arise due to the information constraints. Again, the structure consists of some colonies being directly-supplied mining hubs, while others are connected through a trade network.

Now, I hit a button to give the AI information of all fields in the system. I have been a very busy explorer, mapping out these 5 other fields, which I now sell all at once to have maximal impact on the economy :) Shortly after the new information has been revealed to the AI, the economy begins to break down and disperse. It is preparing to change shape....


And after a while, as expected, it comes back to the equilibrium shown in the previous section, since the two simulations are now operating with the same information:


Voila! As desired, the discovery of new information can completely re-shape the economy! There's a lot more to talk about when it comes to this mechanic: what information does the AI have to begin with? How often does it discover new information without player interference? Will there be anything new under the sun by the time I arrive? It's too much to discuss at the moment. But I have given much thought to these questions this week and have solid plans for how most of this is going to work.


Just for fun, let's have one final shot with 100 tiny planets. You could imagine that they're stations instead of planets. Let's just see what the AI can come up with in an unrepresentatively-complex system:


Amazing :shock: The AI creates a network not unlike several little hearts pumping blood through arteries, forming a mining 'core' around each ice field, then developing trade routes that fan out to reach the far colonies. All of this behavior is emergent and self-organizing. Why does each colony 'belong' to one and only one 'core'? Why do cores structure themselves like little spanning trees? Why do we sometimes see a far colony supplied by both a trade route to a near colony and direct mining from the core (and why do these 'far' mining operations always seem to be conducted by only the largest ships?) In each case, I'm sure we could spend quite some effort analyzing the situation and uncover why the choice makes sense, which pieces of the simulation have contributed to it being optimal, and so on. For me it is already enough to see this behavior and be happy that we have Real Stuff™ driving the game :geek: :thumbup:

Capital expenditure was my final todo item from last time, and, while I've developed some theory that I think will work nicely, I haven't yet implemented it and thus won't talk about it in this log. After all, I've already gone on for quite some time. But I will give you a little teaser and say: flow-based economics makes capital expenditure much more tractable and even affords a formulaic way to compute the best investment -- be it weapon upgrades, a new research project, or the construction of a new station -- at any point in time. It's still a challenging problem, but is much easier with flow information.

Given that I've spent a fair bit of time on this log, I'm not certain that I'll be posting as early as this Friday. We'll see; if I have lots of exciting developments then you'll hear from me again this week, otherwise count on next week. In the meantime, I'm also going to be getting a KS update out this week (but it won't really be exciting for those who have followed the logs).

Farewell o/


Link to original: viewtopic.php?f=30&t=6491

Re: The Limit Theory Email/RSS News Thread

Friday, May 11th, 2018

Update time! Let's jump straight in.

What have I been working on lately? A little bit of everything, as usual.
  • A boatload of UI polish and new features.
  • A cleanup pass on the engine.
  • Completely overhauled the way we generate engine bindings for Lua.
  • Reorganized and simplified our Lua support code.
  • Implemented the 'control bar' for switching between e.g. ship control, command view, etc.
  • Refactored camera control to allow smooth transitions between different cameras.
  • Re-implemented the command view.
  • Designed zone control mechanics.

UI Polish
UI elements now store their local position instead of their global position. Storing global positions was sort of an experiment to see how it actually ends up working in practice. It certainly has a few pros. It's dead simple, for one. Comparing positions and checking for intersections is trivial. It doesn't make supporting different resolutions much harder, as one might initially expect. On the other hand, once you have something like scroll views, it gets a little hairy.

My first thought was to have the scroll view modify the view matrix at the renderer level. This way child elements of the scroll view would never even know they were offset. This was nice since dealing with an offset didn't leak out of the scroll view itself, but it caused a performance hit on some machines due to an OpenGL quirk. Storing global space also meant parent elements would have to pass a delta position to all children when the parent moved. And adding children with a relative offset from the parent was trickier since sometimes we build chunks of UI before attaching them to the UI and therefore without knowing their global position.

Storing local position and origin simplifies all of that. Sure, it means we have to think about whether we want to be in local or global space, but that largely gets pushed down into helper functions, and we have to do the same thing for 3D objects anyway. It actually ended up reducing the amount of code in a few places in the UI elements themselves.
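To make the trade-off concrete, here's a minimal sketch (in Python, with hypothetical names, not the engine's actual widget code) of the local-position scheme: each widget stores a position relative to its parent, and global position is derived on demand by walking the parent chain.

```python
class Widget:
    """Minimal UI element storing a position local to its parent."""
    def __init__(self, local_x, local_y, parent=None):
        self.local_x, self.local_y = local_x, local_y
        self.parent = parent

    def global_pos(self):
        # Walk up the parent chain, accumulating offsets. A scroll view
        # just shifts its own local origin, so no delta ever has to be
        # propagated down to children when an ancestor moves.
        x, y = self.local_x, self.local_y
        node = self.parent
        while node is not None:
            x += node.local_x
            y += node.local_y
            node = node.parent
        return x, y

root = Widget(100, 50)
panel = Widget(10, 20, parent=root)
button = Widget(5, 5, parent=panel)
print(button.global_pos())  # (115, 75)
```

Moving `root` now implicitly moves every descendant with no delta propagation, and chunks of UI can be built before being attached, since a widget's stored coordinates never depend on where it ends up in the hierarchy.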

At Josh's request I also did some light refactoring of the inheritance model of UI Widgets. I wasn't happy with the inheritance to begin with and taking the time to stare at it as a whole and contemplate the pros and cons has utterly convinced me that inheritance is the wrong way to share code.

Lua Binding Generation
We run a script when compiling the engine that parses header files and outputs a set of Lua scripts the game can load so it knows how to talk to the engine. Previously this was...less than ideal. The tool produced type information, but we had to manually write the bindings for each API. We had to manually define and flatten some structs. We had to annotate headers the tool wasn't able to parse correctly. Commented-out code was parsed. LT-specific helper functions couldn't be defined alongside the API functions.

Before the Global Game Jam, Josh wrote a replacement parsing tool that was much simpler, yet more powerful. We used it at the jam and I liked the way it worked, but it was only 50% complete. Luckily, this time around it's written in Lua instead of Python, which is where I'm much more comfortable, so as one of my 'fun day' tasks I decided to finish the tool and migrate over to using it. And oh boy, did it pay dividends. This tool handles everything.

We're able to automatically convert our C-style engine interface into idiomatic Lua object code. The engine types are defined as opaque structs, and every function that starts with 'TypeName_' is added to a metatable for the type. Functions that take a pointer to the type become object methods, and the rest become 'static functions'. 'TypeName_ToString' functions are automatically bound to __tostring metamethods, which means print(engineType) just works. Structs visible to Lua are parsed, flattened, and sorted to put dependencies first. Commented code is ignored, preprocessor checks are evaluated, and warnings are emitted when preprocessor checks exist that may not match.

Function pointer typedefs are parsed. Enums with underscores are split into hierarchical tables. 'Metadata' is stored so other code can enumerate all engine types. Currently this is useful for creating CType entries for native engine types. The tool outputs a single 'loader file' that loads the engine DLL (taking into account 32/64 bit and debug/release configurations), and a binding file for each engine API. The whole thing returns a table hierarchy that can be used like so: PHX.TypeName.APIFunction(). And there are hook points defined so that, when loading a set of API bindings, the game can inject additional functions into a 'namespace' and have them be indistinguishable from true engine API. Previously we had quite a few 'helper scripts' which contained functions the game needed but didn't quite belong in the engine. Trying to remember if Lerp is in PHX.Math or Math is...dumb.
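The core classification rule described above boils down to a prefix check plus a first-parameter check. Here's an illustrative Python sketch of that rule (made-up inputs; the actual tool is written in Lua and parses real C headers):

```python
def classify(type_name, func_name, first_param_type):
    """Map a C function name onto a Lua-style binding entry, following the
    naming convention: 'TypeName_' prefix joins the type's metatable,
    pointer-to-type first arg means object method, ToString -> __tostring."""
    prefix = type_name + "_"
    if not func_name.startswith(prefix):
        return None  # not part of this type's API
    short = func_name[len(prefix):]
    if short == "ToString":
        return ("metamethod", "__tostring")
    # lowerCamelCase the method name, e.g. Directory_GetNext -> getNext
    lua_name = short[0].lower() + short[1:]
    if first_param_type == type_name + "*":
        return ("method", lua_name)   # takes a pointer to the type
    return ("static", lua_name)       # everything else

print(classify("Directory", "Directory_GetNext", "Directory*"))   # ('method', 'getNext')
print(classify("Directory", "Directory_Open", "cstr"))            # ('static', 'open')
print(classify("Directory", "Directory_ToString", "Directory*"))  # ('metamethod', '__tostring')
```

The real generator does much more (struct flattening, dependency sorting, loader emission), but this naming-convention core is what makes the bindings fully automatic.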

So what does this end up looking like? Well, here's the original C header
[Spoiler: original C header]
And the generated bindings
[Spoiler: generated Lua bindings]
Note how Directory_Close and Directory_GetNext have been mapped to object methods close and getNext while everything else was mapped to non-method functions. onDef_Directory and onDef_Directory_t are the hooks for extensions. Here's what those extensions look like
[Spoiler: extension functions]
We don't ever have to think about bindings now. This tool is awesome.

Engine Cleanup
After fixing up the bindings I was reminded of, and annoyed by, just how haphazardly scripts were organized and loaded. We had Limit Theory scripts, Phoenix scripts, and general Lua utilities all clumped together inside LT. Our other tools and testbeds always end up reimplementing the same general utilities because those utilities aren't easily reused. I separated everything into three layers (Env, PHX, and LT) and moved the first two into our shared assets folder. Env is general Lua utilities and PHX is engine bindings and extensions.

I also standardized a bunch of the Env scripts, added helpful functionality, and fleshed out unfinished ideas. My favorite products of that are requireAll and Namespaces. requireAll is a straightforward way to load all scripts in a directory recursively and return a hierarchical table. Under the hood it's using the built-in require and package.path, which means it works completely seamlessly alongside normal Lua. Namespaces let us inject, and optionally flatten, those tables into the Lua global symbol table. No prefixing a bunch of code with PHX or Env: PHX.Vec3f(0, 1, 0) gets simplified to Vec3f(0, 1, 0), but the PHX table still exists for disambiguating symbols when necessary. Previously we had manually written scripts that loaded every script in a directory (non-recursively) and returned a table. I especially enjoyed nuking those.
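The requireAll idea can be sketched in a few lines. This is a Python approximation for brevity (the real version is Lua and require()s each module via package.path; here we just record paths to keep the sketch dependency-free):

```python
import os

def require_all(root):
    """Recursively gather scripts under `root` into a nested table (dict)
    that mirrors the directory hierarchy -- a sketch of requireAll."""
    table = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            # subdirectories become nested tables
            table[entry] = require_all(path)
        elif entry.endswith(".lua"):
            # the real implementation would require() the module here
            table[entry[:-4]] = path
    return table
```

Given `scripts/UI/Button.lua`, you'd get `table["UI"]["Button"]`, and a Namespace layer can then flatten those entries into the global table while keeping the hierarchy around for disambiguation.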

There was also a ton of smaller stuff involved, like standardizing header layouts and macro name casing, simplifying ArrayList, tackling some old TODOs, and separating LT from the 'launcher' code.

One of my favorites was updating the Lua stacktrace that is printed during a crash. It already printed the names of all functions on the stack, but now it prints local variables, function parameters, and upvalues as well. It uses any engine-provided ToString functions or Lua-provided __tostring metamethods for friendlier printing, and it highlights any nils using ANSI escape codes. Together, this means 9 times out of 10 we instantly know exactly what went wrong, rather than having to spend a couple minutes scanning the code for issues or trying to reproduce the crash. Seeing as Lua is awful and lets you crash at runtime because of a mistyped variable name, this happens quite often, and the extra output already saves us a ton of time.
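For a feel of how much locals-in-stacktraces help, Python's standard library offers a comparable facility (shown purely as an illustration; this is stdlib `traceback`, not the engine's Lua machinery):

```python
import sys
import traceback

def fail(x):
    y = x * 2
    return y + undefined_name  # NameError at runtime: a typo'd variable, Lua-style

try:
    fail(21)
except NameError:
    # capture_locals=True records each frame's local variables alongside the trace
    te = traceback.TracebackException(*sys.exc_info(), capture_locals=True)
    report = "".join(te.format())

print(report)  # the 'fail' frame lists x = 21 and y = 42
```

With the locals printed, the mistyped name and the values that led up to it are visible in the crash report itself, which is exactly the "instantly know what went wrong" effect described above.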

These backlog, cleanup-type tasks can be a nice way to relax after more difficult work, and the reward-to-effort ratio is huge. But you're sick of infrastructure stuff, right?

Command Interface
Getting back to gameplay, I started working on re-implementing the command interface. I started by codifying the concept of a Control. From an earlier post you may recall that the simulation is an autonomous thing and the UI simply allows the player to poke the state of the simulation. Controls are the UI panels that accept player input and do the poking. There's a Control for each method of interaction with the simulation. For example, the ShipControl when piloting, the CommandControl when commanding a fleet, or the DebugControl that lets us view and edit internal machinery. Only one Control is active at a time, but a single Control can contain arbitrarily complex UI within it.

The first step toward implementing that was to add a MasterControl that determines which Controls are available and lets you switch between them. This is visible as a small bar at the top of the screen where you can change the active control, very similar to what was in the prototype. It auto-hides and has shortcut keys and all that jazz.

Switching out an active tree of widgets exposed a couple issues in the UI system. For this to work smoothly I added the ability to enable and disable widgets. Structurally this is a smooth transition that can happen with a fade or other animation. Previously we'd just destroy and recreate widgets as necessary because it's cheap, and honestly we could have continued doing that, but it ended up being cleaner to enable and disable as needed. This way Controls can maintain state when inactive instead of having to stash that information somewhere and re-load it next time.

I also reworked the way widgets are added to and removed from the hierarchy. We defer adds and removes so we don't have to worry about the list of widgets changing while we're in the middle of iterating through and updating them. Previously we processed adds and removes at the very end of the frame. That wasn't ideal for a few reasons: 1) we'd draw a removed widget for one more frame after it was removed; 2) we wouldn't draw an added widget until the next frame; 3) the first time a widget was updated it would not have a valid layout. This all stems from the order in which UI events are processed:

Code:

By moving the add/remove logic from after draw to between update and layout we fix all 3 of those issues. I also added an extra mouse focus check after add/remove so there should never be any form of one frame delay on widgets appearing/disappearing, gaining/losing focus, extra/skipped updates, etc.
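The deferral pattern itself is small. Here's a Python sketch (hypothetical names, not the engine code) of queuing mutations during update and flushing them between update and layout:

```python
class UIRoot:
    """Sketch: adds/removes requested mid-iteration are queued, then
    applied between update and layout rather than at end of frame."""
    def __init__(self):
        self.widgets = []
        self._to_add, self._to_remove = [], []

    def add(self, w):
        self._to_add.append(w)

    def remove(self, w):
        self._to_remove.append(w)

    def frame(self):
        for w in list(self.widgets):
            w.update(self)        # handlers may call add()/remove() freely
        self._flush()             # moved here from after draw
        for w in self.widgets:
            w.layout()            # new widgets get a valid layout...
        return list(self.widgets) # ...and are 'drawn' this same frame

    def _flush(self):
        for w in self._to_remove:
            if w in self.widgets:
                self.widgets.remove(w)
        self.widgets.extend(self._to_add)
        self._to_add.clear()
        self._to_remove.clear()

class Widget:
    def __init__(self, name, spawn=None):
        self.name, self.spawn = name, spawn
    def update(self, root):
        if self.spawn:            # e.g. a button opening a menu this frame
            root.add(Widget(self.spawn))
            self.spawn = None
    def layout(self):
        pass

root = UIRoot()
root.widgets.append(Widget("button", spawn="menu"))
drawn = root.frame()
print([w.name for w in drawn])  # ['button', 'menu']: no one-frame delay
```

Because the flush happens before layout and draw, a widget spawned during update is laid out and rendered in the very frame it was requested, and a removed widget never gets a ghost frame.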

Next up was making sure switching between camera types was smooth. The ship control uses a 'chase camera' that follows close behind the ship. The command control uses an 'orbit camera' that can be freely rotated and moved. These camera types are actually just movement logic. We have a 'real camera' that handles the viewport and updating the rendering matrices. I modified the cameras to write position and rotation as the final output so it's simple to calculate an offset and lerp it to zero when switching cameras, which gives a perfectly smooth transition. This should have been extremely straightforward, but it turns out our rotation math is not consistent across all parts of the engine. I spent more time than I would have liked digging through our quaternions and matrices to understand what was going on. I didn't end up completely fixing it because it's tricky to do without breaking existing code and I didn't want to spend the time on it right then. I did write fixed versions of the broken code and added some tests to make it easier to suss out other issues when the time comes. This is a good candidate for my next 'fun day' task.
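The offset-lerp trick is tiny in principle. Here's a 1-D Python sketch (illustrative only; the real code works on vectors and quaternions, which is exactly where the rotation-math inconsistencies bit):

```python
def lerp(a, b, t):
    return a + (b - a) * t

class CameraBlend:
    """On a camera switch, record the offset between the old and new
    camera outputs, then lerp it to zero each frame so the rendered
    position glides smoothly onto the new camera's path."""
    def __init__(self, old_pos, new_pos):
        self.offset = old_pos - new_pos   # start exactly at the old camera

    def apply(self, new_pos, dt, rate=4.0):
        # rate is an illustrative smoothing constant
        self.offset = lerp(self.offset, 0.0, min(1.0, rate * dt))
        return new_pos + self.offset      # converges to the new camera

blend = CameraBlend(old_pos=10.0, new_pos=0.0)
positions = [blend.apply(new_pos=0.0, dt=0.1) for _ in range(5)]
print(positions)  # glides from near 10.0 toward 0.0
```

Since the blend only touches the final positional output, the underlying chase and orbit cameras never need to know a transition is happening.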

On the visual side I wanted to add the 'holographic view' of the previous command interface. I dug out the old holographic shader and implemented the ability to globally override rendering.

Then, of course, I had to get the meat of the control in: unit selection, setting and restoring unit groups, and issuing orders. Selection works in the obvious way: click and drag to select, hold ctrl to add to selection, shift to remove from selection, or both to invert selection. Since ships have this habit of moving around constantly I added a button to focus on the current selection. It moves the camera to the center of the objects and zooms to fit them on screen (taking into account their bounding boxes). And for fun I added a way to lock focus so the camera will follow selected objects when they move. It's quite satisfying to select your allies, order them to attack some poor miner, and sit back and watch it play out. It feels almost theatrical with the camera smoothly following the action.

Of course, this all led to more UI iteration. I ensured keyboard focus moves appropriately when menus appear and disappear. I added 'modal' windows that are automatically closed/cancelled when you interact with something behind them. I improved the way containers calculate their size during layout passes, so things like context menus get clamped to the screen automatically. I combined the old 'refresh focus when widgets are added/removed' and the new 'refresh focus when widgets are enabled/disabled' logic and drastically simplified it.

Here it is in action. Note that the visuals are all placeholder; this hasn't had a beautification pass.
[Spoiler: command interface in action]

Design and Next Task
Now that we're solidly in gameplay I'm going to need to do occasional design work to help Josh flesh out some systems. To that end I did an initial design of how zone control is going to work. Josh then ordered me to play some Freelancer to ensure I understand the heritage of Limit Theory.

Next up on my list is docking mechanics. The first pass will be the infrastructure: keybindings for docking, knowing when it's possible to dock, swapping out the current control with a docking control (merchants, storage locker, etc), and changing to some fancier camera. The second pass will be iterating on that until it feels nice. And a third pass will add some transitions and generally just make it sexy.

Phew. That's a bit of a wall of text. I'll try to make the next one shorter.

P.S. Tess has gotten pretty big!
[Spoiler: photo of Tess]

Link to original: viewtopic.php?f=30&t=6498

Re: The Limit Theory Email/RSS News Thread

Monday, May 21, 2018

Hi! It's Monday, which means I'd rather write this log than code :ghost:

Since our last encounter, I've been racking my brain on the 'final piece' of the flow economy: capital expenditure (in particular, the upgrading of existing assets and the acquisition of new ones). CapEx is necessary to make the economy grow. We already have a sturdy equilibrium; now we require a force to grow it out of the void.

I must admit, I have been underwhelmed by my search for algorithmic harmony here. I have not had breakthroughs or clever thoughts. That being said, I believe that this is partly because the right answer to the problem is not clever at all, but actually quite dumb. I am intentionally writing this log prematurely so that I may think through it more in words before I implement it.

This problem is, as I mentioned previously, quite difficult. To know the value of a new asset, one must theoretically be able to predict the impact of that asset on the global economy. In particular, one must (again, only theoretically -- in practice we will approach it very differently) be able to predict the impact upon all other agents in the system. Like a game of chess, much of the difficulty in deciding what to do with your turn (or in LT, your credits) lies in speculatively simulating the decisions of others.

The difficulty only really manifests at large scales -- a critical fact that I will go on to exploit vigorously in subsequent paragraphs. If we're talking about acquiring a new fighter escort, a new mining vessel, etc., we can make some reasonable estimates about what's going to happen based on what's already happening. Such an asset is only a small 'perturbation' to the existing economic landscape; hence, we do not need to be concerned that adding a lone new ship to our fleet will upset the entire dynamical system and subsequently render our decision wasteful. The same goes for investing in upgrades to existing assets, which is, in practice, even easier: we invest in our most successful assets and divest of those that no longer justify their upkeep, e.g. upgrading our prized bounty fighter's weaponry, hiring an escort wing for a mining barge that has struck diamond, selling off an old battleship that hasn't seen action in a year, and so on.

Now, consider the construction of a generic trade station. Or of a factory. Or really any such 'large' asset. The situation is not so easy. By their nature, such assets have the potential to disrupt the whole of the local dynamics -- they are more than just perturbations! Following this line of thought and trying to solve the problem of "should I construct a trade station here" in an algorithmically/mathematically satisfying manner is a great way to drive oneself to madness and computational despair as the recursion unfolds before one's eyes :) But fear not: if stations were made of sand, perturbations would mollify all our angst.


Perturbative Quantum Flow Economics
(Look, sometimes I just want to sound fancy, alright?)

Chris Martin wrote:Oh but if you never try, you'll never know
Just what you're worth

Suppose that, instead of building a large trade station, you built a tiny one. Let's say, for the sake of discussion, that you built a single 'quantum' of trade station (although it's not important for this discussion, we could rigorously define such a quantum as, for example, the capacity to handle one transaction per unit time, or perhaps the capacity to handle the transaction of one unit of matter per unit of time, etc...). It would barely impact anyone. Barely.

However, if this capacity for handling transactions is valuable at the spatial location at which we placed our trade station quantum, then we will, in fact, see it being used. The AI is constantly looking for ways to optimize economic 'pressure,' as discussed in previous logs. If that transactional capability presents an opportunity to do so, then it will be used, despite being a small opportunity. We will see, for example, one miner choosing to drop off at our station instead of a further destination, and perhaps another AI ship choosing to trade between our station and another node in the economy (to balance the flow). We can then see, via flow measurement, that our station is being used! Thus, by introducing a differential change to the system, we have extracted a measurement of the change's differential value. And that's all we need :geek:

You see, by taking our purchase of a new asset into the domain of the quantum -- that is, by making the smallest change that it is possible to make to the system, what we have actually done is converted the problem of reasoning about a new asset into the problem of reasoning about an existing one, making the assumption that it is effectively 'free' to purchase a single quantum (minimal discrete unit) of any given asset (this assumption is important and I will probe it further later).

Now we will either kill off our micro-station, or grow it, based on the value measurement obtained from flow data. The algorithm for doing so is the same one that we will use to upgrade any other asset. Eventually, if we continue investing in our station, we will reach a critical point at which additional capacity for transactions will remain unused due to providing no further benefit to the system. Our station has thus reached adulthood and we may leave it be :) It is as though our little quantum station feeds on economic flow until it has reached the limit of its usefulness, at which point it ceases growth. It does not have to be a station; naturally this logic extends to any asset, although the trick of breaking down any given asset type into a single 'quantum' of sufficiently-small size is non-trivial.

But what if the station is in a suboptimal location? Sure, maybe it was viable to put it at X, but what if having put it at Y would have made everyone's life even easier? I claim that it doesn't matter! Here's the beauty: sooner or later, another perturbation will come along, and if it's better than our station, we will slowly-but-surely lose business to it. Sooner or later, optimality will be evolved naturally through competition. Even if our station wasn't optimal, it was good enough to survive, and that meant that it provided value to the system. That's all that matters. If a day comes when the system is so finely-tuned that the suboptimality of its location actually matters, a perturbation will come along and unseat it, eventually growing into the station that will replace it. Such is the nature of competitive evolution. A business that provides something fundamentally new has an easy time growing. It is only later, when adequate competition comes along, that the new market is pushed toward efficiency.

In summary: if you could make a minimally-small investment, it would be easy to invest; just invest in random things, and then grow or shrink your investment according to performance. As long as a 'minimally-small investment' is negligible in comparison to your wealth, you'll be ok. While I don't recommend this advice for real-world businesses, it is a simple and elegant technique well-suited to game AI.
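The grow-or-kill rule can be sketched as a toy simulation (Python, with illustrative thresholds and a made-up `adjust_capacity` helper; not LT's actual AI code): seed a minimal quantum of an asset, measure how much of its capacity the economy actually uses, then grow while utilization stays high and divest when it drops.

```python
def adjust_capacity(capacity, used, quantum=1.0, hi=0.9, lo=0.1):
    """One investment step. hi/lo utilization thresholds are illustrative."""
    utilization = used / capacity if capacity > 0 else 0.0
    if utilization >= hi:
        return capacity + quantum            # demand is there: invest another quantum
    if utilization <= lo:
        return max(0.0, capacity - quantum)  # unused: divest, possibly to zero
    return capacity                          # equilibrium: leave the asset be

# A station seeded at one quantum, facing steady demand for 5 units of capacity:
cap = 1.0
for demand in (5.0, 5.0, 5.0, 5.0, 5.0):
    cap = adjust_capacity(cap, used=min(cap, demand))
print(cap)  # 6.0: grown past demand, where utilization drops and growth stalls
```

After overshooting demand, utilization falls into the middle band and the station stops growing, which is the "reached adulthood" state described above; had demand been zero, the same rule would have shrunk it back out of existence.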


Now that you've heard the basics, there are several good questions to be asking:
1. What does 'minimal discrete unit' look like for stations, ships, warp rails, etc? Can all assets be made granular?
2. What of the fact that it's not actually free to purchase things, no matter how small? Doesn't this necessitate thought in our 'random' investment, even for 'quantum'-sized ones?
3. An investment being viable doesn't make it the best, or even close to the best, use of our money, does it?
4. I don't want to see all of this clutter in the game world. I don't want to live in the same sandbox in which AI players are relentlessly experimenting with tiny, bad ideas.

1 is a question of design. It can be made to work, though it has implications that need to be carefully considered for, e.g., capital ships.

2 necessitates either a 'refund' for initial investments, or a 'good-enough' heuristic algorithm for suggesting them in the first place. It requires further thought, but such quasi-random suggestions (followed by more careful analysis) are already at the heart of much of the AI, so it would seem a natural fit.

3 is a non-issue for the same reason that the suboptimal positioning discussed above is a non-issue.

4, despite not being a question, is a little troublesome, but can mostly be waved away by saying "things in the prime of their growth aren't always pretty, it's the result that matters." Indeed, it's no problem if this wild experimentation happens mostly in the historical simulation phase. However, it is true that systems 'on the frontier' of developed space could end up a bit chaotic, and I'm mostly OK with that.


So, does it actually work?


Find out next time! It took an annoyingly-long time to convince myself that there is no simple, tractable solution other than "just try it." In the process, I managed to get myself tangled in a variety of other mechanics that I'm working out in parallel, namely, faction formation, faction goals, and AI algorithms for choosing faction alignment. I had suspected that factions were involved in the answer to capital expenditure, or at least had hoped that they would ease the burden, but so far they have not. Still, it's nice to have some faction theory happening at the same time. Never hurts to be thinking about the big picture.

In the coming week I will be exploring the behavior of systems that employ this granular approach to expansion without concerning myself with the questions enumerated above. First we will take a peek and see if the dynamics are nice. If so, we will use any means necessary to justify them and resolve the questions :geek: With luck, it should be a week of many baby quanta.

Once again, I will inform of any major breakthroughs if they occur this week, else I look forward to having something to show for this theory next week. I apologize for not being clever enough this time, but then again, sometimes it is quite clever to be stupid :ghost: :monkey:


Link to original: viewtopic.php?f=30&t=6513

Re: The Limit Theory Email/RSS News Thread

Monday, June 11, 2018

Hey everyone. I was, as you know, hoping to post a while back, but I'm afraid my most recent endeavors have found less success than last month's work. Despite my frustration with how long these problems are taking to solve, the work is, nonetheless, highly rewarding.

Multi-Commodity Economies with Production

It should come as no surprise that I have been working on the expansion of the economy / system development / 'capital expenditure.' As detailed in my last log, my formative ideas on the subject were "you gotta try it." Sadly, a careful scrutiny of that thought process reveals an unfortunate blunder: I was thinking too much in terms of a trivial economy. I kept speaking about trade stations and how to position them. This is a hopelessly difficult problem, and it's no wonder that my only answer was "try it!"

Indeed, as embarrassingly obvious as this is in retrospect, it was only upon writing the code that I began to realize how pointless trade stations (and interesting developments in general) really are in a system where the economy consists of mining ice and selling it to colonies :ghost:

Now, if we were mining ice to take to an ice refinery, which produces water, various minerals, and trace amounts of Talvienium, and water is required as a coolant to nuclear reactors, which pump out the energy cells necessary to power Talvienium Warhead Factories, which of course supply everybody's favorite missiles, but alas, colonies also demand water, etc. -- now here is a setup where we can actually start to reason about new capital assets.

So, my time over the last few weeks has been spent primarily on implementing factories, production mechanics, and on getting the AI to a place where it can make such an economy work smoothly. This work is a great chance to begin scaling up the game content to a representative size/complexity, which is a major goal for the coming months.

Using Net Flow to Make Smart Choices

Now, back to the problem of deciding how to spend money. With a multifaceted economy, the question is actually much easier. With access to flow data, the algorithm becomes more-or-less common sense: we sum the flow values for the entire system ('net flow'), then choose the asset whose contribution to this sum would maximally reduce total pressure.

In my own bizarre terminology this sounds a little obtuse, but a concrete example will make it clear that this is, frankly, just common sense:

Code:

  Gamma Centauri
    Ice Refinery
      - 50 ice/s
      + 100 water/s
      + 5 Talvienium/s
    Nuclear Reactor
      - 1 isotopes/s
      - 10 water/s
      + 100 energy cells/s
    Ballawhalla Prime
      - 50 water/s
      - 200 energy cells/s
    Ice Mining Barge 1
      + 20 ice/s @ Ice Refinery
    Ice Mining Barge 2
      + 20 ice/s @ Ice Refinery
    Water Trader 1
      - 10 water/s @ Ice Refinery
      + 10 water/s @ Nuclear Reactor

    Net flow (system total):
      - 10 ice/s
      + 40 water/s
      - 1 isotopes/s
      + 5 Talvienium/s
      - 100 energy cells/s
(It is interesting to note, by the way, how the 'flexibility' of a mobile asset is represented above by the fact that we can use it to create a flow 'at' a specific location or between two locations, whereas a static asset like a factory is inherently its own sink/source location. Thinking about the economy in general as a graph, and mobile assets as allocable to edges in that graph, is a fruitful line of thinking :geek:)

Clearly, Gamma Centauri has several net flow problems that we could address: there's a slight ice shortage, but that's not nearly as pressing as the isotope shortage, since the nuclear reactor is going to be stalled indefinitely if we don't address that problem. We could use more energy cells, but building another nuclear reactor is out of the question unless we solve that isotope shortage first. Someone should do something with that Talvienium, because right now it's just going to pile up at the ice refinery.

Assuming there's a source of isotopes in the system, the obvious choice is to buy a new mining ship and send it off to go mine isotopes and deliver them to the nuclear reactor. After that, we should consider building another reactor to put that extra water to use and solve the energy shortage. Each of these changes will inevitably reshape parts of the economy, but at the end of the day, we can always take a new sum of flows and get a decent idea of what needs doing in the area.

As demonstrated by this example, flow data is useful for more than just decisions that involve a single node or a connection between two nodes; by summing flow data for all entities in a specific place, we can quickly determine the net flow for the whole, thus enabling reasoning about the global impact of various choices. Naturally, this strategy of hierarchical flow aggregation can be applied more generally to zones, systems, and even entire regions. If we want the AI to think more globally, we can throw a bit of regional flow weighting into the decisions, such that AI players will address shortages/surpluses that aren't localized to a single system.
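The summation step from the Gamma Centauri example is literally just adding per-commodity flows across assets and reading off the most negative totals. A Python sketch using the numbers above (illustrative only; asset names and flows are from the example, the scoring is not LT's actual algorithm):

```python
from collections import Counter

# Per-asset net flows (units per second), taken from the example above.
# Water Trader 1 moves -10 water at the refinery and +10 at the reactor,
# so its contribution to the system total nets to zero.
assets = {
    "Ice Refinery":       {"ice": -50, "water": +100, "Talvienium": +5},
    "Nuclear Reactor":    {"isotopes": -1, "water": -10, "energy cells": +100},
    "Ballawhalla Prime":  {"water": -50, "energy cells": -200},
    "Ice Mining Barge 1": {"ice": +20},
    "Ice Mining Barge 2": {"ice": +20},
    "Water Trader 1":     {},
}

net = Counter()
for flows in assets.values():
    net.update(flows)          # Counter.update adds values per commodity

shortages = sorted((f, c) for c, f in net.items() if f < 0)
print(dict(net))
print(shortages)  # [(-100, 'energy cells'), (-10, 'ice'), (-1, 'isotopes')]
```

Raw magnitude isn't the whole ranking, of course: as noted above, the tiny isotope deficit outranks the energy-cell one because it stalls the reactor indefinitely, so a real scorer would weight flows by their downstream consequences rather than by size alone.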

Fitting Prices to Flow and Vice-Versa

In all this talk of flow, we seem to have mostly sidestepped money and prices. But money is clearly a crucial piece of the economic puzzle. At the end of the day, everybody needs to get paid. How do we make sure that everyone gets paid when decisions and balancing are performed on the basis of resource flow rather than dollar bills? Moreover, how do we ensure that the flow of money 'conforms' to the flow-based model? The problem is harder than it may at first sound, because it involves bridging the gap between rates and instantaneous events.

Let's think about the initial decision to create a water trader for linking the ice refinery to the nuclear reactor (from our above scenario). Obviously it's a good decision that needs to happen in order for our economy to work. In flow terms, water flow at the refinery goes from +100 to +90, and at the reactor from -10 to 0. At both endpoints, flow is pushed toward 0 (a net flow of 0 is the ultimate goal), so the decision is a win-win. It's important to recognize the monetary implication here: water can be bought at the refinery for a lower price than the reactor will pay for it. Otherwise, the decision isn't profitable (which contradicts both common sense and our flow data). Evidently, resource flow shapes prices. Moreover, it is obvious from this thought experiment that pricing must be proportional to resource flow in order for price-based decisions and flow-based decisions to be equivalent. To be even more precise, since flow is a rate but prices are instantaneous, what this actually means is that average price must be proportional to resource flow. Price fluctuations that balance one another out are permissible.

Sadly, I am now reaching the end of that which I've actually worked out thus far. I'm not yet confident in my pricing algorithms, although I do know, generally-speaking, how to resolve the sustained / instantaneous dichotomy with a temporal pricing model, such that average price agrees with resource flow. With regard to the specifics, I am still developing ideas and watching how the (now significantly more-involved) economy reacts to new AI algorithms. Ultimately, I'm trying to get it all to a point where things stabilize to a good equilibrium. For my purposes, 'good' means that factories are achieving close to 100% uptime by stocking enough supplies and setting prices correctly to ensure regular supply deliveries, traders are continuously choosing profitable trade routes that alleviate demand, AI players are continuously monitoring the economy to change how assets are allocated/switch jobs when necessary, and so on. Interestingly, it is completely obvious when flow-based reasoning doesn't match price-based reasoning, because the AI will quickly go broke due to making trades that are flow-favorable yet not profitable :ghost: Again, with a correct pricing model, that should not happen (at least, never in the long run).


I apologize for not having pulled through with enough brainpower this time, but such is life. I am really hoping to have some better insights this week (but even if I don't, the brute-force method of trying a lot of things and seeing what works is close to completion, so perhaps it will all be resolved by sheer force of will...).

Until next time :wave:

Link to original: viewtopic.php?f=30&t=6528
