
Re: The Limit Theory Email/RSS News Thread

#46
Friday, April 6, 2018

Happy Friday o7

This log is going to be short (EDIT: modestly-sized) and underwhelming. I'm tired and lacking in the usual flair due to a long and not-so-great month. Real life has been both more time-consuming and more exhausting than usual this month, and that's all I'm going to say on the matter :monkey:

Concerning the last devlog: yeah, I got it :cry: That was pretty rough, forum, but I got the message. I haven't touched the thing in a while now. I do think some people missed the emphasis on me needing a tool to keep it all manageable; at the same time, I can't deny it, I've been known to fall prey to ShinyTechTools once or twice in the past :oops: So, regardless of who is right or wrong, I've turned back to 100% gameplay focus for a while in hopes that it'll help those who are feeling anxious about the state of LT. I know it's been five years but...let's relax a bit. Getting overly-worked-up about this game doesn't help anyone!


---


My focus right now is on the economy and AI. I'm working to get back to a small, functional economy where the AI is performing basic gameplay mechanics to create minimal-but-real market activity. This means: mining, navigation / pathing, trading. From there I will expand by porting more of the high-level AI, in particular, project management so that AI players will be able to choose between activities and dynamically react to economic conditions. Most of this stuff is just a matter of translating things that already exist (in C++, LTSL, or my brain) into Lua, so it's not very difficult. I've got the market mostly-working; the bulk of the remaining work is in AI porting.

Adam has burned through a lot of tasks this month, many of which have been on TODO lists for a long, long time. I can't hope to list it all out, but the man has probably touched every code file in both the engine and the game at least once in March :lol: All hail Adam \o/ On the gameplay front, he's brought over the top bar for switching between various interfaces, and we're both working to populate it with UI content. We've got a WIP command interface, to be joined shortly by a port of the scanning/exploration interface.

All in all, things move quickly when we're working on the game side of the game, and, as far as I can tell, we don't have any real blockers on that front at the moment, so...smooth sailing. At some point I will have to go back and commit to either finishing the last 10% of, or scrapping, the tool-which-must-not-be-named, but that doesn't have to be done right now. Lord knows we all need a nice, long ride on the gameplay train to restore some sanity :squirrel:

I'll post shinies when I have them, but right now there's not really much to look at, especially considering you all have seen this stuff before (mining, markets, etc...). Nonetheless, when I've got a bustling system of AI activity working again I'll slap some screenshots up.


---


Recently I've been doing more thinking (about the game). Remember when I used to do that? Think? Yes, it was fun! Since this log is short and I (regrettably) don't have enough work to talk about, I'll just talk about an idea that has been on my mind this week, old-devlog-style.

A few days ago I started thinking about the birth of cities and how it must be quite an exciting process -- imagining a settlement starting with just a few shoddy abodes, watching it sprawl out over time into a bustling metropolis as wealth pours in. SimCity, I guess. It made me sad to think that this process doesn't really occur in LT, since civilian life is largely hidden behind the black-box veil of colonies. We have space stations, of course, but those are large, discrete investments. We can try to think about the growth of a single station over time as new modules are added. But it's still boring compared to the 'organic' growth of something like a city, where the building blocks from which the whole is born are absolutely minuscule in comparative size.

That's really the key, too, isn't it? When the superstructure is made from atoms that are 'tiny' compared to the whole -- the buildings that make up a city are tiny compared to the city itself, the cells that make up living beings are microscopic compared to the whole, etc. -- that's when the growth process (and I dare say, the final result) is the most interesting. It's this granularity that makes it interesting in the first place! We can and will see such growth processes in many places in LT. But civilian life is largely absent, and it makes me a bit sad. So, what can we do about it?

As with many of my ideas, the answer may well be: nothing. And that'd be fine. But another possible answer is: 'microstations.' Or, to strip the idea of all pomp: "why don't we just do in space what we do on the ground?" Think about how we can make the equivalent of a 'building' in space. Instead of having to have monolithic stations, what if we thought more in terms of 'ship-sized' modules? What if large 'factory' modules -- the kind that scifi/space sims take for granted as being the norm -- were the exception rather than the rule? What if a small settlement could form, one household at a time, around a large, unusually-rich asteroid, in a completely granular fashion, until the population has reached a point of saturating the natural resource yield? Imagine small little 'space houses,' like organic scaffolding hugging the rock. Perhaps such houses could even be converted from ships (yes, I'm talking about trailer parks in space). Perhaps this would be the precursor to a superstructure like a station. Perhaps a (civilian) station is not built, so much as it is grown.

The idea appeals to me on many levels. It makes economic granularity vastly better, which means jump-starting the economy is easier, making sure it can sustain itself by growing and shrinking as necessary becomes easier...basically all the problems with coarse discretization go away. It also makes space feel more 'alive' and 'welcoming' to me. Home can be anywhere now, it doesn't have to just be the handful of stations/colonies nearby. Of course, I've not implemented anything like this before, nor have I played a space game with these constructs in it, so I could be imagining a false feeling...but I don't think I am. There's something to it -- walking through Ald-Ruhn/Suran/Balmora, having people cross your path, seeing their homes nearby (yes, I played some Morrowind recently, sue me. Outlander.) It feels warm, alive. I always wanted space to feel that way. Not so cold and desolate. Maybe I should continue to give some thought to spicing up the civilian side of things.


---


That's all for today. April should be better for us work-wise (and, by extension, devlog-wise), as real life is promising to be less obtrusive than last month. The 100% gameplay commitment doesn't hurt either :)


~Josh




Link to original: viewtopic.php?f=30&t=6473
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

#47
Friday, April 20, 2018

Yet again I bid you a happy Friday, fellow pilots!

It's been a fun two weeks. I've concentrated my efforts entirely on the game simulation and high-level AI, in pursuit of a working economy, as per my last log. I'm pleased to report that, after two weeks, we do indeed have a small but working economy happening.


Major Additions Since Last Time:
  • System 'economy caching' ported & re-worked from old code; helps AI agents reason about job and market availability within a system or zone
  • High-level AI reasoning; forms the basis of AI players' ability to dynamically choose a profession based on profitability analysis
  • Basic colony population dynamics (helps create a time-varying economic sink/demand for basic goods, thus seeding the system economy)
  • Market mechanics now fully implemented, including escrow, on-station/on-colony storage lockers for temporary storage of bought goods or canceled sell orders, etc.
  • Limited implementation of Zones -- already in-use by AI for reasoning about job locations, but no zone gameplay mechanics (ownership, laws, etc.) yet
  • Happened upon a new algorithm for individual asteroid/ice/debris/whatever placement within fields, resulting in much more natural looking fields (no longer are they obviously ellipsoids :oops: )

At this point I've ported most if not all of the important simulation & high-level AI features that were previously implemented, meaning that I'm now getting to think about and solve new problems -- a welcome departure from porting! The next step for me is smoothing out the volatility of the economy and AI behavior. It's somewhat interesting that periodic/cyclic patterns always seem to emerge in my basic simulated economies when AI agents don't have access to historical data. That result is pretty obvious I guess, but still, interesting to see "those who don't remember the past are doomed to repeat it" play out so literally on-screen. The cyclic behavior can be seen as far back as Development Update #15 (March 2014!), when I introduced colony dynamics and AI job switching for the first time. I've never done a great job of smoothing over this volatility before, but I'm quite convinced that it's a pretty simple matter of factoring in historical data (EMAs mostly) + having a distribution of various AI behaviors with respect to time scale. Some AI agents should act on fast-moving EMAs, making 'short-sighted' decisions about jobs & markets, while others should act on slow/long-period averages, making 'long-term' decisions -- together, the result is a smoothing of the economy at all time scales.
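To make the idea concrete, here's a toy sketch (in Python, with entirely hypothetical names -- LT itself is C++/Lua) of agents acting on EMAs with different smoothing factors:

```python
# Toy sketch of mixed-timescale smoothing: agents act on EMAs of a price
# signal, each with its own smoothing factor. All names are hypothetical.

def ema_update(prev, sample, alpha):
    """Standard exponential moving average update."""
    return alpha * sample + (1.0 - alpha) * prev

class Agent:
    def __init__(self, alpha):
        self.alpha = alpha   # large alpha = fast/short-sighted, small = slow
        self.ema = None

    def observe(self, price):
        self.ema = price if self.ema is None else ema_update(self.ema, price, self.alpha)

    def wants_to_switch(self, threshold):
        # An agent switches jobs only when its *smoothed* view of the price
        # crosses the threshold -- slow agents ignore short-lived spikes.
        return self.ema is not None and self.ema > threshold

# A one-tick price spike: the fast agent reacts strongly, the slow one barely.
fast, slow = Agent(alpha=0.5), Agent(alpha=0.05)
for price in [10, 10, 10, 100, 10, 10]:
    fast.observe(price)
    slow.observe(price)
```

With a whole population spread across a distribution of alphas, job-switching decisions get staggered in time instead of happening all at once -- which is exactly the smoothing effect described above.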

Here you can see the overly-volatile economy in an 8-planet (the other six are off-screen), 50K AI agent simulation. Notice the jagged population graph as well as the obviously-visible 'flocks' of blue AI ships, which are due to market conditions changing so rapidly that thousands of AI units decide to change jobs all at once, hence the 'mass migrations.' Of course, so many units changing behavior all at once will cause yet another major shift in market conditions later on, which will, in turn, produce yet another flock of dissenters, and so on, ad infinitum :geek: With historical averages factored in, this would be a different story.

(NOTE: I know these screenshots are atrocious, but that's part of the point. When I work on the game simulation, I need to be focused 100% on behavior & dynamics, and 0% on graphics/tertiary concerns! As you can see, that is very much the case here :lol: )

Image

Here you can see how a colony that has just recovered from a population crash (and is about to experience a large period of growth) is attracting droves of water traders due to high demand and correspondingly-high prices. Having no access to historical data, the traders are doomed to oversupply the colony, indirectly setting the economy up for the next crash.

Image

---

I've spent a fair amount of time this week reading papers on market economy simulation (of our own planet, just to be clear). Never before have I really dived deeply into the colony simulation; previous iterations of colony dynamics were still quite placeholder, and really just designed to create an elastic demand for basic goods. The problem of colony simulation is important to me not only because I want the simulation dynamics of LT to create interesting, meaningful behaviors and opportunities, but also because the problem of simulating a colony is precisely the problem of performing a coarse simulation of a (sizable) economy (which is important to us for many reasons, including OOS system simulation and historical simulation at universe-creation-time). Ideally, insights uncovered in my quest to implement a decent colony simulation will bear fruit that can be applied toward the 'big daddy' of remaining problems in LT development: OOS/historical simulations.

Thus far, research has been fairly uninspiring. Many papers in this field address the elephant-in-the-room fact that the field itself has produced models of consistently-poor accuracy. It is not really surprising to me when you look at the models and equations in question :ghost: Lucky for me, I don't care about predicting what will happen to the global economy of Earth...I only care about creating interesting dynamics for fictitious universes! Since I've been having trouble finding inspiring reading on this topic, I would welcome any sources that you guys might know of -- papers, articles, books or the like that you may have stumbled across that have good insights into quantitative models/simulations of global economies/populations/anything interesting. In the end I'm sure my model will end up being simple (like everything I love)...likely just a vector of quantities and a Jacobian of their relationships; but I do like being inspired along the way, and my brain is enjoying getting to read new solutions to new problems again!

---

Going forward, my next steps are:
1. Recording & factoring historical data into AI reasoning
2. Capital Expenditure in AI & simulation (purchasing new ships, building a new station, warp rails, etc.)
3. Information mechanics in AI & simulation

2 and 3 are both highly-unexplored territory for me, so I'm excited to dive in. Information, in particular, is one of the few remaining 1.0 mechanics that has never had a solid implementation, past or present. I did have information itself implemented in LTC++, but none of the AI algorithms actually used information correctly. The ability of an AI agent to perform a job should depend on whether or not the agent actually knows about the location and/or associated object of the job. In addition, AI agents need to be able to place value on information that 'unlocks' new job/action possibilities, which strikes me as being very similar to capital expenditure in the sense that it's a one-time cost that provides continuous future benefit (it is inherently difficult to formulate a 'correct' value for such costs).

I'm hoping to have all of this (economy, simulation, high-level AI) in good shape by the end of the month or perhaps in another two weeks. That's an ambitious goal, to say the least -- we're talking about a pretty massive chunk of what makes LT LT here. Still, I think it's at least possible to have the framework and general strategies for all of this done by then. Naturally I will have to tweak constants and so forth when we playtest and realize that the AI is actually too smart and is ruining the player experience ( :ghost: :P ), but having all of the algorithmic bits and general solutions in place will certainly make me feel better about remaining dev effort.

That's all for today, back to coding, see you soon o7

~Josh




Link to original: viewtopic.php?f=30&t=6483

#48
Monday, April 30, 2018

Hi :wave:

I had so much progress last week that I felt it would be unwise to wait until this Friday to share. To that end, I started writing a log last Friday. Sadly for me, happily for you, it was too long to finish, so I finished it today! Hope you enjoy :)


Flow-Based Economic Simulation

Last time, I discussed economic volatility, and how simplistic models & AI can (and do) cause instability and constant cycles of overshooting, correction, over-correction, and on and on. Since then, I've implemented historical data tracking for market items, which allows AI agents to see and act on data aggregated over various time scales.

However, while giving more thought to colony dynamics and their relationship to LOD economy simulation (as mentioned last time, they are really the same problem), I began to see the whole problem in a new light. A day later, the economy was humming along with a stability and robustness never-before-seen in LT's history!

The insight is simple: it's easier to balance rates than absolute quantities. If you're watching the food supply of a colony, you'll need to observe it for a while to know whether there's a net surplus or deficit, especially if food is subject to lots of noisy processes happening on lots of different timescales (and almost everything interesting is). If, on the other hand, you could see all of those processes listed out with their 'average rate' (in food/day, for example), all you would need to do is sum those rates and you'd know whether, in the long run, there would be a surplus or deficit.

But it gets much better than that. Let's think of the entire economy as a flow network -- for a concrete analogy, a system of water pipes (or a circuit, whichever you prefer). At each 'node' (a colony, a station, ...), we can keep track of the flow of economic quantities. Perhaps colony A -- a densely-populated urban capital -- requires 300 tons of grain/day. So we keep track of a flow of grain, at A it has value -300. Think of this like a 'negative pressure' at A (a drain/sink). 10 different traders, each capable of moving (round-trip) 20 tons of grain/day, decide to haul grain from colony B -- a rural and primarily agricultural establishment -- which has a surplus of +200 tons grain/day, to A. Now we update net flow values: at A, we go from -300 to -100. At B, we go from +200 to 0. Obviously, this is a 'good decision' on the traders' part, because they have significantly reduced the total pressure in the economy, bringing it closer to perfect supply/demand equilibrium. Just like current flows naturally between voltage gradients, just as water flows naturally between pressure gradients, so too do economic quantities flow naturally between supply/demand pressure gradients. This whole analogy borders on common sense. And common sense tends to work well when one can find it :)
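The grain example can be written down almost verbatim as a toy ledger (illustrative Python; the node names and numbers are just the ones from the example above):

```python
# Toy flow ledger mirroring the grain example: negative flow = demand
# ('negative pressure' / sink), positive flow = surplus (source).
# Node names and rates are taken straight from the example in the text.

flows = {"A": -300.0, "B": +200.0}   # tons of grain/day at each node

def total_pressure(flows):
    """Total economic 'pressure': sum of absolute net flows."""
    return sum(abs(f) for f in flows.values())

def assign_traders(flows, src, dst, n_traders, tons_per_trader):
    """Route n_traders, each hauling tons_per_trader/day, from src to dst."""
    moved = n_traders * tons_per_trader
    flows[src] -= moved   # surplus at the source is drawn down
    flows[dst] += moved   # deficit at the destination is filled

before = total_pressure(flows)   # 500: far from equilibrium
assign_traders(flows, "B", "A", n_traders=10, tons_per_trader=20)
after = total_pressure(flows)    # 100: much closer to equilibrium
```

The traders' decision is 'good' precisely because it reduces total pressure -- A goes from -300 to -100, B from +200 to 0 -- which is the quantity the AI is trying to minimize.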

The most obvious concern that we might raise about this technique is that it heavily depends on accurate estimation of the result of various economic activities. If traders compute a flow value of +100 per unit time but are only able to deliver +10 in reality, the economy will settle into a wildly-inefficient equilibrium wherein markets are constantly understocked due to what we might playfully interpret as a 'pervasive overoptimism concerning how much can be delivered in a certain amount of time' on the part of the AI. ([clairvoyance] 'Joke' about Josh being a flow-based AI with this very issue :roll: [/clairvoyance]) The solution to this is two-fold: first, use real math to estimate things. The AI in LT is already quite accurate in its ability to estimate job impacts. It will virtually never get things wrong by more than a factor of 2, much less by an order of magnitude. Second, to refine accuracy even further, we can compute a corrective term for the calculated flow value based on market data! We can split the supply/demand terms in our flow calculations and use them as follows: suppose a market has computed flow of 15 supply of death sticks, 10 demand for death sticks (per day). Then we expect, on average, for market data to show about 10 trade volume per day, and about +5 total supply volume. If market data tells us that the average trade volume is 50 sticks/day with 0 change in total supply, we can guess that our flow calculation is probably wrong and that ~50 supply / ~50 demand is a better estimate for death sticks. Also, with that many death sticks trading per day, the folks at this market clearly need to go home and rethink their lives.
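As a rough illustration of the corrective step (a toy heuristic in Python, not the actual LT math -- the function and its exact update rule are my own invention for this sketch):

```python
# Toy corrective step for flow estimates using observed market data.
# Computed flows predict trade volume ~= min(supply, demand) and net
# stock change ~= supply - demand; if observations disagree, nudge the
# estimates toward what the market actually did. Purely illustrative.

def correct_flows(est_supply, est_demand, obs_volume, obs_stock_delta):
    # Observed trade volume is a lower bound on both supply and demand;
    # the observed stock delta tells us which side exceeds the other.
    supply = max(est_supply, obs_volume + max(obs_stock_delta, 0))
    demand = max(est_demand, obs_volume + max(-obs_stock_delta, 0))
    return supply, demand

# The death-stick market from the text: we computed 15 supply / 10 demand
# per day, but observe 50 trades/day with no net stock change.
supply, demand = correct_flows(15, 10, obs_volume=50, obs_stock_delta=0)
# -> roughly 50 supply / 50 demand, matching the reasoning above
```

When the observations are consistent with the estimate (e.g. 10 trades/day with a +5 stock change against the computed 15/10), the correction leaves the estimate untouched.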

It's worth noting that, even with 100% accurate AI estimates, calculating flow corrections is still necessary since the player can have a sustained impact on the economy, but does not report this impact to market nodes like the AI does.

Anywho, that's a lot of theory talk, but...does it work? You bet! It works really, really well. In my simulations, the flow technique quickly finds optimal equilibria, even in complex systems where the optimal economic structure is quite complicated. What's more, since the AI is always thinking about how to optimize the flow / minimize the 'pressure' of the economy, we actually see some interesting dynamics play out as we change the number of assets operating in a system. Watch!

In the following simulation, 1000 ships is simply not enough to saturate the total water demand of 8 colonies (their demand is kept constant for the purposes of this simulation; in the real simulation some of those colonies would die out since the economy can't support them all). Still, the AI applies some fluid-dynamics-like reasoning to try to make sure that the colonies are each 'minimally undersupplied.' The resulting equilibrium is quite nontrivial, with some colonies being supplied exclusively through trade, while those in proximity to ice mining locations are supplied directly:

Image

(Note: I have a colony selected, and you can see the market EMAs (exponential moving averages) for water there; notice how the price has settled nicely to the 5-6 credit range and has remained fairly stable through most of history. Just as one would expect, the stability of these flow-based economies is crushingly-superior to my previous methodology!)

Notice how the AI totally ignores two entire ice fields, which it has (correctly) determined are essentially wastes of time in this system. Of course, when we apply more factors to the simulation, like diminishing returns for overpopulated fields, piracy, AI personality, and the like, we will see more interesting dynamics.

I've also introduced variation in size, speed, and cargo capacity to the simulation. The AI correctly takes things like top speed & cargo capacity into account when computing speculative flow values for activities like mining or trading, so in some cases you can actually see interesting patterns emerge from these considerations. In fact, in this shot, you can see one such pattern! Look at the four trade hubs, and the three trade routes connecting them. The traders are basically all tiny! Look at the miners. On average, the trade ships are smaller than the mining ships. Almost all of the trade ships are the minimum size, whereas we see a large variance in miners. I did not code anything that would directly cause or even suggest this behavior. So why does it happen? Given the various constants of this simulation, the AI has reasoned that certain ship properties are more important for trading, while others are more important for mining. Mining ships must sit idly as they extract water from ice. For them, speed is less important than cargo capacity...at least, that's my spot-analysis of what's going on. Note that this isn't indicative of any objective truth -- given different 'universal constants' in the simulation, I would expect the situation to change entirely. The point, though, is that the AI has taken the specifics of the simulation and figured out how to craft optimal behavior with them. Nice.

Now, if we crank up to 2000 ships, the situation changes:

Image

The colonies can now be supplied adequately, so there is less pressure on the AI to choose optimal water-supplying jobs. In fact, remember that the goal is to minimize differences in supply/demand (net flow) -- so the AI is going to (again, correctly!) select 'bad' jobs for some ships, because doing so ensures that colonies are not flooded with surpluses! This is the only reason for choosing to mine in the far ice fields, which you can see some ships are now doing. It's actually interesting to note that the AI is not applying 'rational capitalist' behavior here, but rather 'rational collectivist' behavior; some units are performing intentionally-suboptimal work in order that the whole can be optimal. Philosophical arguments aside, this works out well for our purposes of simulating a predominantly-AI-driven economy :geek:

Finally, in a 10,000-ship simulation, the economy is completely over-saturated:

Image

Every possible job is seeing heavy activity. Trade develops along most potential trade routes; mining is in full swing at every location. The analogy to a network of water pipes with way too much water flowing in is apt. We can see the spray of ships here saturating the bursting economy in a very literal way.

In summary: economic volatility is gone, equilibrium is here, and the AI is generally much more capable of setting up well-structured economies that take into account all of the nuances of the star system and game constants. Long live flow-based economics \o/ As I finish more game mechanics and implement the corresponding AI jobs for them, we will continue to see a richer and richer spectrum of emergent behaviors and economic configurations.


Information, Discovery, and Non-Omniscient AI

With one problem solved, I moved on to the next and began implementing information and discovery mechanics. I've already written quite a bit, so I won't go as deeply into this work, but I'm certain there will be more to come. At this point, I've implemented the fundamentals: entities can be made 'discoverable,' and if they are, a list of players that know about the entity is tracked. Furthermore, for the first time ever, the high-level AI is respecting this limitation on information, which means that an AI agent must know about a zone before it can begin mining there, must know about a market before it can trade there, must know about a wormhole before it can compute a course that uses said wormhole, etc!
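In pseudocode-ish Python, the mechanic might look something like this (all names invented for illustration; the real implementation is in Lua):

```python
# Minimal sketch of 'discoverable' entities and knowledge-gated AI
# planning, in the spirit of the mechanic described above. Class and
# function names are hypothetical, not the actual LT API.

class Entity:
    def __init__(self, name, discoverable=False):
        self.name = name
        self.discoverable = discoverable
        self.known_by = set()   # players who know this entity exists

    def reveal_to(self, player):
        self.known_by.add(player)

    def is_known_to(self, player):
        # Non-discoverable entities are treated as public knowledge.
        return (not self.discoverable) or (player in self.known_by)

def available_jobs(player, jobs):
    """Filter jobs down to those whose site the player knows about."""
    return [job for job, site in jobs if site.is_known_to(player)]

field_a = Entity("Ice Field A", discoverable=True)
market  = Entity("Colony Market", discoverable=False)
jobs = [("mine", field_a), ("trade", market)]

# Before discovery the AI can only trade; after, it can also mine.
before = available_jobs("ai_1", jobs)
field_a.reveal_to("ai_1")
after = available_jobs("ai_1", jobs)
```

Selling a discovery to an AI player then amounts to calling the reveal step in exchange for credits, after which the buyer's planner immediately sees a larger job set.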

This is a very exciting step toward delivering on one of the promises that's near and dear to my heart: exploration as a real, profitable job. With AI players restricted and only able to use known information in their high-level planning, the ability to profit from discoveries becomes a natural (even essential) game mechanic.

When discussing information mechanics in LT, I am frequently asked the question: "so that means that if you discover a new wormhole and sell the location to AI, the major trade routes, maybe even the entire economy, could change completely?" and of course my answer is: "You bet" :) Indeed, we're so deprived of this lovely dynamic behavior in single player games that something as simple as AI responding to new information seems downright magical. Like so many things, it is, in truth, far simpler to implement than people imagine :nerd:

So let's have some proof of that answer, yes? Here we have a 1500 agent simulation in the same system shown above. This time, however, restricted information is turned on. AI players are initially given information of all colonies, but only one ice field. Obviously, this makes the initial equilibrium very, very different from the one shown above, in which the AI knew about all fields.

Image

As expected, we see a completely new structure arise due to the information constraints. Again, the structure consists of some colonies being directly-supplied mining hubs, while others are connected through a trade network.

Now, I hit a button to give the AI information of all fields in the system. I have been a very busy explorer, mapping out these 5 other fields, which I now sell all at once to have maximal impact on the economy :) Shortly after the new information has been revealed to the AI, the economy begins to break down and disperse. It is preparing to change shape....

Image

And after a while, as expected, it comes back to the equilibrium shown in the previous section, since the two simulations are now operating with the same information:

Image

Voila! As desired, the discovery of new information can completely re-shape the economy! There's a lot more to talk about when it comes to this mechanic: what information does the AI have to begin with? How often does it discover new information without player interference? Will there be anything new under the sun by the time I arrive? It's too much to discuss at the moment. But I have given much thought to these questions this week and have solid plans for how most of this is going to work.

---

Just for fun, let's have one final shot with 100 tiny planets. You could imagine that they're stations instead of planets. Let's just see what the AI can come up with in an unrepresentatively-complex system:

Image

Amazing :shock: The AI creates a network not unlike several little hearts pumping blood through arteries, forming a mining 'core' around each ice field, then developing trade routes that fan out to reach the far colonies. All of this behavior is emergent and self-organizing. Why does each colony 'belong' to one and only one 'core'? Why do cores structure themselves like little spanning trees? Why do we sometimes see a far colony supplied by both a trade route to a near colony and direct mining from the core (and why do these 'far' mining operations always seem to be conducted by only the largest ships?) In each case, I'm sure we could spend quite some effort analyzing the situation and uncover why the choice makes sense, which pieces of the simulation have contributed to it being optimal, and so on. For me it is already enough to see this behavior and be happy that we have Real Stuff™ driving the game :geek: :thumbup:

Capital expenditure was my final todo item from last time, and, while I've developed some theory that I think will work nicely, I haven't yet implemented it and thus won't talk about it in this log. After all, I've already gone on for quite some time. But I will give you a little teaser and say: flow-based economics makes capital expenditure much more tractable and even affords a formulaic way to compute the best investment -- be it weapon upgrades, a new research project, or the construction of a new station -- at any point in time. It's still a challenging problem, but is much easier with flow information.

Given that I've spent a fair bit of time on this log, I'm not certain that I'll be posting as early as this Friday. We'll see; if I have lots of exciting developments then you'll hear from me again this week, otherwise count on next week. In the meantime, I'm also going to be getting a KS update out this week (but it won't really be exciting for those who have followed the logs).

Farewell o/

~Josh




Link to original: viewtopic.php?f=30&t=6491

#49
Friday, May 11, 2018

Update time! Let's jump straight in.

What have I been working on lately? A little bit of everything, as usual.
  • A boatload of UI polish and new features.
  • A cleanup pass on the engine.
  • Completely overhauled the way we generate engine bindings for Lua.
  • Reorganized and simplified our Lua support code.
  • Implemented the 'control bar' for switching between e.g. ship control, command view, etc.
  • Refactored camera control to allow smooth transitions between different cameras.
  • Re-implemented the command view.
  • Designed zone control mechanics.


UI Polish
UI elements now store their local position instead of their global position. Storing global positions was something of an experiment to see how it would actually play out in practice. It certainly has a few pros: it's dead simple, for one, and comparing positions and checking for intersection is trivial. It also doesn't make supporting different resolutions much harder, as one might initially expect. On the other hand, once you have something like scroll views it gets a little hairy.

My first thought was to have the scroll view modify the view matrix at the renderer level. This way child elements of the scroll view would never even know they were offset. This was nice since dealing with an offset didn't leak out of the scroll view itself, but it caused a performance hit on some machines due to an OpenGL quirk. Storing global space also meant parent elements would have to pass a delta position to all children when the parent moved. And adding children with a relative offset from the parent was trickier since sometimes we build chunks of UI before attaching them to the UI and therefore without knowing their global position.

Storing local position and origin simplifies all of that. Sure, it means we have to think about whether we want to be in local or global space, but that ends up largely being pushed down into helper functions, and we have to do that for 3D objects anyway. It actually ended up reducing the amount of code in a few places in the UI elements themselves.
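To make the local-vs-global tradeoff concrete, here's a minimal sketch (in Python, not the engine's actual Lua code; all names are invented): each widget stores a position relative to its parent, and global position is derived by walking up the hierarchy. A scroll view then only has to move its own local origin, and every descendant follows for free.

```python
# Sketch: widgets store LOCAL positions; global position is derived.
class Widget:
    def __init__(self, x, y, parent=None):
        self.local = (x, y)       # position relative to parent
        self.parent = parent

    def global_pos(self):
        # Walk up the parent chain, accumulating offsets.
        x, y = self.local
        node = self.parent
        while node is not None:
            x += node.local[0]
            y += node.local[1]
            node = node.parent
        return (x, y)

root = Widget(100, 50)
scroll = Widget(10, 10, parent=root)
child = Widget(5, 5, parent=scroll)
assert child.global_pos() == (115, 65)

# Scrolling just moves the scroll view's local origin; no deltas are
# pushed to children, and nothing leaks out of the scroll view.
scroll.local = (10, 10 - 30)      # scrolled down by 30 px
assert child.global_pos() == (115, 35)
```

Note how the scroll offset never has to be communicated to children at all, which is the crux of the simplification described above.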

At Josh's request I also did some light refactoring of the inheritance model of UI Widgets. I wasn't happy with the inheritance to begin with and taking the time to stare at it as a whole and contemplate the pros and cons has utterly convinced me that inheritance is the wrong way to share code.


Lua Binding Generation
We run a script when compiling the engine that parses header files and outputs a set of Lua scripts that the game is able to load so it knows how to talk to the engine. Previously this was...less than ideal. The tool produced type information, but we had to manually write the bindings for each API. We had to manually define and flatten some structs. We had to annotate headers the tool wasn't able to parse correctly. Commented out code was parsed. LT specific helper functions couldn't be defined alongside the API functions.

Before the Global Game Jam Josh wrote a replacement parsing tool that was much simpler, yet more powerful. We used it at the jam and I liked the way it worked, but it was only 50% complete. Luckily, this time around it's in Lua instead of Python, where I'm much more comfortable, so as one of my 'fun day' tasks I decided to finish the tool and migrate over to using it. And oh boy did it pay dividends. This tool handles everything.

We're able to automatically convert our C style engine interface into idiomatic Lua object code. The engine types are defined as opaque structs and every function that starts with 'TypeName_' is added to a metatable for the type. Functions that take a pointer to the type become object methods and the rest become 'static functions'. 'TypeName_ToString' functions are automatically bound to __tostring metamethods, which means print(engineType) just works. Structs visible to Lua are parsed, flattened, and sorted to put dependencies first. Commented code is ignored, preprocessor checks are evaluated, and warnings are emitted when preprocessor checks exist that may not match.

Function pointer typedefs are parsed. Enums with underscores are split into hierarchical tables. 'Metadata' is stored so other code can enumerate all engine types. Currently this is useful for creating CType entries for native engine types. The tool outputs a single 'loader file' that loads the engine DLL (taking into account 32/64 bit and debug/release configurations), and a binding file for each engine API. The whole thing returns a table hierarchy that can be used like so: PHX.TypeName.APIFunction(). And there are hook points defined so that, when loading a set of API bindings, the game can inject additional functions into a 'namespace' and have them be indistinguishable from true engine API. Previously we had quite a few 'helper scripts' which contained functions the game needed but didn't quite belong in the engine. Trying to remember if Lerp is in PHX.Math or Math is...dumb.

So what does this end up looking like? Well, here's the original C header
[Image: original C header]
And the generated bindings
[Image: generated bindings]
Note how Directory_Close and Directory_GetNext have been mapped to object methods close and getNext while everything else was mapped to non-method functions. onDef_Directory and onDef_Directory_t are the hooks for extensions. Here's what those extensions look like
[Image: binding extensions]
We don't ever have to think about bindings now. This tool is awesome.
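For readers who can't see the images, here's a rough sketch of the name-mapping rules in Python (the real tool is written in Lua and parses C headers; the declarations and helper below are invented for illustration). Functions prefixed 'Directory_' land in that type's table; those taking a Directory* become methods; 'ToString' is special-cased to __tostring.

```python
# Hypothetical input: function names mapped to their argument types.
decls = {
    'Directory_Open':     ['cstr'],          # no Directory* arg -> static
    'Directory_Close':    ['Directory*'],    # Directory* arg    -> method
    'Directory_GetNext':  ['Directory*'],
    'Directory_ToString': ['Directory*'],    # special-cased metamethod
}

def bind(type_name, decls):
    """Classify API functions into methods, statics, and metamethods."""
    methods, statics, meta = {}, {}, {}
    prefix = type_name + '_'
    for name, args in decls.items():
        if not name.startswith(prefix):
            continue
        short = name[len(prefix):]
        short = short[0].lower() + short[1:]   # GetNext -> getNext
        if short == 'toString':
            meta['__tostring'] = name
        elif args and args[0] == type_name + '*':
            methods[short] = name              # becomes obj:method()
        else:
            statics[short] = name              # becomes Type.func()
    return methods, statics, meta

methods, statics, meta = bind('Directory', decls)
assert set(methods) == {'close', 'getNext'}
assert set(statics) == {'open'}
assert meta == {'__tostring': 'Directory_ToString'}
```

This mirrors the Directory example above: close and getNext become object methods while everything else stays a plain function.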


Engine Cleanup
After fixing up the bindings I was reminded of, and annoyed by, just how haphazardly scripts were organized and loaded. We had Limit Theory scripts, Phoenix scripts, and general Lua utilities just clumped inside LT. Our other tools and testbeds always end up reimplementing the same general utilities because they aren't easily reused. I separated it all into 3 layers: Env, PHX, and LT, and moved the first two into our shared assets folder. Env is general Lua utilities and PHX is engine bindings and extensions.

I also standardized a bunch of the Env scripts, added helpful functionality, and fleshed out unfinished ideas. My favorite products of that are requireAll and Namespaces. requireAll is a straightforward way to load all scripts in a directory recursively and return a hierarchical table. Under the hood it's using the built in require and package.path which means it works completely seamlessly alongside normal Lua. Namespaces let us inject and optionally flatten those tables into the Lua global symbol table. No prefixing a bunch of code with PHX or Env. PHX.Vec3f(0, 1, 0) gets simplified to Vec3f(0, 1, 0). But the PHX table still exists for disambiguating symbols when necessary. Previously we had manually written scripts that loaded every script in a directory (non-recursively) and returned a table. I especially enjoyed nuking those.
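The behavior of requireAll can be sketched like so (a Python analogue of what's described above, not the actual Lua implementation, which sits on top of require and package.path): walk a directory recursively and build a nested table of modules.

```python
import os
import tempfile

def require_all(root):
    """Recursively 'load' all .lua scripts under root into a nested table.
    Here we just store paths as stand-ins for the loaded modules."""
    tree = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            tree[entry] = require_all(path)   # recurse into subdirectories
        elif entry.endswith('.lua'):
            tree[entry[:-4]] = path           # stand-in for require(...)
    return tree

# Demo on a throwaway directory tree (file names are invented):
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'Systems'))
for p in ('Init.lua', os.path.join('Systems', 'Economy.lua')):
    open(os.path.join(root, p), 'w').close()

tree = require_all(root)
assert 'Init' in tree
assert 'Economy' in tree['Systems']
```

A Namespace layer would then flatten such a table into the global symbol table on request, which is what lets PHX.Vec3f(0, 1, 0) become just Vec3f(0, 1, 0).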

There was also a ton of smaller stuff involved like standardizing header layouts, macro name casing, simplifying ArrayList, tackling some old TODOs, separating LT and the 'launcher' code.

One of my favorites was updating the Lua stacktrace that is printed during a crash. It already printed the names of all functions on the stack, but now it prints local variables, function parameters, and upvalues. It uses any engine provided ToString functions or Lua provided __tostring metamethods for friendlier printing. And it highlights any nils using ANSI escape codes. Together, this means 9 times out of 10 we instantly know exactly what went wrong, rather than having to spend a couple minutes scanning the code for issues or trying to reproduce the crash. Seeing as Lua is awful and lets you crash at runtime because of a mistyped variable name, this happens quite often and the extra output already saves us a ton of time.
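The same idea can be sketched in Python (Lua's equivalent would use the debug library's getlocal/getupvalue; the function and variable names below are invented): on a crash, walk the stack and print each frame's locals, flagging None values (the analogue of nil) in red so a mistyped name jumps out.

```python
import sys

RED, RESET = '\x1b[31m', '\x1b[0m'

def dump_stack(tb):
    """Build a stacktrace report that includes each frame's locals,
    highlighting None values with an ANSI escape code."""
    lines = []
    while tb is not None:
        frame = tb.tb_frame
        lines.append(f"  in {frame.f_code.co_name}:")
        for name, value in frame.f_locals.items():
            mark = RED if value is None else ''
            lines.append(f"    {name} = {mark}{value!r}{RESET if mark else ''}")
        tb = tb.tb_next
    return '\n'.join(lines)

def broken(ship):
    target = None            # imagine a mistyped lookup returning nil/None
    return ship['hp'] - target['hp']

try:
    broken({'hp': 100})
except TypeError:
    report = dump_stack(sys.exc_info()[2])

# The report names the offending frame and highlights the nil culprit.
assert 'broken' in report
assert RED in report
```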

These backlog, cleanup type tasks can be a nice way to relax after more difficult work. The reward-to-effort ratio is huge.

But...you're sick of infrastructure stuff, right?


Command Interface
Getting back to gameplay, I started working on re-implementing the command interface. I started by codifying the concept of a Control. From an earlier post you may recall that the simulation is an autonomous thing and the UI simply allows the player to poke the state of the simulation. Controls are the UI panels that accept player input and do the poking. There's a Control for each method of interaction with the simulation. For example, the ShipControl when piloting, the CommandControl when commanding a fleet, or the DebugControl that lets us view and edit internal machinery. Only one Control is active at a time, but a single Control can contain arbitrarily complex UI within it.

The first step toward implementing that was to add a MasterControl that determines which Controls are available and lets you switch between them. This is visible as a small bar at the top of the screen where you can change the active control, very similar to what was in the prototype. It auto-hides and has shortcut keys and all that jazz.

Switching out an active tree of widgets exposed a couple issues in the UI system. For this to work smoothly I added the ability to enable and disable widgets. Structurally this is a smooth transition that can happen with a fade or other animation. Previously we'd just destroy and recreate widgets as necessary because it's cheap, and honestly we could have continued doing that, but it ended up being cleaner to enable and disable as needed. This way Controls can maintain state when inactive instead of having to stash that information somewhere and re-load it next time.

I also reworked the way widgets are added to and removed from the hierarchy. We defer adds and removes so we don't have to worry about the list of widgets changing while we're in the middle of iterating through and updating them. Previously we processed adds and removes at the very end of the frame. That wasn't ideal for a few reasons. 1) We'd draw a removed widget for one more frame after it was removed. 2) We'd not draw an added widget until the next frame. 3) The first time a widget was updated it would not have a valid layout. This all stems from the order in which UI events are processed:

Code: Select all

Input
Update
Layout
Draw
By moving the add/remove logic from after draw to between update and layout we fix all 3 of those issues. I also added an extra mouse focus check after add/remove so there should never be any form of one frame delay on widgets appearing/disappearing, gaining/losing focus, extra/skipped updates, etc.
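The reordered frame loop can be sketched like this (structure inferred from the description above, not actual engine code): adds and removes are queued during input/update and flushed before layout, so a new widget gets a valid layout and is drawn on the very frame it was added.

```python
class UI:
    def __init__(self):
        self.widgets = []
        self.pending_add, self.pending_remove = [], []
        self.log = []

    def add(self, w):    self.pending_add.append(w)
    def remove(self, w): self.pending_remove.append(w)

    def frame(self):
        self.log.append('input')
        self.log.append('update')
        # Flush deferred changes BETWEEN update and layout, so the
        # widget list never mutates mid-iteration yet layout and draw
        # always see the up-to-date hierarchy.
        self.widgets += self.pending_add
        self.widgets = [w for w in self.widgets if w not in self.pending_remove]
        self.pending_add.clear()
        self.pending_remove.clear()
        self.log.append('layout')
        self.log.append('draw')
        return list(self.widgets)

ui = UI()
ui.add('button')
assert ui.frame() == ['button']   # visible the same frame it was added
ui.remove('button')
assert ui.frame() == []           # gone the same frame it was removed
```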

Next up was making sure switching between camera types was smooth. The ship control uses a 'chase camera' that follows close behind the ship. The command control uses an 'orbit camera' that can be freely rotated and moved. These camera types are actually just movement logic. We have a 'real camera' that handles the viewport and updating the rendering matrices. I modified the cameras to write position and rotation as the final output so it's simple to calculate an offset and lerp it to zero when switching cameras, which gives a perfectly smooth transition. This should have been extremely straightforward, but it turns out our rotation math is not consistent across all parts of the engine. I spent more time than I would have liked digging through our quaternions and matrices to understand what was going on. I didn't end up completely fixing it because it's tricky to do without breaking existing code and I didn't want to spend the time on it right then. I did write fixed versions of the broken code and added some tests to make it easier to suss out other issues when the time comes. This is a good candidate for my next 'fun day' task.
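The transition trick can be sketched in a few lines (a 1D toy with invented numbers, not the engine's camera code): on a switch, record the offset between the old and new camera outputs, then lerp that offset toward zero each frame and add it to the new camera's output.

```python
def lerp(a, b, t):
    return a + (b - a) * t

class CameraBlend:
    """Smooths a camera switch by decaying the old-vs-new offset to zero."""
    def __init__(self, old_pos, new_pos):
        self.offset = old_pos - new_pos   # the visual gap at switch time

    def apply(self, new_pos, dt, rate=5.0):
        self.offset = lerp(self.offset, 0.0, min(1.0, rate * dt))
        return new_pos + self.offset      # final view position

blend = CameraBlend(old_pos=10.0, new_pos=0.0)
pos = blend.apply(0.0, dt=0.0)
assert pos == 10.0                        # no visual pop at the switch
for _ in range(200):
    pos = blend.apply(0.0, dt=0.016)
assert abs(pos) < 1e-3                    # smoothly settles onto the new camera
```

The same decay applies to rotation, which is where the inconsistent quaternion/matrix conventions mentioned above caused the trouble.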

On the visual side I wanted to add the 'holographic view' of the previous command interface. I dug out the old holographic shader and implemented the ability to globally override rendering.

Then, of course, I had to get the meat of the control in: unit selection, setting and restoring unit groups, and issuing orders. Selection works in the obvious way: click and drag to select, hold ctrl to add to selection, shift to remove from selection, or both to invert selection. Since ships have this habit of moving around constantly I added a button to focus on the current selection. It moves the camera to the center of the objects and zooms to fit them on screen (taking into account their bounding boxes). And for fun I added a way to lock focus so the camera will follow selected objects when they move. It's quite satisfying to select your allies, order them to attack some poor miner, and sit back and watch it play out. It feels almost theatrical with the camera smoothly following the action.

Of course this all led to more UI iteration. I ensured keyboard focus moves appropriately when dealing with menus appearing and disappearing. I added 'modal' windows that are automatically closed/cancelled when you interact with something behind them. I improved the way containers calculate their size during layout passes so things like context menus get clamped to the screen automatically. I combined the old 'refresh focus when widgets are added/removed' and the new 'refresh focus when widgets are enabled/disabled' and drastically simplified it.

Here it is in action. Note that the visuals are all placeholder, this hasn't had a beautification pass.
[Image: command interface in action]

Design and Next Task
Now that we're solidly in gameplay I'm going to need to do occasional design work to help Josh flesh out some systems. To that end I did an initial design of how zone control is going to work. Josh then ordered me to play some Freelancer to ensure I understand the heritage of Limit Theory.

Next up on my list is docking mechanics. The first pass will be the infrastructure: keybindings for docking, knowing when it's possible to dock, swapping out the current control with a docking control (merchants, storage locker, etc), and changing to some fancier camera. The second pass will be iterating on that until it feels nice. And a third pass will add some transitions and generally just make it sexy.

Phew. That's a bit of a wall of text. I'll try to make the next one shorter.

P.S. Tess has gotten pretty big!
[Image: Tess]



Link to original: viewtopic.php?f=30&t=6498

Re: The Limit Theory Email/RSS News Thread

#50
Monday, May 21, 2018

Hi! It's Monday, which means I'd rather write this log than code :ghost:

Since our last encounter, I've been racking my brain on the 'final piece' of the flow economy: capital expenditure (in particular, the upgrading of existing assets and the acquisition of new ones). CapEx is necessary to make the economy grow. We already have a sturdy equilibrium; now we require a force to grow it out of the void.

I must admit, I have been underwhelmed by my search for algorithmic harmony here. I have not had breakthroughs or clever thoughts. That being said, I believe that this is partly because the right answer to the problem is not clever at all, but actually quite dumb. I am intentionally writing this log prematurely so that I may think through it more in words before I implement it.

This problem is, as I mentioned previously, quite difficult. To know the value of a new asset, one must theoretically be able to predict the impact of that asset on the global economy. In particular, one must (again, only theoretically -- in practice we will approach it very differently) be able to predict the impact upon all other agents in the system. Like a game of chess, much of the difficulty in deciding what to do with your turn (or in LT, your credits) lies in speculatively simulating the decisions of others.

The difficulty only really manifests at large scales -- a critical fact that I will go on to exploit vigorously in subsequent paragraphs. If we're talking about acquiring a new fighter escort, a new mining vessel, etc., we can make some reasonable estimates about what's going to happen based on what's already happening. Such an asset is only a small 'perturbation' to the existing economic landscape, hence, we do not need to be concerned that adding a lone new ship to our fleet will upset the entire dynamical system and subsequently render our decision wasteful. The same goes for investing in upgrades to existing assets, which is, in practice, even easier: we invest in our most successful assets and divest of those that no longer justify their upkeep, e.g. upgrading our prized bounty fighter's weaponry, hiring an escort wing for a mining barge that has struck diamond, selling off an old battleship that hasn't seen action in a year, and so on.

Now, consider the construction of a generic trade station. Or of a factory. Or really any such 'large' asset. The situation is not so easy. By their nature, such assets have the potential to disrupt the whole of the local dynamics -- they are more than just perturbations! Following this line of thought and trying to solve the problem of "should I construct a trade station here" in an algorithmically/mathematically satisfying manner is a great way to drive oneself to madness and computational despair as the recursion unfolds before one's eyes :) But fear not: if stations were made of sand, perturbations would mollify all our angst.

---

Perturbative Quantum Flow Economics
(Look, sometimes I just want to sound fancy, alright?)

Chris Martin wrote:Oh but if you never try, you'll never know
Just what you're worth

Suppose that, instead of building a large trade station, you built a tiny one. Let's say, for the sake of discussion, that you built a single 'quantum' of trade station (although it's not important for this discussion, we could rigorously define such a quantum as, for example, the capacity to handle one transaction per unit time, or perhaps the capacity to handle the transaction of one unit of matter per unit of time, etc...). It would barely impact anyone. Barely.

However, if this capacity for handling transactions is valuable at the spatial location at which we placed our trade station quantum, then we will, in fact, see it being used. The AI is constantly looking for ways to optimize economic 'pressure,' as discussed in previous logs. If that transactional capability presents an opportunity to do so, then it will be used, despite being a small opportunity. We will see, for example, one miner choosing to drop off at our station instead of a further destination, and perhaps another AI ship choosing to trade between our station and another node in the economy (to balance the flow). We can then see, via flow measurement, that our station is being used! Thus, by introducing a differential change to the system, we have extracted a measurement of the change's differential value. And that's all we need :geek:

You see, by taking our purchase of a new asset into the domain of the quantum -- that is, by making the smallest change that it is possible to make to the system, what we have actually done is converted the problem of reasoning about a new asset into the problem of reasoning about an existing one, making the assumption that it is effectively 'free' to purchase a single quantum (minimal discrete unit) of any given asset (this assumption is important and I will probe it further later).

Now we will either kill off our micro-station, or grow it, based on the value measurement obtained from flow data. The algorithm for doing so is the same one that we will use to upgrade any other asset. Eventually, if we continue investing in our station, we will reach a critical point at which additional capacity for transactions will remain unused due to providing no further benefit to the system. Our station has thus reached adulthood and we may leave it be :) It is as though our little quantum station feeds on economic flow until it has reached the limit of its usefulness, at which point it ceases growth. It does not have to be a station; naturally this logic extends to any asset, although the trick of breaking down any given asset type into a single 'quantum' of sufficiently-small size is non-trivial.
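The grow-or-die loop can be sketched as a toy simulation (all numbers and names here are invented, not the game's actual algorithm): a station starts as one quantum of capacity, grows while flow measurements show that capacity fully used, and shrinks to nothing when it goes unused.

```python
def develop(capacity, demand, step=1, ticks=100):
    """Grow or kill an asset 'quantum' based on measured utilization."""
    for _ in range(ticks):
        if capacity == 0:
            break                     # the asset died off
        used = min(capacity, demand)  # stand-in for a flow measurement
        if used >= capacity:
            capacity += step          # fully utilized: invest another quantum
        elif used == 0:
            capacity -= step          # unused: divest
        # Partially used: capacity roughly matches demand; leave it be.
    return capacity

assert develop(1, demand=10) == 11    # grows until it just covers demand
assert develop(1, demand=0) == 0      # a useless quantum is killed off
```

Note how the station naturally overshoots by one quantum, detects the unused capacity, and stops: that's the 'adulthood' criterion described above.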

But what if the station is in a suboptimal location? Sure, maybe it was viable to put it at X, but what if having put it at Y would have made everyone's life even easier? I claim that it doesn't matter! Here's the beauty: sooner or later, another perturbation will come along, and if it's better than our station, we will slowly-but-surely lose business to it. Sooner or later, optimality will be evolved naturally through competition. Even if our station wasn't optimal, it was good enough to survive, and that meant that it provided value to the system. That's all that matters. If a day comes when the system is so finely-tuned that the suboptimality of its location actually matters, a perturbation will come along and unseat it, eventually growing into the station that will replace it. Such is the nature of competitive evolution. A business that provides something fundamentally new has an easy time growing. It is only later, when adequate competition comes along, that the new market is pushed toward efficiency.

In summary: if you could make a minimally-small investment, it would be easy to invest; just invest in random things, and then grow or shrink your investment according to performance. As long as a 'minimally-small investment' is negligible in comparison to your wealth, you'll be ok. While I don't recommend this advice for real-world businesses, it is a simple and elegant technique well-suited to game AI.

---

Having heard the basics, there are now several good questions to be asking:
1. What does 'minimal discrete unit' look like for stations, ships, warp rails, etc? Can all assets be made granular?
2. What of the fact that it's not actually free to purchase things, no matter how small? Doesn't this necessitate thought in our 'random' investment, even for 'quantum'-sized ones?
3. Just because an investment is viable does not make it the best, or even close to the best, use of our money.
4. I don't want to see all of this clutter in the game world. I don't want to live in the same sandbox in which AI players are relentlessly experimenting with tiny, bad ideas.

1 is a question of design. It can be made to work, though it has implications that need to be carefully considered for, e.g., capital ships.

2 necessitates either a 'refund' for initial investments, or a 'good-enough' heuristic algorithm for suggesting them in the first place. It requires further thought, but such quasi-random suggestions (followed by more careful analysis) are already at the heart of much of the AI, so it would seem a natural fit.

3 is a non-issue for the same reason that the suboptimal positioning discussed above is a non-issue.

4, despite not being a question, is a little troublesome, but can mostly be waved away by saying "things in the prime of their growth aren't always pretty, it's the result that matters." Indeed, it's no problem if this wild experimentation happens mostly in the historical simulation phase. However, it is true that systems 'on the frontier' of developed space could end up a bit chaotic, and I'm mostly OK with that.

---

So, does it actually work?

...

Find out next time! It took an annoyingly-long time to convince myself that there is no simple, tractable solution other than "just try it." In the process, I managed to get myself tangled in a variety of other mechanics that I'm working out in parallel, namely, faction formation, faction goals, and AI algorithms for choosing faction alignment. I had suspected that factions were involved in the answer to capital expenditure, or at least had hoped that they would ease the burden, but so far they have not. Still, it's nice to have some faction theory happening at the same time. Never hurts to be thinking about the big picture.

In the coming week I will be exploring the behavior of systems that employ this granular approach to expansion without concerning myself with the questions enumerated above. First we will take a peek and see if the dynamics are nice. If so, we will use any means necessary to justify them and resolve the questions :geek: With luck, it should be a week of many baby quanta.

Once again, I will inform of any major breakthroughs if they occur this week, else I look forward to having something to show for this theory next week. I apologize for not being clever enough this time, but then again, sometimes it is quite clever to be stupid :ghost: :monkey:

:wave:




Link to original: viewtopic.php?f=30&t=6513

Re: The Limit Theory Email/RSS News Thread

#51
Monday, June 11, 2018

Hey everyone. I was, as you know, hoping to post a while back, but I'm afraid my most recent endeavors have found less success than last month's work. Despite my frustration with how long these problems are taking to solve, the work is, nonetheless, highly rewarding.


Multi-Commodity Economies with Productions

It should come as no surprise that I have been working on the expansion of the economy / system development / 'capital expenditure.' As detailed in my last log, my formative ideas on the subject were "you gotta try it." Sadly, a careful scrutiny of that thought process reveals an unfortunate blunder: I was thinking too much in terms of a trivial economy. I kept speaking about trade stations and how to position them. This is a hopelessly difficult problem, and it's no wonder that my only answer was "try it!"

Indeed, as embarrassingly obvious as this is in retrospect, it was only upon writing the code that I began to realize how pointless trade stations (and interesting developments in general) really are in a system where the economy consists of mining ice and selling it to colonies :ghost:

Now, if we were mining ice to take to an ice refinery, which produces water, various minerals, and trace amounts of Talvienium, and water is required as a coolant to nuclear reactors, which pump out the energy cells necessary to power Talvienium Warhead Factories, which of course supply everybody's favorite missiles, but alas, colonies also demand water, etc. -- now here is a setup where we can actually start to reason about new capital assets.

So, my time over the last few weeks has been spent primarily on implementing factories, production mechanics, and on getting the AI to a place where it can make such an economy work smoothly. This work is a great chance to begin scaling up the game content to a representative size/complexity, which is a major goal for the coming months.


Using Net Flow to Make Smart Choices

Now, back to the problem of deciding how to spend money. With a multifaceted economy, the question is actually much easier. With access to flow data, the algorithm becomes more-or-less common sense: we sum the flow values for the entire system ('net flow'), then choose the asset whose contribution to this sum would maximally reduce total pressure.

In my own bizarre terminology this sounds a little obtuse, but a concrete example will make it clear that this is, frankly, just common sense:

Code: Select all

  Gamma Centauri
    Ice Refinery
      - 50 ice/s
      + 100 water/s
      + 5 Talvienium/s
    Nuclear Reactor
      - 1 isotopes/s
      - 10 water/s
      + 100 energy cells/s
    Ballawhalla Prime
      - 50 water/s
      - 200 energy cells/s
    Ice Mining Barge 1
      + 20 ice/s @ Ice Refinery
    Ice Mining Barge 2
      + 20 ice/s @ Ice Refinery
    Water Trader 1
      - 10 water/s @ Ice Refinery
      + 10 water/s @ Nuclear Reactor

    TOTAL
      - 10 ice/s
      + 40 water/s
      - 1 isotopes/s
      + 5 Talvienium/s
      - 100 energy cells/s
(It is interesting to note, by the way, how the 'flexibility' of a mobile asset is represented above by the fact that we can use it to create a flow 'at' a specific location or between two locations, whereas a static asset like a factory is inherently its own sink/source location. Thinking about the economy in general as a graph, and mobile assets as allocable to edges in that graph, is a fruitful line of thinking :geek:)

Clearly, Gamma Centauri has several net flow problems that we could address: there's a slight ice shortage, but that's not nearly as pressing as the isotope shortage, since the nuclear reactor is going to be stalled indefinitely if we don't address that problem. We could use more energy cells, but building another nuclear reactor is out of the question unless we solve that isotope shortage first. Someone should do something with that Talvienium, because right now it's just going to pile up at the ice refinery.

Assuming there's a source of isotopes in the system, the obvious choice is to buy a new mining ship and send it off to go mine isotopes and deliver them to the nuclear reactor. After that, we should consider building another reactor to put that extra water to use and solve the energy shortage. Each of these changes will inevitably reshape parts of the economy, but at the end of the day, we can always take a new sum of flows and get a decent idea of what needs doing in the area.

As demonstrated by this example, flow data is useful for more than just decisions that involve a single node or a connection between two nodes; by summing flow data for all entities in a specific place, we can quickly determine the net flow for the whole, thus enabling reasoning about the global impact of various choices. Naturally, this strategy of hierarchical flow application can be applied more generally to zones, systems, and even entire regions. If we want the AI to think more globally, we can throw a bit of regional flow weighting into the decisions, such that AI players will address shortages/surpluses that aren't localized to a single system.
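The bookkeeping itself is trivial; here's a sketch mirroring the Gamma Centauri table above (a naive first cut: it surfaces the largest raw shortage, whereas, as noted above, dependencies like the isotope shortage gating the reactor would need smarter weighting):

```python
from collections import defaultdict

# Per-asset flows from the example system (units/s, numbers from the table).
flows = [
    {'ice': -50, 'water': +100, 'talvienium': +5},   # Ice Refinery
    {'isotopes': -1, 'water': -10, 'energy': +100},  # Nuclear Reactor
    {'water': -50, 'energy': -200},                  # Ballawhalla Prime
    {'ice': +20},                                    # Ice Mining Barge 1
    {'ice': +20},                                    # Ice Mining Barge 2
    {'water': -10 + 10},                             # Water Trader: nets to 0
]

def net_flow(flows):
    """Sum every asset's flows per commodity for the whole system."""
    total = defaultdict(float)
    for asset in flows:
        for commodity, rate in asset.items():
            total[commodity] += rate
    return dict(total)

total = net_flow(flows)
assert total == {'ice': -10, 'water': 40, 'talvienium': 5,
                 'isotopes': -1, 'energy': -100}

# Most negative net flow = the biggest raw shortage in the system.
worst = min(total, key=total.get)
assert worst == 'energy'
```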


Fitting Prices to Flow and Vice-Versa

In all this talk of flow, we seem to have mostly sidestepped money and prices. But money is clearly a crucial piece of the economic puzzle. At the end of the day, everybody needs to get paid. How do we make sure that everyone gets paid when decisions and balancing are performed on the basis of resource flow rather than dollar bills? Moreover, how do we ensure that the flow of money 'conforms' to the flow-based model? The problem is harder than it may at first sound, because it involves bridging the gap between rates and instantaneous events.

Let's think about the initial decision to create a water trader for linking the ice refinery to the nuclear reactor (from our above scenario). Obviously it's a good decision that needs to happen in order for our economy to work. In flow terms, water flow at the refinery goes from +100 to +90, and at the reactor from -10 to 0. At both endpoints, flow is pushed toward 0 (a net flow of 0 is the ultimate goal), so the decision is a win-win. It's important to recognize the monetary implication here: water can be bought at the refinery for a lower price than the reactor will pay for it. Otherwise, the decision isn't profitable (which contradicts both common sense and our flow data). Evidently, resource flow shapes prices. Moreover, it is obvious from this thought experiment that pricing must be proportional to resource flow in order for price-based decisions and flow-based decisions to be equivalent. To be even more precise, since flow is a rate but prices are instantaneous, what this actually means is that average price must be proportional to resource flow. Price fluctuations that balance one another out are permissible.
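One possible toy pricing rule consistent with that constraint (an assumption of mine, explicitly not the game's actual algorithm, which is still being worked out below): nudge a node's price up when its net flow is negative (shortage) and down when it's positive (surplus), so that sustained flow imbalances shape average price.

```python
def adjust_price(price, net_flow, k=0.01, floor=0.01):
    """Shortage (net_flow < 0) raises price; surplus lowers it."""
    return max(floor, price * (1.0 - k * net_flow))

p = 1.0
for _ in range(50):
    p = adjust_price(p, net_flow=-10)   # sustained shortage
assert p > 1.0                          # shortage drives price up

q = 1.0
for _ in range(50):
    q = adjust_price(q, net_flow=+10)   # sustained surplus
assert q < 1.0                          # surplus drives price down
```

Under such a rule the refinery (water surplus) ends up cheaper than the reactor (water shortage), which is exactly what makes the trader's flow-favorable run profitable.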

Sadly, I am now reaching the end of that which I've actually worked out thus far. I'm not yet confident in my pricing algorithms, although I do know, generally-speaking, how to resolve the sustained / instantaneous dichotomy with a temporal pricing model, such that average price agrees with resource flow. With regard to the specifics, I am still developing ideas and watching how the (now significantly more-involved) economy reacts to new AI algorithms. Ultimately, I'm trying to get it all to a point where things stabilize to a good equilibrium. For my purposes, 'good' means that factories are achieving close to 100% uptime by stocking enough supplies and setting prices correctly to ensure regular supply deliveries, traders are continuously choosing profitable trade routes that alleviate demand, AI players are continuously monitoring the economy to change how assets are allocated/switch jobs when necessary, and so on. Interestingly, it is completely obvious when flow-based reasoning doesn't match price-based reasoning, because the AI will quickly go broke due to making trades that are flow-favorable yet not profitable :ghost: Again, with a correct pricing model, that should not happen (at least, never in the long run).

---

I apologize for not having pulled through with enough brainpower this time, but such is life. I am really hoping to have some better insights this week (but even if I don't, the brute-force method of trying a lot of things and seeing what works is close to completion, so perhaps it will all be resolved by sheer force of will...).

Until next time :wave:




Link to original: viewtopic.php?f=30&t=6528

Re: The Limit Theory Email/RSS News Thread

#52
I'm a little late in getting this updated - I was on vacation for a week and missed two whole devlogs! I'm going to do a two-part update in this post - first with Josh's update, and then with Adam's.

Josh: Friday, July 13, 2018
JoshParnell wrote:
Fri Jul 13, 2018 8:11 pm
Hail, spacefarers!

In today's log, I'm going to afford myself a mental break from talking about the economy again and instead walk you through how to build a universe with interesting structure, which is something that I have been doing recently to take a breather from pricing, economy, credits...it was all starting to drive me a little mad. I spent roughly another two weeks on it all, then decided that I really needed to dip my toes into something else before I started having nightmares about market orders. That being said, I was still thirsting to do something with a 'big picture' feel to it, since I have spent so much time on little pictures, hence this excursion into universe generation! It was a fun process, so I will share it in enough detail that you should be able to build your own universes if you so desire :)

Starting with Star Soup

Let's jump right in. For our purposes, building a universe consists, essentially, of building a graph (a collection of vertices and edges linking those vertices). Vertices represent systems, edges represent wormhole or jump gate connections between systems.

The most basic starting point is a random collection of vertices, distributed uniformly over a space. This is as boring as boring can get:

Image

Still, we have to start somewhere...

Getting Connected with Kruskal's

If a uniformly-random distribution of points is the most basic way to generate vertices, then the most basic (sensible) way to generate edges is via a technique called the minimum spanning tree. For our purposes, what it means is that we want to 'connect the dots' in such a way that we are using as little 'distance' as possible. In other words, we want to make it possible to navigate from any system to any other system, but we also want to make the connectivity such that nearby systems are the most likely candidates for being connected. When you see it, you'll understand why we want to connect our systems like this ;)

Luckily it's easy to write the algorithm for doing this! There are two great, easy choices: Kruskal's and Prim's. Since the former is slightly more general and I have written it many times before, I will use Kruskal's. They produce the same results when building a single MST.
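If you'd like to follow along at home, here is a minimal Kruskal's sketch in Python (illustrative only, not LT's actual code): sort all candidate edges by length, then greedily accept any edge that joins two previously unconnected components, using a union-find structure to detect cycles.

```python
import math

def kruskal_mst(points, edges=None):
    """Connect 2D points with a minimum spanning tree via Kruskal's.

    points: list of (x, y). Candidate edges default to all pairs,
    weighted by Euclidean distance. Returns a list of (i, j) pairs.
    """
    n = len(points)
    if edges is None:
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def dist(i, j):
        return math.dist(points[i], points[j])

    # Union-find with path halving, used to detect cycles cheaply.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    mst = []
    for i, j in sorted(edges, key=lambda e: dist(*e)):
        ri, rj = find(i), find(j)
        if ri != rj:              # joins two components, so no cycle
            parent[ri] = rj
            mst.append((i, j))
            if len(mst) == n - 1:  # a spanning tree always has n-1 edges
                break
    return mst
```

Prim's would grow the tree outward from a single seed system instead; for a single MST over the same points, both arrive at the same set of connections.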

Here is our soup of stars, this time connected with the MST:

Image

Already looking better!

Notice how the connections are made in a very 'orderly' way. That's because of the MST. Connecting random stars will yield a far less attractive map.

Hierarchical Detail: Regions & Regional Substructure

Clearly, this universe is too boring. We would like to see more structure: stars within clusters within regions, etc. 'Real' physics aside, structure will make for far more interesting gameplay, and a better feeling of getting to explore an interesting universe.

Here's an idea: what if, instead of generating a bunch of uniform stars in a soup, we were to generate a few regions in a soup, then start 'attaching' stars to those regions? Let's try it. We'll generate 20 regions, then 1000 stars, each randomly attached to a region and with a 5% random exponential deviation from the regional center. Then we'll connect it all just like before.
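The recipe can be sketched in a few lines (Python for illustration; the parameter names and the unit-square coordinate space are my own assumptions, not LT's generator):

```python
import math
import random

def generate_universe(n_regions=20, n_stars=1000, deviation=0.05, seed=0):
    """Scatter region centers uniformly, then attach each star to a random
    region, offset from the center by an exponentially distributed radius
    (mean = deviation) in a random direction.

    Returns (regions, stars), where each star is (x, y, region_index).
    """
    rng = random.Random(seed)
    regions = [(rng.random(), rng.random()) for _ in range(n_regions)]
    stars = []
    for _ in range(n_stars):
        r = rng.randrange(n_regions)          # attach to a random region
        cx, cy = regions[r]
        radius = rng.expovariate(1.0 / deviation)
        angle = rng.uniform(0.0, 2.0 * math.pi)
        stars.append((cx + radius * math.cos(angle),
                      cy + radius * math.sin(angle), r))
    return regions, stars
```

Swapping `rng.expovariate(...)` for `abs(rng.gauss(0.0, deviation))` is the 'tighten up the clustering' tweak described below.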

The results are much more encouraging:

Image

Note that I have given each region a random color here, and stars inherit their region's color, purely for the sake of being able to see the regions on this map. We will obviously draw this whole thing in a prettier way for the in-game UI :)

There are lots of ways we can tune the generating parameters to change the structure. For example, while I like the rather chaotic nature of these regions, you can change the system position-within-region distribution to gaussian instead of exponential to 'tighten up' the regional clustering. Here is what gaussian with 0.7% deviation looks like, using the same seed as above:

Image

And, just for show, 50 regions with 2000 systems (back to exponential distribution):

Image

(Yes, I too wish the colors were prettier...but remember...no getting distracted by...graphics...... :monkey:)

Generating Shortcuts

At this point, things are looking quite nice. However, if you look carefully at the map and think about navigating around these systems, you may find that it seems quite tedious! Indeed, getting out of a distant 'corner' of space can take a lot of jumps. This is, in fact, by design! The minimum spanning tree is exactly that: it uses the least amount of 'connective line' that we could possibly use to make a connected universe. In particular, there is zero redundancy. Since it's a tree, there is also exactly one (non-overlapping) path from any point A to any other point B, never more. It's an efficient universe in terms of wormhole usage, but it's not very kind to the weary inter-regional merchant!

Looking at these maps, you can probably see 'obvious' places where you could draw an extra connection or two and really cut down on the amount of travel required to get around. For example, here's an annotated map where I've drawn in some obvious choices for extra connections that'd make things easier:

Image

We'd like to generate such 'shortcuts' automatically. But how? Teaching the computer to identify good shortcuts is actually not so easy. We need a way to mathematically define good shortcuts. Intuitively, we want to choose two systems that are topologically far apart (that is, the path between them is very long), but are physically close. You will indeed note that this is a characteristic of all the sample shortcuts I drew above: they bridge systems that are physically close to one-another, but very very far apart in terms of the 'path length' between them.

So, all we have to do, for each shortcut we want to make, is: find the pair of systems whose ratio of travel distance to physical distance is the highest, and connect them. Turns out, this algorithm works really well for the most part. (Note: in reality, I use travel distance divided by the square root of physical distance, which encourages the algorithm to think more globally rather than making intra-regional shortcuts). Here are the first three shortcuts that the algorithm selects on the above map:
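As a sketch of that selection step (Python, illustrative only; here travel distance is computed with Floyd-Warshall over edge lengths, which is simple but exactly the kind of expensive all-pairs computation discussed later):

```python
import math

def best_shortcut(points, edges):
    """Return the (i, j) pair with the highest ratio of travel distance
    (shortest path along existing edges, weighted by Euclidean length)
    to sqrt(physical distance). Connecting that pair yields the biggest
    'bang for the buck' shortcut.
    """
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for i, j in edges:
        w = math.dist(points[i], points[j])
        d[i][j] = d[j][i] = w

    # All-pairs shortest paths (Floyd-Warshall): O(n^3), hence costly.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]

    best, best_score = None, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            phys = math.dist(points[i], points[j])
            if phys == 0.0:
                continue
            # sqrt on physical distance encourages global shortcuts
            score = d[i][j] / math.sqrt(phys)
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

To generate several shortcuts, add the chosen edge to the graph and repeat; each added shortcut changes the travel distances, so the ratios must be recomputed.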

Image

Hey! Look at that! Two of the three shortcuts the computer chose made appearances in my annotations :D (I swear I did not check before I annotated!) Sadly the algorithm did not come up with a space whale option, but it would be difficult to teach the computer to be as silly as me. We'll check out one more example, this time with 5 shortcuts:

Image

I really like these choices :thumbup:

The shortcutting algorithm serves to make the map's structure even more interesting, since we now have more options for getting around, but we have introduced those options in a very specific way based on our goal of providing the most 'bang for our buck' with each shortcut.

Unfortunately, this shortcut algorithm is also computationally expensive. Computing all path lengths on a graph is costly, and doing so efficiently is a significantly harder algorithmic challenge than computing the MST. I've played with a few cheaper ways of computing shortcuts, but none has produced results as good. I'm sure that, with a bit more thought and cleverness, we could solve this much more quickly. For now, I'm pleased with the results, and will optimize more as I am able.

(Bonus) 3D

It should come as no surprise that all of the algorithms I've discussed extend effortlessly into 3D. In fact, while building the generator, I used 3D math the whole time, but kept a configurable constant that let me collapse the third dimension at will.

Image

For Limit Theory, I have expressed that I will likely default to 2D universe maps; I prefer the simplicity. However, as demonstrated, it's effortless to enable 3D generation, so we can include that as a configuration option in the universe generator :nerd:

(Bonus) Making it Infinite with Boundary Stitching

But wait! Limit Theory advertises an 'infinite' universe! So far we have only seen finite ones. What's the deal? The deal is quite simple, in fact. To make a universe infinite, all you need to do is 'tile' a finite one -- that is, use your 'finite universe' generator to generate content for each 'cell' of the universe, then find a way to stitch them together. Think of these screenshots we have seen so far as single 'pixels' in the infinite picture of the universe. The only interesting problem with this approach is how to connect the cells. In particular, if we want to make sure that the universe actually does go on forever (and that we can actually get to new systems forever), then we must make sure that the entire grid is connected.

The way to do this is very easy: choose a star system to represent each border of your tile (in our case, for a 2D grid tiling, we could call them N, E, S, W, for example), then connect the borders of adjacent tiles appropriately (the N border-system of a tile will be connected to the S border-system of the tile above it, etc.) To choose which systems should be border systems, there's an obvious answer: the system that is closest to the border (duh?) In other words, the northmost system in a tile will be our N border. To make all this clear visually, we can draw our border systems on the map and extend lines outward to indicate where our universe tile will be stitched to neighboring tiles:
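A sketch of the border-system choice (Python, illustrative; the dict-of-directions representation is mine, not LT's). The key property is that the choice is deterministic given the tile's own stars, so two adjacent tiles agree on where they stitch together without either one having to generate the other:

```python
def pick_border_systems(stars):
    """Choose one system per cardinal border of a 2D tile: the system
    with the extreme coordinate in that direction (i.e. closest to the
    border). stars: list of (x, y). Returns direction -> star index.
    The N border-system of a tile connects to the S border-system of
    the tile above it, and so on.
    """
    indices = range(len(stars))
    return {
        "N": max(indices, key=lambda i: stars[i][1]),  # northmost
        "S": min(indices, key=lambda i: stars[i][1]),  # southmost
        "E": max(indices, key=lambda i: stars[i][0]),  # eastmost
        "W": min(indices, key=lambda i: stars[i][0]),  # westmost
    }
```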

Image

So, when the player gets within topological proximity of one of those four border systems, we need to ensure that the appropriate adjacent tile of the universe is generated (and receives enough historical simulation to 'smooth out' border effects). The generator automatically identifies the border systems and flags them as such so that the engine can handle preloading accordingly.

I have actually decided, over the course of LT development, that I don't want to play in an infinite universe, but would rather have a large, finite one (so that I am forced to get to 'know' it, rather than endlessly skipping town to the next cell). That being said, infinite universe generation is clearly one of the promises of LT, so I fully intend to support this tiling / stitching system despite my own preference to play without it (finite/infinite will be yet another universe configuration option).

By the way, in case it isn't clear, one very important property of the border stitching mechanism is that you don't have to generate the neighboring cell to know how it is connected to your current cell. This is in stark contrast to how we generated the finite universes above (we can't find the MST of an infinite graph, for obvious reasons!) This is the fundamental trade-off of infinite generation: you must have some topological regularity at some level. But, you see, we can play this clever trick of making each 'tile' of our regular grid a rather complex structure in its own right, which ensures that we get a much more interesting universe than if each cell in the grid corresponded to only one system.

(Update) Improving Local Connectivity

See this post for a simple technique to improve the local connectivity, which, as some have pointed out, is really too low with just the MST.

---

Alrighty, that took me a little too long to write, no surprises there, but you guys were patient to wait for it...so thank you :)

I will be getting back to my global economy work soon, but this multi-system work is also quite exciting, so I'm going to keep at it for a bit longer. I wanted to have a few more features like region names & system properties done for today, but I spent my time on the shortcut algorithm and the spatial faulting algorithm (which I cut from this devlog due to results not being that interesting) instead. Ah well! Perhaps next time :geek:

Hope you all have a great Friday! :wave:



Link to original: viewtopic.php?f=30&t=6565





Adam: Friday, July 20, 2018
AdamByrd wrote:
Fri Jul 20, 2018 12:50 pm
Friends. Compatriots. Limit Theoreticians. I come bearing a dev log.

As always, I've been bouncing around like a madman. Sprinkling a little feature dust here, vacuuming a few bugs there, reinforcing scaffolding, ensuring the house stays tidy as we scale, etc etc. Oh, and I may have finished an entire engine system along the way. First up, Docking!

Docking
Last time I showed off the command interface. A large part of that was ensuring we can switch between 'contexts' smoothly. UI gets swapped out, the camera animates, bindings are changed out, and so on. These are small things that are going to be leveraged frequently. After finishing the command interface I wanted to continue working on gameplay and also push on these features a little more to see how they hold up. Docking seemed like a good fit.

We wanted to start with just the core mechanics: fly close to a space station, press a button, auto-pilot to the docking port, and see UI menus for everything you can do at that station.

First, a dockable component. The UI simply searches for dockables and, if you're in range, presents a button prompt to begin docking. The act of docking actually removes the player's ship from the world and adds it to the station instead. Similarly, once docked, an undock prompt is presented.

I extended the MasterControl I implemented last time to add 'control sets'. The UI looks at the player each frame to determine which control set ought to be active and automatically switches to it. Previously we could choose from piloting, commanding, and debug controls. Now, when the player is docked we swap to a control set with things like your storage locker, merchants, and the jobs board. Since we can already swap out UI trees easily this ended up being dead simple to implement.

With all the state control in place, I wanted to make it a smooth, physical docking operation where you literally fly into the station. The AI is implemented through an action stack. Actions are pushed onto the stack, and each AI simply runs its current action every frame, popping it once complete. Conveniently, the player is no different. The action stack is just empty and the UI controls poke the player's state directly. (This actually surprised me when I built the command interface, because I could select my own ship along with my allies when giving orders, and my own ship would fly along, taking part like any other unit.) I added a new docking action that is essentially the 'move to location' action, except it also happens to reparent the player from the system to the station once at the destination. And boom, with a whopping 7 lines of code you now auto-pilot right on in.
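The action-stack idea can be modeled in a few lines (a toy Python sketch, not the engine's actual classes; `MoveTo`, `Dock`, `Ship`, etc. are illustrative names):

```python
class Action:
    """Base action: run each frame; report completion."""
    def update(self, entity, dt): ...
    def done(self, entity): return True

class MoveTo(Action):
    def __init__(self, target, speed=1.0, arrive=0.1):
        self.target, self.speed, self.arrive = target, speed, arrive
    def update(self, entity, dt):
        # Step straight toward the target (real steering would be fancier).
        tx, ty = self.target; x, y = entity.pos
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist > 0:
            step = min(self.speed * dt, dist)
            entity.pos = (x + dx / dist * step, y + dy / dist * step)
    def done(self, entity):
        tx, ty = self.target; x, y = entity.pos
        return ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5 <= self.arrive

class Dock(MoveTo):
    """'Move to location' plus reparenting once at the docking port."""
    def __init__(self, station):
        super().__init__(station.port)
        self.station = station
    def update(self, entity, dt):
        super().update(entity, dt)
        if self.done(entity):
            entity.parent = self.station  # out of the world, into the station

class Station:
    def __init__(self, port): self.port = port

class Ship:
    def __init__(self, pos): self.pos, self.parent, self.actions = pos, None, []

def run_actions(entity, dt):
    """Each frame: run the top of the action stack, pop once complete."""
    if entity.actions:
        top = entity.actions[-1]
        top.update(entity, dt)
        if top.done(entity):
            entity.actions.pop()
```

The player's stack is normally empty (the UI pokes state directly), which is why pushing an action onto the player works just like it does for any AI unit.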

If a station is destroyed a fraction of the damage is inflicted on docked entities and they're released from the station. If your ship survives the damage, great, you can limp away. If not, well, your attacker gets to rummage through the debris of your ship along with the station.

It's not a particularly big or complicated feature, but I'm thoroughly pleased with how brain-dead easy it was to implement. The way Josh structured AI and actions is excellent. There's a ton of power and flexibility there with no real complexity. The whole of implementing docking took a day, with about 3 hours being the 'real work' and the rest polish and iteration.


Physics
We've been putting off physics work for a while now. We had kinematics, parenting, a naive broadphase, sphere-vs-sphere narrowphase, and naive raycasting, but we still needed a lot more. The broadphase failed spectacularly at large-vs-small object checking, raycasting needed to be accelerated through the broadphase spatial grid, and we wanted more shapes (convex hulls and boxes) and numerical robustness. We also didn't have a way to actually respond to collisions outside of kinematics (for sparks, sound effects, damage, etc.). None of that is particularly scary, but the sheer amount of work still needed was.

Josh decided we should investigate off-the-shelf physics engines and decide which path would strike the right balance of development time and quality. So I took some time to evaluate our options, the three best known being Havok, PhysX, and Bullet. Honestly, I don't like the way most Havok demos look and feel. The licensing for PhysX is messy, and I'm just kind of assuming it's a behemoth. In all the comparisons I've looked at, Bullet seems to be the most consistent. It's rarely the absolute best in terms of performance or simulation quality, but it's very consistently #2 and right on the heels of #1. PhysX and Havok, on the other hand, are excellent in some situations and abysmal in others. The consistency of Bullet is extremely appealing to me.

So I decided to integrate the bare minimum of Bullet functionality that would let us see it in action. That would let us gauge the quality in our own use case and the time/difficulty associated with using it. I decided to do this as an entirely separate physics API in the engine that mirrored the existing one but didn't replace it. This let us toggle back and forth between new physics and old physics with a single variable, which was fantastic for comparing the two. We were able to ensure that everything felt exactly the same in both engines. This meant we wouldn't have to re-tune ship controls if we did end up swapping out the physics, and it also highlighted anywhere our current physics implementation was doing things incorrectly. Luckily, nearly everything matched up easily: drag, restitution, friction, etc. However, our inertia was pretty far from accurate. This worried me, since it made ships feel completely different. In the end, I was able to fix the inertia calculation in the old physics and update the forces and torques analytically to keep the same feel.

Once everything was set up correctly Bullet ended up feeling identical. Implementing it wasn't particularly difficult, it has all the features we want, and it's reasonably performant. It looked like it would be faster to switch to Bullet and we were confident it would yield very acceptable results. I set out to officially migrate us over and to flesh out the rest of the physics API we need to finish the game.

Unfortunately, nothing is ever easy, and it wasn't long before I started finding Bullet's rough edges, quirks, and flaws. Constraints are unusably slow, compound objects and triggers have awful APIs, I found a couple of outright bugs, memory allocation is a mess, and the documentation is a half step above worthless.

Still, with enough hammering, it works. We have all the features we were aiming for and the implementation isn't a complete disaster. Some of the new stuff we now have is shapecasting, overlap tests, triggers, collision groups and masks, convex hulls and box shapes, and per-system physics. It may have taken 5 weeks, but it's done now and it shouldn't need any major changes going forward.

Overall, it's a win and I think it was the right choice. However, in the future I would likely avoid Bullet. In a professional setting I'd go with PhysX and see if it fares better. On hobby stuff I'd suck it up and write my own. The real kicker, though, is that Bullet isn't awful. The author has done some quality work and I'm happy Bullet exists; it's just the API design that's a disaster. All it needs is about a month of someone with API design chops hammering on it, and it would go from irritating to a joy to use.


Math
Along the way, something happened that brings me great joy. I finally fixed all the math in the engine! This has been sitting in the back of my brain driving me nuts for months now. I probably first realized we had serious issues back in December. It's one of those things that tends to show up at unexpected times when you're in the middle of some other task, and fixing it is going to break a ton of other things, and you don't even know where it's broken, and no one knows how quaternions even work, and it's going to take forever, and you just want to finish this one task and...you get the idea. It continually gets pushed to the back burner.

When I ran into it again with the camera work in the last update I told Josh that the next time it comes up one of us is going to suck it up and fix it. And by 'one of us' I mean me because Josh hates quaternion and matrix code.

Well, when I was integrating Bullet I ran right back into these math issues almost immediately. When rendering we have to get the position and rotation of objects out of the physics engine. Bullet has a lovely function that fills a matrix in an OpenGL format. But our matrices were in some weird, partially transposed state due to our bugs so we couldn't even use that matrix without first doing some obscene hacks to get it to match our format.

I mentally prepared myself for a good 5 days of anguish digging through our math and tracking down every last issue. I spent a couple hours cataloging all known issues and our coordinate system conventions in every part of the rendering pipeline. In the end, the majority of our issues traced back to Matrix_GetRight/Up/Forward and Quat_GetRight/Up/Forward. It turns out almost all of the math was 'correct' except when we converted from axes to actual on-screen directions. Both sets of functions were wrong, and wrong in different ways. This is what made it tricky. Once I realized that, I simplified a bunch of our math, fixed a few other small mistakes, and removed all the ugly hacks we had before. The tests I wrote last time were a tremendous help. In the end, I only spent a day on it. Victory.

The End.
And now we get to the sad part. This will be my final dev log. My journey with the awesomeness that is Limit Theory is at an end.

From the very beginning Josh and I discussed a finite end. My job was to help clear the last major hurdles and open up the path for Josh to grind away on gameplay. To that end, I think this has been highly successful. The hope was that we'd be able to ship during that time frame, but alas, we didn't make it.

My plan for the last few years has been to move to the west coast, maybe Seattle, and find work with a stable indie studio. As luck would have it, something decided to fall directly into my lap. Shortly before GDC, Blizzard reached out to me and, long story short, I'm joining their new shared engine team at the beginning of August.

I leaned away from working for a AAA studio because I was scared of the potential soul-sucky nature of it, but once I actually visited Blizzard, that changed. I've gotta say, they've built an amazing culture there. Everyone I spoke with was happier and more creatively empowered than almost every indie studio I've seen. And I can bring Tess to work.

For the past month I've been working part time, winding down. I'm happy I managed to get Bullet fully integrated before I leave. I'm incredibly proud of the work I've done here, and extremely grateful I've gotten to work alongside an awesome programmer and person like Josh. I've grown tremendously as a programmer in my time here, and I certainly wouldn't be in the situation I am if it hadn't been for Procedural Reality. It's been a fantastic 14 months.

It's bittersweet, as endings tend to be. I'll miss the insanely creative and detailed discussions and the uniquely welcoming and cerebral community you've all built. I'm also excited beyond words about what comes next.

Cheers,
Adam



Link to original: viewtopic.php?f=30&t=6565

Re: The Limit Theory Email/RSS News Thread

#53
Note from Talvieno: This log of Josh's wasn't originally a devlog. I upgraded it to devlog status because I believe it merits its own spotlight. It is brief, but it has a picture and a short exchange with a forum member! I hope that if you haven't seen it before, you enjoy it.
Friday, August 11, 2018
JoshParnell wrote:
Sat Aug 11, 2018 6:54 pm
Well, I hate doing this, but: I need another week.

My recent work has consisted of moving a major game subsystem from Lua to C, which is an ongoing process as you all know. In doing so, I had to choose between one of two major, high-level architectures, each with very different strengths and weaknesses. Despite having high hopes in the beginning, after having the system solidly in-place, I have only just come to the conclusion that I made the wrong choice. Sadly, it was necessary to have the system up-and-running before I could make the determination that it wouldn't pan out. It's rather heartbreaking, and, while I had planned on simply writing an account of this (failed) work, I feel that my time would be better spent taking the weekend to recharge, implementing the system in the other architectural style next week, and writing about the results at that time. Frankly, I'm too disgruntled and exhausted from work to produce a decent devlog at the moment anyway.

Apologies! If lessons learned the hard way are a currency, then I am a very rich man :angel: On the brighter side, I'm sure it will make for a compelling story. Just...not right now :)
Hyperion wrote:
Sat Aug 11, 2018 6:59 pm
Another week? Unacceptable! I demand an 8000 word log within the next 15 minutes! :ghost:

Well could you at least say what system it is, and could we get a shiny or 2 to hold us over? You could say work on graphics for an hour or 2 tonight and still technically deliver something today ;) :monkey: Might help alleviate the heartbreak too :)
Actually...that kind of mentality always ends up breaking my heart even more, to be honest. And it's where my head was today, which is exactly why I'm choosing to wait -- doing things purely for the log always feels bad to me. I not only feel bad about unsuccessful work, but then I also feel bad about trying to compensate with work that I didn't actually work hard on and am not invested in, but will be judged on nonetheless. It's a dangerous, stressful habit that I hope never to revisit :ghost:

The subsystem is UI. It has been a long time coming, ever since PAX, really. Our UI system is in Lua, and was first built by Adam as a dev UI. I later asked him to scale it up to a full-featured one for player-facing stuff. As it always goes, only upon having that full-featured system did we begin to see the real issues with not having it engine-side. It has been holding me back from working effectively on gameplay that involves player interaction (which, after all, is arguably one of the more important features of an interactive game :lol: ). The choice was immediate-mode vs retained-mode.

And..

Image

Despite the hard work on this fast, native, engine-side implementation of an immediate-mode UI, I have come to believe that the cons outweigh the pros. Another +1 to Josh's life experience :V




Link to devlog thread: viewtopic.php?f=30&t=6576
Link to original: viewtopic.php?f=30&t=6562&start=90#p163880

Re: The Limit Theory Email/RSS News Thread

#54
Friday, August 17, 2018

Hey everyone!

It's been a good week and, thankfully, I finally caught a break in my work on moving UI engine-side. Today's log will get a bit technical, but this is an interesting topic and I'd like to detail my experience with it in the hopes that it may be of use to other developers who face the same decision in their work.

Strap in, it's a long one...

GUI Architecture: Immediate or Retained?

I mentioned in last week's pseudo-log that there are effectively two major approaches to architecting a UI. The two (very different) styles are known as immediate mode (IM) and retained mode (RM). RM is more traditional and probably what you're used to if you've used a GUI framework directly from code before. IM, however, has enjoyed a reasonable amount of attention in the past few years, and I have seen more and more developers asking about the viability of it. This is no doubt in part due to Omar Cornut's excellent Dear ImGui library, which has demonstrated to many (including myself) the tremendous power that the IM paradigm offers for creating interfaces with minimal fuss.

Briefly, one can illustrate the difference in approach by glancing at some example usage code. Here's what some typical RM pseudocode would look like:

Code: Select all

g = GUI.Canvas()
g.add(GUI.Window("Universe Configuration")
  .add(GUI.Checkbox("Infinite Mode").bindTo(&config.infinite))
  .add(GUI.Slider("Average Systems Per Region", 10, 1000).bindTo(&config.regionSize))
  .add(GUI.Slider("Average System Connectivity", 1, 10).bindTo(&config.connectivity))
  .add(GUI.GroupHorizontal()
    .add(GUI.Button("Create", createUniverse))
    .add(GUI.Button("Cancel", cancelUniverse))
  )
)

...

g.update()
g.draw()

Again, that's just pseudocode, and there are a million ways to structure such APIs for convenience, but the general idea is that interface pieces are created like objects, and we control the UI by calling various functions on those objects. RM style is very much a classic, object-oriented approach to UI.

On the other hand, have a look at what IM looks like:

Code: Select all

GUI.BeginWindow("Universe Configuration")
  GUI.Checkbox("Infinite Mode", config.infinite)
  GUI.Slider("Average Systems Per Region", 10, 1000, config.regionSize)
  GUI.Slider("Average System Connectivity", 1, 10, config.connectivity)
  GUI.BeginGroupHorizontal()
    if (GUI.Button("Create")) createUniverse()
    if (GUI.Button("Cancel")) cancelUniverse()
  GUI.EndGroup()
GUI.EndWindow()

At first glance this may look similar, but, in fact, the IM approach could hardly be any more different! Here we are not dealing with objects. Instead, we're just making function calls. An immediate mode interface is inherently interwoven with the interface's data and logic. No widget demonstrates this more aptly than a simple button, which is perhaps the most famous example of the difference between IM and RM. Whereas an RMGUI button typically involves setting a 'callback' function (which the interface will call if the button is pressed), an IMGUI button is actually a function that returns true if pressed.
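To make that concrete, here is a toy IM button in Python (illustrative only; real libraries like Dear ImGui track hot/active widget IDs and do actual rendering, which is elided here). The button function lays itself out, tests the mouse against its own rect, and returns whether it was clicked, all in one call:

```python
class ImGui:
    """Toy immediate-mode core: a button is just a function that draws
    itself and returns True on the frame it's clicked. Layout here is a
    simple vertical cursor, reset every frame."""

    def __init__(self):
        self.mouse = (0, 0)
        self.mouse_down = False
        self.was_down = False
        self.cursor_y = 0

    def begin_frame(self, mouse, mouse_down):
        self.was_down = self.mouse_down   # remember last frame's state
        self.mouse, self.mouse_down = mouse, mouse_down
        self.cursor_y = 0                 # layout restarts from the top

    def button(self, label, w=100, h=20):
        x, y = 0, self.cursor_y
        self.cursor_y += h                # advance the layout cursor
        mx, my = self.mouse
        hot = x <= mx < x + w and y <= my < y + h
        # draw(label, x, y, w, h, hot)  -- rendering elided
        # Click fires on release while hovering the widget.
        return hot and self.was_down and not self.mouse_down
```

Note there is no button object anywhere: the widget exists only for the duration of the call, which is exactly why the UI never needs to be told that data changed.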

To really appreciate the massive ramifications of the choice between RM and IM, you need to implement a significantly complex interface in both styles. Luckily, I have done so, and will therefore lay out the most prominent pros/cons that I have found in my own interface-building work. Unfortunately, both approaches come with very real pros and cons, and neither, in isolation, is ideal.

Immediate Mode

Both the strengths and weaknesses of IMGUIs come directly from the fact that a vanilla IMGUI does not exist separately from the data that it controls and the logic it executes.

Pros
  • UI code is extremely concise, easy-to-understand, and quick to write
  • The UI does not need any 'extra information' -- sliders/checkboxes/radio groups don't need pointers to the data they represent, buttons don't need callbacks for the actions they trigger
  • Changing data never requires special logic to inform the UI about changes, making it effortless to build UI for dynamic data that changes frequently (like game data!)
  • Easy to use from scripts; no special binding code necessary

Cons
  • Automatic layout is heavily-restricted
  • 1-frame delays are common and sometimes unavoidable; can result in 'popping' and added input latency
  • Higher CPU usage

Now, please note that each of these cons is very sensitive to implementation details and requires a lot more explanation to tell the full story, so don't take the above list as a blanket generalization of IMGUIs. Especially the last point -- I have seen a great deal of misinformation about immediate mode performance characteristics online. Much of the difficulty in IMGUI implementation is concerned with mitigating these cons in various ways. That being said, it is fair to say that complex layout is a rather fundamental problem for a traditional IMGUI implementation.

Retained Mode

Again, the strengths and weaknesses of RMGUIs are a direct consequence of the fact that an RMGUI is built from first-class objects that exist in and of their own right.

Pros
  • Automatic layouts are easy and can be made arbitrarily-complex
  • No frame delays, minimal input latency
  • Minimal CPU usage (predicated, of course, on a well-optimized implementation)

Cons
  • UI code is often verbose
  • UI must have knowledge about the data it is displaying and must understand how to trigger functionality -- pointers and callback mechanisms are typical
  • Users must take special care to ensure the UI stays in-sync with data -- complex, frequently-changing data requires significant consideration to work properly
  • Usage from scripts requires special consideration; UI must know how to access script data & functions

Stuck Between an Asteroid and a Hard Place

Looking at the pros/cons of each paradigm, it's not hard to see that IMGUIs and RMGUIs are effectively polar opposites. Where one excels, the other lacks, and vice-versa. It is the absolute epitome of a difficult and highly-consequential trade-off.

Last week, I implemented an IMGUI in our engine. It was my first time implementing an IMGUI, and, while I understood the cons beforehand, I didn't know how they'd pan out in practice -- how well could I mitigate them? The answer turned out to be: quite well, but still not well enough. In particular, the restrictions on automatic layout forced a no-win choice between having to litter my UI code with fixed sizes, or having to accept noticeable, 1-frame-long pops / delays. It is an absolutely fundamental limitation of a standard IMGUI. I believe I used the word "heartbreaking" in one of my posts last week, and it was not an exaggeration. The IM paradigm affords such incredible ease and clarity in creating game UI, yet the drawbacks were too much to stomach.

On the other hand, I've implemented RMGUIs many, many times before. I'm more than familiar with those cons. The added complexity in using RMGUI from script is, for me, something to avoid at all costs. After all, the impetus for this recent effort to move our UI code to C stemmed from a burning desire to view & interact with gameplay mechanics with minimal pain. For me, the RM paradigm flows counter to that goal!

If only we could have all the things. If only we could immedify our retained mode, or retain our immediate mode. If only.

As you've already guessed, it turns out: we can :)

Hybrid Mode GUI

Thus far, my discussion of IMGUIs has been rather specific to what I've called a 'standard' or 'vanilla' implementation. That's because, in reality, one can go much, much further under the hood of an IMGUI in order to mitigate or even defeat the stated weaknesses. At some point, the line between immediate and retained can start to blur...and in that lovely gray area lies the answer to all our problems. The paradigm that I will describe to you now could be considered as simply an 'advanced' IMGUI implementation, however, for the sake of clarity, I will call it 'hybrid mode' -- HMGUI :nerd:

Much of the beauty of IMGUI comes from the friendliness of the user-facing API. Calling a sequence of functions to implicitly map an interface onto a set of data and functionality is simply easier than creating explicit constructs to do so. At the same time, trying to perform standard internal GUI work like layout and input handling without having advance knowledge of the entire interface results in the inherent restrictions discussed above. But suppose we were to 'retain' all necessary information from our 'immediate'-style functions, and defer that internal work to after the entire interface has been specified? It's a winning combination.

The basic premise of an HMGUI is that, each frame, we will build a somewhat-traditional widget hierarchy under the hood as the user is issuing IM-style calls. We'll retain just enough information to be able to perform automatic layout on the UI and to handle input later. Once the user has finished calling into the API, we'll go back through our hierarchy and perform all the standard GUI logic: layout, input handling, and whatever else.

As a consequence of deferring the work, hybrid mode can handle the full gamut of complex, automatic layout functionality that one would expect from retained mode. We can have widgets stretch to fill available area, align themselves within a group, automatically compute group sizes, etc -- all without specifying explicit sizes and without a one-frame delay (one of which would be necessary under a pure IMGUI). We can have selectable widgets like buttons respond to mouse-over immediately, minimizing perceptual latency. We can layer widgets in arbitrarily-complex ways, performing on-the-fly z-reordering without fuss.

So...where's the catch? Surely there can be no free lunch. Well, HMGUI isn't really a free lunch: of the retained, immediate, and hybrid paradigms, hybrid mode takes the most work to implement. That's not surprising when you consider that, under the hood, HM is just a clever mixing of IM and RM, thus requiring much of the implementation work of both. That being said, from the perspective of the user of a hybrid API, it really is a free lunch :) And if you know me, you know that's exactly the kind of system I love. Push all of the hard work into the engine/systems, leave the game/application code as clean and simple as possible.

One might also point out that, of the three, HM consumes the most CPU time, due to the fact that it involves all of the CPU work of IM plus the layout work of RM. However, in practice, such code can be made so blazingly-fast that the point is moot (especially when written in well-optimized C ;) ). As always, performance or lack thereof is almost entirely the result of the implementation quality.

A Few Examples

I haven't implemented the more complex widgets yet, as I focused heavily on core details this week. I also haven't worked much on graphics. As we all know, making things shiny is a beloved hobby of mine, but best saved for...later :oops: Still, even with only basic widgets and fairly rudimentary rendering, a close look at HMGUI already reveals its superiority.

Take, for example, my little todo list from last week's IMGUI:

Image

There are a number of annoyances here. Checkboxes aren't correctly aligned with text. That's my fault, not a limitation of IMGUI, but it happened because writing the IMGUI code to manually align and lay things out is quite a tedious and error-prone endeavor. The code that shows this list is littered with 'magic' size constants -- the window width, for example, is a constant and would not grow based on the contents. Again, in IMGUI we have to accept such constants or the frame delay problem. Despite the simplicity of an immediate mode API, trying to achieve a polished, consistent look can quickly turn the code messy.

From this week's HMGUI:

Image

Consistent, polished, and the code for creating it is cleaner thanks to the fact that all the layout work is handled automatically. Everything is aligned, padded, and spaced with precision. I have even swapped the checkbox to right-justified to demonstrate automatic stretching (again, problematic for IMGUI). This window is sized automatically and will grow accordingly should I add new, longer todo items.

A bigger example:

Image

Sorry again for the rough graphics...but the beauty here is in the functionality. This stream-of-consciousness-style test window has more automatic layout going on than you can shake a stick at! This one is really not going to happen in a vanilla IMGUI implementation. At least, I wouldn't want to see the code for it :shock: In HMGUI, however, it's absolutely straightforward. Notice how even the embedded todo list has expanded the checkbox elements slightly due to the fact that the split code view on the bottom is dictating the window's width. It's all in the details! :)

Conclusion

GUI paradigms present a difficult choice for developers. The simplicity of an immediate mode API is tantalizing. Creating interfaces is a breeze, and the resulting increase in productivity should not be taken lightly. On the other hand, retained mode offers precise control over complex layouts that are difficult if not impossible to achieve in immediate mode. By combining the front-end elegance of the IM paradigm with the back-end power of RM, we can, thankfully, have the best of both worlds! Hybrid mode GUI is the way to go :)

I'm very satisfied to have finally found some success with this work, and I'm glad that I took the time to experiment with a new paradigm, as I would never have come to this solution without knowledge of both. Always a treat when failure leads to reward. Although the feature set of this new HMGUI implementation is still rather slim and the aesthetics quite programmer-artsy, the foundation is laid and the road has been paved. I already have enough power to get back to doing what I wanted to do in the first place: move forward with gameplay interaction. I'll be continuing the implementation of more advanced features as the need arises, so I'm sure we'll be hearing more about HMGUI in the future. For now, I'm happy to call the porting of another major system to C a success, and excited to move back into gameplay work with my new toys :D

Enjoy your Friday! :wave:




Link to original: viewtopic.php?f=30&t=6582
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford
