Bomber in flight


Since I figure I've beaten the 'finalizing the Octember update' horse to death, instead I'll just throw in an interesting tidbit from my day. After mind-numbing hours of work, I needed to take a break, so (for reasons unknown) I settled on watching a rather interesting video (seriously, it's only an hour long, that's not too much of a break, right!?)

It really fascinated me. In a world where I had infinite time (such a tantalizing thought!), I could happily spend days chasing ideas like that one.


It actually makes me think back to when I was working hardcore on the LT universe simulation architecture. I had developed some rather interesting compression technology to enable simulation of massive regions at lower computational cost. That technology involved something not so dissimilar to quantum uncertainty: far-away pieces of the universe would be compressed into grouped objects, wherein individual units of 'detailed' data would be lost, but the coarse, 'low-resolution image,' so to speak, would remain (and details would be filled in with controlled randomness when it was once again time to expand the coarse data). For example, a far-off battle simulation might resolve to the outcome that x damage was dealt to some group of objects (such as a fleet), but the detail of exactly which object had received the damage would not be resolved in memory until it was actually 'observed.' In case you're curious, I ultimately dropped this solution in favor of a more practical LOD mechanism, though I still love the underlying theory of performing coarse simulation via hierarchical data compression.

But here's the thing: while I did develop a mechanism for compressing universes for LOD simulation, the dimensionality of the calculations remained the same. Unless one were to create exponentially larger groupings (hence, drop exponentially more data at long distances), the underlying time complexity of the algorithms would remain the same, up to a constant factor.
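For the curious, here's a minimal sketch of that lazy-resolution idea in Python. Everything in it (the FleetGroup class, the hull numbers, the damage-chunking rule) is invented for illustration; it shows the shape of the technique, not LT's actual code.

```python
import random

class FleetGroup:
    """Coarse stand-in for a far-away group of ships. While unobserved,
    only aggregate state is simulated; per-ship detail is discarded."""

    def __init__(self, ship_count, hull_per_ship):
        self.ship_count = ship_count
        self.hull_per_ship = hull_per_ship
        self.pending_damage = 0.0  # coarse damage not yet pinned to a ship

    def apply_coarse_damage(self, amount):
        # A distant battle resolves to "x damage was dealt to this group";
        # exactly which ship took the hit stays unresolved in memory.
        self.pending_damage += amount

    def expand(self, rng=None):
        # 'Observation': rebuild per-ship detail, attributing the stored
        # coarse damage with controlled randomness.
        rng = rng or random.Random()
        hulls = [float(self.hull_per_ship)] * self.ship_count
        remaining = self.pending_damage
        while remaining > 1e-9 and any(h > 0 for h in hulls):
            i = rng.randrange(self.ship_count)
            if hulls[i] <= 0:
                continue
            # Deal a random chunk of the outstanding damage to ship i.
            hit = min(hulls[i], remaining, rng.uniform(1.0, self.hull_per_ship))
            hulls[i] -= hit
            remaining -= hit
        self.pending_damage = 0.0
        return hulls  # detailed state, consistent with the coarse totals

group = FleetGroup(ship_count=5, hull_per_ship=100)
group.apply_coarse_damage(180)          # cheap, coarse update at a distance
print(group.expand(random.Random(42)))  # details filled in only when observed
```

Any expansion consistent with the coarse totals is equally 'correct' here, which is exactly what makes the compression lossy but cheap.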
Now, on the other hand, if one were to develop a scheme in which the actual dimensionality of the data were compressed, one could, in principle, drop the time complexity of the simulation algorithm -- going, for example, from O(n^3) to O(n^2). That's not a mere constant factor, but an 'asymptotic' speed-up. If one could compress a universe into a lower dimensionality, simulate it at that lower dimensionality, and then expand it again upon observation, one could create the illusion of a full simulation while running it asymptotically faster. Do you see why I'm so fascinated by our reality?
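To see why that distinction matters, here's a toy op-count comparison. The cost model is entirely made up for illustration: assume a full simulation step costs n^3 operations, grouping saves a fixed factor k, and a hypothetical dimensionality-compressed step costs n^2.

```python
# Toy cost model: illustrative numbers only, not a real simulator.
def full_sim_ops(n):
    return n ** 3            # baseline: O(n^3)

def grouped_sim_ops(n, k=8):
    return n ** 3 // k       # grouping: still O(n^3), just k times cheaper

def compressed_sim_ops(n):
    return n ** 2            # dimensional compression: O(n^2)

for n in (10, 100, 1000, 10_000):
    print(f"n={n:>6}: grouping speed-up = "
          f"{full_sim_ops(n) / grouped_sim_ops(n):.1f}x, "
          f"compression speed-up = "
          f"{full_sim_ops(n) / compressed_sim_ops(n):.0f}x")
```

The grouping column stays pinned at 8x no matter how big the universe gets, while the compression column grows with n -- that unbounded gap is the 'asymptotic' speed-up.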

Anyway. Perhaps that's a more interesting tidbit to read than "update's coming! I promise."


Truly, this place we call our universe is nothing short of an exquisite puzzle. I've absolutely no idea what's really going on here, but I feel quite thrilled just to be a part of it all.

