Thursday, January 10, 2013
Summary
Another huge day for LT!!! On top of implementing the final optimizations on my collision algorithm (which turned out quite well, actually), I made a really big change to the way the game engine works today, and the result is a massive improvement in smoothness. So, let's talk about time steps.
In a game, you have some world in which things are taking place, and they're taking place at a certain rate. To make things happen, you "simulate" the world: each frame, a bullet moves a little further, your ship turns a few more degrees, etc. An interesting, seemingly-innocent question arises: how fast do you simulate things? In other words, in your physics equations and such, what is your delta-t? The obvious answer: well, if you simulate once per frame, it's the amount of time between frames. Duh? Sure. So let's do that: let's simulate the game each frame, using the time between the previous frame and this one as the delta-t for the calculations. Great, right? No. Why?
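To make that concrete, here's a minimal sketch of the naive loop. Note that getTime, updateWorld, and renderWorld are hypothetical stand-ins for illustration, not LT's actual code:

```cpp
#include <chrono>

// Hypothetical stand-ins for the engine's clock and update/render calls.
double getTime() {
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}
void updateWorld(double dt) { /* advance physics, AI, etc. by dt seconds */ }
void renderWorld()          { /* draw the current world state */ }

int main() {
    bool running = true;
    double previous = getTime();
    while (running) {
        double now = getTime();
        double dt = now - previous;  // varies wildly from frame to frame
        previous = now;
        updateWorld(dt);  // physics sees a different dt every frame
        renderWorld();
    }
}
```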
Jerky chaos ensues. Why? Because you're using a different simulation time every frame. Frametime is highly variable, even when everyone is standing absolutely still...you just never know how long that darn GPU is going to twiddle its thumbs at the end of each frame. But if we're timing it, shouldn't the simulation time line up with real time, so that everything turns out fine? Yeah, sure, sounds great in theory. Doesn't work in practice. They don't align; the result is jerky at best and unplayable at worst.
Ok, fine, next try: let's use a constant amount of time each frame. Great, now everything is consistent! Yeah, not really. Because your friend over there, with a GPU twice the size of yours and a framerate 4 times higher, is playing the game literally 4 times faster than you. His ship is moving 4 times faster, his bank account is filling up 4 times faster, and the sound of him taunting your slow computer is probably at least 4 times more annoying than it was with a variable time step. Instead of lighting your friend's computer on fire, let's think up a more clever solution.
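In code, the difference is just holding dt constant (reusing the hypothetical helpers from the sketch above):

```cpp
// Fixed timestep, one step per rendered frame: consistent math, but game
// speed is now tied to framerate.
const double dt = 1.0 / 60.0;  // constant step, regardless of real frame time
while (running) {
    updateWorld(dt);  // one simulation step per rendered frame...
    renderWorld();    // ...so a 240 FPS machine runs the game 4x faster
}
```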
What if we smoothed out the frametime a bit...say, took some kind of exponentially-decaying average over a number of frames, and used that as our (variable) timestep? That way, we would cure the jerkiness, but also prevent our obnoxious friends from playing the game faster than us. This is a fairly good idea, and it's what LT was using until today. The result is certainly playable, and mostly pretty nice. It does, however, result in noticeable time distortions when frametime starts to change quickly. The result is not jerky, because we're smoothing...but you may notice, for example, that time seems to slow down ever so slightly when you get really close to the surface of an asteroid (because the physics starts doing a lot of work). It makes for...a weird, somewhat disconcerting experience. Most of the time it's fine, but if you're in a dogfight where bullets are flying everywhere, it can become a bit jarring to keep feeling the tug of time dilation.
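Here's a sketch of that smoothing, again with the hypothetical helpers from above; the smoothing factor is an illustrative guess, not the value LT actually used:

```cpp
// Smoothed variable timestep: exponentially-decaying average of frametime.
double smoothedDt = 1.0 / 60.0;  // start from a reasonable estimate
double previous = getTime();
while (running) {
    double now = getTime();
    double rawDt = now - previous;
    previous = now;
    smoothedDt += 0.05 * (rawDt - smoothedDt);  // ease toward the real frametime
    updateWorld(smoothedDt);  // smooth, but drifts from real time under load
    renderWorld();
}
```

The drift is exactly the time dilation described above: while the average catches up to a sudden change in frametime, simulated time runs slower or faster than the wall clock.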
What do we do? The golden answer: decouple simulation and rendering. Let's say that we'll always simulate at, for example, 60 FPS, but we'll render at whatever frequency the GPU allows. This is a sure-fire win, right? Right. But the tricky part is immediately clear: what if we can render at 120 FPS, but still only want to simulate at 60 FPS (or, if you're uncomfortable going above monitor refresh rates, render at 60 and simulate at 30)? Then we'll be rendering twice per simulation step, but the simulation won't have moved: we'll render the exact same thing! Hence, we'll waste our energy. What to do? Well, we could just lock the rendering at 60 FPS (or 30). A lot of games do that. But there's a better solution. What if we take the previous world state and the current world state, and render a blended version of the two, depending on when the render step occurs? Then we could get a smooth animation of the world even if we're rendering at a frequency much higher than we're simulating! This is really an incredibly beautiful solution to the problem. And I have good news: it works magnificently well.
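Here's a sketch of the whole pattern, in the spirit of the classic fixed-timestep-with-accumulator loop; WorldState, simulate, and interpolate are hypothetical names for illustration, reusing getTime from the first sketch:

```cpp
// Decoupled simulation and rendering: fixed 60 Hz simulation, render as fast
// as the GPU allows, blending the previous and current states by alpha.
struct WorldState { /* positions, orientations, velocities, ... */ };
WorldState simulate(const WorldState& s, double dt);   // fixed-step physics
WorldState interpolate(const WorldState& a,
                       const WorldState& b,
                       double alpha);                  // blend a -> b by alpha
void renderWorld(const WorldState& s);

const double simDt = 1.0 / 60.0;  // simulate at a fixed 60 Hz
double accumulator = 0.0;
double previous = getTime();
WorldState prevState, currState;

while (running) {
    double now = getTime();
    accumulator += now - previous;  // bank the real time that has passed
    previous = now;

    while (accumulator >= simDt) {  // take as many fixed steps as needed
        prevState = currState;      // remember the last simulated state
        currState = simulate(currState, simDt);
        accumulator -= simDt;
    }

    // Fraction of a simulation step we're "into", in [0, 1).
    double alpha = accumulator / simDt;
    renderWorld(interpolate(prevState, currState, alpha));
}
```

The key is alpha: it measures how far between the last two simulated states this render falls, so the blended picture stays in lockstep with real time no matter how the two rates compare.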
So, sorry for that massive summary, but now you understand the rather tricky little subject of time steps! I spent most of today rewriting everything that needed to change in order to allow the world simulation to keep track of a previous state, and to allow the renderer to interpolate between states. It wasn't very hard, thanks to all the code cleaning I've been doing recently, but it took a lot of time to test all the various aspects of it.
The result is that Limit Theory now plays really smoothly (and I'm still on my laptop...)!! Man. It's really great. I can't stress enough how happy I am with the feel of the game now...it's just so responsive and smooth (thanks to the interpolation), but also stable - no more time dilation when the physics engine starts grunting! Furthermore, I found that I don't even need to simulate the world at 60 Hz. For now, I've stepped down to 30, and it seems just fine! This means that LT will run even faster than before. With this improvement, the bottleneck is almost completely the GPU, which is awesome, and means that I'll have plenty of room to grow when I start consuming massive amounts of CPU simulating an infinite universe!!!
Alrighty. Sorry for going a bit crazy with that log!
Hour Tally
Coding: 6.59
Composing: 0.01
Internet: 3.80
Testing: 3.10
Total Logged Time: 13.49