How the fuck do they code games so that everything moves smoothly and at a consistent rate regardless of the speed of the machine you run it on, and still have tons of shit interacting with each other somehow?
Maybe I need to find a good open-source candidate to have a look at.
Also, I wish I knew the correct terms to describe what I'm talking about.
Name:
Anonymous2010-05-16 18:53
Fixed tick engines are only optimal for fps=ticks. If the machine is faster, you're wasting power.
If the machine is slower and the programmer had a clue (50% chance, depends on whether the problem was evident enough), you waste power calculating ticks that you won't show, and what you do show won't be as smooth as possible: unless your fps divides the tick rate exactly, there will be timing jitter.
If the machine is slower and the programmer was retarded, the game speed will actually slow down. As incredible as it seems, some people still have the guts to publish big titles with this problem (see below).
As other people have explained, with variable ticks network replication (and demo playback, same thing) works in a client-server manner where an authoritative server sends absolute (not relative) updates. Usually these updates come at low frequencies (10-30/s is typical), the client buffers two of them and lerps (interpolates) between them to generate smooth results (this adds 1/freq latency). If it doesn't interpolate (by design or due to network issues), it'll try to extrapolate and then correct when a new update arrives. Some single-player games do this too to decrease CPU power, as lerping is cheaper than fully calculating everything (and you only need to lerp stuff that is in the vicinity of the player).
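A minimal sketch of the buffered-snapshot lerp described above, in Python (the snapshot format and the function name are invented for illustration):

```python
def interpolated_position(older, newer, render_time):
    """older/newer are (timestamp, position) snapshots from the server.
    render_time is deliberately kept one update interval in the past so it
    falls between two buffered snapshots -- that's the 1/freq latency."""
    t0, p0 = older
    t1, p1 = newer
    if t1 == t0:
        return p1
    t = (render_time - t0) / (t1 - t0)
    t = max(0.0, min(1.0, t))  # clamp; extrapolation would be handled elsewhere
    return p0 + (p1 - p0) * t

# Server updates at 20 Hz (every 50 ms): rendering at t=0.130 s lands
# between the snapshots taken at t=0.100 and t=0.150.
pos = interpolated_position((0.100, 10.0), (0.150, 20.0), 0.130)
```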
Of course sending full updates is more expensive than sending inputs. Fixed tick engines can get away with sending just input and can support a ton of players, since a human can't generate that much data, plus they're unaffected by the number of objects in the game world. However they usually don't support mid-game joining (that requires sending the whole current state). These characteristics make them ideal for real-time strategy games. Note that you can lerp in these engines too and present more frames than ticks (for animations and effects that don't affect the game). This requires programmers who aren't lazy bastards though.
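A toy illustration of the input-only model, assuming a fully deterministic step function (any divergence in floating point or iteration order desyncs the peers -- that's the hard part in practice):

```python
def step(state, inputs):
    """One deterministic tick: every peer must produce bit-identical
    results from the same state and the same ordered inputs."""
    x, = state
    for player, move in sorted(inputs.items()):  # fixed processing order
        x += move
    return (x,)

# Both "peers" replay the same input stream and stay in sync;
# only the tiny per-tick input dicts ever travel over the wire.
inputs_per_tick = [{1: 2, 2: -1}, {1: 0, 2: 3}]
a = (0,)
b = (0,)
for inp in inputs_per_tick:
    a = step(a, inp)
    b = step(b, inp)
```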
As for delta times getting too high, engines usually impose a maximum delta time to process. This is good practice anyway to avoid other kinds of problems.
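A common shape for this clamp, sketched in Python (the 100 ms cap is an arbitrary choice for illustration):

```python
MAX_DELTA = 0.1  # never simulate more than 100 ms in one step

def frame_delta(now, last_time):
    """Clamp elapsed time so a debugger pause, a dragged window or a long
    disk load doesn't make the simulation take one huge step. The cost is
    that the game slows down below 1/MAX_DELTA fps."""
    dt = now - last_time
    return min(dt, MAX_DELTA)
```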
Now, from memory:
* Doom (Doom2, etc): fixed ticrate 35Hz, no lerp, will run multiple ticks to maintain speed, demos and network only use user input
* Quake: variable ticks (1 tick per rendered frame), lerp for demos and network, sends full updates, client input has latency but camera angle is short-circuited, maximum tick is 100ms (therefore game will slow down below 10fps)
* QuakeWorld: variable ticks, lerp for demos and network, client input is processed on the client immediately and corrected if the server disagrees later
* Quake2: same as QuakeWorld, but server ticks run at 10Hz always, even on single player. Has noticeable latency when firing weapons for this reason, even when running at 1000fps. Timing changed from doubles to integers representing milliseconds
* Quake3 (and infinite derivatives including MoH, CoD and their sequels...): same as Quake2, no limits (server runs at 20Hz by default, 40Hz on Quake Live), maximum tick is 200ms (full speed from 5fps and up), some physics run at 16fps minimum (will run multiple times per tick if the game is running slower)
* Doom3: worst of all worlds, 62.5Hz fixed ticrate (16ms/frame) and sends full updates; the original plan was not to, but bugs caused the game to desync
* GoldSrc/Source engine (modern versions, based off Quake1): same as QuakeWorld mostly, but also client-side prediction for some actions such as firing weapons, and lag compensation (server moves stuff to the places it was when the client did the action). Client-side prediction is for cosmetic effect only, server always has the last word. The result is that weapon spray and blood effects will differ between clients and server.
* Unreal Engine (U1, UT, UT2K3, millions of derivatives): Variable ticrate, lerping and prediction; changed a billion times already. Uses float time.
* StarCraft: fixed ticrate, selectable, 23.8Hz at maximum game speed (42ms frames), 15Hz at normal speed. Might batch inputs together to lessen the network load (of course the mouse pointer and screen scrolling are fully client-side and can run faster than the game speed).
* Diablo 1 and 2: fixed ticrate, 25Hz. The network code in Diablo1 is a beast of its own; in Diablo2 it's just client/server. In Diablo2 the mouse pointer can be drawn as fast as possible between game updates (however it's not a hardware cursor) - same as SC.
* Serious Sam (& Second Encounter): variable ticrate, uses player input for network synchronization and demos, but does send the entire state when a new player connects. Clients are sent updates as fast as the server is rendering; the server framerate sets the ticrate (so a fast server can overwhelm a slow client, because clients have to process all of the updates even if they can't render them). Quite bizarre, but it runs really well in practice as long as the machines are within a reasonable range of performance. Allows hundreds of monsters on view while using modem bandwidth. Uses floats for time.
* Command & Conquer and sequels: fixed ticrate with no speed control (slow fps = slow game; in networked games one slow client will slow down everybody). Game speed just means "minimum delay between ticks" (just a dumb fps cap), so the fastest game speed meant "as fast as your machine can process it". At some point the speed setting was removed and the limit set at 30Hz. Even some of the recent 3D games lacked speed control, so if your graphics card couldn't render them at 30fps, they would slow down. Pretty terrible, I remember moving the camera away to barren zones to make the game progress faster on occasions. No wonder EA bought them, they're made for each other.
* A lot of console games: fixed ticrates at the speed of the corresponding TV system, usually 60 or 30Hz. Lazy PAL releases run 20% slower at 50 or 25Hz. This happens even today.
* Some console games (most Wii games, incl. Super Mario Galaxy and Zelda whatever): same as above, but to keep the same speed on PAL, run two ticks every 5 frames. This causes a noticeable "jump" 5 times a second, looks pretty bad if you're looking for it. Fortunately you can set it to output 60Hz for compatible monitors.
My experience is that a variable ticrate engine (with lerping for clients when networking) is well worth it for smoothness, and feels much better than a fixed ticrate engine even when the latter renders at its optimal rate. This might be because timing is pretty sucky on PCs though (console games that are hardlocked to the screen refresh rate are perfectly smooth too). For some games I mentioned whether they use floats or millisecond integers for timing. This matters because the difference is actually noticeable (with milliseconds you get about 6% timing jitter at 60fps, and 12% at 120fps). It's not terrible (most people won't notice) but I'd rather have better precision.
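Those jitter figures are quick to check: at 60fps a frame lasts 1000/60 ≈ 16.67ms, so integer milliseconds alternate between 16 and 17, a 1ms swing on a ~16.67ms frame:

```python
def ms_jitter(fps):
    """Relative timing jitter from storing frame durations as whole
    milliseconds: the 1 ms quantization step divided by the frame length."""
    frame_ms = 1000.0 / fps
    return 1.0 / frame_ms

jitter_60 = ms_jitter(60)    # 1 ms out of ~16.67 ms, about 6%
jitter_120 = ms_jitter(120)  # 1 ms out of ~8.33 ms, about 12%
```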
>>32
You also can (and in fact, should) interpolate if the system is slower.
T = game tick; R = rendered frame
T T T T T T
R  R  R  R
Here the ticrate is 30Hz and the rendering is ~20fps for example. If you pick up the nearest (well, previous) tick instead of interpolating, it won't look very good.
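The lerp alternative can be sketched in Python, assuming the simulation keeps the previous and current tick state around (the names here are invented):

```python
TICK_DT = 1.0 / 30.0  # fixed 30 Hz simulation, as in the example above

def render_position(prev_pos, curr_pos, time_since_last_tick):
    """Blend between the last two tick states instead of snapping to the
    previous one; alpha is how far we are into the current tick interval."""
    alpha = min(time_since_last_tick / TICK_DT, 1.0)
    return prev_pos + (curr_pos - prev_pos) * alpha
```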
However this adds latency, so please make it variable tick (parametric) instead. It's not that hard. For continuous stuff you just multiply velocity by time. For periodic stuff (weapon firing), you have an accumulator where you add the time every frame. If the accumulator's value is larger than the period, you subtract as many whole periods as possible and perform the action that many times, keeping the remaining time in the accumulator.
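The accumulator scheme for periodic stuff, sketched in Python (the class name and the weapon example are made up for illustration):

```python
class PeriodicAction:
    """Runs `action` once per `period` seconds of accumulated game time,
    regardless of how irregular the per-frame deltas are."""
    def __init__(self, period, action):
        self.period = period
        self.action = action
        self.acc = 0.0

    def update(self, dt):
        self.acc += dt
        # Subtract as many whole periods as fit, firing once per period;
        # the remainder stays in the accumulator for the next frame.
        while self.acc >= self.period:
            self.acc -= self.period
            self.action()

# e.g. a weapon firing 4 times per second, fed irregular frame times:
shots = []
gun = PeriodicAction(0.25, lambda: shots.append(1))
for dt in (0.1, 0.1, 0.1, 0.3):  # 0.6 s of game time in uneven frames
    gun.update(dt)
# 0.6 s at 4 shots/s -> 2 shots fired, ~0.1 s left in the accumulator
```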
There are some other fine details but on the whole I don't see how not being parametric would save a lot of work.