How the fuck do they code games so that everything moves smoothly and at a consistent rate regardless of the speed of the machine you run it on, and still have tons of shit interacting with each other somehow?
Maybe I need to find a good open-source candidate to have a look at.
Also, I wish I knew the correct terms to describe what I'm talking about.
Name:
Anonymous 2010-05-16 5:20
Logic simulation usually doesn't work with fixed ticks but with delta time: each update is scaled by the time elapsed since the previous frame.
Name:
Anonymous 2010-05-16 5:51
>>2
If the game is running on delta time, wouldn't the physics on a computer rendering one frame every two seconds differ from one rendering one frame per second? Say velocity is recalculated every cycle of the game loop based on acceleration: at fps = 2.0 it's recalculated twice as often as at fps = 1.0, resulting in a desync between the two machines even if the input is identical.
>>4
For, say, a multiplayer game, how is this dealt with? And if the computer is horrendously laggy (one frame every two minutes), you still won't be able to composite a video after recording a couple of days of frames, because every frame will be completely whack: (x += dx * timedelta) would make you fly through walls etc. with a timedelta of 120.
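To see the desync concretely, here's a minimal sketch (with made-up numbers) of naive per-frame Euler integration run at two different frame times. Both machines simulate the same ten seconds of identical input, yet end up in different places:

```python
# Hypothetical sketch: naive Euler integration of constant acceleration,
# stepped at two different frame times. Same simulated duration,
# different step counts -> different results.

def simulate(dt, total_time, a=10.0):
    """Advance position/velocity with per-frame Euler steps of size dt."""
    x, v = 0.0, 0.0
    t = 0.0
    while t < total_time:
        v += a * dt          # velocity updated once per frame
        x += v * dt          # position updated with the new velocity
        t += dt
    return x

fast = simulate(dt=0.5, total_time=10.0)   # machine rendering 2 fps
slow = simulate(dt=1.0, total_time=10.0)   # machine rendering 1 fps
# The closed-form answer is x = a*t^2/2 = 500, but the two machines
# disagree both with each other and with the exact value.
```

The error shrinks as the step size shrinks, which is why this mostly works at normal frame rates, but the two machines never agree exactly.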
Name:
Anonymous 2010-05-16 6:07
>>5
Calculate velocity on the server and send it to the clients. While no velocity is known (the server hasn't sent it yet), the client can try to predict it locally, but once the server's value arrives it uses that instead.
This is done in TeeWorlds, for example. In the freeze mod, when your tee is frozen and you press jump, the client will render the jump, but once the server reports that the tee never actually jumped, the jump is reverted.
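The pattern above can be sketched like this (class and method names are my own invention, not TeeWorlds code): the client renders its local guess until the authoritative value arrives, and the server always wins.

```python
# Hypothetical sketch: client-side prediction with server authority.
class PredictedValue:
    def __init__(self):
        self.server_value = None   # last value confirmed by the server
        self.predicted = None      # client's local guess

    def predict(self, guess):
        self.predicted = guess

    def on_server_update(self, value):
        self.server_value = value  # authoritative: overrides the guess

    def current(self):
        # Use the server's value when we have one, else the prediction.
        return self.server_value if self.server_value is not None else self.predicted

v = PredictedValue()
v.predict(3.0)            # client renders its jump immediately
v.on_server_update(0.0)   # server says the jump never happened
# v.current() is now 0.0: the predicted jump is reverted
```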
>>5
For small time scales nothing is expected to be in sync; over larger periods, however, synchrony is maintained. You're assuming that deltas are cumulative and sequential. This is a mistake.
Name:
Anonymous 2010-05-16 7:56
>>1
Either by using delta time, as others have said, or by limiting the update speed.
The second method is good for consoles, since the hardware is known and there's no real danger of everything going wrong; physics engines perform a hell of a lot better too.
Using delta time is great for varied hardware, though, even if physics engines don't like it. It can become a problem in some cases: if the delta gets too high, intersection testing will fail for fast-moving objects, and you'll need to instead calculate whether something would hit at a given time.
Name:
Anonymous 2010-05-16 12:06
>>9 if delta time ends up getting too high intersection testing will fail
Most good physics engines calculate intersections at increasing granularity, preferring vector intersection equations over if x1+vx1 >= x2 as collisions become more likely (e.g. after bounding-box tests, Pythagorean distance tests and simple directional vector tests have all returned true).
Even with high deltas, logic should say that if one object ended up on the other side of another after an update, it's pretty likely that they should have collided.
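The "ended up on the other side" logic can be sketched as a sign test (a hypothetical 1D example; real engines do the equivalent segment-vs-plane test in 3D): if a point was on one side of a wall before the step and on the other side after it, it must have crossed, even though neither frame showed an overlap.

```python
# Hypothetical sketch of the tunneling check described above: a sign
# change across the step means the motion segment crossed the wall.

def crossed_wall(x_before, x_after, wall_x):
    """True if the segment of motion passed through the wall plane."""
    return (x_before - wall_x) * (x_after - wall_x) < 0.0

# A fast object stepping from x=1 to x=9 past a wall at x=5:
# crossed_wall(1.0, 9.0, 5.0) is True even though neither endpoint
# overlaps the wall.
```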
Fixed tick engines are only optimal for fps=ticks. If the machine is faster, you're wasting power.
If the machine is slower and the programmer had a clue (50% chance, depending on whether the problem was evident enough), you're wasting power calculating ticks that you won't show, and what you do show won't be as smooth as possible, because unless your fps divides the tick rate exactly there will be timing jitter.
If the machine is slower and the programmer was clueless, the game speed will actually slow down. Incredible as it seems, some people still have the guts to publish big titles with this problem (see below).
As other people have explained, with variable ticks, network replication (and demo playback, which is the same thing) works in a client-server manner where an authoritative server sends absolute (not relative) updates. Usually these updates come at low frequencies (10-30/s is typical); the client buffers two of them and lerps (interpolates) between them to generate smooth results (this adds 1/freq of latency). If it doesn't interpolate (by design or due to network issues), it'll try to extrapolate and then correct when a new update arrives. Some single-player games do this too to reduce CPU load, as lerping is cheaper than fully calculating everything (and you only need to lerp stuff in the vicinity of the player).
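The snapshot lerping just described can be sketched like this (timestamps and the 20Hz rate are made-up illustration values): the client renders at a time between the two buffered server snapshots and blends entity positions accordingly.

```python
# Hypothetical sketch of snapshot interpolation: buffer two server
# snapshots and lerp entity positions between them. The render time
# sits 1/update_rate behind the latest server state.

def lerp(a, b, t):
    return a + (b - a) * t

def interpolated_position(snap_old, snap_new, render_time):
    """snap_* are (timestamp, position) pairs; render_time is between them."""
    t0, p0 = snap_old
    t1, p1 = snap_new
    if t1 == t0:
        return p1
    alpha = (render_time - t0) / (t1 - t0)
    alpha = max(0.0, min(1.0, alpha))   # clamp: extrapolation handled elsewhere
    return lerp(p0, p1, alpha)

# Server sends updates at 20Hz (every 50 ms); rendering halfway between
# two snapshots blends the two reported positions evenly:
# interpolated_position((0.0, 10.0), (0.05, 12.0), 0.025) -> 11.0
```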
Of course, sending full updates is more expensive than sending inputs. Fixed tick engines can get away with sending just input and can support a ton of players, since a human can't generate that much data, plus they're unaffected by the number of objects in the game world. However, they usually don't support mid-game joining (that requires sending the whole current state). These characteristics make them ideal for real-time strategy games. Note that you can lerp in these engines too and present more frames than ticks (for animations and effects that don't affect the game). This requires programmers who aren't lazy bastards though.
As for delta times getting too high, engines usually impose a maximum delta time to process. This is good practice anyway to avoid other kind of problems.
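The maximum-delta safeguard is a one-liner; the 100ms cap below is an assumed example value, not any particular engine's setting:

```python
# Hypothetical sketch of clamping the frame delta, as described above,
# so a multi-second hitch can't hurl objects through walls.

MAX_DT = 0.1   # assumed cap: never simulate more than 100 ms per step

def clamped_dt(raw_dt):
    return min(raw_dt, MAX_DT)

# A 120-second stall still advances the simulation by at most 0.1 s:
# clamped_dt(120.0) -> 0.1, while normal frames pass through unchanged:
# clamped_dt(0.016) -> 0.016
```

The side effect is the one mentioned for Quake: below 1/MAX_DT fps, the game visibly slows down instead of exploding.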
Now, from memory:
* Doom (Doom2, etc): fixed ticrate 35Hz, no lerp, will run multiple ticks to maintain speed, demos and network only use user input
* Quake: variable ticks (1 tick per rendered frame), lerp for demos and network, sends full updates, client input has latency but camera angle is short-circuited, maximum tick is 100ms (therefore game will slow down below 10fps)
* QuakeWorld: variable ticks, lerp for demos and network, client input is processed on the client immediately and corrected if the server disagrees later
* Quake2: same as quakeworld, but server ticks run at 10Hz always, even on single player. Has noticeable latency when firing weapons for this reason, even when running at 1000fps. Timing changed from doubles to integers representing milliseconds
* Quake3 (and infinite derivatives including MoH, CoD and their sequels...): same as Quake2, no limits (server runs at 20Hz by default, 40Hz in Quake Live), maximum tick is 200ms (full speed from 5fps and up), some physics run at 16fps minimum (will run multiple times per tick if the game is running slower)
* Doom3: worst of all worlds, 62.5Hz fixed ticrate (16ms/frame) and sends full updates; the original plan was not to, but bugs caused the game to desync
* GoldSrc/Source engine (modern versions, based off Quake1): same as QuakeWorld mostly, but also client-side prediction for some actions such as firing weapons, and lag compensation (server moves stuff to the places it was when the client did the action). Client-side prediction is for cosmetic effect only, server always has the last word. The result is that weapon spray and blood effects will differ between clients and server.
* Unreal Engine (U1, UT, UT2K3, millions of derivatives): Variable ticrate, lerping and prediction; changed a billion times already. Uses float time.
* StarCraft: fixed ticrate, selectable, 23.8Hz at maximum game speed (42ms frames), 15fps at normal speed. Might batch inputs together to lessen the network load (of course mouse pointer and screen scroll are fully client-side and can run faster than game speed).
* Diablo 1 and 2: fixed ticrate, 25Hz. The network code in Diablo1 is a beast of its own; in Diablo2 it's just client/server. In Diablo2 the mouse pointer can be drawn as fast as possible between game updates (however, it's not a hardware cursor) - same as SC.
* Serious Sam (& Second Encounter): variable ticrate, uses player input for network synchronization and demos. Does send the entire state when a new player connects. Clients are sent updates as fast as the server is rendering; the server framerate sets the ticrate (so a fast server can overwhelm a slow client, because clients have to process all of the updates even if they can't render them). Quite bizarre, but it runs really well in practice as long as the machines are within a reasonable range of performance. Allows hundreds of monsters in view while using modem bandwidth. Uses floats for time.
* Command & Conquer and sequels: fixed ticrate with no speed control (slow fps = slow game; in networked games one slow client will slow down everybody). Game speed just means "minimum delay between ticks" (a dumb fps cap), so the fastest game speed meant "as fast as your machine can process it". At some point the speed setting was removed and the limit set at 30Hz. Even some of the recent 3D games lacked speed control, so if your graphics card couldn't render them at 30fps, they would slow down. Pretty terrible; I remember moving the camera away to barren zones to make the game progress faster on occasions. No wonder EA bought them, they're made for each other.
* A lot of console games: fixed ticrates at the speed of the corresponding TV system, usually 60 or 30Hz. Lazy PAL releases run 20% slower at 50 or 25Hz. This happens even today.
* Some console games (most Wii games, incl. Super Mario Galaxy and Zelda whatever): same as above, but to keep the same speed on PAL, run two ticks every 5 frames. This causes a noticeable "jump" 5 times a second, looks pretty bad if you're looking for it. Fortunately you can set it to output 60Hz for compatible monitors.
My experience is that a variable-ticrate engine (with lerping for clients when networking) is well worth it for smoothness, and feels much better than a fixed-ticrate engine even when the latter is rendering at its optimal rate. This might be because timing is pretty sucky on PCs, though (console games that are hardlocked to the screen refresh rate are perfectly smooth too). For some of the games above I noted whether they use floats or millisecond integers for timing. This is important because the difference is actually noticeable: with millisecond integers you get about 6% timing jitter at 60fps and 12% at 120fps (up to 1ms of rounding error out of a ~16.7ms or ~8.3ms frame). It's not terrible (most people won't notice) but I'd rather have better precision.
I still need to think about it a lot to wrap my brain around how to write a working example of a game (or just a small system) that does this kind of thing properly.
Any recommended reading would be appreciated.
Name:
Anonymous 2010-05-16 20:41
>>13! I LOVE YOU! I LOVE YOUR POST! I BOOKMARKED IT FOR FUTURE REFERENCE! KEEP POSTING!
>>23
Yes, that's true. However, most of these include physics and, more importantly, serve as examples of complete engines that handle the problem in the OP. It's funny that the physics engines you list are used by engines I deliberately omitted.
* A lot of console games: fixed ticrates at the speed of the corresponding TV system, usually 60 or 30Hz. Lazy PAL releases run 20% slower at 50 or 25Hz. This happens even today.
Yeah, the Mario Kart DD records page has different charts for 50Hz vs 60Hz.
>>30
Nothing to do with the physics engines themselves, but the engines using them that came to mind would make truly poor examples; e.g. the Blender game engine (with Bullet) wouldn't answer the question at all.
Name:
Anonymous 2010-05-17 10:56
The trick is to use fixed ticks and, if the system is fast enough to render more frames, interpolate between the ticks.
>>32
You also can (and in fact, should) interpolate if the system is slower.
T = game tick; R = rendered frame
T T T T T T
R R R R
Here the ticrate is 30Hz and the rendering is ~20fps, for example. If you pick the nearest (well, the previous) tick instead of interpolating, it won't look very good.
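A minimal sketch of such a fixed-tick loop with render interpolation (the tick rate, timestamps and state representation are made up for illustration; real loops use the wall clock rather than a timestamp list):

```python
# Hypothetical sketch: fixed-tick simulation, interpolated rendering.
# The logic advances in TICK_DT steps; each rendered frame blends the
# previous and current tick states by how far we are into the next tick.

TICK_DT = 0.1   # assumed 10 Hz logic rate, for easy numbers

def run(frame_times, tick):
    """frame_times: timestamps of rendered frames, in order.
    tick(state) -> next state. Returns one blended state per frame."""
    state_prev = state_curr = 0.0
    sim_time = 0.0            # time of state_curr
    frames = []
    for now in frame_times:
        while sim_time + TICK_DT <= now:      # catch the logic up
            state_prev = state_curr
            state_curr = tick(state_curr)
            sim_time += TICK_DT
        alpha = (now - sim_time) / TICK_DT    # fraction into the next tick
        frames.append(state_prev + (state_curr - state_prev) * alpha)
    return frames
```

Note that the rendered state lags one tick behind the simulation, which is exactly the latency the next paragraph complains about.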
However this adds latency, so please make it variable-tick (parametric) instead. It's not that hard. For continuous stuff you just multiply velocity by time. For periodic stuff (weapon firing), you keep an accumulator to which you add the frame time every frame. When the accumulator exceeds the period, you subtract as many whole periods as possible and perform the action that many times, keeping the remainder in the accumulator.
There are some other fine details but on the whole I don't see how not being parametric would save a lot of work.
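The accumulator scheme for periodic actions can be sketched as follows (the fire period and frame times are made-up example values):

```python
# Hypothetical sketch of the accumulator for periodic actions: add the
# frame time each frame, fire once per whole period elapsed, and carry
# the remainder over to the next frame.

def fire_events(period, deltas):
    """Return how many times the action fires on each frame."""
    acc = 0.0
    shots_per_frame = []
    for dt in deltas:
        acc += dt
        shots = int(acc // period)   # whole periods elapsed this frame
        acc -= shots * period        # keep the remainder
        shots_per_frame.append(shots)
    return shots_per_frame

# With a 0.25 s fire period and frames of 0.1, 0.1, 0.1 and 0.6 s:
# fire_events(0.25, [0.1, 0.1, 0.1, 0.6]) -> [0, 0, 1, 2]
```

A long frame correctly produces multiple shots instead of dropping them, which is what keeps fire rate independent of fps.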
>>35
That's the cheapo way: fixed ticrate, render with time interpolation. It's reasonable; the problem is that it adds latency, and the only advantage is that the logic can be a bit simpler (however you lose other desirable features, such as the ability to do slow motion properly).
He says the problem with the right solution ("Game Speed dependent on Variable FPS") is numerical instability. Well, so suck it up and design it correctly so huge errors don't pile up over a reasonable range of speeds, and limit deltas to that range. That's what most properly designed games do. For example, most recent Valve Source games are limited to 300fps by default, most id Tech 3 games ship limited to 85fps, and most Unreal Engine 3 games are limited to 60 (that's a bit too low, but it can be increased and increasing it doesn't cause any problems). Practical experimentation suggests that mathematical precision isn't an issue at all in practice.
Of course, badly chosen units + shitty low-precision timing functions that regularly return 0 (like the GetTickCount() he suggests) = trouble. Big news.
Name:
Anonymous 2010-05-18 20:38
>>38
Ah, so "Game Speed dependent on Variable FPS" demonstrates the right solution? I've read about the problems with GetTickCount() and some other methods of getting the time (i.e. GetSystemTimeAsFileTime and QueryPerformanceCounter on Windows), but I don't recall the article saying which method is actually good to use. Anything you can tell me about that? If not, I'll see what Ogre3D or one of the other engines does about timing.