Multiplayer Platformer Log #3 – Authoritative Server

In the first developer log we talked about cheating and how the best way to prevent it is to make the server authoritative over the entire game state. That is what this log is about. With the simple custom physics engine from the second log in place, we are ready to start.

Server Loop

So how do we go about moving everything to the server? The client has a basic game loop where it moves the character around based on input. We can do the same on the server.

We know we need to send player inputs to the server, but what do we do with them? Apply them right away (moving that player further ahead in time)? Cache them and move everyone one step at a time? Should we send the game state (character positions) back to the client right away?

If we moved everyone at the same time, what do we do if there’s no input for a certain player? Do we use the last one? Do we skip the player?

I had a lot of these questions, so I did some research. Each resource filled in a piece of the puzzle for me, and I ended up with the following model:

  • Clients send their inputs at 30 Hz, which seems like a nice balance between precision and bandwidth to me.
  • Apply player inputs as they come, right away. This means players won’t be entirely in sync with each other, but the difference is negligible. This is mostly based on Networked Physics by Glenn Fiedler.
  • Both the client and the server assume each input lasted 1/30 of a second. That means the server isn’t trying to keep timers; it simply applies each input with a delta time of 1/30 s.
  • The server won’t respond right away; instead, it will run a game loop of its own and send the full state to everyone every 100 ms. Bandwidth is an expensive resource, and sending updates only ten times per second definitely helps there.

Basic Implementation

The client game loop looks similar to this (for simplicity, we are ignoring other players for now):

function onServerUpdate(x:Float, y:Float):Void {
	playerSprite.position.set(x, y);
}

function update():Void {
	accumulator += time.elapsedMS;//time since last update
	while (accumulator > 1000 / 30) {
		accumulator -= 1000 / 30;
		//pack input and send it to server
		websocket.sendString(getInputData());
	}
}

And the server:

static inline var deltaTime:Float = 1 / 30;

function onClientInput(client, input):Void {
	world.step(client.player, input, deltaTime);
}

//called every 100 ms
function update():Void {
	for (client in clients) {
		client.websocket.send(client.player.getData());
	}
}

Really simple, isn’t it? This is the result (arrows show the inputs being sent):

Issues

As you can see in the video, the movement is a bit choppy. That’s because we only move the player when we receive an update from the server, which happens just ten times per second. That effectively makes the game run at 10 FPS.

Additionally, this video shows perfect conditions: zero ping and no packet loss. If we simulate real conditions, it gets a lot worse.
Here’s a video with a simulated 200 ms ping (randomized ± 10%) and 15% packet loss:

Notice the delay between a button being pressed (the arrows get highlighted) and the character moving? And I don’t think I need to spell out what the packet loss causes.

Client-Side Prediction

The solution to these issues is relatively simple:

  • The client marks each input with an incrementing id.
  • The client applies the input right away, while also storing it. This means the client runs the same simulation the server does. A shared codebase comes in handy here.
  • The server includes the id of the last applied input in the update sent to clients.
  • On update, the client throws away all stored inputs with an id smaller than or equal to the last one applied on the server, then simply reapplies the remaining stored inputs to the state (position) received in the update. This is called server reconciliation.

Client code changes to this:

function onServerUpdate(data:PlayerData, lastInputId:Int):Void {
	//drop inputs with id <= lastInputId, which the server already applied
	oldInputs.deleteBelowId(lastInputId);
	//reset to authoritative state
	player.setTo(data);
	//reapply all inputs since then
	for (inputData in oldInputs) {
		world.step(player, inputData, World.deltaTime);
		//we moved the delta time to the World class
		//so that it's shared between the server and the client
	}
}

function update():Void {
	accumulator += time.elapsedMS;//time since last update
	while (accumulator > 1000 / 30) {
		accumulator -= 1000 / 30;
		//pack input and send it to server
		var inputData = getInputData();
		websocket.sendString(inputData);
		oldInputs.add(inputData);
		world.step(player, inputData, World.deltaTime);
		playerSprite.position.set(player.x, player.y);
	}
}

The server code is basically the same; it just includes the id of the last applied input in each update it sends.
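As a TypeScript-flavored sketch (the project itself uses Haxe, and names like `Client`, `Input`, and `world.step` here are assumptions mirroring the snippets above), the server change might look like this:

```typescript
// Sketch: track the last applied input id per client and include it
// in the 100 ms state broadcast so the client can reconcile.

const DELTA_TIME = 1 / 30; // fixed timestep shared with the client

interface Input { id: number; data: string; }
interface Player { x: number; y: number; }

interface Client {
  player: Player;
  lastInputId: number;     // id of the last input we applied
  send(msg: string): void; // stand-in for the websocket
}

// minimal stand-in physics so the sketch is self-contained
const world = {
  step(player: Player, input: Input, dt: number): void {
    if (input.data === "right") player.x += 100 * dt;
  },
};

function onClientInput(client: Client, input: Input): void {
  world.step(client.player, input, DELTA_TIME);
  client.lastInputId = input.id; // remember it for the next update
}

// called every 100 ms
function update(clients: Client[]): void {
  for (const client of clients) {
    // include the last applied input id so the client can reconcile
    client.send(JSON.stringify({
      x: client.player.x,
      y: client.player.y,
      lastInputId: client.lastInputId,
    }));
  }
}
```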

We have to keep in mind that for this to work, world.step must only advance the player we pass in. Right now we don’t have any other entities, but if we did, reapplying stored inputs mustn’t move them forward in time.
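One way to enforce that constraint is to keep per-entity stepping separate from the full world tick. This TypeScript sketch (names are assumptions, not the project’s actual API) illustrates the split:

```typescript
// Sketch: reconciliation replays one entity via stepEntity without
// touching anyone else; only the main loop advances the whole world.

interface Entity { x: number; y: number; vx: number; vy: number; }

const world = {
  entities: [] as Entity[],

  // advances exactly one entity, so it is safe to call during replay
  stepEntity(e: Entity, dt: number): void {
    e.x += e.vx * dt;
    e.y += e.vy * dt;
  },

  // advances everything; only the server's main loop should call this
  stepAll(dt: number): void {
    for (const e of this.entities) this.stepEntity(e, dt);
  },
};
```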

This would still run at only 30 FPS, since that’s how often we capture and apply inputs. What we can do is implement linear interpolation for the player sprite, making the movement as smooth as possible at the cost of 1/30 of a second of input delay.
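The interpolation itself is just a lerp between the previous and the latest simulated position, with the sprite drawn one fixed tick behind the simulation. A TypeScript sketch (the accumulator here is the same one the update loop already keeps):

```typescript
// Sketch of render-side interpolation: the simulation advances in
// fixed 1/30 s ticks; each render frame draws the sprite somewhere
// between the previous and the latest simulated position.

const TICK_MS = 1000 / 30;

interface Point { x: number; y: number; }

function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

// accumulatorMs is the time elapsed since the last fixed tick, so
// accumulatorMs / TICK_MS says how far we are between two ticks (0..1)
function interpolate(prev: Point, curr: Point, accumulatorMs: number): Point {
  const t = Math.min(accumulatorMs / TICK_MS, 1);
  return { x: lerp(prev.x, curr.x, t), y: lerp(prev.y, curr.y, t) };
}
```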

And this is the final result, with the same 200 ms ping and 15% packet loss applied:

Looks great, right?

Server Reconciliation

Something you might have read about server reconciliation is that it causes snapping: a sudden change of the character’s position. That can be jarring to players and is something we want to avoid.

The usual approach is, instead of changing the position right away, to calculate the difference and apply it over time. I didn’t implement it, because since we are using a constant delta time and we don’t lose inputs like we might with UDP, the prediction is essentially always perfect.
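For completeness, here is what that usual approach could look like as a TypeScript sketch (not used in this project, since prediction is exact here; the decay rate and all names are assumptions): keep the visual offset between the old predicted position and the reconciled one, and decay it over a few frames instead of snapping.

```typescript
// Sketch of reconciliation error smoothing: the sprite is drawn at
// player position + visualError, and the error bleeds off over time.

interface Point { x: number; y: number; }

let visualError: Point = { x: 0, y: 0 };

// call when reconciliation moves the player
function onReconciled(before: Point, after: Point): void {
  visualError.x += before.x - after.x;
  visualError.y += before.y - after.y;
}

// call every render frame; keeps only 10% of the error per second
function decayError(dtSeconds: number): void {
  const keep = Math.pow(0.1, dtSeconds);
  visualError.x *= keep;
  visualError.y *= keep;
}

function spritePosition(player: Point): Point {
  return { x: player.x + visualError.x, y: player.y + visualError.y };
}
```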

Further Issues

We were doing all this to prevent cheating, but right now a player only needs to send inputs at a faster (or slower) rate to gain an advantage. This is something we need to address.

Related to that, what if some client’s clock doesn’t run at the same rate as the server’s? We need to keep this in mind when coming up with a solution.

For today though, the result is pretty great.