On cheating
Cheating is a problem as old as time.
In theory, prevention is simple to state, if only vaguely: "Harden" the protocol. Interactions need to be clearly defined. Players are told only what they need to know, and they are restricted to the legal actions defined by the game rules.
This is relatively easy for simple, well-defined board games - say, chess. The server can easily reject illegal moves; both players know the entire board state at all times. Yet the problem of cheating is not at all solved: Chess engines have outperformed even the very best human players since Kasparov lost to Deep Blue. Today any ordinary computer can obliterate a chess grandmaster. This brings us to our first fundamental problem:
Statistics and perfect bots
Trying to detect cheaters now becomes a probabilistic problem: Which plays seem "human-like", which ones "machine-like"?[^1] There is no definitive answer to this. Any heuristic can be countered just as easily by suitable randomization based on a statistical evaluation of the moves real players make.
We run into our first tradeoff: Any attempt at detecting "suspiciously good" moves runs the risk of falsely flagging legitimately excellent players.
More generally, some aspects of a game can simply be solved better by a computer, a "bot", such that it is essentially indistinguishable from a good human player at the protocol level.
In a first-person shooter game, this may be a so-called "aimbot" for example: A bot which takes on the menial task of perfectly aiming at your enemies for you.
"Now wait!", you might say, having a particular implementation in mind. "Can't we detect sudden camera rotations?"
Okay, I say, and make the aimbot interpolate its rotations.
Still, you're not convinced. You can do statistics on how fast players move their mice, how accurate they are, and so on, to tell the two apart.
But whatever statistics you can do, I can do too, and I can then randomize my aimbot to mix computed perfection with randomness resembling the actions of good players - without being a good player myself.
What ensues is an arms race, a cat-and-mouse game: My cheat and your anticheat co-evolve, wasting both our time (and in the process, likely accidentally upsetting honest players). Yay.
Feasibility
Many computations are done on the client - most importantly, rendering. And here it is trivial for the client to cheat: Smoke can be made transparent on the client. Enemies can be highlighted, displayed on a minimap. The entire world can be lit up even when it is supposed to be pitch black midnight.
What can the server do? Very little. There are various reasons why the server relies on client cooperation for rendering, chief among them performance needs, development effort and network latency.
The server could try to do some occlusion checks, but not only is this a bunch of work to implement, it also gets expensive quickly, and many things simply are supposed to be slightly visible - like an enemy in the distance. Short of sending a prerendered image - which is infeasible due to latency - there is not much you can do against clients which "enhance" the clientside rendering however they like - by ignoring lighting (known as "fullbright"), by highlighting enemies, by changing draw order and depth - thus improving visibility of objects that ought to be invisible or hardly visible to them.
In general, any attempt at putting the client on a "need-to-know" basis will visibly worsen the client experience: Responsiveness is reduced, since you only send something once the client convinces you that it is now able to see it, and data may have to be re-sent whenever it becomes visible again. And there are still massive opportunities for cheating: For example, a client could simply remember and render parts of the map it once learned about for as long as it pleases, even if they are not visible anymore, under the assumption that they likely haven't changed much.
We are presented with a second tradeoff: Strict anticheat comes at the cost of a good client experience. This is similar to the first tradeoff - but instead of applying mostly to the top players, it applies to the average player: To harden our game, we create a uniformly worse experience for everyone, not just for the few unlucky ones who happen to be falsely flagged as cheaters by an overzealous, inaccurate anticheat.
Most importantly, anticheat is typically a lot of work for the game (or platform) developers to implement properly, especially if you have a large and complex legacy codebase which was not designed with anticheat in mind at all, need to keep backwards compatibility, and additionally need to support various different configurations for a plethora of features which have been added since.
This presents a third tradeoff between the quality of anticheat and the effort required to implement it. (Arguably this is the "economic tradeoff" underlying everything, but with anticheat it is especially pronounced.) This brings us straight to our next point:
Economics
The "cheating economy" is heavily skewed. It is much easier to poke holes in something - especially something which was not designed to be "robust" to begin with - than it is to ensure robustness.
Anticheat effectively becomes yet another major crosscutting concern - not to be confused with security - which, if done properly, would require significant development resources.
Security
Robustness against cheating and security tend to correlate, but are still two very different things. This is the difference between a "hacker" and a "cheater".
Cheating concerns player behavior that violates the game rules. For all the reasons mentioned above, no sane person would define the rules to be precisely what is effectively implemented. That is the case only for very few so-called "anarchy" game modes. Anarchy can be a fun game mode in its own right, but it usually turns out to be entirely different from the intended game mode. Cheating is the covert violation of the expectation of your cooperation. Say, getting a peek into someone else's cards at the poker table. Cheating is expected. It happens a lot.
Hacking is more severe, more serious. Hacking involves exploiting vulnerabilities in what is implemented. Hacking is unexpected. Nobody knew that some peculiar stack overflow would let you write that particular bit of data there such that then this mysterious piece of code does something funny, at the end of which you have control over something which nobody expected you to have control over ever until you did.
Hacking gives you control over something the other side assumed to be (reasonably) safe from you. Hacking typically requires a fair amount of technical skill. It happens extremely rarely. Games usually aren't worth it.
So no, having changed a single line in an open-source codebase and then recompiled your client doesn't make you a big-brained "hacker". It just makes you one of many cheaters. You have not developed a "hacked client". You have made a trivial cheat client.
Summed up in an exaggerated catchphrase: There is no hacking. Almost everything you're seeing in games is cheating: People modifying clients they ultimately control, which the game developers know they control. Not hacking. It is extremely rare that someone actually manages to take control of a game server.
Mitigations
Manual review
We must accept that anticheat will never be perfect. We will want moderators to be able to revisit anticheat actions. This requires the ability to record a player from the server's perspective. Ideally, such recordings should be active all the time, say, with the last 10 seconds remaining in memory, such that a player suspecting cheating - or automated anticheat - can simply retroactively ask for a recording to be saved to be revisited by a moderator later on.
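A rolling recording buffer like this can be sketched in a few lines. The 10-second window and all names are illustrative assumptions, and "events" stands in for whatever the server records per tick:

```python
import time
from collections import deque

RETAIN_SECONDS = 10

class RollingRecorder:
    """Keep roughly the last RETAIN_SECONDS of server-side events in memory;
    persist them only on demand (player report or automated flag)."""
    def __init__(self, now=time.monotonic):
        self.now = now  # injectable clock, e.g. for testing
        self.events = deque()  # (timestamp, event) pairs

    def record(self, event):
        t = self.now()
        self.events.append((t, event))
        while self.events and self.events[0][0] < t - RETAIN_SECONDS:
            self.events.popleft()  # drop anything older than the window

    def snapshot(self):
        """Freeze the current buffer for later moderator review."""
        return list(self.events)
```

Since nothing is written to disk unless someone asks, the steady-state cost is a small, bounded amount of memory per player.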
Certificates
If you're designing features which involve network communication, require certificates: The client should always be able to prove to the server that it is allowed to perform a certain action. Certificates may be omitted when it is easy enough for the server to enumerate valid actions - say, to return to our example, valid moves in chess - and to check that the client's action is one of them.
Implementing certificate-based validation properly is rather tedious in practice, but the basic idea is simple: Whatever computation the client did, you must be able to verify on the server.
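The "enumerate valid actions" case can be sketched in a few lines. This is a deliberately toy example - a lone knight on an empty board standing in for full chess rules, with all names hypothetical - but the shape is the same for any game with a small, enumerable move set:

```python
def knight_moves(square):
    """All board squares a knight on `square` (file, rank) can reach."""
    f, r = square
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {(f + df, r + dr) for df, dr in deltas
            if 0 <= f + df < 8 and 0 <= r + dr < 8}

def server_accepts(position, claimed_move):
    """Reject any move the rules do not allow; the client is not trusted."""
    return claimed_move in knight_moves(position)
```

No certificate is needed here because the server can cheaply compute the full set of legal actions itself and just test membership.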
Let us give a simple example. Most games let you interact with in-world objects by pointing at them. [^simplicity] Let us assume that the pointability computations are reasonably simple, say, using axis-aligned bounding boxes (AABBs). Now suppose we want to validate an interaction.
First, let us consider what we would have to do if the client only reports the interaction: We have to check whether the interaction is at all possible. This means we basically have to ask: "Among all possible directions the player could point, is there one where no AABB obstructs the object's AABB?" This question can be answered, but it isn't cheap: You essentially need to project a bunch of AABBs, or shoot many rays, or something similar. But it's doable; this is just a linear visibility problem. Now it gets even trickier: Not only do you need to consider all possible directions, you also need to consider all possible positions. And this is where it gets funny. Ponder it a bit.
But there is a much simpler alternative approach if we demand a little more data from the client: Why should we have to tediously check whether such a position and direction - such a ray - exists? We simply reverse the burden of proof: The client proves to us that they exist by sending us the ray. We first validate the position. We can then do a single, relatively cheap raycast on the server. If we reach the object's AABB, and the intersection point is within range, we accept the interaction; otherwise we reject it. This is simple and effective. It is hopefully easy to see how this could be applied to many areas. Say, for player movement, the certificate would consist of the "path" the player took: Which inputs were made at which point in time?
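A minimal sketch of this server-side check, using the standard slab method for ray-AABB intersection (all names are illustrative; the direction is assumed normalized so that the returned `t` is a distance, and validating the reported player position itself is assumed to happen elsewhere):

```python
import math

def ray_aabb_t(origin, direction, box_min, box_max):
    """Slab test: entry distance t of the ray into the AABB, or None on a miss."""
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if not lo <= o <= hi:
                return None  # ray parallel to this slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:
                return None
    return t_near

def validate_interaction(player_pos, ray_dir, target_box, obstacles, max_range):
    """Check the client's 'certificate' (the ray it claims to have used):
    the ray must actually hit the target within range, with no obstacle
    AABB blocking it first."""
    t_hit = ray_aabb_t(player_pos, ray_dir, *target_box)
    if t_hit is None or t_hit > max_range:
        return False
    for box in obstacles:
        t_obs = ray_aabb_t(player_pos, ray_dir, *box)
        if t_obs is not None and t_obs < t_hit:
            return False  # something is in the way
    return True
```

The server's cost per interaction is one raycast against the relevant AABBs, rather than a search over all positions and directions.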
There are some aspects complicating this though, namely problems of synchronicity - race conditions. The client may have a slightly outdated game state. You may want to allow that. So you would need to meticulously timestamp everything, and keep a short history of the game state. And then of course also comes the question of how to resolve "merge conflicts" when trying to merge the timelines of multiple different clients. The upshot is that server steps are pretty short, so this need not be a big problem.
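Keeping such a short history can be as simple as a ring buffer of recent ticks; a stale-but-recent client action is then validated against the state it was computed from. A minimal sketch, with the window size and all names being illustrative assumptions:

```python
from collections import deque

class StateHistory:
    """Ring buffer of past server states, so slightly outdated client
    actions can be validated against the state the client actually saw."""
    def __init__(self, maxlen=20):  # e.g. ~1 second at 20 ticks/s
        self.snapshots = deque(maxlen=maxlen)  # (tick, state) pairs

    def record(self, tick, state):
        self.snapshots.append((tick, state))

    def state_at(self, tick):
        """The state at `tick`, or None if it has aged out of the window."""
        for t, state in self.snapshots:
            if t == tick:
                return state
        return None
```

Actions referencing ticks older than the window are simply rejected, which bounds both memory use and how far back a client can rewrite history.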
Unskewing the economy
It may be worth trying to reverse the skewed economics. You can try to poke holes in cheat clients: Try to find telltale signs that something is a cheat client. I'm usually a big advocate of open source, yet in this case I would suggest not open-sourcing your findings, for economic reasons - suspicious behavior tends to be easy to patch once the developer knows where it is. Share your findings among trusted fellow hosts. Leave the cheat client developers guessing.
There are also some aspects of a game where it may be relatively easy to validate client actions, or to extend the protocol to deliver proper certificates. Pointing at objects as discussed above is one example; another example would be limiting player speed. These mitigations can be implemented with little effort and limit cheats a good bit; they tend to be worth it.
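Speed limiting is a good example of a cheap check on a movement "path" certificate: given the client's timestamped positions, no segment may imply moving faster than the game allows. A minimal sketch (the speed constant and names are illustrative; a real check would add a small tolerance for latency and float error):

```python
MAX_SPEED = 6.0  # units per second; assumed game constant

def path_is_plausible(samples, max_speed=MAX_SPEED):
    """samples: list of (timestamp, (x, y, z)) reported by the client.
    Reject if any segment implies exceeding max_speed."""
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            return False  # out-of-order or duplicate timestamps
        dist = sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5
        if dist > max_speed * dt:
            return False
    return True
```

This is linear in the number of samples and catches teleport- and speed-style cheats without any heuristics.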
Still, in the bigger picture, everyone is just wasting time on a semi-intellectual cock fight.
Conclusion
Anticheat costs a lot to implement properly; there are plenty of tradeoffs to consider. Thus mitigations will always be compromises. Often anticheat will be - understandably - neglected: It is not necessary for a good game at first, it only tends to become a problem later on. And even if it were not neglected, an arms race would inevitably ensue.
Hence it is not surprising that cheating is often simply prevented socially, through stigma: by frowning upon the development, distribution and of course usage of tools for cheating (which, it must also be said, further disincentivizes the development of proper anticheat).
I've come to the realization that online and offline alike, for all these reasons, people simply prefer not to waste their time playing a petty cat-and-mouse game with cheaters.[^2]
Cheating is just not a priority.[^3] The rest of us are just having fun without the cheaters.
And personally I'd rather work on entertaining the rest of us than the cheaters.
[^1]: We consider cheating by letting someone else play in your stead less problematic, especially in games which require your full attention: That player is then occupied and is thus - for the most part - barred from playing at the same tournament, at the same time, heavily disincentivizing this strategy. Good players generally have little incentive to help bad players cheat.

[^2]: Unless they're into that sort of thing. In which case, go play anarchy.

[^3]: This attitude is of course not applicable to security, where we unfortunately cannot afford to ignore the problem, because the damage tends to extend well beyond singular players getting an unfair advantage in a game.