Networking #12

Open
BearishSun opened this issue Mar 16, 2018 · 12 comments
Labels
OFFICIAL type: enhancement [MAJOR] Feature that takes a few weeks up to few months to implement
Milestone
v1.2
Comments

@BearishSun
Member

Integrate a library like RakNet. Add support for remote procedure calls for simple communication. Add the ability to automatically replicate objects across clients (using the built-in RTTI system). Allow bs::f to run headless, without any rendering or audio, so it can be run as a server. This would involve creating NullRenderAPI, NullRenderer and NullAudio systems that simply ignore all method calls and return reasonable values when queried, but under the hood don't do anything.
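
For illustration, the null backends could follow the classic null-object pattern, roughly like this (a minimal sketch; the interface and method names below are made up, not the actual bs::f APIs):

```cpp
#include <cstdint>

// Illustrative stand-in type; not the real bs::f mesh class.
struct Mesh {};

// Hypothetical rendering interface, just to show the shape of the idea.
class IRenderAPI
{
public:
    virtual ~IRenderAPI() = default;
    virtual void beginFrame() = 0;
    virtual void drawMesh(const Mesh& mesh) = 0;
    virtual void endFrame() = 0;
    virtual std::uint32_t backBufferWidth() const = 0;
};

// Null backend: every call is silently ignored and queries return benign
// defaults, so gameplay/server code can run with no GPU or audio device.
class NullRenderAPI final : public IRenderAPI
{
public:
    void beginFrame() override {}
    void drawMesh(const Mesh&) override {}
    void endFrame() override {}
    std::uint32_t backBufferWidth() const override { return 1; }
};
```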

Perhaps also add support for higher-level features like lobbies, chat and matchmaking. But this is likely best left as a separate feature, to be done after the networking foundation is established.

@BearishSun BearishSun added type: enhancement [MAJOR] Feature that takes a few weeks up to few months to implement OFFICIAL labels Mar 16, 2018
@BearishSun BearishSun added this to the v1.2 milestone Mar 16, 2018
@iontom

iontom commented Mar 20, 2018

You might consider tacking on Telehash or Whisper in the long term. You could monetize and stay fully open source just by tokenizing bandwidth and incentivizing relay servers. A pre-allocated amount plus a small transaction fee could fund you pretty well and also make the network more reliable. That's a big side project, though.

https://github.com/telehash
https://github.com/ethereum/wiki/wiki/Whisper

@BearishSun
Member Author

I'm not familiar with these types of libraries. Where do you think they would fit in, networking-wise?

@jonesmz
Contributor

jonesmz commented Mar 31, 2018

My 5-minute review of the links iontom shared tells me that Telehash would be suitable for in-game chat features, or potentially even for medium-latency but still "real-time" net-code that's still encrypted. Decentralized and encrypted, but not paranoia-insanity-level security. It seems to want to be a transport layer for most types of communication.

The Whisper link looks like it's trying to be an insanity-level-paranoia communication solution. It includes store-and-forward features. Apparently the Ethereum "cryptocoin" people want to use it for some kind of stock-exchange system, where buyers and sellers can post "I want to buy" and "I want to sell" type ads, which have time-limited lifetimes. Potentially this could be used for game server advertisements, like GameSpy or Steam.

I think there's some decentralization built in, which would allow for anyone who wants to host a game server browser to be able to.

I think, though, that you could charge a small transaction fee to advertise a "transaction" on your server.

So for example:

You host a game browser server. Each game advert costs something like 60 seconds of compute time toward a cryptocurrency. Game developers who WANT to can include an OPTIONAL plugin in their game that lets it use YOUR game browser server instead of the game developer's own browser server (or whoever else is hosting one). So you make a bit of money when people choose to advertise their game on your game browser server, because their computer does some computations toward a cryptocurrency, which you can sell.

I think, anyway. I could be completely wrong on what those libraries do.

@jonesmz
Contributor

jonesmz commented Mar 31, 2018

Unfortunately, it's probably far too complicated to abstract networking libraries behind a plugin interface while simultaneously capturing all of their features at full performance.

However, I'd point out that there's plenty of room in the game net-code world for co-existing solutions. Perhaps someone smarter than I could find a way to abstract APIs while still having all that and a bag of chips.

For example, neither RakNet nor GameNetworkingSockets uses SCTP, which is rather hard to believe, since SCTP is basically... perfect for the use cases that games have. RakNet comes close, as seen here: SLikeSoft/SLikeNet#24, and GameNetworkingSockets seems to have a similar-ish feature set, but I'm not sure one way or the other.

There's also the concept of state-synchronization protocols, which should be able to operate on top of any transport protocol. This is where I first heard of them: https://mosh.org/#techinfo . A sophisticated implementation of an SSP would need to be application-specific, of course, but could allow for predicting remote state locally and then later correcting mis-predictions cleanly. That could be important for some kinds of game to help mitigate round-trip latency.
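
To give a flavor of the idea, here's a tiny dead-reckoning-style sketch of "predict locally, correct later" (all names invented; a real SSP would be far more involved):

```cpp
// Predict a remote entity's state every local frame; blend toward the
// authoritative state whenever an update arrives.
struct Vec3 { float x = 0, y = 0, z = 0; };

struct RemoteEntity
{
    Vec3 position; // locally predicted position
    Vec3 velocity; // last known velocity from the remote peer

    // Called every local frame: extrapolate assuming constant velocity.
    void predict(float dt)
    {
        position.x += velocity.x * dt;
        position.y += velocity.y * dt;
        position.z += velocity.z * dt;
    }

    // Called when an authoritative update arrives: blend toward the true
    // state rather than snapping, which hides small mis-predictions.
    void correct(const Vec3& truePos, const Vec3& trueVel, float blend)
    {
        position.x += (truePos.x - position.x) * blend;
        position.y += (truePos.y - position.y) * blend;
        position.z += (truePos.z - position.z) * blend;
        velocity = trueVel;
    }
};
```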

Finally, you've also got to consider initial connection setup. I'd recommend using some subset of the WebRTC protocol, or at the very least the ICE (RFC 5245) portion of it. (I implement the ICE protocol for my day job, so I might have tunnel vision.) ICE does a fantastic job of setting up a communication channel, which can be raw UDP sockets, or SCTP over UDP (RFC 6951). While WebRTC and/or ICE can't always guarantee a direct connection (a TURN server is required in some rare cases), and they don't provide direct solutions for message routing or mesh networking as such, they make a great "link-layer" (so to speak) backbone to build higher-level behavior on top of.
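
To give a small concrete taste of ICE, candidate priorities come from a fixed formula in RFC 5245 (section 4.1.2.1); everything else (candidate gathering, connectivity checks) you'd want from an existing ICE stack rather than rolling your own. Sketch:

```cpp
#include <cstdint>

// Candidate priority per RFC 5245, section 4.1.2.1:
//   priority = 2^24 * typePreference + 2^8 * localPreference + (256 - componentId)
// typePreference is 0..126 (host candidates highest, relayed lowest),
// localPreference is 0..65535, componentId is 1..256.
std::uint32_t iceCandidatePriority(std::uint32_t typePreference,
                                   std::uint32_t localPreference,
                                   std::uint32_t componentId)
{
    return (typePreference << 24) + (localPreference << 8) + (256 - componentId);
}
```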

If you wanted to get crazy serious, you could use some type of game browser server to do the initial bootstrapping of a WebRTC / ICE connection for any peers that are capable of supporting a direct connection between each other. If the graph of all peers is fully connected (some pathway between any two nodes exists, no matter how many hops), you can then dynamically measure performance between peers to establish an "optimal" graph for relaying data between nodes in the game, with one (or more?) peers serving as a source of truth for some appropriate aspect(s) of the game world. The exact behavior desired is going to depend on what the game is doing. Some games might require effectively lock-step client-server behavior; others might need a more decentralized style of networking. Hub and spoke, star topology, multiple hubs-with-spokes connected in a star. It's all up to the game developer to decide how to optimally implement it.

If somehow the complete graph of peers is NOT able to be fully connected, you could support TURN servers to allow those users to have their traffic relayed by a third party (Horrible idea for latency, but sometimes it's necessary). Perhaps game developers would implement dedicated server support that would include TURN servers to handle that kind of thing.

@jonesmz
Contributor

jonesmz commented Mar 31, 2018

Not that I want to barrage you with details, but speaking more about that Open Space Program game, I'd like to share a small discussion of how we'd (probably) approach networking for multiplayer.

We eventually want to support multiple players, each launching and controlling multiple ships, in multiple solar systems (or, at the very least, a single solar system), with ships multiple hours (or days, or months, or years) of "in-game" travel time away from each other. Given all that, we're thinking the only viable model is to have each ship in its own "bubble": a 3D sphere that defines the area around a ship for physics purposes, plus a slightly larger sphere that acts as the marker for "bubbles are about to start colliding, do something about it, dang it".
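
Roughly what I mean, as a sketch (types and names invented):

```cpp
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

// Two radii per ship: physicsRadius bounds local physics, approachRadius is
// the slightly larger shell that warns us two bubbles are about to collide.
struct Bubble
{
    Vec3  center;
    float physicsRadius  = 0;
    float approachRadius = 0; // always > physicsRadius
};

static float distanceBetween(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// True once the outer shells touch: time to negotiate a merge or hand-off
// before the physics spheres themselves overlap.
bool bubblesApproaching(const Bubble& a, const Bubble& b)
{
    return distanceBetween(a.center, b.center) <= a.approachRadius + b.approachRadius;
}

bool bubblesColliding(const Bubble& a, const Bubble& b)
{
    return distanceBetween(a.center, b.center) <= a.physicsRadius + b.physicsRadius;
}
```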

If there were only a small number of ships in the game, it would be fine to have a single machine be the source of truth for the whole set of peers in the game, to make sure all the machines are on the same page with regard to what's happening in the world. However, we're trying to plan for a future where there might be literally thousands of ships, and dozens of players. Of course, we may never be able to get the network performance that would be needed to do all that, but good performance starts with good design, and network protocols is what I do for a living after all :-P

So we'd probably want to support having each "bubble"'s source of truth live on an arbitrary peer, and balance the bubbles that need to be simulated between different peers based on which peers have which resources available, and so on and so forth.

Now, when someone says "arbitrary peer", the first thing one thinks of (or at least that I think of) is that there needs to be some kind of distributed decision-making algorithm involved in deciding which of many peers becomes the "arbitrary choice". Of course, the solution might just be a pre-determined choice based on some metric that all peers know ahead of time, e.g. highest serial number wins, or something. But in a system that involves dynamic things happening dynamically on a dynamic number of peers... eh, maybe it's not so simple.
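
The "pre-determined metric" version really is as simple as it sounds, e.g. (sketch, names invented):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

using PeerId = std::uint64_t;

// Every peer knows the full peer list, so each can independently compute the
// same answer with no coordination: highest id owns the bubble. A real system
// would also have to cope with peers joining or leaving mid-decision.
PeerId chooseBubbleOwner(const std::vector<PeerId>& connectedPeers)
{
    // Precondition: at least one peer is connected.
    return *std::max_element(connectedPeers.begin(), connectedPeers.end());
}
```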

So toward that end, networking support would at least need to be designed in such a way that plugging in different distributed decision-making algorithms isn't IMPOSSIBLE. It's certainly not necessary to implement something like this directly; just don't prevent it, and I think it would probably be difficult to make supporting that kind of thing impossible anyway. Just food for thought.

Ships that no players are actively interacting with (or viewing) won't need any physics at all; they'd just run "on rails", so their position in space would (probably) be a deterministic formula that's only relevant to players looking at some kind of map screen. If for some reason that isn't the case, each ship's position would likely be calculated using something like n-body physics by a single peer in the graph and periodically sent to the other peers.

Since that kind of calculation isn't nearly as intense as simulating all of the physics of all of the objects in any particular "bubble", we wouldn't want to distribute those calculations for all the "on-rails" ships across multiple peers unless they became taxing on the peer doing them. But then again, thousands of ships "on rails / n-body physics" doesn't necessarily lend itself well to performance if only run on a single host. That's all probably rather obvious; things will be broken up into smaller pieces as performance needs become clearer.
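
For instance, "on rails" could be as trivial as a position that's a pure function of game time (idealized circular orbit here; real orbits would be Keplerian or n-body, but the point is any peer can evaluate it locally from a few shared parameters):

```cpp
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

// Deterministic position as a pure function of game time: no network traffic
// is needed for this ship until a player starts interacting with it.
Vec3 onRailsPosition(const Vec3& bodyCenter, float orbitRadius,
                     float angularSpeed, float phase, double gameTime)
{
    const float angle = static_cast<float>(angularSpeed * gameTime + phase);
    return { bodyCenter.x + orbitRadius * std::cos(angle),
             bodyCenter.y,
             bodyCenter.z + orbitRadius * std::sin(angle) };
}
```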

If a ship is only being interacted with by a single player, that player is the source of truth for that ship, as they're the only one running physics for it. Likewise, the only things that source of truth needs to communicate to other peers are changes of velocity or other interactions, since from the perspective of other peers this ship is just "on rails" unless told otherwise by a change in velocity.

When multiple players are interacting with (or viewing) a ship (or collection of ships), each player needs to run the physics locally, of course, for the sake of rendering to the screen, but one player needs to be the source of truth for the "bubble" to keep all the peers in sync and consistent. Just like in an FPS, we don't want one player "going left" while everyone else shows them as "going straight", that kind of thing. So each player needs to distribute their keystrokes and such at low latency, and the source of truth for that "bubble" needs to send out updates regularly to ensure no one gets too far out of sync.

As for synchronization, depending on exactly how ridiculous we want to get, we might even want to establish time syncing between the different peers in a given "bubble", similar to how RTP and RTCP handle audio and video data. That way peers that receive events at different times can still re-assemble the scene, and the physics thereof, somewhat accurately, potentially up to the point of receiving an event that happened some number of milliseconds in the past, applying that event to the timeline, and re-simulating the scene based on that new event and any already-reported events that happened after it, up until "now". That might be overkill, I don't know, shrug.
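
The rough shape of that rewind-and-replay idea (types invented, actual state handling omitted):

```cpp
#include <algorithm>
#include <vector>

struct InputEvent { double timestamp = 0; int playerId = 0; int action = 0; };

struct BubbleSimulation
{
    std::vector<InputEvent> history; // kept sorted by timestamp

    void restoreSnapshotBefore(double /*time*/) { /* rewind physics state; omitted */ }
    void applyEvent(const InputEvent& /*e*/)    { /* step the simulation; omitted */ }

    // A late event arrives: insert it into the timeline, roll back to just
    // before it happened, then replay everything up to "now".
    void onLateEvent(const InputEvent& late, double now)
    {
        auto pos = std::lower_bound(history.begin(), history.end(), late,
            [](const InputEvent& a, const InputEvent& b) { return a.timestamp < b.timestamp; });
        pos = history.insert(pos, late);

        restoreSnapshotBefore(late.timestamp);
        for (auto it = pos; it != history.end() && it->timestamp <= now; ++it)
            applyEvent(*it);
    }
};
```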

When two ships that are being interacted with (or viewed) by a player (or players) come near each other, their "bubbles" get merged into a single "bubble". If they were being simulated by the same machine, great, that's an easy problem to solve: just start simulating them as a single bubble. If they were being simulated by different machines, then a vote happens and the simulation of the merged physics "bubble" gets taken over by a single machine, maybe not one of the original two, but always a machine that has a player interacting with (or viewing) that physics bubble.

So anyway, as you can see, our use case involves a connected graph of peers, each of which will be interacting with (or viewing) zero or more "physics bubbles" at a time. The number of peers in the graph and the topology of the graph might change, and we might not actually have a dedicated server involved in the system at all (and "save games" might be written out to something like Google Drive, so any player can resume playing without any other player, no dedicated host needed). To support changing graph topologies, and different ship-bubbles being interacted with by different players, the responsibilities of a given node would shift rather frequently based on player interactions.

So we're expecting to need a whole hell of a lot of custom networking code to support all of that.

Things get even MORE complicated than that, as the current work-in-progress design allows for ships to live in "the future" from the perspective of other ships. But that's not really related to the networking side of things :-P

I hope that hearing about our use case is informative. :-)

@BearishSun
Member Author

Thanks for the mini-reviews and all the other information, it'll come in handy.

That's certainly a complex use case :) But as with most things in bsf, I hope to design the networking in a way that doesn't box anyone into a particular approach. There will probably be multiple APIs at different levels (from basic reliable UDP, to RPC/replication, to even higher-level features eventually), so if one doesn't suit you, you can always go down to the metal and implement the missing functionality there.

And I do wish to use bsf in a fairly multiplayer-heavy game myself, with support for a mega-server backed by multiple servers and transparent transitions between them (similar to your bubble description, I guess), so it's certainly something I need to consider.

For now, I'll just start with the basics and build it up from there, keeping your requirements in mind.

@nxrighthere

nxrighthere commented Mar 20, 2019

@BearishSun Are you sure about RakNet integration? The library has been abandoned for a while and was dropped from a number of game engines due to various bugs and its complexity.

I would recommend our ENet fork if you are interested. We maintain it primarily for C and C#, but it should be fine to use from C++ as well. Our users are quite happy with it. The code itself is a portable single-header library amalgamated into 3867 SLOC. We don't use any synchronization mechanisms for multi-threading and keep things as flexible and performant as possible.

@BearishSun
Member Author

Eventually we'll need RPC, replication, lobbies, matchmaking, NAT punchthrough, peer-to-peer, chat, VOIP and other high-level features, all of which RakNet provides plugins for.

That's why I've decided against lower-level UDP libraries for now. Even though I doubt I'll use most of that functionality out of the box, it's nice to have as a reference point and know it works with the API.

I do plan on allowing different networking backends, so eventually if the design shows we can move to a simpler library I'll certainly consider it. ENet looks nice - does it work on mobile?

@nxrighthere

> Eventually we'll need RPC, replication, lobbies, matchmaking, NAT punchthrough, peer-to-peer, chat, VOIP and other high-level features, all of which RakNet provides plugins for.

All of this can be implemented relatively easily except NAT punch-through; for that, a third-party solution will be required. In general, it's not hard to abstract ENet for these kinds of things.

> ENet looks nice - does it work on mobile?

Yes, it works on Android and iOS, tested by various people mostly from Unity's community.
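
For a sense of the API surface, a minimal server with the classic ENet C API boils down to a host plus an event-service loop, roughly like this (untested sketch based on upstream ENet; details such as the include path may differ for the single-header fork):

```cpp
#include <enet/enet.h>
#include <cstdio>

int main()
{
    if (enet_initialize() != 0)
        return 1;

    ENetAddress address{};
    address.host = ENET_HOST_ANY; // listen on all interfaces
    address.port = 7777;

    // Up to 32 peers, 2 channels, no bandwidth limits.
    ENetHost* server = enet_host_create(&address, 32, 2, 0, 0);
    if (server == nullptr)
        return 1;

    bool running = true; // a real server would flip this on some shutdown signal
    while (running)
    {
        ENetEvent event;
        while (enet_host_service(server, &event, 10) > 0)
        {
            switch (event.type)
            {
            case ENET_EVENT_TYPE_CONNECT:
                std::printf("peer connected\n");
                break;
            case ENET_EVENT_TYPE_RECEIVE:
                // event.packet->data / event.packet->dataLength hold the payload.
                enet_packet_destroy(event.packet);
                break;
            case ENET_EVENT_TYPE_DISCONNECT:
                std::printf("peer disconnected\n");
                break;
            default:
                break;
            }
        }
    }

    enet_host_destroy(server);
    enet_deinitialize();
    return 0;
}
```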

@nxrighthere

I made NetDynamics, which supports ENet out of the box. Maybe it could be useful for you.

@BugBiteSquared

BugBiteSquared commented Sep 11, 2019

A quick Google search turned this up regarding NAT hole punching with ENet: https://stackoverflow.com/questions/24634870/nat-hole-punching-with-enet

Funny thing is, I'm pretty sure the Godot engine uses ENet for a lot of its networking stuff just from looking through their GitHub:
https://github.com/godotengine/godot/tree/master/thirdparty/enet

I know that the maintainers of Godot are exceptionally picky about what third-party stuff they'll use & have rejected the use of things like bgfx flat-out in the past. If it passes muster with them then I imagine it should be pretty solid.

I'll look into seeing what else I can do to help with this feature. I'm not as familiar with network programming for games but I can bone up on it quickly & do some light work at least.

Edit:
I would like to further suggest yojimbo as another option for a networking library. It's apparently very well regarded and was formerly sponsored by some large studios: https://github.com/networkprotocol/yojimbo
