PLANNING
------------------------------------------
What is the criterion for speeding up or slowing down?
I think you take the average of how far ahead the client is delivering input over a period of time.
*or* do you take the minimum of how far ahead the client is delivering input?
Which is most appropriate?
I think this is for small adjustments, not precise OSHIT moments, so the client should never be
worried about delivering inputs late. If the client does deliver inputs late then it will quickly be
told to resynchronize.
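
A minimal sketch of measuring this, with made-up names (InputLeadTracker, AddSample) and an assumed
60 Hz tick rate: track the lead of each input in ticks over a sliding window and expose both the
average and the minimum, so the two criteria can be compared.

    #include <algorithm>
    #include <cstddef>
    #include <deque>
    #include <numeric>

    struct InputLeadTracker
    {
        std::deque<int> samples;        // lead in ticks: input tick minus the tick the server needed
        size_t windowSize = 60;         // roughly one second of samples at 60 ticks per second

        void AddSample( int leadTicks )
        {
            samples.push_back( leadTicks );
            if ( samples.size() > windowSize )
                samples.pop_front();
        }

        // Average lead: smooth, but can hide the occasional late input.
        double AverageLead() const
        {
            if ( samples.empty() ) return 0.0;
            return std::accumulate( samples.begin(), samples.end(), 0.0 ) / samples.size();
        }

        // Minimum lead: conservative, reacts to the worst input in the window.
        int MinimumLead() const
        {
            if ( samples.empty() ) return 0;
            return *std::min_element( samples.begin(), samples.end() );
        }
    };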
------------------------------------------
Design code for adjustment to speed up and slow down.
There should be an adjust sequence number.
The adjust sequence starts at zero.
Each time a new adjustment is to be made, the adjust sequence increases.
When the client notices a new adjustment sequence, it applies the adjustment.
The client also sends the adjustment sequence back to the server inside its input, so the server can
detect when the adjustment has been applied and measure the result.
Basically it is a two-part cycle:
1. measure (normal)
2. adjustment (one second?)
3. measure (...)
4. adjustment
And so on. It's not a continuous process. It's a process where something is tried,
results are measured, then a decision is made (speed up, slow down, stay the same).
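
A rough sketch of that sequence exchange, with assumed names (AdjustmentCommand, InputPacket,
ServerAdjustState): the server bumps the sequence for each new adjustment, and only starts the
measurement phase again once an input echoes that sequence back.

    #include <cstdint>

    struct AdjustmentCommand            // server -> client
    {
        uint16_t sequence = 0;          // increases with each new adjustment
        int deltaTicks = 0;             // e.g. +2 = speed up by 2 ticks over the next second
    };

    struct InputPacket                  // client -> server (input payload omitted)
    {
        uint16_t ackedAdjustSequence = 0;   // most recent adjustment the client has applied
    };

    struct ServerAdjustState
    {
        uint16_t sequence = 0;
        bool measuring = false;

        AdjustmentCommand IssueAdjustment( int deltaTicks )
        {
            sequence++;                 // new adjustment -> new sequence number
            measuring = false;          // don't measure until the client confirms it
            return { sequence, deltaTicks };
        }

        void OnInput( const InputPacket & input )
        {
            if ( input.ackedAdjustSequence == sequence )
                measuring = true;       // adjustment applied: start the measurement phase
        }
    };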
------------------------------------------
Sync safety of 8 frames.
This is meant to give some warning if the client is falling behind (due to network conditions changing,
or the client hitching, or VSYNC not being quite 60fps, or whatever).
Note that if the server hitches, it means the client is now further AHEAD -- this is not great,
because the client needs more CPU for resimulation, and has greater lag on server objects,
but it does not cause client inputs to get dropped.
If the client *is* falling behind then we need to speed it up with a small adjustment, e.g. over the next
one second, please speed up n ticks.
Adjustments should not be continuous. They should be made, and then evaluated for a second,
and then another adjustment should be made if needed.
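
A small sketch of the decision step, assuming the 8-frame safety margin and the minimum-lead
measurement from above (DecideAdjustment is a made-up name): a positive result means speed up by
that many ticks over the next second, negative means slow down, zero means stay the same.

    #include <cstdlib>

    const int SafetyFrames = 8;

    int DecideAdjustment( int minimumLeadTicks )
    {
        const int error = SafetyFrames - minimumLeadTicks;
        if ( std::abs( error ) <= 1 )
            return 0;                   // close enough: no adjustment this cycle
        return error;                   // falling behind -> positive, too far ahead -> negative
    }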
-----------------------------------------
The more I think about it, the more it seems that the synchronization is wrong.
It is not measuring the minimum amount of time required for the client to deliver input to server.
It is measuring the round trip time from server -> client -> server.
This is good to get a conservative starting point for synchronization.
But the algorithm needs bracketing to converge on the actual value for inputs.
It's much better to do this bracketing during the synchronization stage
because there is no need to simulate or do any extra work while converging.
In contrast, if the convergence is left till after synchronization, then the
client is going to start really far ahead with lots of resimulation, which will
have a high CPU cost and could cause problems.
Design a bracketing system for better synchronization.
The first synchronization is conservative, based on RTT...
Then binary search is used to refine this bracketing...
Try halfway between zero and the conservative estimate.
Does this deliver inputs late? (define late as within a certain number of safety frames...)
If not, then consider halving again.
If yes, then move halfway between the current estimate (bad) and the conservative estimate (good).
Repeat n times.
If a certain number of tries is exceeded -- deny the client with "synchronization failure".
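
A sketch of that bracketing loop, under the assumption that some probe (IsOnTime below, a made-up
callback) can report whether a candidate lead still delivers inputs on time: binary search between
the known-good conservative lead and a too-small lead, and give up with a synchronization failure
if it doesn't converge within a fixed number of tries.

    #include <functional>
    #include <optional>

    std::optional<int> BracketInputLead( int conservativeLeadTicks,
                                         const std::function<bool(int)> & IsOnTime,
                                         int maxTries = 10 )
    {
        int good = conservativeLeadTicks;   // known to deliver inputs on time
        int bad = 0;                        // assumed too small a lead
        for ( int i = 0; i < maxTries; ++i )
        {
            const int candidate = ( good + bad ) / 2;
            if ( IsOnTime( candidate ) )
                good = candidate;           // still on time: try an even smaller lead
            else
                bad = candidate;            // late: move back toward the conservative value
            if ( good - bad <= 1 )
                return good;                // bracket converged
        }
        return std::nullopt;                // too many tries: deny client, "synchronization failure"
    }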
------------------------------------------
I see a binary search where the server tells the client to repeatedly adjust forward/back,
like Newton-Raphson iteration. It starts with a full RTT reply and works out if any packets
are late.
The output of this is that the server probably knows the client's RTT loosely, but not the
one-way trip time from client -> server, which is what really counts.
The server then binary searches to find an appropriate delivery offset that gets packets on time
from client -> server with a moderate amount of safety.
After this baseline has been established the server just needs to correct for minor drift.
If a substantial desync in time is detected, e.g. inputs are too early or too late,
then the client goes back into synchronization.
Controlling all of this is a sequence number that increases from server -> client
with each synchronization event.
This way it can be done without a reliability system.
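
A sketch of the sequence-number handling, with assumed names (SyncCommand, ClientSyncState): the
server can keep resending its latest synchronization command in every packet, and the client only
applies a command whose sequence is newer than the last one applied, so lost or duplicated packets
need no special handling and no reliability layer is required.

    #include <cstdint>

    // Standard serial-number comparison so the 16-bit sequence can wrap around.
    inline bool SequenceGreaterThan( uint16_t a, uint16_t b )
    {
        return ( ( a > b ) && ( a - b <= 32768 ) ) ||
               ( ( a < b ) && ( b - a >  32768 ) );
    }

    struct SyncCommand
    {
        uint16_t sequence = 0;
        int targetTickOffset = 0;       // how far ahead the client should run
    };

    struct ClientSyncState
    {
        uint16_t lastAppliedSequence = 0;
        int tickOffset = 0;

        void OnSyncCommand( const SyncCommand & command )
        {
            if ( !SequenceGreaterThan( command.sequence, lastAppliedSequence ) )
                return;                 // stale or duplicate resend: ignore it
            lastAppliedSequence = command.sequence;
            tickOffset = command.targetTickOffset;
        }
    };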
------------------------------------------
Synchronize should look like this: one second where the server tells the client its current time.
The client replies back to the server with the most recent time received.
The server looks at this and on average says, OK, you need to start at this tick.
The trick is that the rest of the world is in the past, but the client is simulating
its own objects (and its input) ahead by this amount.
I am too tired to work this out now.
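
A sketch of the server side of that exchange, with made-up names and one plausible way to do the
math (average round trip measured in ticks, plus a safety margin): for about a second the server
stamps packets with its current tick, the client echoes the most recent tick it received, and the
server averages the offsets to pick the client's starting tick.

    #include <cstdint>
    #include <vector>

    struct ServerSynchronizer
    {
        std::vector<int> offsets;       // serverTickNow - echoedTick, one per client reply

        void OnClientEcho( uint64_t echoedTick, uint64_t serverTickNow )
        {
            offsets.push_back( int( serverTickNow - echoedTick ) );
        }

        // After ~1 second of samples: current server tick plus the average observed
        // round trip plus a safety margin, so the client simulates far enough ahead
        // for its inputs to arrive on time (the conservative RTT-based start).
        uint64_t StartTick( uint64_t serverTickNow, int safetyFrames = 8 ) const
        {
            int sum = 0;
            for ( int offset : offsets )
                sum += offset;
            const int averageDelay = offsets.empty() ? 0 : sum / int( offsets.size() );
            return serverTickNow + averageDelay + safetyFrames;
        }
    };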
------------------------------------------
Perhaps the client needs to exchange synchronize packets with the server in the connect phase.
The client can't really start sending inputs to the server until synchronization occurs,
because the client doesn't know what the proper tick should be.
Synchronization should probably last one second, but this should be configurable.
Perhaps synchronization could be done in parallel with sending down the data block
for the initial snapshot state. This seems like a smart idea. There could certainly
be some payload/work being done rather than sitting idle.
It seems like reconnects are smart, because otherwise we have to implement all that
logic to tear down and recreate inside the protocol, when instead it could just be
a fresh connect.
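
A very rough sketch of the connect phase structure this implies (entirely assumed names and
layout): synchronization and the initial snapshot data block download run in parallel, and the
client only starts sending inputs once both have completed.

    struct ConnectPhase
    {
        double syncDurationSeconds = 1.0;   // how long to synchronize for (configurable)
        bool synchronized = false;          // time synchronization finished
        bool snapshotReceived = false;      // initial data block fully downloaded

        bool CanStartSendingInputs() const
        {
            // Inputs can't be sent any earlier: without synchronization the client
            // doesn't know what tick its inputs should be stamped with.
            return synchronized && snapshotReceived;
        }
    };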