Recent changes made the packet timeout either 1 second or double the
heartbeat interval, whichever is larger. This is a problem because the
packet timeout should ideally be derived from the peer's heartbeat
interval, not this instance's heartbeat interval.
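A minimal sketch of the intended direction, assuming the peer's heartbeat
interval has already been learned from its heartbeat packets (the function
and parameter names below are illustrative, not existing UDPC code):

    #include <algorithm>
    #include <chrono>

    // Same "max of 1 second or double the interval" rule as before, but
    // applied to the peer's advertised heartbeat interval instead of this
    // instance's own interval.
    std::chrono::milliseconds packetTimeout(
            std::chrono::milliseconds peerHeartbeatInterval) {
        return std::max(std::chrono::milliseconds(1000),
                        peerHeartbeatInterval * 2);
    }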
Some "scaffolding" code should be added:
- Heartbeat packets should send the current instance's heartbeat
interval time.
- Max capacity of cached sent packets should be increased based on
the peer's heartbeat interval and the rate of received non-heartbeat
packets.
- Maybe have a function to tell UDPC the intended usage regarding
latency: focus on low-latency default behavior, or focus on high
latency low packet-send-rate behavior.
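A rough sketch of what such a function could work with; the type names and
the tuning values are hypothetical and not part of UDPC's current API:

    // Hypothetical usage hint; not an existing UDPC type.
    enum class UDPC_UsageHint {
        LowLatency,          // default: frequent packet sends, short timeouts
        HighLatencyLowRate   // infrequent sends, longer timeouts and intervals
    };

    // Hypothetical bundle of values the hint would map to.
    struct UDPC_TuningParams {
        unsigned int heartbeatIntervalMillis;
        unsigned int packetTimeoutMillis;
        unsigned int sentPacketCacheCapacity;
    };

    // A real setter would live on the UDPC context; this only shows how a
    // hint could map to concrete values (all numbers are placeholders).
    UDPC_TuningParams tuningForHint(UDPC_UsageHint hint) {
        if(hint == UDPC_UsageHint::HighLatencyLowRate) {
            return {2000u, 8000u, 128u};
        }
        return {150u, 1000u, 32u};
    }

Centralizing these values behind one hint would also give the
cached-sent-packets capacity adjustment a single place to live.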
The current changes should be reverted on master, and the new changes
should be held back until this scaffolding is in place.
event.conId.port,
#ifdef UDPC_LIBSODIUM_ENABLED
flags.test(2) && event.v.enableLibSodium != 0,
- sk, pk);
+ sk, pk
#else
false,
- sk, pk);
+ sk, pk
#endif
+ );
if(newCon.flags.test(5)) {
UDPC_CHECK_LOG(this,
UDPC_LoggingType::UDPC_ERROR,
}
// calculate sequence and ack
+ // TODO: Request that tracked packets dropped off the current ack be
+ // sent again.
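+ // Compare the received seqID against the tracked remote sequence (rseq)
+ // to decide whether this packet is newer or arrived out of order.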
bool isOutOfOrder = false;
uint32_t diff = 0;
if(seqID > iter->second.rseq) {