decompile wrote:Hey, I tried it out and it feels as good as before. Is the change noticeable?
Let's take for example a server with the default 67 tickrate. Each tick runs at a ~15ms interval. If running the frame takes 5ms, the server then goes to sleep for 10ms. The previous workaround was to sleep an extra 1ms on top, no matter how long running the frame took: for the same tick, that became 5ms to run the frame, 1ms to yield, and 9ms to sleep. This is fine until the frame takes 14ms or longer (a lot of players, a lot of plugins, etc.). In that case, it would still sleep an extra 1ms and exceed the interval (dumplongticks was very noisy about it). Now, we only yield while the server is asleep: if it sleeps for 10ms, we yield for 10ms; if it does not sleep, we don't; and so on. Not only does this mean we never accidentally exceed the interval, but we also make use of every bit of resource the server is not using, yielding many times longer than before. Most of the time, this means we get close to direct execution time. For example, run the following with developer 2:
Syntax: Select all
from paths import GAME_PATH
from threads import queued
from math import sqrt
from time import time
t = time()
list(GAME_PATH.walkfiles()) # I/O
[sum([sqrt(i) for i in range(1000)]) for i in range(100000)] # CPU
print(time() - t)
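As an aside, the per-tick budget described above can be sketched as a toy model. This is illustrative only (the numbers and function names are mine, not the engine's actual loop):

```python
# Toy model of the per-tick budget (illustrative, not the engine's real loop).
# All times are in milliseconds.
TICK_INTERVAL = 1000.0 / 67  # ~14.93 ms at 67 tickrate

def old_tick(frame_ms):
    """Old workaround: always yield an extra 1 ms, then sleep the remainder."""
    yield_ms = 1.0
    sleep_ms = max(0.0, TICK_INTERVAL - frame_ms - yield_ms)
    return frame_ms + yield_ms + sleep_ms

def new_tick(frame_ms):
    """New behaviour: yield only for as long as the server would sleep."""
    sleep_ms = max(0.0, TICK_INTERVAL - frame_ms)
    return frame_ms + sleep_ms  # the whole sleep is spent yielding

# A light frame (5 ms) fits the interval either way.
assert abs(old_tick(5) - TICK_INTERVAL) < 1e-9
assert abs(new_tick(5) - TICK_INTERVAL) < 1e-9

# A heavy frame (14.5 ms) overruns the interval with the old fixed 1 ms yield...
assert old_tick(14.5) > TICK_INTERVAL
# ...but never with the new sleep-only yield.
assert new_tick(14.5) <= TICK_INTERVAL + 1e-9
```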
Then run it again with sp_thread_yielding 0. For instance, if a routine takes 1 second when called directly, it would then take (1 / 0.001) / <tickrate> seconds (~15s at 67 tickrate) to run when limited to 1ms per tick. Now, if the server goes to sleep 10ms per tick on average, it will execute 10 times faster, and so on. So you are not going to see any difference for calls that take < 1ms, because we force a context switch and get it done in one frame, but long calls will be many times faster (depending on server activity at the time, of course).
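To make that arithmetic concrete, here's the back-of-the-envelope calculation as code (the numbers are the hypothetical ones from above):

```python
# Back-of-the-envelope for the speedup described above (hypothetical numbers).
TICKRATE = 67
ROUTINE_SECONDS = 1.0  # direct execution time of the routine

def wall_time(slice_per_tick_s):
    """Wall-clock time when the routine only gets slice_per_tick_s of work per tick."""
    ticks_needed = ROUTINE_SECONDS / slice_per_tick_s
    return ticks_needed / TICKRATE

capped = wall_time(0.001)    # limited to 1 ms per tick -> ~14.9 s
yielding = wall_time(0.010)  # server sleeps ~10 ms per tick -> ~1.5 s

assert 14 < capped < 16                     # matches the ~15s figure above
assert abs(capped / yielding - 10) < 1e-9   # 10 times faster, as stated
```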