IRCaBot 2.1.0
GPLv3 © acetone, 2021-2022
#saltr
/2024/09/06
dr|z3d hang on a second, if we're talking about queue disciplines, not all modern linux distributions use cubic.
dr|z3d that assertion needs to be slapped down.
dr|z3d maybe I've got the wrong end of the stick. :)
dr|z3d the other takeaway from the chat is "2Mb/s" !?!
dr|z3d if that's some current fixed limit for streaming speeds, we need to fix that.
dr|z3d that seems to chime with orignal's suggestion he was hitting a 200K/s limit.
dr|z3d (vs New Reno and Vegas congestion control)
onon_ You can see how pacing works on my test server
dr|z3d C2TCP looks interesting, should work with existing algorithms.
RN don't mention Vegas, you will wake the snex
RN ;)
dr|z3d what's the bitrate of your costa rica 240p video, onon_?
dr|z3d we should be able to play that without constant buffering, but it's more spinning wheel than anything else here.
onon_ 439 video + 48 audio
onon_ It's from YouTube
dr|z3d so around 512Kb/s give or take?
dr|z3d call it 0.5Mb/s
dr|z3d all your content is doing right now is demonstrating that we're not yet there wrt video streaming.
dr|z3d not knocking it, there may come a day when we chomp through that content without issue.
onon_ Sorry, I forgot that I tested the outgoing speed limit on this server. You can try again without the limit.
dr|z3d I don't know what I'm meant to be testing other than the fact that we're not video-streaming ready on the network yet. As a demonstrator of that, your content works well.
onon_ For video streaming, the i2p network is indeed too unstable. This is just a demonstration of what transfer speeds can be achieved using pacing.
dr|z3d what's the basic idea behind pacing?
dr|z3d ok, thanks
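(For context, the basic idea behind sender-side pacing: instead of bursting an entire congestion window onto the wire at once, spread the sends evenly across the round-trip time so intermediate routers see a smooth rate rather than spikes. A minimal illustrative sketch, not i2pd's actual implementation; the class and method names here are invented for the example:)

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Illustrative sender-side pacer: spreads the congestion window across the RTT
     *  instead of bursting every queued packet at once. */
    public class PacedSender {
        private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        /**
         * @param cwndPackets current congestion window, in packets
         * @param rttMillis   smoothed round-trip-time estimate, in milliseconds
         * @param packets     the packet-send tasks queued for this RTT
         */
        public void sendWindow(int cwndPackets, long rttMillis, List<Runnable> packets) {
            // one packet every RTT/cwnd ms keeps the rate at roughly cwnd packets per RTT
            long interval = Math.max(1, rttMillis / Math.max(1, cwndPackets));
            long delay = 0;
            for (Runnable send : packets) {
                timer.schedule(send, delay, TimeUnit.MILLISECONDS);
                delay += interval;
            }
        }
    }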
orignal so we agreed to 512 for max window size and 800 for max number of tags
dr|z3d who's we?
dr|z3d and we're talking about the streaming lib, yes?
dr|z3d // public static final int MAX_WINDOW_SIZE = 128;
dr|z3d public static final int MAX_WINDOW_SIZE = SystemVersion.isSlow() ? 256 : 512;
dr|z3d canon is 128 right now I think, + is 256 / 512.
dr|z3d // private static final int UNCHOKES_TO_SEND = 8;
dr|z3d private static final int UNCHOKES_TO_SEND = SystemVersion.isSlow() ? 8 : 16;
onon_ What congestion control does + version use?
dr|z3d same as canon
onon_ So that's who's overloading the network
dr|z3d you're blaming +, or you're blaming java i2p generally?
onon_ If you use such a window size with such a bad algorithm, of course the intermediate nodes won't handle the load.
dr|z3d private boolean shouldWait(int unacked, int wsz) {
dr|z3d return _isChoked || unacked >= wsz ||
dr|z3d _activeResends.get() >= (wsz + 1) / 2 ||
dr|z3d _lastSendId.get() - _highestAckedThrough >= Math.min(MAX_WINDOW_SIZE, 2 * wsz);
dr|z3d we haven't established that Westwood+ is bad, yet.
dr|z3d you've made some noises about westwood+ vs cubic, but haven't presented any evidence.
onon_ Not in comparison with cubic, it's all about not using pacing
RN so westwood+ without pacing would be ok?
dr|z3d public void setChoked(boolean on) {
dr|z3d if (on != _isChoked) {
dr|z3d _isChoked = on;
dr|z3d if (_log.shouldWarn()) {_log.warn("Choked changed to " + on + " on " + this);}
dr|z3d if (on) {
dr|z3d congestionOccurred();
dr|z3d // When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer.
dr|z3d // The persist timer is used to protect TCP from a deadlock situation that could arise
dr|z3d // if a subsequent window size update from the receiver is lost,
dr|z3d // and the sender cannot send more data until receiving a new window size update from the receiver.
dr|z3d // When the persist timer expires, the TCP sender attempts recovery by sending a small packet
dr|z3d // so that the receiver responds by sending another acknowledgement containing the new window size.
dr|z3d // ...
dr|z3d // We don't do any of that, but we set the window size to 1, and let the retransmission
dr|z3d // of packets do the "attempted recovery".
dr|z3d _options.setWindowSize(1);
dr|z3d that's what we've got currently. maybe there are improvements to be had.
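(For reference, a TCP-style persist timer as described in the comment above would keep probing the peer while it advertises a zero window, rather than dropping the window to 1 and letting retransmission do the recovery. A rough sketch under that assumption; Connection, currentReceiveWindow() and sendWindowProbe() are hypothetical stand-ins, not streaming-lib APIs:)

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Sketch of a TCP-style persist timer: while the peer advertises a zero
     *  window, periodically send a small probe so that a lost window-size
     *  update cannot deadlock the connection. */
    class PersistTimer {
        private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        void start(Connection conn) {
            timer.scheduleWithFixedDelay(() -> {
                if (conn.currentReceiveWindow() == 0)
                    conn.sendWindowProbe();   // tiny packet; the ACK carries the new window size
                else
                    timer.shutdown();         // window reopened, stop probing
            }, 5, 5, TimeUnit.SECONDS);
        }

        /** Hypothetical connection interface, just for the sketch. */
        interface Connection {
            int currentReceiveWindow();
            void sendWindowProbe();
        }
    }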
dr|z3d maybe I'm looking in the wrong place, but here we send 40 tags by default:
orignal we - i2pd team
orignal cap of 232KB/s doesn't inspire people to use I2P
dr|z3d I agree, we need more.
dr|z3d by at least a factor of 10.
orignal 128 is obsolete limit
not_bob_afk *** nods. But, I can get speeds above that with i2psnark. ***
dr|z3d sure you can, we're talking about a single stream..
dr|z3d is there anything we can tweak here to make things go faster? still wondering why our default tags to send in streaming connection options is 40.
dr|z3d //private static final int TREND_COUNT = 3;
dr|z3d /** RFC 5681 sec. 3.1 */
dr|z3d static final int INITIAL_WINDOW_SIZE = 3;
dr|z3d static final int DEFAULT_MAX_SENDS = 8;
dr|z3d public static final int DEFAULT_INITIAL_RTT = 8*1000;
dr|z3d private static final int MAX_RTT = 60*1000;
dr|z3d * Ref: RFC 5681 sec. 4.3, RFC 1122 sec. 4.2.3.3, ticket #2706
dr|z3d // private static final int DEFAULT_INITIAL_ACK_DELAY = 500;
dr|z3d private static final int DEFAULT_INITIAL_ACK_DELAY = 450;
dr|z3d static final int MIN_WINDOW_SIZE = 1;
dr|z3d private static final boolean DEFAULT_ANSWER_PINGS = true;
dr|z3d // private static final int DEFAULT_INACTIVITY_TIMEOUT = 90*1000;
dr|z3d private static final int DEFAULT_INACTIVITY_TIMEOUT = 75*1000;
dr|z3d private static final int DEFAULT_INACTIVITY_ACTION = INACTIVITY_ACTION_SEND;
dr|z3d private static final int DEFAULT_CONGESTION_AVOIDANCE_GROWTH_RATE_FACTOR = 1;
dr|z3d private static final int DEFAULT_SLOW_START_GROWTH_RATE_FACTOR = 1;
dr|z3d /** @since 0.9.34 */
dr|z3d private static final String DEFAULT_LIMIT_ACTION = "reset";
dr|z3d /** @since 0.9.34 */
dr|z3d public static final int DEFAULT_TAGS_TO_SEND = 40;
dr|z3d /** @since 0.9.34 */
dr|z3d public static final int DEFAULT_TAG_THRESHOLD = 30;
dr|z3d Not entirely convinced we're hard limited to 200KB/s, however; I think I've seen more in Snark per-client.
dr|z3d But maybe we are?
onon_ It all depends on RTT, that is, on the length of the tunnel.
dr|z3d sure, tunnel length is a major factor.
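(Back-of-the-envelope check on where the cap comes from: a single stream can move at most one window per round trip, so throughput is roughly window size × packet payload / RTT. Assuming the streaming lib's ~1730-byte default max payload and a ~1 second tunnel round trip, both approximations:)

    /** Rough single-stream throughput ceiling: window * payload / RTT. */
    public class StreamingCap {
        public static void main(String[] args) {
            int payloadBytes = 1730;     // assumed default max packet payload
            double rttSeconds = 1.0;     // assumed multi-hop tunnel round trip
            for (int window : new int[] { 128, 256, 512 }) {
                double kbPerSec = window * payloadBytes / rttSeconds / 1024;
                System.out.printf("window %3d -> ~%.0f KB/s%n", window, kbPerSec);
            }
            // window 128 -> ~216 KB/s, in the ballpark of the observed ~200-232 KB/s cap;
            // window 512 at the same RTT would allow ~865 KB/s.
        }
    }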