~dr|z3d
@RN
@RN_
@StormyCloud
@T3s|4
@T3s|4_
@eyedeekay
@orignal
@postman
@zzz
%Liorar
+FreefallHeavens
+Leopold
+Xeha
+acetone
+bak83
+cancername
+cumlord
+hk
+profetikla
+uop23ip
+weko
An0nm0n
Arch
Danny
DeltaOreo
Irc2PGuest21357
Irc2PGuest21881
Irc2PGuest43426
Meow
Nausicaa
Onn4l7h
Onn4|7h
Over1
anon2
anu3
boonst
mareki2pb
not_bob_afk
plap
poriori_
shiver_
simprelay
solidx66
thetia
tr
u5657
mareki2p
Hi all, I have a problem with i2psnark. I managed to download the source code (from the public internet), find the problematic part, and fix it. Now I would like to create a new GitLab issue or pull request in order to discuss my issue in more detail. How do I do this? I already applied for a new user account at git.idk.i2p, but I don't have access. I also registered a new user account at i2pforum.i2p,
mareki2p
but it is not activated yet. There are (at least) two versions of i2psnark: one is from git.idk.i2p, the other is from git.skank.i2p. The second one seems more up to date. I fixed the issue in both versions. But the second repository seems to require a user account from the public internet, which I failed to create successfully. Could somebody help me with this please? Maybe dr|z3d knows what to
mareki2p
do. I have started a discussion here: discuss.i2p/viewtopic.php?t=163. My i2p email is marek@mail.i2p.
zzz
^^ eyedeekay
T3s|4
dr|z3d: the ^same was asked on #i2p-chat, if interested :)
eyedeekay
I'll find and approve your gitlab account marek, thanks for pinging me
zzz
I've started the netdb search refactor, added to my roadmap, but it's going to be a slog, target 2.8.0
eyedeekay
marek you should be able to log in and create/manipulate repositories now
mareki2p
Thank you eyedeekay, new GitLab issue and merge request created.
orignal
zzz, don't you think that 320 tags forward might not be enough?
dr|z3d
mareki2p: sorry about your issue with gitlab. if you tried to sign up with an @i2pmail.org e-mail, you'll have the same issue I have - they've blocked the mail gateway.
cumlord
oh good seems marek made his way here
cumlord
he was trying to compile snark from source to do something, don't know if he figured it out
dr|z3d
yeah, I think he fixed a bug, cumlord, his merge request looks good.
mareki2p
~dr|z3d, yes, I tried to register at gitlab.com on the public internet, but it failed; github.com succeeded
mareki2p
cumlord, thank you for your advice on that forum post, it helped me
dr|z3d
mareki2p: yeah, github works ok, gitlab is problematic. thanks for the bug report.
cumlord
good deal, np mareki2p glad you got it sorted
zzz
re: 320 tags, I have no data on that
orignal
but where did it come from?
orignal
why 320 rather than, say, 500?
zzz
it was lower than that (160 maybe?) and you lobbied to increase it, I agreed
orignal
also another question
orignal
we have 64K max tagsets per session
orignal
and you have 4K tags per tagset
zzz
but with java streaming at the other end, it's impossible to send 320 unacked packets before the socket errors out. If you think otherwise, prove me wrong with data or analysis
orignal
but 4K is only like 6M
orignal
what's your max window size?
orignal
ours is 1K now
zzz
128, which I believe is the spec
zzz
ofc with datagrams anything is possible but we have no hi speed datagram applications
orignal
see what happens when I watch video from youtube
orignal
and one intermediate router dies suddenly
orignal
or maybe it just drops packets
orignal
as a result I see a massive drop and can't decrypt a packet after
zzz
if you have i2pd on both ends I can't analyze that. But my code cannot send 320 unacked packets in a row before failing afaik
orignal
I believe it's because more than 320 packets got lost
zzz
well, prove it with data
orignal
yes, i2pd on both sides
orignal
will do
zzz
extend it to 5000, then log if it jumps more than 320
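A minimal sketch of the instrumentation zzz suggests, assuming a hook that sees the tagset index of each matched incoming tag (the class and method names are hypothetical, not i2pd or Java I2P API):

    public class TagJumpLogger {
        private static final int OLD_LOOKAHEAD = 320;
        private long lastIndex = -1;

        // Call with the index of each incoming tag that matched.
        void onTagMatched(long index) {
            if (lastIndex >= 0 && index - lastIndex > OLD_LOOKAHEAD) {
                // With the lookahead extended to 5000, this jump would have
                // been undecryptable under the old 320-tag limit.
                System.err.println("tag index jumped by " + (index - lastIndex));
            }
            lastIndex = Math.max(lastIndex, index);
        }
    }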
orignal
what does signal have for this?
zzz
5K iirc
orignal
so back to your 4K for each tagset
zzz
but if you can send hundreds of unacked packets, you probably need to fix your congestion control
orignal
congestion control is fine and window size seems right
orignal
but a router in the tunnel dies suddenly
orignal
ofc after that it drops the window size to 1 or so
orignal
but next packet can't be decrypted
zzz
what was the window size when it died
zzz
on the youtube side
orignal
I believe like 500
orignal
approximately
zzz
I think your window size calculation is broken, not "fine"
orignal
well it was 1090p
orignal
1080
orignal
but 128 is not more than 200K per sec
orignal
how are you going to reach a few megs with 128?
zzz
depends on rtt and how often you ack
orignal
assume RTT is 1 sec
zzz
I've never seen it hit 128. that's why 500 sounds broken
orignal
maybe that's why I2P is slow
orignal
see, you don't see it near 128 although the network is definitely capable of this
zzz
bw = window * mtu / rtt = (128 * 1812) / 1 = 232 KBps = 1.8 Mbit/s
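Spelled out as a runnable check of zzz's arithmetic (the 1 s RTT is the assumption from the line above):

    public class StreamBw {
        public static void main(String[] args) {
            int window = 128;      // packets, the spec max per zzz
            int mtu = 1812;        // bytes per packet
            double rttSec = 1.0;   // assumed round-trip time
            double bps = window * mtu / rttSec;            // bytes per second
            System.out.printf("%.0f KBps = %.1f Mbit/s%n",
                              bps / 1000, bps * 8 / 1e6);  // ~232 KBps, ~1.9 Mbit/s
        }
    }

Halving the RTT or acking more often scales the rate linearly, which is why orignal's 1K window targets multi-megabyte rates.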
orignal
do you have an idea why you never reach this limit?
zzz
ofc. because of drops which cause the window to be reduced
orignal
we are talking about kiloBytes
orignal
and 232 Kilobytes is too slow
zzz
then ack more often
orignal
too much overhead
orignal
also you can transfer only 256 GB per session
orignal
if number of tags is 4K
orignal
that might not be enough for long-lived sessions
orignal
that stay for weeks
zzz
your name is on the spec next to mine
orignal
yes, but I use 8K, you use 4K
orignal
maybe we should use 16K?
orignal
per tagset
zzz
I don't remember anything about that limit
orignal
tagset# is 2 bytes
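The arithmetic behind the 256 figure, assuming roughly 1 KiB of payload carried per tag (that payload size is an assumption for illustration, not from the spec):

    public class SessionCapacity {
        public static void main(String[] args) {
            long tagsets = 1L << 16;     // 64K tagsets: tagset# is 2 bytes
            long tagsPerTagset = 4096;   // 4K (Java I2P); orignal uses 8K
            long payloadBytes = 1024;    // assumed average payload per packet
            long total = tagsets * tagsPerTagset * payloadBytes;
            System.out.println(total / (1L << 30) + " GiB per session"); // 256
        }
    }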
zzz
if you think something should change, show up with data and test results ))
orignal
I think having 4K per tagset might be a problem for torrents
zzz
well, run some tests and see
orignal
another thing is that some services establish hundreds of connections
orignal
all of them go through the same session
zzz
but I also suggest you review all your RTT/RTO/windowing/retx stuff in streaming vs. the RFCs because a window of 500 sounds wildly unlikely
orignal
fine, I will print out how often we change tagsets with youtube traffic
orignal
yes, we are still working on streaming
orignal
but remember you need 500 if you want to transfer a few megs per second
zzz
I just don't remember enough to have a debate over theory. If you're hitting some limit in real life, then collect the data
orignal
yes, I saw it yesterday
orignal
and my analysis says it's because of a lack of tags
zzz
dunno, I think people have hit 1 MBps with snark on one socket? can't remember
zzz
how many pkts will you retx at once?
orignal
need to check
orignal
pacer is used for this
orignal
i.e. there's no such thing as "once"
zzz
if you keep it pretty low you can guarantee you won't run out of tags
orignal
yes, but I would not be able to watch youtube
orignal
I can now, but it breaks if a router in the tunnel gets killed
zzz
ofc if you have dumb code that retransmits 500 at once, you've shot your load, you're out of tags, and you're done
orignal
that's the situation now
zzz
well, actually, if your max window is > max tag lookahead, then you've shot your load even before you start retransmitting
orignal
yes
orignal
btw why can't we increase the number of tags dynamically
orignal
if you send 500 and they come through, 320 is not a problem
orignal
because more tags will be generated on each received packet
zzz
we do that, and I believe that strategy is documented in the proposal or the spec or both
orignal
will check
zzz
but even sending 500 is not a problem with a 320 limit if they come roughly in-order
zzz
it's not the size of the window, it's the size of the gap
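A toy model of that distinction, assuming the receiver precomputes a fixed lookahead of tags past the highest index seen (the 320 is from the discussion above; the rest is illustrative, not either implementation):

    public class TagLookahead {
        static final int LOOKAHEAD = 320;
        long highestSeen = -1;

        // Sending 500 roughly in-order is fine: each arrival advances the
        // window. Only a contiguous gap of > LOOKAHEAD indices is fatal.
        boolean decryptable(long tagIndex) {
            if (tagIndex > highestSeen + LOOKAHEAD) return false; // gap too big
            highestSeen = Math.max(highestSeen, tagIndex);
            return true;
        }
    }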
RN
mind the gap
snex
doxxed
onon_
zzz, Regardless of the window size, you need to use the pacing technique.
onon_
Your current CC algorithm overloads the intermediate nodes
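A minimal sketch of the pacing onon_ is describing: space sends evenly over one RTT instead of bursting the whole window (names and the sleep-based timing are illustrative only):

    public class Pacer {
        private long nextSendNanos = System.nanoTime();

        // Wait until this packet's slot; one window of packets is spread
        // across roughly one smoothed RTT rather than sent back-to-back.
        void pace(int windowPackets, long srttNanos) throws InterruptedException {
            long interval = Math.max(1, srttNanos / Math.max(1, windowPackets));
            long now = System.nanoTime();
            long wait = nextSendNanos - now;
            if (wait > 0)
                Thread.sleep(wait / 1_000_000L, (int) (wait % 1_000_000L));
            nextSendNanos = Math.max(now, nextSendNanos) + interval;
        }
    }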
zzz
interesting theory
onon_
I understand your distrust, but unfortunately it is true
zzz
only because you haven't offered any evidence. our current code matches the RFCs pretty closely; pacing would be a layer on top of that, and wouldn't be easy, but if you've implemented it and it helps, let's see the data
zzz
we were talking about making an i2pd-i2pd socket faster for youtube, but now it's java streaming's fault? ))
onon_
If java i2p streaming starts working through one of the nodes in the tunnel, this really creates some problems
onon_
The current i2pd algorithm relies heavily on measuring the delay. And in this case, it starts to work poorly
onon_
As far as I could understand, the current java i2p algorithm relies heavily on packet loss
onon_
Paying less attention to the increasing delays created by the algorithm itself
onon_
The situation is further aggravated by the fact that RED is not implemented on i2pd nodes.
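For reference, the classic RED drop rule onon_ is referring to (Floyd and Jacobson, 1993); the thresholds here are illustrative, not values from either router:

    public class Red {
        static final double MIN_TH = 50, MAX_TH = 150, MAX_P = 0.1, WQ = 0.002;
        double avgQueue = 0; // EWMA of the queue length

        boolean shouldDrop(int queueLen, java.util.Random rnd) {
            avgQueue = (1 - WQ) * avgQueue + WQ * queueLen;
            if (avgQueue < MIN_TH) return false;          // below: never drop
            if (avgQueue >= MAX_TH) return true;          // above: always drop
            double p = MAX_P * (avgQueue - MIN_TH) / (MAX_TH - MIN_TH);
            return rnd.nextDouble() < p;                  // drop early, probabilistically
        }
    }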
zzz
handling loss, together with windowing and accurate RTT and RTO calculations, is the foundation of any congestion control algorithm, including ours
onon_
One of your developers clarified that you currently have the Westwood+ algorithm implemented
onon_
It's outdated.
zzz
for RTT/RTO/retx, if you're doing anything different from RFC 6298, you're doing it wrong. very wrong.
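The RFC 6298 computation zzz is pointing at, with the RFC's own constants (alpha = 1/8, beta = 1/4, K = 4, 1 s minimum RTO):

    public class Rfc6298 {
        double srtt = -1, rttvar, rto = 1.0; // seconds; initial RTO is 1 s

        void onRttSample(double r) {
            if (srtt < 0) { srtt = r; rttvar = r / 2; }      // first sample
            else {
                rttvar = 0.75 * rttvar + 0.25 * Math.abs(srtt - r);
                srtt   = 0.875 * srtt + 0.125 * r;
            }
            rto = Math.max(1.0, srtt + Math.max(0.001, 4 * rttvar));
        }

        void onRetransmitTimeout() { rto = Math.min(rto * 2, 60); } // back off
    }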
onon_
i2pd currently uses a modified version of cubic
onon_
And it works pretty fast.
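CUBIC's window growth function, for comparison (RFC 8312); whether i2pd's modified version matches this exactly is not stated here:

    public class CubicWindow {
        static final double C = 0.4, BETA = 0.7; // RFC 8312 constants
        double wMax;                              // window just before the last loss
        double k;                                 // seconds to regrow to wMax

        void onLoss(double currentWindow) {
            wMax = currentWindow;
            k = Math.cbrt(wMax * (1 - BETA) / C);
        }

        // Concave approach to wMax, then convex probing past it.
        double window(double secondsSinceLoss) {
            double t = secondsSinceLoss - k;
            return C * t * t * t + wMax;
        }
    }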
zzz
tell that to linux, which uses westwood+. But there's plenty out there to choose from
zzz
did you benchmark cubic vs. westwood+ ?
onon_
All modern linux use cubic
zzz
if you say so, didn't know that.
onon_
In general, I conveyed the information as best I could. Consider working on this.
zzz
but anyway, apparently you guys are working on streaming, and had some questions on that and ratchet. We're not working on either and it's not on our roadmap right now
zzz
if y'all have some recommendations when you're done, I'll take notes
onon_
Agreed
zzz
and you can see all our streaming params for each socket in i2psnark, if you'd like to do side-by-side comparisons to a SAM bt client using i2pd streaming
zzz
we've also done extensive testing of streaming both on a testnet and point-to-point (w/o transports). testing on the real net is almost useless
orignal
ratchet is my question
orignal
because it's the bottleneck now