@eyedeekay
+R4SAS
+RN
+RN_
+T3s|4
+Xeha
+acetone
+orignal
+weko
Irc2PGuest89954
Leopold_
Onn4l7h
Onn4|7h
T3s|4_
aargh2
anon2
cancername
eyedeekay_bnc
hk
not_bob_afk
profetikla
shiver_
u5657
x74a6
RN
does SAM get no indication of the requesting app?
RN
just trying to understand
RN
I would think once per app, but if SAM is blind then 1 sounds most logical to me
zzz
yes RN, there's nothing in the protocol to identify it other than a username when auth is enabled
zzz
eyedeekay, near the end of the meeting I said you probably want to do it later than the HELLO; I think that was bad advice
zzz
agreed that at the HELLO (#1) is by far the cleanest and easiest
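For reference, a sketch of the handshake being discussed, based on the published SAM v3 spec (version numbers and credentials below are illustrative): when authentication is enabled, the only per-application identity the bridge ever sees is the username carried in the initial HELLO, so a check done at the HELLO happens exactly once per connection, before any SESSION CREATE.
    client -> HELLO VERSION MIN=3.1 MAX=3.3 USER="myapp" PASSWORD="secret"
    bridge -> HELLO REPLY RESULT=OK VERSION=3.3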
zzz
zlatinb, good morning, ping re: SSU testnet
zlatinb
zzz: ping
zlatinb
what's up
zzz
so
zzz
as I said, 45 KBps, 0-hop, 25 ms delay, no drop
zzz
nothing obvious, but shitloads of dup tx
zzz
went back to SSU 1, same thing
zzz
went back to 1.7.0 release, same thing
zzz
are you organized enough to find what your setup and results were for the last time you did SSU-only testing?
zlatinb
hmm that doesn't sound right
zzz
haven't looked too hard at the root cause, but it's stuck in fast retx mode
zlatinb
I'll have to look on gitlab, I usually publish results there, but off the top of my head 0 hop without delay was 100+ Mbits
zlatinb
25ms delay was definitely more than 45 KBps
zzz
not sure if it's a quirk of my test setup, which I'll explain here:
zzz
16 routers, same delay on all. Half floodfill, a couple firewalled
zzz
all standard NTCP+SSU 1, except for two, which are my HTTP client and server: SSU 1/2 only
zzz
the client and server are both 0-hop
zzz
I'm getting about 10-15% retx at the SSU layer
zlatinb
well let's try the simplest thing first:
zlatinb
for i in $(seq 1 15); do lxc-attach i2p$i -- /sbin/tc qdisc del dev eth0 root; done
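For reproducing the link conditions mentioned above, the corresponding netem add would be applied per container the same way it is removed (a hedged sketch; the delay/loss values and interface name are just the ones from this discussion, not the testnet's actual setup script):
    for i in $(seq 1 15); do lxc-attach i2p$i -- /sbin/tc qdisc add dev eth0 root netem delay 25ms; done
    # with simulated packet loss as well, e.g.:
    #   ... /sbin/tc qdisc add dev eth0 root netem delay 25ms loss 1%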
zzz
for a 2 MB download, about 10,000 pkts, about 1000-1500 retx
zlatinb
then see what you get; it should max out the CPU and be in the 10+ MB/s range
zzz
gotta fire it up for the day, no qdisc by default anyway
zzz
no change with 0 delay
zlatinb
then something isn't right
zlatinb
time to enable debug logging ... :-/
zzz
if pkts are dropping somewhere, I haven't found it yet
zlatinb
last time I ran the benchmarks was before the changes to pass priority and switch to a priority codel queue
zlatinb
is the rtt calculated correctly on the web console?
zzz
yes
zzz
I'm not seeing codel drops in any queue
zzz
let me change from 0-hop to 3-hop which will put me closer to your setup
zlatinb
it's harder to dig through logs of 3-hop tunnels
zlatinb
with 0 hop you know exactly where to look
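For anyone following along, hop count on the test HTTP client/server tunnels is controlled by the standard I2CP tunnel-length options; a sketch (where exactly these are set depends on how the testnet generates its tunnel configs):
    # 0-hop, as in the test above
    inbound.length=0
    outbound.length=0
    # 3-hop, to approximate the benchmark setup
    inbound.length=3
    outbound.length=3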
zzz
just for a quick test
zzz
30 KBps
zlatinb
still too low
zlatinb
but not as drastically
zlatinb
also keep an eye on cpu usage when doing the 0-hop test, it does max out my 8-core xeon
zlatinb
zzz: also make sure you didn't accidentally apply the qdisc to your host laptop :)
zlatinb
/sbin/tc qdisc ls
zzz
no cpu problems at 45 KBps, verified qdisc not on host
zlatinb
something else is happening, because when I was doing the interop testing I was able to download at expected speeds between java and i2pd
zzz
that was a udp-only test?
zlatinb
the java node was udp-only; i2pd doesn't support such a mode
zlatinb
I could try SSU2 if you show me how to enable/force it
zzz
no, I want to fix SSU 1 first
zlatinb
did you ever change the bandwidth limits in the testnet?
zlatinb
because 45 sounds awfully close to the very slow default
zzz
oooh that's it
zzz
61 KBps out
zzz
thanks, will reconfigure, retest, and report
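The limit in question is the router's configured bandwidth cap; a sketch of raising it in router.config for a testnet node (property names as used by the Java router's bandwidth limiter; the values are placeholders, not the defaults being referred to):
    # router.config, values in KBytes/sec
    i2np.bandwidth.inboundKBytesPerSecond=8192
    i2np.bandwidth.outboundKBytesPerSecond=8192
    i2np.bandwidth.inboundBurstKBytesPerSecond=8192
    i2np.bandwidth.outboundBurstKBytesPerSecond=8192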
zzz
0 hop: 1.25 MBps w/ 25 ms delay; 9.25 MBps w/ no delay
zlatinb
that's much better
zzz
300 KBps w/ 25ms delay, 1% drop
zzz
zero dup rx/tx at the SSU2 layer when no qdisc dropping
zlatinb
1 or 0.1? 1% is high
zzz
it's zero-hop
zzz
zlatinb or eyedeekay, would you please bump that reddit thread about i2pd < 2.41 crashing? I think I'm about ready to throw this back on the live network
R4SAS
zzz: just throw
eyedeekay
I'll bump it anyway
zzz
orignal, the problem was it trying to decode the "i" value in SSU? If we change it to "k", would that prevent the crashes?