@eyedeekay
+R4SAS
+RN
+RN_
+Xeha
+orignal
FreeRider
Irc2PGuest22478
Irc2PGuest48042
Onn4l7h
Onn4|7h
T3s|4_
aargh3
acetone_
anon4
eyedeekay_bnc
not_bob_afk
profetikla
shiver_1
u5657
weko_
x74a6
orignal
and with wrong signature ofc?
dr|z3d
not too many of those seen, I'm taking a hardline approach when I see them now. permaban.
dr|z3d
maybe 2 or 3 knocking around the network.
RN
toasters, toasters, toasters everywhere!
orignal
I know where they are coming from
orignal
i2pd can send only 2 fragments in SessionConfirmed
orignal
if the RI is longer, the last part will not be sent
dr|z3d
yeah, understood, orignal, RI is being truncated because some clown has configured a stupid number of introducers on X tier router(s).
orignal
need to detect this situation and raise an error
dr|z3d
yeah, ideally just spit out an error and shutdown the router.
dr|z3d
check the size of the local RI and shutdown if it's too big. ERROR: Too many introducers configured, invalid RouterInfo generated. Shutting down...
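A minimal sketch of the check dr|z3d describes, for illustration only: the helper name is hypothetical, and the 3072-byte limit is taken from the buffer size zzz mentions below.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Hypothetical stand-in for the router's own publish path:
    // returns the serialized, signed local RouterInfo.
    std::vector<uint8_t> CreateLocalRouterInfo();

    // Largest RI the transports can carry (the 3072 buffer zzz mentions).
    constexpr std::size_t MAX_ROUTER_INFO_SIZE = 3072;

    void CheckLocalRouterInfoOrDie()
    {
        const auto ri = CreateLocalRouterInfo();
        if (ri.size() > MAX_ROUTER_INFO_SIZE)
        {
            // Too many introducers/addresses: the RI would be truncated
            // on the wire and fail signature verification at the peer.
            std::fprintf(stderr,
                "ERROR: Too many introducers configured, invalid RouterInfo "
                "generated (%zu bytes). Shutting down...\n", ri.size());
            std::exit(1);
        }
    }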
dr|z3d
3 excessive introducer routers identified to date, all banlisted here. you want the hashes, zzz, or you don't care?
orignal
btw, I remember I saw a SessionConfirmed with more than 2 fragments
orignal
my question is why
zzz
I think our existing protections are sufficient, and if he's not overflowing the 3072 buffer, it would still be a valid RI and accessible via NTCP2 or outbound SSU2
dr|z3d
this is only for RIs that fail verification.
orignal
I think they have changed it
orignal
the problem is that it doesn't go through SSU2
orignal
zzz, but where do 2+ fragments come from?
orignal
i2pd never sends more than 2 in SessionConfirmed
zzz
I assume it's only sending 2, so it's truncated, and the sig fails, as you said yesterday
zzz
but maybe it's a DSM, not SSU2
orignal
no, sometimes I see a 3rd fragment
orignal
in SessionConfirmed
orignal
I'm not talking about that guy
orignal
I'm asking why Java even sends 3 fragments?
zzz
if it calculates it's too big, of course. For IPv6 there's 133 bytes of overhead, so for a 1280 MTU that leaves 1147 bytes, so about 2294 bytes for two
zzz
that's pretty big but not impossible
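zzz's arithmetic written out as a tiny illustrative calculation (the 133-byte overhead figure is his, the rest follows from it):

    #include <cstdio>

    int main()
    {
        const int mtu = 1280;      // minimum IPv6 MTU
        const int overhead = 133;  // per-packet IPv6 + SSU2 overhead (zzz's figure)
        const int perFragment = mtu - overhead;   // 1147 bytes of payload
        const int twoFragments = 2 * perFragment; // ~2294 bytes for two
        std::printf("per fragment: %d, max RI in 2 fragments: ~%d\n",
                    perFragment, twoFragments);
        // An RI over ~2294 bytes needs a 3rd SessionConfirmed fragment,
        // which i2pd (per this discussion) does not send.
    }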
orignal
then I should implement 2+ I guess
zzz
dmLeecZhuY-NEXZ1nZRYOexI8bzzRfrL8rFStCFaUn4= i2pd is 2109 bytes uncompressed right now
zzz
if you're going to publish those tunnel build stats you'll need it
zzz
turning off SSU 1 saves you a lot of space. 4 addresses with introducers was pretty big
zzz
you probably have 2.44.0 routers over two fragments if they're firewalled + ipv6
dr|z3d
what's the timeframe for switching off ssu1? mid-year?
zzz
see top of prop. 159
zzz
btw I see several i2pd routers with compressible padding
zzz
some probably stupid prestium
orignal
what is prestium?
zzz
no persistence, new RI and reseed every time
orignal
I thought I did something wrong with compressible padding
zzz
looks fine to me, but I hope you tested it ))
orignal
I did
orignal
you should also see it on zzz.i2p
orignal
incoming connections
zzz
yeah, haven't looked at leasesets yet
zzz
0) Hi
zzz
hi
eyedeekay
hi
zzz
what's on the list for today?
orignal
hi
eyedeekay
I can do a short go-i2p update
orignal
I think the recent high traffic and number of transit tunnels
zzz
ok, you will be 1)
orignal
also release
zzz
I'll add:
zzz
2) is high traffic
zzz
3) is release
zzz
4) is my symmetric nat state machine proposal
zzz
5) is the D/E congestion caps proposal
dr|z3d
5) perhaps bitcoin progress vis-a-vis i2p/i2pd?
dr|z3d
or 6)
zzz
ok that's 6)
zzz
big list, let's cut it off there
zzz
1) go-i2p
dr|z3d
he may be lagged, in which case skip to 2 and return to 1 later.
eyedeekay
OK just a brief one, as zzz already knows I had a wild couple of weeks
eyedeekay
I added I2P support and bridging support to a pure-Go bittorrent library, and in order to support that, goSam, sam3, i2pkeys, and onramp all got feature releases last week
zzz
you still need some index-to-projects page eyedeekay
orignal
does it work?
eyedeekay
Yes it's great, super fast actually
orignal
we tried deluge and it doesn't
eyedeekay
Seeds, downloads, and webseeds
zzz
so there's some app that wraps the lib?
eyedeekay
just a terminal one now, but github.com/eyedeekay/bttools
eyedeekay
It's a fork from the library I adapted
eyedeekay
Yeah I think that has to do with libtorrent-rasterbar
eyedeekay
I only superficially understand how python-C++ bindings work, not much help there
zzz
Vort found the cause
eyedeekay
Getting that fixed will be exciting to some users I think
zzz
kinda wild it broke 4 years ago
zzz
and what about go-i2p, which was the topic? :)
eyedeekay
I only just found out that Deluge offered some kind of option for it
eyedeekay
go-i2p itself didn't progress as much. Mostly I've been trying to get the obfuscation and de-obfuscation parts of the handshake for NTCP2 right
zzz
anything else? we gotta keep it moving, big agenda
eyedeekay
Let's call it there, it will take me too long, maybe I'll go to zzz.i2p and do a long-form one this time
eyedeekay
I find myself thinking of more to talk about than I expected
dr|z3d
ok, 2) high traffic.
zzz
ok then, anything else on 1) ?
eyedeekay
No thanks
zzz
2) high traffic
orignal
the situation looks better now
zzz
this is orignal's agenda item, go ahead
dr|z3d
still seeing i2pd routers hosting 18K tunnels, orignal?
orignal
so do we know what causes it?
zzz
my theory remains the same. tunnel build spam
orignal
Transit Tunnels: 7241 on mine
orignal
half as many
orignal
I have another theory
orignal
duplicates in SSU2
zzz
you have a different theory?
orignal
about traffic
orignal
if the same message were resent a few times
zzz
maybe
dr|z3d
we've got 2 separate issues here. a) traffic ie bandwidth usage and b) transit tunnel hikes.
dr|z3d
they may or may not be related.
orignal
yes
orignal
I also think they are different
zzz
one thing for sure: i2pd is now a big part of the network. Congestion control is very important
dr|z3d
there's a sizeable chunk of transit requests coming from L/U routers. that looks suspect.
zzz
java i2p congestion control alone can no longer "save" the network by itself
zzz
it is now possible for i2pd bugs or poor congestion control to take down the network
zzz
not saying that's what happened, but it's clearly possible
zzz
the bitcoin guys were creating 290 tunnels at startup
orignal
2.45.1 will contain duplicates drop
orignal
will release wednesday
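One plausible shape for the "duplicates drop" orignal mentions, as a sketch only (not i2pd's actual code; the expiry window is an assumption): remember recently seen I2NP message IDs and drop repeats.

    #include <cstdint>
    #include <iterator>
    #include <unordered_map>

    // Sketch of an expiring seen-set keyed by I2NP message ID.
    class RecentMessageIDs
    {
    public:
        // Returns true if msgID was already seen within the window.
        bool IsDuplicate(uint32_t msgID, uint64_t nowMs)
        {
            Cleanup(nowMs);
            auto [it, inserted] = m_Seen.try_emplace(msgID, nowMs);
            return !inserted;
        }

    private:
        void Cleanup(uint64_t nowMs)
        {
            for (auto it = m_Seen.begin(); it != m_Seen.end();)
                it = (nowMs - it->second > WINDOW_MS)
                    ? m_Seen.erase(it) : std::next(it);
        }

        static constexpr uint64_t WINDOW_MS = 2 * 60 * 1000; // assumed window
        std::unordered_map<uint32_t, uint64_t> m_Seen; // msgID -> first seen
    };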
zzz
ok
orignal
yes, and you know I limit the number of tunnel build requests sent at a time
zzz
I boldly predict expl. build success will be over 40% by next Monday
zzz
anything else on 2) ?
zzz
now my turn for lag...
zzz
3) release
orignal
no
zzz
go ahead orignal
orignal
we will release 2.45.1 wednesday
dr|z3d
I'm seeing a significant drop in tunnel requests by throttling/temp banning L/U tier routers.
dr|z3d
with a scaled approach, reject first, then temp ban if the router persists, transit tunnels look a lot saner on routers I'm watching.
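A sketch of that scaled approach with invented thresholds (the real values and data structures are whatever dr|z3d's router uses): reject a low-tier requester's builds first, and escalate to a temp ban only if it keeps coming back.

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    struct RequesterState { int rejects = 0; uint64_t banUntilMs = 0; };

    class TransitThrottle
    {
    public:
        enum class Action { Reject, Ban };

        // Called for each tunnel build request from an L/U tier router.
        Action OnBuildRequest(const std::string& routerHash, uint64_t nowMs)
        {
            auto& st = m_Requesters[routerHash];
            if (nowMs < st.banUntilMs)
                return Action::Ban;              // still temp-banned: drop
            if (++st.rejects > MAX_REJECTS)
            {
                st.banUntilMs = nowMs + BAN_MS;  // persisted past rejects
                st.rejects = 0;
                return Action::Ban;
            }
            return Action::Reject;               // reject first, ban later
        }

    private:
        static constexpr int MAX_REJECTS = 10;              // invented
        static constexpr uint64_t BAN_MS = 10 * 60 * 1000;  // invented
        std::unordered_map<std::string, RequesterState> m_Requesters;
    };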
orignal
we see that 2.45.0 improved the situation a lot
zzz
we should release tomorrow, a day late
orignal
because we fixed so many bugs
orignal
fine
zzz
will see how it goes
zzz
anything else on 3) ?
zzz
yup, us too
orignal
no
zzz
I'm reviewing our diff, forgot about all the bug fixes from early december
zzz
tokens and stuff
orignal
another thing
zzz
it's a really big release for us, diff-size-wise
zzz
go ahead
orignal
one Ukrainian guy showed me a DatabaseStore of 47K
orignal
with LS1
zzz
wow
zzz
don't know how, but you can definitely do it with LS2
orignal
maybe the network is full of messages like that?
orignal
it was LS1
zzz
interesting
zzz
if I see one I'll let you know
orignal
yes, and I have added code to drop it if the LS exceeds 3K
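The guard orignal describes, sketched (function and constant names invented; the 3K cutoff is his):

    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t MAX_LEASESET_SIZE = 3 * 1024; // orignal's 3K cutoff

    // Called with the LeaseSet payload of an incoming DatabaseStore.
    // Returns false to drop oversized stores (e.g. the 47K LS1 above).
    bool AcceptLeaseSetStore(const uint8_t* /*buf*/, std::size_t len)
    {
        return len <= MAX_LEASESET_SIZE; // false = drop before verifying
    }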
zzz
anything else on 3) ?
orignal
no
zzz
4) nat state machine
zzz
mentioned a couple of times briefly in december
orignal
yes, I read it
zzz
really didn't get it fixed until ~12/26
zzz
and I have almost all of it implemented and in the release
zzz
so I'd like to put it into the SSU2 spec, but maybe it needs review and testing first
zzz
thoughts?
orignal
you should add it to the specs
zzz
ok, we can always change it later anyway
zzz
maybe with some thoughts on "full cone nat"
zzz
I'd like to actually test full cone nat and symmetric nat locally, but I don't want to f-up my firewall
zzz
maybe with an old firewall behind the outside one
zzz
anyway, if you have any edits/corrections/suggestions about it, let me know
orignal
use iptables
zzz
good idea
orignal
to simulate it
zzz
or I have an old openwrt box I can put behind my other box
zzz
and do iptables on that
zzz
anything else on 4) ?
zzz
5) congestion caps
zzz
I took the conversation from the other day and wrote it up
orignal
what's that?
zzz
which is the opposite of what you originally proposed with "high performance" cap, remember?
orignal
yes
orignal
let me read
zzz
so when the conversation ended, I just wrote it up where we left it
orignal
wait
orignal
so how would you like to publish it?
zzz
this would be in the main RI caps, like PfRD
orignal
got it
zzz
I believe that's what we were discussing, I just wrote it down the way I understood it.
zzz
dr|z3d, does the writeup reflect your memory?
orignal
so if we see such cap we should try to build a tunnel?
dr|z3d
yeah, that's about right, zzz.
zzz
no orignal, if you see a cap you should not try to build a tunnel, or at least try less often
zzz
they are "congestion" or "go away" caps
dr|z3d
if we're going to publish congestion caps, we could do with some method of determining how loaded the cpu cores are.
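One way to get at what dr|z3d is asking for, sketched with invented thresholds: compare the 1-minute load average (POSIX getloadavg) to the core count before choosing a cap.

    #include <algorithm>
    #include <cstdlib>   // getloadavg (POSIX/glibc)
    #include <thread>

    // Pick a congestion cap letter from CPU load; 0 means no cap.
    // Thresholds are illustrative, not from any spec.
    char CongestionCapFromLoad()
    {
        double load[1];
        if (getloadavg(load, 1) != 1)
            return 0; // load average unavailable on this platform
        const unsigned cores =
            std::max(1u, std::thread::hardware_concurrency());
        const double perCore = load[0] / cores;
        if (perCore > 0.9) return 'E'; // severely congested
        if (perCore > 0.6) return 'D'; // congested
        return 0;
    }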
orignal
yes, that's what I meant
orignal
exclude on
zzz
the one thing I added in the writeup is, if it's 'E' but too old, treat it as 'D'
orignal
yes, good idea
orignal
let's go ahead
dr|z3d
you got that in the writeup: "If this RI is older than 15 minutes, treat as 'D'"
zzz
right
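That rule from the writeup in sketch form ('E' is only trusted while the RI is fresh):

    #include <cstdint>

    constexpr uint64_t FIFTEEN_MIN_MS = 15 * 60 * 1000;

    // A stale 'E' cap is downgraded to the milder 'D'.
    char EffectiveCongestionCap(char cap, uint64_t riPublishedMs, uint64_t nowMs)
    {
        if (cap == 'E' && nowMs - riPublishedMs > FIFTEEN_MIN_MS)
            return 'D';
        return cap;
    }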
zzz
I'd like to think about it a little more and decide in a week or two. I don't want to make any definite decision until after the release
zzz
and I have more room in my head
zzz
++
orignal
btw, if "no transit" I aslwys raise "D"?
zzz
if you don't allow any transit?
dr|z3d
no transit being currently hosting no tunnels, or rejecting all transit requests?
dr|z3d
if the former, D is fine, if the latter, E makes more sense.
zzz
if the latter, we need a new letter
orignal
yes, I have such option
dr|z3d
maybe 'R' for no transit, aka reject.
zzz
because E turns into D after 15 minutes
orignal
I reject tunnel requests with code 30
zzz
R is reachable
dr|z3d
doh.
zzz
you could use code 50 to say go away forever
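The distinction zzz draws, sketched with the two reply codes named in the discussion (the policy logic here is illustrative):

    #include <cstdint>

    // Tunnel build reply codes mentioned above.
    constexpr uint8_t TUNNEL_REJECT_BANDWIDTH = 30; // temporary: try later
    constexpr uint8_t TUNNEL_REJECT_CRITICAL  = 50; // go away forever

    // acceptsTransit reflects a "no transit" config option like orignal's.
    uint8_t ReplyCodeForBuildRequest(bool acceptsTransit, bool overloaded)
    {
        if (!acceptsTransit)
            return TUNNEL_REJECT_CRITICAL;  // rejecting all transit, always
        if (overloaded)
            return TUNNEL_REJECT_BANDWIDTH; // congested right now
        return 0;                           // 0 = accept
    }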
zzz
dr|z3d?
zzz
guess he's lagged out
zzz
weko, what's your 7) ?
dr|z3d
6) bitcoin.. sorry,
dr|z3d
well, we appear to be making progress with the bitcoin team, zzz and orignal have both nudged them in the right direction.
dr|z3d
they've updated their i2p documentation, though there's still more work to do there. (20 transit tunnels etc).
dr|z3d
orignal, zzz: anything to add?
zzz
I acked both the PRs
orignal
no
zzz
ok anything else on 6) ?
dr|z3d
not from me.
orignal
no
zzz
do we have time for 7) ?
dr|z3d
3 minutes.
dr|z3d
:)
zzz
7) weko
dr|z3d
makes the meeting a round hour.
zzz
go ahead
weko
Oh
weko
Maybe later
weko
I haven't written it yet
zzz
ok, next week, if you want
zzz
anything else for the meeting?
weko
It is a suggestion about polling I2NP packets
zzz
oh wait. are we on one week or two week cycle?
eyedeekay
We were on 2 weeks before the holiday
zzz
orignal, one week or two?
zzz
yeah but then we had an extra one
zzz
?? you want a meeting next week or in two weeks?
orignal
2
dr|z3d
23rd Jan then.
orignal
two weeks
zzz
sounds good
zzz
happy release week
zzz
thanks everybody
weko
I suggest adding an I2NP "polling" message. The tunnel owner sends a "polling" message through the tunnel (directly through an outbound tunnel, or via another of our inbound tunnels for inbound tunnels), and every transit router can add some metadata for the owner. Then the "polling" message is either received directly by the router, or received by the endpoint, which sends the packet to one of our inbound tunnels (we specify the tunnel(s) in the packet)
weko
This is useful for:
weko
1) saying that our router is shutting down. We just need to wait 30 sec (if the maximum delay for these messages is 30 sec), and during those 30 seconds we add the metadata "I am shutting down...", and then tunnel owners can switch tunnels for their data without lag in their data streams. It is useful because we only need to wait 30 sec (or whatever time we choose) instead of 10 minutes
weko
2) dynamically changing the per-tunnel transit traffic limit. For example, a transit router can say "I can transit 150 kbps for now", and then later say "I can transit 200 kbps for now"
weko
3) We can use it for tunnel testing (this way we don't need to send extra packets when there is no activity).
weko
Oh I wrote it
weko
We add "I am shutdowning" in every polling message in our transit tunnels*
weko
What do you think about this feature?
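For concreteness, one possible shape for the proposed message, entirely hypothetical (nothing like this exists in I2NP):

    #include <cstdint>
    #include <vector>

    // Hypothetical layout for weko's proposed "polling" I2NP message.
    struct TransitHopMetadata
    {
        uint8_t  flags;          // e.g. bit 0 = "I am shutting down"
        uint16_t maxTransitKbps; // current per-tunnel transit limit offer
    };

    struct PollingMessage
    {
        uint32_t replyTunnelID;  // inbound tunnel to return the poll through
        std::vector<TransitHopMetadata> hops; // appended per transit hop
    };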
dr|z3d
weko: as I mentioned before, telling 5K routers, or 10K routers, that you're about to shut down is an anonymity risk.
weko
dr|z3d: node status is not private info
dr|z3d
all I need to do as an attacker is make sure I'm always building tunnels through you, and then I just monitor your shutdowns and the netdb. game over.
weko
dr|z3d: we can do the same thing now. Just send tunnel requests to get node status (on/off)
weko
dr|z3d: you don't understand my suggestion, I think
dr|z3d
there's a reason it takes up to 11 minutes to shut down. and that reason is predicated on anonymity.
dr|z3d
that takes significantly more effort than being told "I'm going offline now".
weko
The fact that you say "I am shutting down" in transit tunnels (not your own tunnels) doesn't deanonymize you, because a router's online/offline status is already public info
dr|z3d
if my router is restarting, there's maybe a 60s window where it's offline. during that period, an attacker can determine I'm offline by probing my port. that's a small window.
dr|z3d
I don't currently publish the fact that I'm about to go offline.
weko
We tell the tunnel's owner only info that is already public. We just do it more promptly this way.
weko
But anyone can check it
weko
If you answer, you're online. If you don't answer, you're offline
dr|z3d
any proposal has to weigh up the benefits vs the risks. what you're proposing is out of convenience at great risk to the anonymity of the router operator. not worth it.
dr|z3d
there's nothing to stop you from aggressively restarting your router without waiting for transit tunnels to expire. the network can handle it.
weko
dr|z3d: what risks does publishing public info have?
weko
dr|z3d: with lags
dr|z3d
just read again what I wrote above, weko. there's a reason the same logic's been in play for the last 20 years.