@eyedeekay
+R4SAS
+RN
+RN_
+T3s|4
+Xeha
+orignal
+weko
Irc2PGuest88897
Onn4l7h
Onn4|7h
T3s|4_
aargh2
acetone_
anon2
cancername
eyedeekay_bnc
hk
not_bob_afk
profetikla
shiver_
u5657
x74a6
dr|z3d
hopefully, now you've brought his attention to the issue, he'll understand just how damaging this transient dest feature is.
dr|z3d
what we do know is that we're in for a bumpy ride for a while until either the feature gets removed or we're forced to find other ways to defend against it.
dr|z3d
I also hope he took note of the recommendation to promote sharing of bandwidth and shakes off the idea that I2P's a network that can be used without giving back.
obscuratus
I've got to wonder if the bitcoin guys even tested this at all.
zzz
yeah they did. Vasil D did it a lot. They're a serious project
zzz
I think that i2pd is just ending up in local congestion collapse because it doesn't have any limits
zzz
so after everybody rejects it, it just spams the whole network
dr|z3d
is there any way we can identify the router(s) behind the spam requests?
zzz
grep -i 'hop throttle'|cut -d ' ' -f 15|sort|uniq -c | sort -rn
dr|z3d
because if this continues for much longer, the next response would be to session ban the routers in question, if they can be identified.
zzz
test results on ilita #dev
dr|z3d
not seeing the number of throttled routers in my logs that I'd expect. 10 in 25MB.
dr|z3d
or 8 if you remove the dupes.
dr|z3d
another thing. is dropping instead of rejecting contributing to the problem? wouldn't it be better to reject the requests?
obscuratus
Likewise, I'm not seeing many routers using the 'hop throttle' grep. Otherwise, I'm saturated on participating tunnels.
zzz
the theory is, reject for a while, so the creator gets the hint, then drop so the next guy doesn't accept and drive up his tunnel count
zzz
that's why the two thresholds
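The reject-then-drop logic zzz describes could be sketched roughly as follows. A minimal sketch only: the two threshold values and all names are illustrative, not I2P's actual code.

```java
// Sketch of the two-threshold scheme: mildly over the limit -> explicitly
// reject (code 30) so the tunnel creator gets the hint and backs off;
// far over the limit -> silently drop so the next hop doesn't accept the
// build and drive up its own tunnel count. Thresholds are illustrative.
class TwoThresholdThrottle {
    static final int REJECT_THRESHOLD = 100; // illustrative value
    static final int DROP_THRESHOLD = 150;   // illustrative value

    enum Action { ACCEPT, REJECT, DROP }

    static Action decide(int pendingRequests) {
        if (pendingRequests >= DROP_THRESHOLD) return Action.DROP;     // heavy overload: stay silent
        if (pendingRequests >= REJECT_THRESHOLD) return Action.REJECT; // mild overload: signal back
        return Action.ACCEPT;
    }
}
```

The point of the second threshold is that an explicit reject still travels the tunnel path, while a drop costs nothing and stops downstream hops from inflating their counts.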
dr|z3d
ok, I get the reject, it's the drop I'm still not sure about. because you drop the tunnel request, the tunnel build therefore fails, which causes the router to generate a new tunnel request, and presumably these requests get amplified the more requests are dropped.
zzz
which is why a build limiter is required to avoid collapse
dr|z3d
right. so you place limits on the requests a router can make in a given period. or you don't.
dr|z3d
where are we limiting builds?
dr|z3d
and did we get buy in from orignal yet?
obscuratus
Don't know if this has already been mentioned, but there's an interesting discussion of what these guys were thinking as they worked through their pull request.
dr|z3d
this situation is a great opportunity to take stock of throttles, limits, and various other protections. would rather it wasn't happening, but since it is, a good time to make sure things are as tight as they need to be.
obscuratus
Correct me if I'm misunderstanding, but they seem deeply concerned about people being able to associate their bitcoin traffic with a given b32/b33 address, no?
zzz
right now, this looks to me more like i2pd being driven into congestion collapse than a bitcoin problem
zzz
bitcoin could be nicer, but i2pd needs to rate limit builds
obscuratus
zzz: I saw you looking for testers earlier. Do you have sufficient testers on ilita?
zzz
yeah I think our guy over there did what we needed
orignal
no problem. will add it
orignal
however the problem is
orignal
I don't create a new tunnel immediately if a tunnel build fails
orignal
but after 10-20 seconds
orignal
so I don't send more requests if tunnel builds fail
zzz
I do think that's what is happening, you're getting pushed too hard and then you try to take down the whole network with build requests. I fixed the same problem in Java i2p in 2006. Please work with bitcoin to try to reproduce.
zzz
I see around 1000 requests an hour from some i2pd routers so something's not right
dr|z3d
I get that an unthrottled router handling whatever's thrown at it is an appealing idea, and for a while I was coding for I2P+ on that basis, but recent events have demonstrated that throttling isn't an optional extra.
dr|z3d
these bitcoin nodes, zzz, all L tier from the looks of things, no?
dr|z3d
I don't store L tier routerinfos on disk, so they're cleaned out on router restart, but they still very quickly end up being almost 50% of the netdb.
dr|z3d
I'm just wondering if we can do something like cap them at x percent of the netdb.
zzz
343 lb6qOr8pGDwB-nAycRn0W1YZCz5zjelRzeMTNIuUatM=]: XfR, but somebody probably building through him
dr|z3d
I thought i2pd didn't publish stats, but that router you just referenced is giving me stats and is apparently i2pd.
zzz
it doesn't. 343 was my count of throttles in half an hour
dr|z3d
no, I mean I'm looking at the netdb entry for that router.
dr|z3d
Stats:
dr|z3d
Capabilities: Xf | Network ID: 2 | LeaseSets: 135 | Routers: 10196 | Version: 0.9.56 | First heard about: 12 hours ago | Last heard about: 6 min ago | Last heard from: 2382 ms ago
zzz
I think it publishes stats if floodfill
zzz
RouterInfo.h:const char ROUTER_INFO_PROPERTY_ROUTERS[] = "netdb.knownRouters";
dr|z3d
ah, ok, that'll be it then.
zzz
java would never have 10k routers
dr|z3d
does here, +-
zzz
pfft
dr|z3d
lol. you and your pffts.
dr|z3d
I'm currently expiring K/L tier routers early, I think I'm going to expire them even earlier.
dr|z3d
over an hour old? be gone.
dr|z3d
let's see if that helps tame all these crap L tiers in my netdb.
dr|z3d
> Invalid RouterInfo signature detected for [LFUJYg] ➜ Forged RouterInfo structure!
zzz
FYI I'll be mostly afk from about 3:00 eastern today to noon thursday. Don't let the place burn down while I'm gone
orignal
1000 requests per hour is nothing
orignal
<zzz> java would never have 10k routers
orignal
can you explain why?
orignal
if you a floodfill
zzz
maybe
zzz
we try very hard to keep it lower though
orignal
please explain how you clean up on a floodfill
orignal
it's just 1 hour and that's all
orignal
or I missed something?
zzz
yeah I think we do 45 minutes
orignal
then how come you don't have 10K?
orignal
you should have something close
zzz
nope, it's 60 minutes
zzz
maybe
zzz
I don't know, I don't run a floodfill most of the time
orignal
that's why I'm curious
orignal
but your statement
orignal
that java would never have it
orignal
I thought you did something for it
zzz
most of the java ffs are about 2000-6000 right now
zzz
i2pd is about 9000-10000
zzz
I think you have more because you're accepting more tunnels
orignal
how is it related to number of tunnels?
zzz
you get more tunnel build requests, and you have to lookup the RI for the next hop
zzz
and you allow more connections.
orignal
probably
zzz
more connections, more tunnels -> more RIs
orignal
so you theoretically can have more than 10K routers
orignal
right?
zzz
yes
orignal
now why is 1000 build requests per hour excessive?
zzz
from a single router to another router, as the next or previous hop, that's a lot
orignal
it's even less than 1 per second
zzz
for a pair of routers
zzz
that's how our "hop throttle" works
orignal
then what is not excessive?
zzz
we limit to 3% of our total tunnels, more or less
zzz
that's how we prevent bitcoin spammers
zzz
currently 12%. Reducing to 3% in next release.
zzz
We are in danger of the whole network collapsing
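The per-peer "hop throttle" zzz describes could be sketched like this. Only the 3% share figure comes from the discussion; the class, field names, and the absolute floor are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a per-peer hop throttle: count participating tunnels per
// adjacent router (previous or next hop) and refuse new build requests
// once any single peer would exceed a fixed share of our total.
// The 3% share is from the discussion; everything else is illustrative.
class HopThrottle {
    static final double MAX_SHARE = 0.03;   // 3% of total participating tunnels
    static final int MIN_TOTAL = 100;       // don't throttle until we have some history

    private final Map<String, Integer> tunnelsPerPeer = new HashMap<>();
    private int totalTunnels = 0;

    boolean accept(String peerHash) {
        int peerCount = tunnelsPerPeer.getOrDefault(peerHash, 0);
        // Reject if this peer's share of our tunnels would exceed the cap
        if (totalTunnels >= MIN_TOTAL && peerCount + 1 > totalTunnels * MAX_SHARE)
            return false;
        tunnelsPerPeer.put(peerHash, peerCount + 1);
        totalTunnels++;
        return true;
    }
}
```

A diverse set of peers never hits the cap, while a single router spamming builds is cut off after a handful of tunnels, which is the "bitcoin spammer" defense described above.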
orignal
think of it from a different side
orignal
if it was not bitcoin but an adversary
orignal
who does it intentionally
orignal
just run a simple script that keeps creating SAM sessions
orignal
10 lines of code really
orignal
and they know how to eliminate limits on their side
orignal
also please tell me what is code 10
orignal
people notice most of the rejection codes are 10, not 30
zzz
sure, it could be intentional. But my theory, that I put in the bitcoin ticket, is that it is not. My theory is that you're sending too many build requests.
zzz
And I'm asking that you work with them to prove, or disprove, my theory
orignal
I don't send too many build requests
orignal
a client app does
orignal
I replied yesterday how it works
orignal
number of requests only depends on number of destinations
zzz
you need to rate limit, no matter what the client does
orignal
and not on number of failed tunnels
orignal
then tell me what causes so many requests
orignal
it's clear that somebody has too many destinations
zzz
getting so many rejections is what causes so many requests. That's how you get to congestion collapse
orignal
the question is why?
orignal
either a bug in some SAM/Bob app
orignal
or it's an attack
orignal
no
zzz
it's basic feedback loop
orignal
that's what I'm trying to explain
orignal
there is no feedback loop
orignal
failed tunnel build doesn't cause new request
zzz
ok, then please work with bitcoin to try to reproduce it
zzz
/** probabalistic tunnel rejection due to a flood of requests */
zzz
public static final int TUNNEL_REJECT_PROBABALISTIC_REJECT = 10;
orignal
the max number of requests is the overall number of tunnels
zzz
I don't have the answers. I have a theory. As this is a serious issue, I ask that you work with bitcoin to investigate
zzz
I think bitcoin+i2pd could collapse the network as soon as this weekend
orignal
please tell me about bitcoin
zzz
what about it?
orignal
do they create a new destination for each peer or not?
zzz
yes. confirmed by jonatack last night on twitter.
orignal
then what do you want me to reproduce?
orignal
if they create at least 16 dests with 10 tunnels each
orignal
160 requests per few minutes
zzz
the two guys are Jon Atack and Vasil Dimov
zzz
large number of tunnel requests
zzz
default quantity
zzz
I don't know the max dests. talk to them
orignal
based on my experience with BTC I usually see 16 peers
zzz
but if 90% of the requests are rejected or dropped, then how many?
orignal
still the same
zzz
Traca did some testing last night, see ilita #dev
zzz
please test and see
orignal
dest, 10 tunnels, all dropped, it tries again after 10-20 seconds
orignal
test what?
zzz
and if all rejected?
orignal
it doesn't care about rejecting
orignal
it cares how many tunnels we have so far
zzz
either test with bitcoin, or with something similar. 16 dests, 2 tunnels each
orignal
if we have 2 tunnels and the number of tunnels is 5
orignal
it tries to build 3 more
orignal
2 tunnels?
zzz
whatever your default is. they aren't setting quantity
orignal
default is 5
orignal
that's why
orignal
5 for each side
zzz
and that's for a single peer
zzz
so please test that, 16 x 5
zzz
please add your comments to the github ticket
orignal
160 tunnels
orignal
that's what I said
orignal
will do
zzz
thank you :)
orignal
commented
orignal
another question
orignal
what do you do with an incoming request if the limit is exceeded?
orignal
send it further with code 30, or drop it
orignal
?
zzz
we try to reject if just over the limit. If more over the limit we will drop.
orignal
so, the originator will not get a response?
zzz
correct
orignal
then how long do you wait?
orignal
before you decide that request fails
orignal
*failed
zzz
looking...
orignal
because you can reach the 13-request limit very quickly
zzz
ok here we go
zzz
we have a "currently building" and a "recently building" list
zzz
new build goes in "currently building"
zzz
limit there is 13
zzz
after 5 seconds, we move it from "currently building" to "recently building", and start another build
zzz
it stays in "recently building" for another minute, so if we get the answer then the tunnel build succeeds
orignal
5 sec
orignal
got it
zzz
and on "slow" boxes (android, ARM) it's 10 seconds
orignal
that's all I needed
zzz
we also prioritize pools with no tunnels first. So everybody will get one
zzz
and we prioritize expl. over client
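The build-tracking scheme zzz just walked through (13 pending builds, entries moved to "recently building" after 5 seconds, a one-minute grace period for late replies) might look something like this sketch. The numbers come from the discussion; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two-list build tracker: new builds go into "currently
// building" (capped at 13); after 5 seconds (10 on slow boxes) they move
// to "recently building", freeing a slot for the next build; a reply
// arriving within another minute still completes the tunnel.
class BuildTracker {
    static final int MAX_CURRENTLY_BUILDING = 13;
    static final long CURRENT_TIMEOUT_MS = 5_000;  // 10_000 on slow boxes (android, ARM)
    static final long RECENT_TIMEOUT_MS = 60_000;

    private final Map<Long, Long> currentlyBuilding = new HashMap<>(); // buildId -> start time
    private final Map<Long, Long> recentlyBuilding = new HashMap<>();  // buildId -> move time

    // Returns false if the pending-build limit is hit and the build must wait.
    boolean startBuild(long buildId, long now) {
        expire(now);
        if (currentlyBuilding.size() >= MAX_CURRENTLY_BUILDING) return false;
        currentlyBuilding.put(buildId, now);
        return true;
    }

    // A reply counts as a success while the build is tracked in either list.
    boolean handleReply(long buildId, long now) {
        expire(now);
        return currentlyBuilding.remove(buildId) != null
            || recentlyBuilding.remove(buildId) != null;
    }

    private void expire(long now) {
        currentlyBuilding.entrySet().removeIf(e -> {
            if (now - e.getValue() >= CURRENT_TIMEOUT_MS) {
                recentlyBuilding.put(e.getKey(), now); // demote, keep waiting for a reply
                return true;
            }
            return false;
        });
        recentlyBuilding.values().removeIf(t -> now - t >= RECENT_TIMEOUT_MS);
    }
}
```

The design point is that the 13-slot cap rate-limits outgoing builds without giving up on slow replies: demoted builds no longer block new ones but can still succeed for a minute.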
dr|z3d
zzz: probabalistic reject and friends look like they're not being used?
dr|z3d
/** probabalistic tunnel rejection due to a flood of requests - essentially unused */
dr|z3d
public static final int TUNNEL_REJECT_PROBABALISTIC_REJECT = 10;
dr|z3d
/** tunnel rejection due to temporary cpu/job/tunnel overload - essentially unused */
dr|z3d
public static final int TUNNEL_REJECT_TRANSIENT_OVERLOAD = 20;
dr|z3d
/** tunnel rejection due to excess bandwidth usage */
dr|z3d
public static final int TUNNEL_REJECT_BANDWIDTH = 30;
dr|z3d
/** tunnel rejection due to system failure - essentially unused */
dr|z3d
public static final int TUNNEL_REJECT_CRIT = 50;
dr|z3d
the prob reject code in BuildHandler is commented out..
zzz
it's in throttle impl
zzz
not sure how much we pay attention to the code that comes back though
dr|z3d
RouterThrottleImpl also has the prob reject stuff commented out.
dr|z3d
well, some of it.
dr|z3d
but ok. thing is, the tunnelGrowthFactor stuff isn't slowing down growth much.
zzz
well, it's 1.3 which is, I think, 30% every 10 minutes. you can play with it if you like.
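As a rough worked illustration of what that factor permits, assuming the 1.3 growth cap compounds once per 10-minute window (the helper below is hypothetical, only the 1.3 figure is from the discussion):

```java
// Compound growth at 30% per 10-minute window: starting from 1000
// participating tunnels, the cap after each window is prev * 1.3,
// so an hour (6 windows) allows roughly a 4.8x increase.
class GrowthFactor {
    static int capAfter(int start, double factor, int windows) {
        double cap = start;
        for (int i = 0; i < windows; i++) cap *= factor; // one 10-minute window per iteration
        return (int) cap;
    }
}
```

So even with the limiter active, 1000 participating tunnels can balloon to nearly 5000 within an hour, which is why the factor does little to slow growth.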
dr|z3d
might be worth reducing that if isSlow() and also setting a lower default maxParticipatingTunnels if isSlow(). The latter I already do.
dr|z3d
public static final int DEFAULT_MAX_TUNNELS = (SystemVersion.isSlow() || SystemVersion.getMaxMemory() < 512*1024*1024) ? 2*1000 : 8*1000;
dr|z3d
looking at that, I think I'll make isSlow a different case and bump up the max for < 512.
orignal
good point about no tunnels