zzz
whoa just got more evidence that it's a broad-based attack
obscuratus
<raised eyebrow emoji>
zzz
I have access to reseed request charts from one op
zzz
requests ~ doubled and have stayed there since, starting... drumroll.. Dec. 19
zzz
still could be stupid prestium I guess
zzz
but that's a stretch
obscuratus
Maybe suggesting transient routers?
obscuratus
Empiricism... Make a hypothesis, then start testing that hypothesis, and see if reality agrees.
obscuratus
I often jump to suspecting an "attack", but I've often been corrected by the adage: "Never attribute to malice that which can be adequately explained by incompetence."
zzz
wow I could overlay the reseed graph with the part tunnels graph, they line up really well
zzz
I'm hoping I can get some IPs from the op and we could ban them
dr|z3d
I like the idea of throttling tunnels during the grace period. +1 from me.
zzz
he reported a burst last summer and it was all coming from Turkmenistan IP block
dr|z3d
you see that ip earlier, zzz?
orignal
interesting
zzz
dr|z3d, you have any data from your reseed?
dr|z3d
don't keep much data, no. maybe nginx logs, but they might be regularly scrubbed. will look in a bit.
dr|z3d
reminder: 185.213.155.0/24
zzz
yeah obscuratus, the only non-attack explanation would be live OS + 150 SAM tunnels + i2pd, but I doubt prestium caught fire on 12/19
zzz
reason, dr|z3d?
dr|z3d
5 routers I've seen in that range, mostly on the same ip, and at least 3 were at the top of the part tunnels list for a sustained period, all the same ip.
dr|z3d
that and the fact that both ips in that range are flagged as abusive ips.
zzz
yeah, but we throttle, so it's hard to tell if it's abuse without looking at the rejected count
zzz
yeah I don't know how to validate some 3rd party report though
zzz
I'll see if the IP pops up in any of my routers
dr|z3d
check for that range in your netdb. see how many routers you got. I saw 5, could be more.
dr|z3d
amount of requests was suspicious from those routers relative to everything else.
zzz
well, it's 4 XR
dr|z3d
there we go. there's also an XfR in that range.
zzz
don't see anything too alarming atm
zzz
I'm focused on code fixes; the whack-a-mole is too tedious
zzz
testing the grace period throttle, nobody snagged yet
dr|z3d
I do keep nginx logs for reseeding.
dr|z3d
if you want them for analysis, I can make them available.
dr|z3d
fail2ban handles overage.
zzz
I don't want to do the analysis, but if you identify any blatant offenders we could consider banning them
zzz
thing is, for a month I've been banging the congestion drum
zzz
that we "tipped" into congestion, that we need better congestion management, etc
zzz
but none of that applies to reseeding
zzz
so it's either prestium in a boot loop or an attack
dr|z3d
ok, well, let's see what we can deduce.
obscuratus
If it was Prestium in a boot loop, wouldn't that show up in a sybil analysis when run on all routers?
obscuratus
Oh, maybe not if it starts in hidden mode or firewalled by default.
dr|z3d
SUNYSB seems to be misbehaving.
dr|z3d
big chunk of those routers all on the same /24 banned on the reseed host.
zzz
I can ban IPs but not ranges via the news
dr|z3d
so 239 ips currently banned by fail2ban. you have any tools to feed those to the netdb?
obscuratus
If this issue were arising from a single IP, or a range of IPs, an early step would be to tweak the sybil analysis to catch the behavior.
dr|z3d
SUNYSB is a trusted family, so whatever they're doing.
obscuratus
For my part, I haven't been able to pick up on a pattern in the IP addresses.
zzz
let's see if the other guy reports any obvious offender
obscuratus
While I think it's a different problem than the bandwidth/grace period problem I mentioned earlier, I'm still suspicious of all the LU routers in my NetDB.
obscuratus
Nearly all of them have NTCP2 cost 14 and SSU2 cost 15
zzz
the increase was from 3k -> 6k/day, or from 2/minute to 4/minute
zzz
too fast to be a single bad prestium
obscuratus
I can envision an attack where the goal is to degrade the success/effectiveness of exploratory tunnels by flooding the network with transient LU routers. Since they don't have an IP, we wouldn't pick up on a pattern.
dr|z3d
I'm asking myself if using Amazon's AWS cloud servers for I2P is cost-effective for the average user.
dr|z3d
because a sizeable chunk of the banned IPs are using Amazon's cloud.
zzz
banned by fail2ban or by the router?
dr|z3d
f2b
dr|z3d
hmm, now this is interesting. I'm looking at a pie chart of 100 of those ips by ISP.
dr|z3d
one ISP stands out above all others. Iran Telecommunication Company PJS
zzz
see we don't know if reseeders are real routers at all
zzz
I don't know anybody using AWS
zzz
I guess we could banhammer the whole thing
dr|z3d
no we don't, this is true. AWS is just a poor fit for I2P I'm thinking, but probably a great fit for trying to brute force the netdb via reseed hosts.
dr|z3d
actually, re Iran, disregard. the site I'm feeding the data to is doing a poor job with the pie charts.
zzz
no use banning AWS if it's just a wget loop running there
dr|z3d
indeed not.
zzz
the good news is AWS publishes their current IP ranges in json
zzz
the bad news is there's 7036 v4 ranges and 1726 v6 ranges
dr|z3d
:)
dr|z3d
don't think it's worth banning AWS just yet.
dr|z3d
not until we have substantive evidence they're on the network and the source of abuse. which we don't.
dr|z3d
the fact that they're all over the reseed servers is interesting, though. possible state-level brute forcing attempt.
zzz
I'll parse the json into a blocklist and see what pops up
dr|z3d
not entirely sure just yet if banning that range I mentioned earlier is affecting overall b/w and transit count on various routers, but it looks like it might be. both significantly lower on several routers.
dr|z3d
for example, pre-ban, transit count about 50% higher than post ban on 1 router, uptime 4 hours.
dr|z3d
sorry, count about 100% higher, pre-ban.
dr|z3d
yeah, I'm somewhat confident that range is abusive.
dr|z3d
ok, another interesting pattern of sorts.
dr|z3d
2 distinct linode systems on different ip ranges both banning the same router.
dr|z3d
(or maybe not interesting at all)
zzz
obscuratus, I haven't gotten a single participating msg > 20 sec after expiration in 12 hours
obscuratus
zzz: The behavior I was seeing yesterday has all but stopped. Oddly, it stopped shortly after we started discussing it yesterday.
zzz
ok. if it comes back I can give you a patch to test
zzz
based on tests so far, I think we can reduce grace period to 90 sec
dr|z3d
ip range mentioned yesterday: 31173 VPN. It's actually named that. with the first range blocklisted, another appeared, several routers at the top of the part tunnel count. 100% dodgy.
obscuratus
It may be a coincidence, but there was a noticeable jump in network wide tunnel.BuildExploratorySuccess (from stats.i2p) that also coincided with the time I stopped seeing this behavior.
dr|z3d
31173 VPN:185.209.196.0/24
dr|z3d
31173 VPN:185.213.155.0/24
dr|z3d
huge drop in traffic and part tunnels with those 2 ranges blocked.
zzz
I've caught about 5 routers with my AWS blocklist, if anybody wants it. I got it down to about 2200 ranges
dr|z3d
2200 ranges! lol
zzz
it was 7K, but I removed dups and anything smaller than /24
zzz
it was all the way down to /32
dr|z3d
that's got to slow the router down, no?
dr|z3d
see what you've got in your netdb for those 2 ranges above, and if they're claiming top spots in the part tunnel stakes.
zzz
not really, I've tested with enormous public blocklists, and also with hidden mode in the US which, last time I checked, blocked 130K ranges
zzz
our ranges storage and search is pretty efficient; it's the transient list that's not
dr|z3d
ok
dr|z3d
31173 VPN may also be part of Mullvad's VPN service, apparently.
orignal
so you think Turkmen donkeyfuckers are doing it?
dr|z3d
behave yourself, orignal!
dr|z3d
what I can tell you is that with those ranges I mentioned banned, transit tunnels all look totally fine.