IRCaBot 2.1.0
GPLv3 © acetone, 2021-2022
#i2p-dev
/2022/12/13
zzz here's my last snapshot at 6%
zzz 964 CyLg6w8lypk1gnAX-CdG8O4NCR86hq8ifge6QKXAoJg=]:
zzz 439 DtQsGzkbeR3nilr6ZvywR2O7-f0XaaV~YfHXohqwjgI=]:
zzz I'm moving on to test 3%
dr|z3d interesting ongoing high bandwidth on the sc router still.
dr|z3d re part tunnels, OXf4 worth keeping an eye on.
dr|z3d DtQs not registering as significant here. OXf4 currently top part tunnel requester.
dr|z3d thing is, even as the top user, it's only got a 1% share of all part tunnels.
dr|z3d interesting, seeing a ton of Failed to dispatch TunnelDataMessage (Inbound Endpoint: null) warnings right now after the sustained bandwidth.
zzz dr|z3d, re: "ourselves", both of mine were DatabaseLookupMessages... same for all of yours?
dr|z3d let's have a looksee..
dr|z3d confirmed, zzz. 8 instances in 1 file, all DbLookups.
zzz thank you, will set a trap to find the culprit
dr|z3d malicious, you think, or bug?
zzz almost certainly either our bug, or our bug not catching somebody else's bug
dr|z3d those bandwidth spikes on sc router, 11 hours this time.
zzz can't help with that, up to you guys to manage
dr|z3d not expecting help, just reporting what I'm observing.
dr|z3d at this stage it's more curious than anything else.
zzz pretty sure 'failed to dispatch' are bloom filter hits, a small number is to be expected and is fine
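For context: the IV validator remembers recently seen tunnel message IVs in a decaying Bloom filter, and a hit means the message is dropped rather than dispatched. A minimal sketch of that check, reusing the DecayingBloomFilter constructor quoted later in this log, and assuming add() returns true when the entry was (possibly falsely) already present:

    import net.i2p.I2PAppContext;
    import net.i2p.util.DecayingBloomFilter;

    class IVDupCheckSketch {
        // Sketch only, not the actual BloomIVValidator code; the 10-minute
        // halflife and m=25 are illustrative values.
        private final DecayingBloomFilter _filter;

        IVDupCheckSketch(I2PAppContext ctx) {
            _filter = new DecayingBloomFilter(ctx, 10*60*1000, 16, "TunnelIVV", 25);
        }

        /**
         * @return false if the IV is a (probable) duplicate and the
         *         TunnelDataMessage should be dropped -- the source of the
         *         "Failed to dispatch" warnings, a trickle of which is just
         *         the filter's designed false-positive rate.
         */
        boolean acceptIV(byte[] iv) {
            return !_filter.add(iv, 0, 16);
        }
    }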
dr|z3d ok, in normal operation they're not so frequent, they just happened to be coming up a lot after the bandwidth session.
zzz you could always increase the filter size if you think it's too much and you have the ram for it
dr|z3d pretty sure the filter size is already pretty generous.
dr|z3d } else if (maxMemory >= 1024*1024*1024L) {
dr|z3d     // 8 MB
dr|z3d     // appx 80K part. tunnels or 960K req/hr
dr|z3d     m = 25;
dr|z3d that should do it, no?
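A note on the sizing math, assuming m is the log2 of the filter's bit count and a decaying filter keeps two generations (the current one plus the decaying previous one): the total footprint is 2 x 2^m bits, i.e. 2^(m-2) bytes, which lines up with every figure quoted in this conversation:

    // Assumed relation: 2 generations * 2^m bits, / 8 bits per byte.
    static long bloomFilterBytes(int m) {
        return 2L * (1L << m) / 8;  // = 2^(m-2) bytes
    }
    // bloomFilterBytes(25) =  8 MB  -- the "8 MB" comment above
    // bloomFilterBytes(27) = 32 MB  -- canon's stated max
    // bloomFilterBytes(28) = 64 MB  -- the "64MB fixed" HUGE4 filter below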
zzz graph this stat to keep an eye on it router.decayingBloomFilter.TunnelIVV.dups
dr|z3d ok, will do, thanks.
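For anyone following along: graphing a stat is done from the console's /configstats page, which, as far as I recall, persists the choice as a stat.summaries entry in router.config; something like the following (the statName.periodMs syntax is an assumption from the defaults, not verified against the code):

    # router.config -- assumed syntax: comma-separated statName.periodMs entries
    stat.summaries=router.decayingBloomFilter.TunnelIVV.dups.60000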
zzz looks like canon maxes out at 32MB m=27 for ram > 512MB && sharebw > 8MBps
dr|z3d what have I got here, let me see.. BloomIVValidator.java you're looking at?
zzz yup
dr|z3d there are some "huge" blooms available, does canon take advantage of those?
dr|z3d } else if (KBps >= MIN_SHARE_KBPS_FOR_HUGE4_BLOOM && maxMemory >= MIN_MEM_FOR_HUGE4_BLOOM) {
dr|z3d _filter = new DecayingBloomFilter(ctx, HALFLIFE_MS, 16, "TunnelIVV", 28); // 64MB fixed
zzz as I said, we max out at 27
zzz so are you on 25 or 28?
dr|z3d private static final int MIN_SHARE_KBPS_FOR_HUGE4_BLOOM = 16384;
dr|z3d private static final long MIN_MEM_FOR_HUGE4_BLOOM = 1024*1024*1024l;
dr|z3d so, 28.
dr|z3d do we need to go bigger? :)
dr|z3d and how do the values there correspond to the m values in BuildMessageProcessor? that's not entirely clear to me.
dr|z3d well, 28 in BloomIVValidator, 25 in BuildMessageProcessor is what I've got.
zzz report the dup rate from the stat
zzz the fail to dispatch is the ivvalidator, pretty sure
dr|z3d Lifetime average is 1.0
dr|z3d -> 1.0 (8,083,624 events)
dr|z3d full dump of the stats:
dr|z3d 60 sec rate: Average: 0.999 • Highest average: 1.0 • Average event count: 7,047.623
dr|z3d Events in peak period: 171581 • Events in this period (ended 53 sec ago): 242
dr|z3d 60 min rate: Average: 1.0 • Highest average: 1.0 • Average event count: 425,453.895
dr|z3d Events in peak period: 1939538 • Events in this period (ended 7 min ago): 17835
dr|z3d Lifetime average: 1.0 (8,083,624 events)
dr|z3d doesn't mean much to me.
zzz that's the TunnelIVV.dups stat?
dr|z3d and just for the lulz:
dr|z3d bw.sendRate Low-level send rate (B/s)
dr|z3d 60 sec rate: Average: 2,690,419.5 • Highest average: 46,081,708.0 • Average event count: 1.199
zzz so 10,000 is 1%, so 30K is pretty brutal
zzz but it's possible i2pd SSU2 is letting dups through and they're real positives, not false positives; we'd have to ask them
dr|z3d I can only speculate that the high point somehow corresponds with the bandwidth spikes, but I'm a little hazy on what I'm looking at here :)
dr|z3d I mean, post-spikes, the router seemed to drop traffic and part tunnels for a while before recovering.
dr|z3d I'm also not clear on how the m value in BuildMessageProcessor relates to the (presumably?) m values in BloomIVValidator
dr|z3d should they be synced?
zzz they don't relate
zzz two different filters
dr|z3d ok, good.
zzz here's the theoretical false-pos rate:
zzz * Following stats for m=27, k=9:
zzz * 8192 1.1E-5; 10240 5.6E-5; 12288 2.0E-4; 14336 5.8E-4; 16384 0.14%
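Those figures are consistent with the standard Bloom-filter false-positive formula p = (1 - e^(-kn/M))^k for M = 2^27 bits and k = 9, assuming the left-hand numbers are IVs per second and that entries accumulate over roughly a 10-minute window (n = rate x 600):

    // Theoretical Bloom false-positive rate. Assumptions: M = 2^m bits,
    // k hash functions, n = entriesPerSec * 600 (a ~10-minute window).
    static double falsePositiveRate(int m, int k, int entriesPerSec) {
        double M = Math.pow(2, m);
        double n = entriesPerSec * 600.0;
        return Math.pow(1.0 - Math.exp(-k * n / M), k);
    }
    // falsePositiveRate(27, 9,  8192) ~= 1.1e-5
    // falsePositiveRate(27, 9, 12288) ~= 2.0e-4
    // falsePositiveRate(27, 9, 16384) ~= 0.0014 (0.14%)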
zzz so you're way over that, but we've always been over
zzz might be i2pd, dunno
dr|z3d so nothing I've explicitly done to break things? :)
zzz have to wait to ask b/c I already have a q pending in #ls2 and original buffer size is 1
dr|z3d haha, ok
zzz don't think so, if you're sure you're on 28
dr|z3d yeah, max is 28 here.
dr|z3d aka HUGE4_BLOOM
dr|z3d I've noticed that if you make m too high in BuildMessageProcessor, traffic ramps down, not up.
dr|z3d it looks like 25 is about as high as you want to go.
zzz dunno, haven't looked at that in years
zzz so far so good with 3% throttle:
zzz 1321 DtQsGzkbeR3nilr6ZvywR2O7-f0XaaV~YfHXohqwjgI=]:
zzz 1 -l9VwbZd1kNH0qs8r2KquI5Qt~I1O3-fnXITbanhM7Y=]:
zzz 1 IP8aEEZu2cMnXTfGKob9LX6BvohqYoP-sKUZaBdBrds=]:
dr|z3d those are tunnel drop counts?
zzz hop throttle reject or drop
dr|z3d gotcha
dr|z3d looking at the %age stats here, I don't see anything right now over 1% usage.
dr|z3d so I'm pretty sure 3% will be fine, generous even.
obscuratus Here's my list of hop throttle rejects after running for about 18 hours.
obscuratus It has about 54 items, so too big to paste.
zzz at 3%?
obscuratus Yes, running at 3% for the last 16 hours or so.
zzz ok, added it up, that's 5.5% false positive assuming only the first two are bad guys
obscuratus I've manually spot checked the profiles of the other minor leaders on that list, and most of them aren't really great peers, even if they aren't necessarily bad guys.
obscuratus For example, EftNWv has only agreed to 1 tunnel ( tunnels.lifetimeAgreedTo=1), but has failed 42 times (tunnels.lifetimeFailed=42)
obscuratus I see this pattern a lot if I manually run down their profiles.
dr|z3d well, that's an interesting point. maybe some of those datapoints can be used to determine if a peer's requests should be dropped or the peer itself banned. there's a bunch of profile stats that might come in handy.
dr|z3d and aside from printing tables, not a lot of that data is currently being used afaik.
obscuratus Are the contents of the profile a hard SPEC? Could we add the throttler drops to the profile?
obscuratus Or tunnel requests?
dr|z3d not a hard spec, I was just thinking the same thing.
obscuratus Presumably, profile lookups would be slow-ish. Might not want to do it 100 times a second.
dr|z3d with that profile data to hand, you might even be able to persuade zzz to plot the data on a table in the console "Top 10 abusers du jour". Or something. :)
dr|z3d profiles are stored in ram I think.
dr|z3d and then periodically written to disk, not periodically enough, perhaps :)
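A hypothetical sketch of what recording throttle drops in the peer profile could look like; the class, field, and method names here are invented for illustration and are not existing PeerProfile members:

    import java.util.concurrent.atomic.AtomicLong;

    // Invented names -- a sketch of the idea, not an existing API.
    public class ThrottleStatsSketch {
        private final AtomicLong _lifetimeThrottledRequests = new AtomicLong();

        /** would be called by the hop throttler on each reject/drop */
        public void throttledRequest() {
            _lifetimeThrottledRequests.incrementAndGet();
        }

        /** raw material for a "top abusers" console table or ban heuristics */
        public long getLifetimeThrottledRequests() {
            return _lifetimeThrottledRequests.get();
        }
    }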
zzz yeah obscuratus you're on the right track with your analysis
zzz I looked at #3-10 on the list
zzz 1 RI not found
zzz 3 slow or unreachable java
zzz 3 i2pd that we know doesn't have good profiling
zzz and 1 (#10) is an XfR Java, probably the highest one on the list that might be unjustly there
zzz so we're all good with 3%?
dr|z3d 3% sounds reasonable. what about a negative bonus for Xf peers?
dr|z3d or just X.
obscuratus It works well enough on my router once I'm up and running at steady-state. I'll keep an eye on it next time I start a router from scratch, and it has to get integrated.
dr|z3d and it would be nice to see peers that are being actively throttled on a table somewhere, perhaps underneath the new status table on /peers?
obscuratus You'd want the throttler to get out of the way while you're getting integrated. The MIN_LIMIT should take care of that, though.
dr|z3d you could defer the throttler for a few minutes after startup, also.
obscuratus We might already have all we need with the MIN_LIMIT. I'll keep an eye on it.
zzz let's see how it goes
dr|z3d a table with actively throttled peers could have the standard edit button to effect a local ban for the duration of the session. and view profile etc. usual stuff, but it could be useful, obviating the need to grep through logs to see what's happening.
obscuratus With our current MIN_LIMIT, you need to request more than 4 tunnels in under 4 minutes. That's probably pretty rare.
obscuratus Wait, it's under 3 minutes, isn't it?
zzz 11/3
obscuratus Yeah, just under 4 minutes.
obscuratus It's worth noting (if my rusty math can be trusted) that the MIN_LIMIT will be the limiting factor until you get 400 participating tunnels when the PERCENT_LIMIT is at 3.
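That math works out if the throttle allows max(MIN_LIMIT, PERCENT_LIMIT% of participating tunnels) requests per window (an assumption about the model, not a quote from the code): the percentage only becomes the binding limit once it exceeds MIN_LIMIT:

    // Assumed model: allowed = max(MIN_LIMIT, percent% of partTunnels).
    // Participating-tunnel count where the percent limit takes over:
    static long crossover(int minLimit, double percent) {
        return Math.round(minLimit / (percent / 100.0));
    }
    // crossover(12, 3.0) = 400
    // crossover(11, 3.0) = 367 -- either way, roughly 400 tunnels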
zzz dr|z3d, I'm pushing the "ourselves" log change, because it might be days or weeks before I see another one