IRCaBot 2.1.0
GPLv3 © acetone, 2021-2022
#i2p-dev
/2025/05/02
@eyedeekay
&kytv
&zzz
+R4SAS
+RN
+StormyCloud
+acetone
+altonen
+dr|z3d
+hagen
+hk
+mareki2p
+orignal
+postman
+radakayot
+segfault
+snex
+weko
+wodencafe
Arch
Danny
DeltaOreo
FreeB
FreefallHeavens_
Irc2PGuest12011
Irc2PGuest12735
Irc2PGuest17807
Irc2PGuest18076
Irc2PGuest59134
Onn4l7h
Onn4|7h
Sisyphus
Sleepy
T3s|4_
T3s|4__
Teeed
aeiou
ardu
b3t4f4c3__
boonst
cumlord
death
dr4wd3_
eyedeekay_bnc
not_bob_afk
onon_
phil
phobos
pisslord
poriori
profetikla
qend-irc2p
rapidash
shiver_
solidx66
thetia
u5657
uop23ip
w8rabbit
x74a6
zzz those two IPs are definitely banhammer candidates, they are persistent
dr|z3d so maybe a counter and incremental ban times?
dr|z3d or you're just thinking add them to the blocklist and be done with it?
zzz the change today catches it but perhaps a newsfeed hammer is appropriate since the release is a month out
dr|z3d yeah, I've just pulled in that commit. I think I might increase the ban period, and some explicit logging there to indicate the temp ban is probably a good idea.
zzz with EventPumper WARN logging you will see them
dr|z3d 74.164.x.x on your offender radar?
dr|z3d it's not that I can't see the blocks, it's more that the logs don't explicitly state that the offending IP is being temp banned.
orignal zzz do you drop LeaseSet with cryptotype=1 now?
orignal in address I mean
zzz WARN [NTCP Pumper ] ter.transport.ntcp.EventPumper: Blocking accept of IP with count 3: 110.137.220.12
zzz ^^ temp ban
zzz it's interesting that two of them spun up at once, one AWS canada and one indonesia
zzz we did implement the full 'probing resistance' recommendations from the NTCP2 spec, that helps
orignal because I see a ton of "Destination: Couldn't find published LeaseSet for "
zzz orignal, LS with type 1 keys?
orignal yes, type 1 as the crypto key type in the cert
zzz oh in the cert. looking...
orignal seems you have changed something recently
orignal because it worked fine for years before
dr|z3d >> WARN [ NTCP Pumper] …ntcp.EventPumper: Blocking NTCP connection attempt from: 185.207.242.63 (Count: 2)
dr|z3d If we're blocking for 43 minutes, then we probably only want to see the logged event the first time.
zzz not debating log policies, do what you want
dr|z3d sure, sure, just floating the idea, no debate required :)
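A minimal sketch of the counter-plus-incremental-ban idea floated above, in Java. This is not the router's actual EventPumper logic; the class and method names are hypothetical, and the 43-minute base period is taken from dr|z3d's comment. The boolean return value also covers the suggestion of logging only the first block of each ban window.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch, not the actual EventPumper code: track a per-IP
    // offense count, escalate the temp-ban duration with each new ban window,
    // and report whether a block starts a new window so it can be logged once.
    public class TempBanTracker {
        private static final long BASE_BAN_MS = 43 * 60 * 1000L; // base period mentioned above
        private final Map<String, Integer> counts = new ConcurrentHashMap<>();
        private final Map<String, Long> bannedUntil = new ConcurrentHashMap<>();

        /** @return true if this offense starts a new ban window (worth one WARN line) */
        public synchronized boolean recordOffense(String ip, long now) {
            int count = counts.merge(ip, 1, Integer::sum);
            Long until = bannedUntil.get(ip);
            boolean newWindow = (until == null || now > until);
            if (newWindow) {
                long duration = BASE_BAN_MS * Math.min(count, 24); // linear escalation, capped
                bannedUntil.put(ip, now + duration);
            }
            return newWindow;
        }

        public synchronized boolean isBanned(String ip, long now) {
            Long until = bannedUntil.get(ip);
            return until != null && now <= until;
        }
    }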
orignal I just don't want to change the address
zzz I don't think I changed anything orignal, but give me a b32 and I'll try it
orignal jw4533ydxwgmna6wrvw6n55h3aem3vrqilhxjkmpgrmjfhldlscq
orignal it fails on publishing
zzz I can't get the LS
zzz but FYI, most ffs out there have been i2pd for the past year, due to a bug in the auto-ff-enable code where we required SSU 1 to enable !!!
zzz we will have lots more ffs after the release
orignal so you think that i2pd drops such LSes?
orignal interesting
zzz don't know
orignal but it worked fine until a couple of weeks ago
zzz try force-storing it to a ff you control, if you have a way to do that
orignal will check
orignal maybe it's my bug
zzz we might double or triple the ffs after the release. It's a really dumb bug on my side
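A purely hypothetical illustration of the auto-ff-enable bug zzz describes, with made-up method names; the real eligibility checks are more involved. The point is only that tying the decision to an SSU 1 transport kept otherwise-eligible routers from volunteering as floodfills.

    // Hypothetical sketch, not the real router code: the old condition demanded
    // SSU 1, so routers without it never auto-enabled floodfill even when
    // otherwise eligible; the fix drops that requirement.
    public class FloodfillAutoEnableSketch {
        /** Old, buggy condition (illustrative). */
        static boolean shouldEnableOld(boolean fastAndReachable, boolean ssu1Enabled) {
            return fastAndReachable && ssu1Enabled;
        }

        /** Fixed condition (illustrative): no longer tied to SSU 1. */
        static boolean shouldEnableNew(boolean fastAndReachable) {
            return fastAndReachable;
        }
    }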
zzz as for enc types and cert handling, I have a ton of changes locally for PQ but none of that is checked in
orignal will investigate anyway
orignal because it might be related to my PQ changes
orignal forgot about that crap with crypto type 1
orignal // Java floodfill never sends confirmation back for unknown crypto type
orignal I see this in comment
orignal is it true?
orignal looks like I'm trying to publish on a FF that's not online or something
zzz will research and let you know
orignal so, can we assume there are only 2 valid values for destination in this field, 0 and 255?
zzz orignal, I have a type 1 dest from you in 2020 in my addressbook, I also created a new one and added it, and I found a code comment that we allow types 1-3
zzz so the basic parsing still works. I can't easily verify that a LS store will work
orignal so, Java does send a reply for type 1, right?
orignal also, the publishing failure for type 1 is my bug
zzz I can't test it for sure, but I don't see any problems or recent changes
orignal no it was my recent change about PQ
orignal however, that code without confirmation is an old one
orignal I should remove it
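For reference, the crypto key type codes in play here, per the I2P common structures spec; the enum below is illustrative and not the router's actual class. "Type 1" in this exchange is ECIES-P256 in the destination's key certificate.

    // Illustrative only, not the router's actual class: crypto key type codes
    // from the I2P common-structures spec that come up in the exchange above.
    public enum CryptoKeyType {
        ELGAMAL_2048(0),
        ECIES_P256(1),   // the "type 1" destination being tested here
        ECIES_P384(2),
        ECIES_P521(3),
        ECIES_X25519(4);

        public final int code;
        CryptoKeyType(int code) { this.code = code; }
    }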
orignal jw4533ydxwgmna6wrvw6n55h3aem3vrqilhxjkmpgrmjfhldlscq works fine now and no issue with publishing
zzz got the LS
zzz we report: Unsupported encryption options
zzz remote: fatal: unable to write loose object file: No space left on device
zzz error: remote unpack failed: unpack-objects abnormal exit
zzz ^^ eyedeekay StormyCloud
StormyCloud Am I being summoned
zzz so we need to upgrade to the stormy-large instance? :)
StormyCloud Ah space, eyedeekay let me know if it’s a space issue or configuration thing
orignal but do you add such LeaseSets into netdb even with an unknown encryption option?
zzz yes
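A minimal sketch, with hypothetical types, of the behavior zzz confirms here: the floodfill stores the LeaseSet regardless of whether it recognizes the encryption types, and "Unsupported encryption options" only matters when a local client tries to connect to that destination.

    // Sketch only, hypothetical types -- not actual router code.
    public class LeaseSetStoreSketch {
        interface NetDb { void store(String hash, Object leaseSet); }
        interface LeaseSet { String getHash(); boolean hasSupportedEncType(); }

        static void handleStore(NetDb netDb, LeaseSet ls) {
            // stored even with unknown encryption types, so other routers can still fetch it
            netDb.store(ls.getHash(), ls);
            if (!ls.hasSupportedEncType()) {
                System.out.println("Unsupported encryption options for " + ls.getHash()
                        + " -- stored, but local clients cannot connect");
            }
        }
    }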
eyedeekay Ah it's a config issue, I was keeping cached data for too long, should not be long to fix
eyedeekay OK, changes made to prevent that one happening again, I'm going to review the rest of the schedulable maintenance today and make sure it's all configured to run frequently enough
eyedeekay Postmortem: Found that gitea keeps copies of every repo "archive" it has generated until a cron job runs to delete them. It generates archives in advance of anyone needing them, at what appears to be check-in time. This results in hundreds of zip and tar.gz copies of the code lying around in /var/lib/gitea/data/repo-archives. The cron job did not run often enough to delete the old ones before the disk
eyedeekay exploded. Increased the frequency of all cleanup cron jobs.
eyedeekay Problem was likely exacerbated by mass-mirroring as part of the migration process, tons of checkins on tons of repos at once
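For anyone hitting the same thing: in a reasonably recent Gitea, this cleanup is controlled by the [cron.archive_cleanup] section of app.ini. Treat the exact key names and values below as an assumption to check against your release's docs rather than a known-good config.

    [cron.archive_cleanup]
    ENABLED = true
    RUN_AT_START = true
    SCHEDULE = @every 1h   ; how often the cleanup job runs
    OLDER_THAN = 24h       ; archives older than this are deleted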
eyedeekay Now back to fixing the base CI container so apt knows where the packages are :/
zzz thanks eyedeekay
eyedeekay No problem, sorry about the breakdown
zzz minor blip
dr|z3d yeah, watch out for those archives, they'll quickly suck your storage dry :)
eyedeekay Just glad to find that the config options I needed were already there
RN Irc2PGuest47673, tell us about your snark issue: OS, install method, router version, etc
Irc2PGuest47673 Hi. I just noticed a small problem in i2pSnark. Torrents are divided into several pages. If you are on any page with inactive torrents, the bottom bar will display the current download speed (correctly) but will not show the upload speed (incorrectly). When you switch to a page with at least one active torrent, both speeds are displayed correctly. I am using GNU/Linux with
Irc2PGuest47673 i2psnark-standalone version 2.8.0 from the skank.i2p eepsite
RN dr|z3d, this one's for you (at least at start)
zzz thanks for the report Irc2PGuest47673
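A guess at the kind of logic behind the i2psnark report above, with hypothetical types; this is not i2psnark's actual code. The symptom (upload rate missing on pages holding only inactive torrents) is what you would expect if the footer totals were summed over only the torrents rendered on the current page rather than over all torrents.

    import java.util.List;

    // Hypothetical sketch, not i2psnark code: sum the footer rates over every
    // torrent, not just the slice shown on the current page.
    public class FooterTotalsSketch {
        interface Torrent { long getDownRateBps(); long getUpRateBps(); }

        static long[] totals(List<? extends Torrent> allTorrents) {
            long down = 0, up = 0;
            for (Torrent t : allTorrents) {
                down += t.getDownRateBps();
                up += t.getUpRateBps();
            }
            return new long[] { down, up };
        }
    }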