~dr|z3d
@RN_
@StormyCloud
@T3s|4
@eyedeekay
@orignal
@postman
@zzz
%snex
+BeepyBee
+FreefallHeavens
+Onn4l7h
+Onn4|7h
+Over
+leopold
+nyaa2pguy
+onon_
+profetikla
+qend-irc2p
+r00tobo
+uop23ip
+waffles
Arch
Danny
H20
Irc2PGuest21366
Irc2PGuest31627
Irc2PGuest49393
Irc2PGuest68429
Irc2PGuest98458
Meow
Stormycloud_
ac9f
acetone_
anontor
duck
gelleger1
halloy13412
mahlay
makoto
n1
nZDoYBkF
nilbog
not_bob_afk
ntty
poriori_
r00tobo[2]
rambler3
shiver_
simprelay
solidx66
thetia
u5657
user1
vivid_reader56
zer0bitz
dr|z3d
hmm
dr|z3d
not entirely sure 256 job runners is as great as you think it is.
cumlord
also a strange performance issue about 14-15 hrs in on another router only running snark and 3 server tunnels, maybe it's transient, the watchdog came out at some point and it only has a handful of transit now but still working
dr|z3d
has that got the same 256 job runners?
cumlord
nope, i noticed with 128 job runners it makes thousands of those errors right after startup, and on 24 it seemed to do the same thing
cumlord
on 24 it seemed to do the same thing as 64 but 12-24hrs after start*
cumlord
the router with "strange performance issue" currently has 128 job runners, the zzzot router has 64
dr|z3d
generally speaking, 4 * available threads seems to be close to the sweet spot.
cumlord
i have 3 others with 64 job runners and another 3 with 128, all with 2g to jvm though, no issues
dr|z3d
sometimes less, sometimes more depending on thread count.
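To make that rule of thumb concrete, here is a minimal Java sketch that computes 4 * available threads as a starting point. The router.maxJobRunners property name is an assumption based on I2P's JobQueue; verify it against the source (JobQueue.java) before relying on it.

```java
// Rough sketch of the "4 * available threads" rule of thumb for sizing job runners.
// The router.maxJobRunners property name is assumed, not confirmed -- check JobQueue.java.
public class JobRunnerSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int suggested = cores * 4; // starting point; soak test before going higher
        System.out.println("# add to router.config, then restart the router");
        System.out.println("router.maxJobRunners=" + suggested);
    }
}
```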
dr|z3d
so the issue is specific to zzzot is what you're saying?
cumlord
alright i'll keep that in mind
cumlord
i think so
dr|z3d
iirc, I think we allocate 32 job runners by default in +
cumlord
zzzot hasn't been stalling like in earlier builds though
cumlord
sounds like a reasonable number
dr|z3d
as to your issue, it sounds like somewhere we're hitting a limit on the number of messages we're processing, and then messages get lost. just a guess.
cumlord
that's sort of what I’m guessing, it seems like the job queue gets backed up really far somewhere
cumlord
maybe not super noticeable until 12 hrs in and there’s 9k jobs lined up
dr|z3d
bring the job runners down.
cumlord
yup will do
dr|z3d
you can experiment with the optimal number, but I'm pretty sure 128 isn't it.
dr|z3d
if your job processing gets too lagged, you'll start dropping jobs.. so maybe that aligns with the error logs you're seeing.
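A minimal sketch of the "too lagged, start dropping" idea described above, using made-up names and thresholds rather than I2P's actual JobQueue logic: a job that has sat in the queue longer than some maximum lag gets discarded instead of being run late.

```java
// Illustration only (hypothetical names/thresholds, not I2P's code):
// drop any job that has waited in the queue longer than MAX_LAG_MS.
import java.util.ArrayDeque;
import java.util.Queue;

public class LagDropSketch {
    static final long MAX_LAG_MS = 10_000; // hypothetical lag threshold

    record QueuedJob(String name, long enqueuedAt) {}

    public static void main(String[] args) {
        Queue<QueuedJob> queue = new ArrayDeque<>();
        queue.add(new QueuedJob("expireLeases", System.currentTimeMillis() - 15_000));
        queue.add(new QueuedJob("publishLocalRouterInfo", System.currentTimeMillis()));

        while (!queue.isEmpty()) {
            QueuedJob job = queue.poll();
            long lag = System.currentTimeMillis() - job.enqueuedAt();
            if (lag > MAX_LAG_MS) {
                System.out.println("dropping " + job.name() + " (lag " + lag + " ms)");
            } else {
                System.out.println("running " + job.name());
            }
        }
    }
}
```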
cumlord
probably not, 256 looked fine for a couple hours and let it ramp up speed faster (I thought) but also speeds up Armageddon
dr|z3d
:)
dr|z3d
I should probably look into the job runner code and see if we can't do some dynamic allocation based on load. currently it's set and forget.
cumlord
idk how complicated that’d be but it would stop people like me from doing dumb things
dr|z3d
no harm in experimenting, just be careful when you go off-piste :)
dr|z3d
if the default is 32, then try small increments, soak test..
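For the dynamic-allocation idea mentioned above, a hypothetical Java sketch (not I2P's code) that periodically samples the job backlog and nudges the runner count up or down between the default and a ceiling; all the constants are made up for illustration.

```java
// Hypothetical sketch of load-based runner scaling: grow the pool while the
// backlog stays high, shrink it back toward the default when the queue drains.
public class DynamicRunnerSketch {
    static final int DEFAULT_RUNNERS = 32;   // the I2P+ default mentioned above
    static final int MAX_RUNNERS = 128;      // hard ceiling (made-up)
    static final int HIGH_WATER_JOBS = 500;  // backlog that triggers growth (made-up)
    static final int LOW_WATER_JOBS = 50;    // backlog that allows shrinking (made-up)

    static int adjustRunners(int current, int queuedJobs) {
        if (queuedJobs > HIGH_WATER_JOBS && current < MAX_RUNNERS)
            return Math.min(MAX_RUNNERS, current + 4);
        if (queuedJobs < LOW_WATER_JOBS && current > DEFAULT_RUNNERS)
            return Math.max(DEFAULT_RUNNERS, current - 4);
        return current;
    }

    public static void main(String[] args) {
        int runners = DEFAULT_RUNNERS;
        int[] sampledBacklog = {20, 600, 900, 700, 40, 10}; // fake periodic samples
        for (int backlog : sampledBacklog) {
            runners = adjustRunners(runners, backlog);
            System.out.println("backlog=" + backlog + " -> runners=" + runners);
        }
    }
}
```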
uop23ip
onon_, my results from testing download speed with an i2pd-qbittorrent pair:
uop23ip
sum of 2 peers - can't see how much each one gets
uop23ip
started well at 200+, mostly in the 200-600 KB/s range with peaks of 700. sometimes it falls down to 10-50 for shorter periods. the average came out around 200
uop23ip
i would give it 30-60% more. didn't achieve 1MB/s, maybe bad network weather.
uop23ip
tried to dl with XD to compare, but no luck so far while the others run fine. it can't find peers. maybe a tracker issue, but i put all the trackers into the ini. maybe XD and qbit don't like each other
cumlord
lol it made sense at the time but i was basing it off of another system XD
uop23ip
and onon_ tell your wife that you sold a car today, that she can order the pool and can stop fucking this fat asshole larry :D
dr|z3d
oO
dr|z3d
stay off the crack pipe, uop23ip
uop23ip
just joking ofc, but he sounds more and more like a car salesman as the days go by :)
dr|z3d
that's not entirely untrue, the i2pd pimping has got a bit out of hand.
cumlord
couple others have complained about mysterious i2pd router deaths
cumlord
similar to what i saw per peer too: qbit-qbit on both i2pd/i2p+, the max i usually got was around 7-800, it would hold around 400, idr the average
cumlord
strong preference for other qbit peers?
uop23ip
maybe. what i still have in mind from hw's torrent implementation a while ago was that "choking" was a real issue. but i have no real clue either
uop23ip
just for info: happened to get an oom with standalone snark+ some time ago. It doesn't explode or make a kaboom sound, just throws an exception and pushes cpu usage up.
uop23ip
i2pd has never crashed for me across 20+ installs over time, so far it's fine with me. never tested it as transit. Don't know what those people do to make it crash
uop23ip
probably not using ulimit - the fix for most iirc :)
cumlord
one hosts a site, another does a lot of torrent uploading
cumlord
lots of trouble with scrapers
cumlord
forgot about the ulimit thing, I’ll pass it along