@RN_
@orignal
@postman
%Liorar
+Onn4l7h
+Over
+f00b4r
+leopold_
+marek22k
+nyaa2pguy
+profetikla
+qend-irc2p
+r00tobo
Irc2PGuest30010
Teeed
acetone_
makoto
nZDoYBkF_
not_bob_afk
o3d3_
poriori
r00tobo[2]
solidx66
nyaa2pguy
is there a setting to change the max piece size limit? D: Pieces are too large in "Queen.of.Mars.S01.1080p.AMZN.WEB-DL.DD+2.0.H.264-playWEB" (256 MiB)! Limit is 32 MiB
nyaa2pguy
in i2psnark / snark+ that is
dr|z3d
no.
dr|z3d
iirc, max piece size is 32MB.
dr|z3d
I refer you to snex's previous comment about piece sizes larger than that.
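For context, the limit under discussion is what produces the error quoted above. A minimal, purely illustrative sketch (not the actual i2psnark source; the class, constant, and method names here are assumptions) of the kind of piece-size check that would reject such a torrent:

    // Illustrative sketch only, not i2psnark code; PieceSizeCheck, MAX_PIECE_SIZE
    // and validate() are assumed names, not the real API.
    public class PieceSizeCheck {
        // 32 MiB ceiling, matching the limit mentioned in the error above
        static final long MAX_PIECE_SIZE = 32L * 1024 * 1024;

        static void validate(String torrentName, long pieceSize) {
            if (pieceSize > MAX_PIECE_SIZE) {
                throw new IllegalArgumentException(
                    "Pieces are too large in \"" + torrentName + "\" (" +
                    (pieceSize >> 20) + " MiB)! Limit is " +
                    (MAX_PIECE_SIZE >> 20) + " MiB");
            }
        }

        public static void main(String[] args) {
            // 256 MiB pieces, as in the torrent above -> rejected
            validate("Queen.of.Mars.S01...", 256L * 1024 * 1024);
        }
    }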
nyaa2pguy
aw dang. sorry forgot about that (I only remembered the piece quantity limit).
dr|z3d
add a comment to the torrent if possible, to dissuade the uploader from creating torrents that can't be downloaded.
dr|z3d
of note: github.com/rndusr/torf/issues/38
dr|z3d
it might be that we have to (reluctantly) increase max piece (block) size to 64MB. thoughts, zzz?
dr|z3d
if we did support 64MB block size, it would probably only be for download, not for creation of new torrents in snark.
not_bob
Why not increase it for creating torrents too?
dr|z3d
64MB blocks aren't great as they are.
dr|z3d
bigger blocks, more data loss potential, less efficiency.
zzz
if everybody else is 64MB we may as well do it too
dr|z3d
yeah, for downloading, but do we really want to generate 64MB pieces?
zzz
it won't unless it has to
dr|z3d
"has to" being shorthand for "yeah, we'll allow 1TB+ torrents on the network" :)
snex
why would a larger torrent need larger pieces?
snex
i think these are more just idiots who do stupid things on purpose
dr|z3d
because we also enforce a max pieces limit.
snex
probably the same people who seed to 99.98% and then go away forever
snex
why would there be a max pieces limit
dr|z3d
because sooner or later you run out of file descriptors and/or oom.
snex
memory use is the same whether you do more pieces or larger pieces
snex
and nobody has run out of file descriptors since 1992
snex
> cat /proc/sys/fs/file-max
snex
> 9223372036854775807
snex
if you can make a torrent that defeats that number, ill be very impressed
dr|z3d
I'm putting you in the "alternative facts" bin.
dr|z3d
ulimit -a is what you want to be looking at.
dr|z3d
or perhaps ulimit -l
dr|z3d
either way, it's very easy to exceed that limit (open files) if you don't enforce a max pieces limit and the user hasn't bumped it.
snex
at ulimit 1024 (the default), 32MB piece size gives you a 32G max torrent size. theres no way some dumb amazon webrip movie is that big
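As a quick sanity check of snex's figures (illustrative arithmetic only; the 1024 cap is snex's assumption tied to the default ulimit, not an actual i2psnark constant):

    // Back-of-envelope arithmetic for the claim above, assuming the piece count
    // were capped at 1024. Not based on any real snark limit.
    public class MaxTorrentSize {
        public static void main(String[] args) {
            long maxPieces = 1024;              // assumed cap (default ulimit, per snex)
            long piece32 = 32L * 1024 * 1024;   // 32 MiB pieces
            long piece64 = 64L * 1024 * 1024;   // 64 MiB pieces
            System.out.println(maxPieces * piece32 / (1L << 30) + " GiB"); // 32 GiB
            System.out.println(maxPieces * piece64 / (1L << 30) + " GiB"); // 64 GiB
        }
    }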
dr|z3d
so, my bad, descriptors was probably south of what I meant.
dr|z3d
you want more, configure more. it's an option.
dr|z3d
sometimes you have to protect users from themselves.
nyaa2pguy
I looked closer at that dumb amazon webrip and it's dumber than you'd expect. For some reason the uploader did 256MB piece sizes, LOL.
dr|z3d
yeah, well, we definitely won't be supporting those :)
nyaa2pguy
64MB would be a nice bump up though, there's one torrent in my qbittorrent that I'm seeding on i2p that uses 4116 x 64 MiB (it's an opus music collection)
snex
reported to mpaa
nyaa2pguy
none of it is american music
dr|z3d
the "for some reason" is probably "bigger equals more good"
dr|z3d
*none of it is copyright, all of it is public domain
dr|z3d
(is the correct response)
nyaa2pguy
yes, of course
snex
then you will be exonerated after they investigate, detain, etc
dr|z3d
enough, snex.
dr|z3d
you want to be the copyright police, do it somewhere else, not here. kthx.
zzz
huge pieces are bad but dunno if 64MB is any more bad than 32
zzz
file descriptor usage really depends on the number of peers, not the number of pieces, because we're only downloading one piece at a time per peer
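A rough sketch of the point zzz is making (an assumed model, not actual snark code): descriptor usage scales with connected peers rather than with piece count, because at most one piece per peer is in flight at any time.

    // Assumed model only: one connection per peer, plus at most one file handle
    // per peer for the piece currently being written, independent of piece count.
    public class FdEstimate {
        static long estimate(int peers) {
            long sockets = peers;      // one streaming connection per peer
            long fileHandles = peers;  // at most one piece (hence one open file) per peer
            return sockets + fileHandles;
        }

        public static void main(String[] args) {
            // ~100 descriptors for 50 peers, regardless of how many pieces the torrent has
            System.out.println(estimate(50));
        }
    }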
cumlord
i think there's something that causes leaseset publishing to silently fail if the router is at some level of being overloaded, and maybe a memory leak?
cumlord
on busy routers with lots of server destinations, or one busy one (zzzot), additional server tunnels might never publish their leaseset; the inbound tunnels do basically no traffic and the leaseset never appears in the netdb from other routers, but it works locally
cumlord
sometimes if you restart the tunnel several times it will publish, and things seem to get spicy after some amount of time, with job lag. it's better now than before, probably about 36-48 hours; on routers with more memory in a VM i've been able to let it go for about 80 hours so far without crashing and burning
cumlord
the first 24-48hrs are very smooth though, minus the weird invisible LS thing that doesn't always happen
nyaa2pguy
is that what it might be if my remote site randomly dies for 10 minutes (with my local router saying there's maybe congestion), then suddenly works fine again
nyaa2pguy
or i guess whatever that behaviour is happens on a lot of sites for me
cumlord
not sure, what i'm seeing might be specific to the i2p+ dev build (on e62e3b10)
cumlord
have you tried restarting the http tunnel on your local router to see if you can load it?
zzz
may be a plus thing, thought we fixed all the LS publish issues in canon a year or so back
zzz
dr|z3d, if zzzmirror.i2p is yours, oddly it's 5 months out of date on some pages and the home page, but not others