~dr|z3d
@RN
@RN_
@StormyCloud
@T3s|4
@T3s|4_
@eyedeekay
@not_bob
@orignal
@postman
@zzz
%Liorar
+FreefallHeavens
+Over
+Xeha
+acetone
+cumlord
+hk
+onon_
+poriori
+profetikla
+r00tobo
+uop23ip
+weko
An0nm0n
Arch
Danny
DeltaOreo
Irc2PGuest18250
Irc2PGuest53061
Irc2PGuest60340
Irc2PGuest708
Meow
Nausicaa
Onn4l7h
Onn4|7h
anon3
anu3
boonst
carried6590
mareki2pb
plap
shiver_
simprelay
solidx66
thetia
u5657
y2kboy23
dr|z3d snark refresh is broken. Seeing "Uncaught TypeError: filterbar is null" in the dev console.
y2kboy23
well, once I enabled it, it started refreshing
T3s|4
y2kboy23: do you mean once I enabled the 'Torrent filter bar'?
dr|z3d
y2kboy23: thanks, will fix that.
dr|z3d
on the plus side, no pun intended, you got exposed to a feature you obviously weren't using :)
dr|z3d
T3s|4: have a look at snark's configs, you'll see what y2kboy23 means.
y2kboy23
Sorry. With the filter bar off, the refresh is broken.
zzz
dr|z3d, I think you're in desperate need of concurrency help?
dr|z3d
if you're offering, zzz, I'll take any help I can get, thanks. it's what I'm working on right now.
zzz
yup
dr|z3d
synchronized isn't cutting it. I was going to move the removal of stale entries inside the writer, but haven't got there yet.
zzz
first of all, I recommend the concurrency section of Thinking In Java
zzz
but here's a tl;dr
dr|z3d
ConcurrentHashMap I tried, no dice there either.
zzz
when using non-thread-safe data structures such as HashMap, ArrayList, etc., there are only 3 options to avoid ConcurrentModificationExceptions and weird bugs
zzz
1) ensure ALL accesses (get(), put(), remove(), clear(), etc) are from a single thread
zzz
2) synchronize around ALL accesses with the same lock
zzz
3) switch to a thread-safe-datastructure such as ConcurrentHashMap
zzz
there's an additional requirement when removing an element inside a loop
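A minimal sketch of option 2, with every access to a shared, non-thread-safe map funneled through one lock object (the names here are illustrative, not the actual I2P+ code):

    import java.util.HashMap;
    import java.util.Map;

    class RdnsCacheSketch {
        // non-thread-safe map: every access below must hold LOCK
        private static final Map<String, String> rdnsCache = new HashMap<>();
        private static final Object LOCK = new Object();

        static void put(String ip, String hostname) {
            synchronized (LOCK) { rdnsCache.put(ip, hostname); }
        }

        static String get(String ip) {
            synchronized (LOCK) { return rdnsCache.get(ip); }
        }

        static int size() {
            synchronized (LOCK) { return rdnsCache.size(); }
        }
    }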
zzz
let me look at your takes, just a sec
dr|z3d
you've summarized nicely what I've discovered. I think I was/am probably omitting the file lock element with synchronized which is why it's not working as intended.
zzz
no. you need to synch at _every_single_place_ you touch rdnscache
zzz
and yes, to remove inside a loop, you must use iter.remove(), and you must never add within a loop if you're looping over that thing
dr|z3d
yeah, got that. my local copy has that implemented.
dr|z3d
the synchronized everywhere part.
zzz
ConcurrentHashMap should also work, subject to the loop requirements above
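A corresponding sketch of option 3: with ConcurrentHashMap, individual get/put/remove calls need no external locking, and its iterators are weakly consistent, so removing while iterating won't throw ConcurrentModificationException (illustrative only, not the I2P+ code):

    import java.util.Iterator;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class ConcurrentSketch {
        private static final Map<String, String> rdnsCache = new ConcurrentHashMap<>();

        static void record(String ip, String hostname) {
            rdnsCache.put(ip, hostname);   // no external lock needed for single operations
        }

        static void trim(int max) {
            // weakly consistent iterator: safe to remove while iterating
            Iterator<String> it = rdnsCache.keySet().iterator();
            while (it.hasNext() && rdnsCache.size() > max) {
                it.next();
                it.remove();
            }
        }
    }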
zzz
that's it, have fun
dr|z3d
thanks, appreciate the insights.
dr|z3d
this is the current iteration of the relevant methods, fwiw. cake.i2p/view/nzTmK9ETeA_N28jAPcD81cTS1ZRg8dr2c2UJh2B0W_OpEmw4X4aU/nzTmK9ETeA.txt
dr|z3d
I noticed you got the rough end of Opicaak earlier. Ugly :|
zzz
you can see why java 5's ConcurrentHashMap was a godsend
dr|z3d
Yeah, I obviously didn't do it right when I tried a ConcurrentHashMap, it got rid of the errors but also didn't bother to read in the filecache to memory.
dr|z3d
I'm going to add a couple more synchronized to that code block I just caked and see what gives.
dr|z3d
the error took a while to manifest because stale entries in the file cache are only removed when the # entries exceeds the max configured size, 32K in most cases.
zzz
you also need to synch readRDNSCacheFromFile, and lots of spots that are not in that pastebin
dr|z3d
yup, done that, testing code.
dr|z3d
what's your appraisal of the general idea? nuts? or? :)
zzz
mildly nuts
dr|z3d
:)
dr|z3d
It makes for a) a much more informative console when looking at tables and the netdb.
dr|z3d
and b) with a file cache, zero delay once the cache is populated and minimal lookups.
zzz
one other subtlety: public synchronized static foo() is a DIFFERENT lock than public synchronized bar()
dr|z3d
so all the synchronized methods touching the same file cache need to be either all static, or all non-static.
zzz
yup
dr|z3d
ok, good to know, thanks. I'm all static for the file cache methods.
zzz
the former is the same as synchronized(BlahBlah.class) { ... } and the latter is the same as synchronized(this) { ... }
zzz
the best practice would be to do neither, but to synch on the object in question, i.e. synchronized(rdnsCache) { ... } everywhere
zzz
but your way will work
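To make the lock distinction concrete, the three method forms below lock on three different monitors; only the last gives the same lock from static and instance code alike (illustrative sketch):

    import java.util.HashMap;
    import java.util.Map;

    class LockForms {
        private static final Map<String, String> rdnsCache = new HashMap<>();

        // equivalent to synchronized (LockForms.class) { ... }
        public static synchronized int staticCount() {
            return rdnsCache.size();
        }

        // equivalent to synchronized (this) { ... } -- a DIFFERENT lock than staticCount(),
        // so mixing this form with the static form does not protect rdnsCache
        public synchronized int instanceCount() {
            return rdnsCache.size();
        }

        // the suggested best practice: lock the (final) map itself,
        // which is the same lock whether the caller is static or not
        public int lockedCount() {
            synchronized (rdnsCache) {
                return rdnsCache.size();
            }
        }
    }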
dr|z3d
current revision, will take a few minutes for errors to generate, or not: cake.i2p/view/DlyObgeFeT_j54qTDyqRWx3EtCyrr8RLhpXFAPckk_jZUw0X25KO/DlyObgeFeT.txt
dr|z3d
a fully populated 32K entry file cache is ~1MB, which isn't too bad.
dr|z3d
as I thought, still the same error.
dr|z3d
this method:
dr|z3d
private static synchronized void cleanupRDNSCache() {
dr|z3d
List<String> keysToRemove = new ArrayList<>();
dr|z3d
Iterator<Map.Entry<String, CacheEntry>> it = rdnsCache.entrySet().iterator();
dr|z3d
while (it.hasNext()) {
dr|z3d
Map.Entry<String, CacheEntry> entry = it.next();
dr|z3d
if (rdnsCache.size() > MAX_RDNS_CACHE_SIZE) {
dr|z3d
keysToRemove.add(entry.getKey());
dr|z3d
it.remove();
dr|z3d
}
dr|z3d
}
dr|z3d
rdnsCache.keySet().removeAll(keysToRemove);
dr|z3d
}
dr|z3d
really does not like Map.Entry<String, CacheEntry> entry = it.next()
zzz
well, for one, you either need it.remove() or the whole keysToRemove thing, dont do both
zzz
it.remove() is a lot more efficient
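Applied to the pasted method, that suggestion might look like this; a sketch that assumes the paste's static rdnsCache map, its CacheEntry value type, the MAX_RDNS_CACHE_SIZE constant, and the java.util imports of the surrounding class:

    private static synchronized void cleanupRDNSCache() {
        Iterator<Map.Entry<String, CacheEntry>> it = rdnsCache.entrySet().iterator();
        // evict entries through the iterator only, until the map is back under the cap
        while (it.hasNext() && rdnsCache.size() > MAX_RDNS_CACHE_SIZE) {
            it.next();
            it.remove();
        }
    }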
dr|z3d
ok, I'll try that, thanks.
zzz
but that doesn't explain why it's not working
T3s|4
dr|z3d: yep, on the snarkified 'plus' side, seems to have been working / is working as expected :)
zzz
did you sync the methods below cleanupRDNSCache() that you did not include in the paste?
dr|z3d
thanks for reporting back, T3s|4. y2kboy23's issue was easily fixed, I put something in last minute without testing the badge refresh properly.
T3s|4
great
dr|z3d
zzz: this one, yes, but I have a feeling there's another one that needs the synchronized treatment:
dr|z3d
public static synchronized int countRdnsCacheEntries() {
dr|z3d
return rdnsCache.size();
dr|z3d
}
dr|z3d
like probably this one: public String getCanonicalHostName(String ipAddress) {
zzz
bingo
dr|z3d
although not sure about that, and it's not a static method.
dr|z3d
let's see if it moans if it's converted to static :)
zzz
you'll have to take out the logging
dr|z3d
yeah, I was using System.err where the logging wasn't working.
dr|z3d
will remove it once the whole thing is working properly, don't want to be polluting wrapper logs.
zzz
FYI, java overhead for a Map entry is about 120 bytes, and the way you did it is even more, so your size estimate is at least 4X low
dr|z3d
talking about the on-disk cache.
dr|z3d
I figured the in-mem cache would be larger, 4x filecache, not bad. 4MB when it's full on a >=512M router, acceptable.
zzz
why in the world is the map IP->CacheEntry(IP, hostname) rather than just IP->hostname ??? Why store the key inside the value?
zzz
that's adding about another 56 bytes per entry
dr|z3d
that's a good question. :)
dr|z3d
so I had to implement a lock object to bridge the static and non-static methods.
dr|z3d
testing code, let's see if it works.
dr|z3d
so it looks like this might be working, thanks for the help, zzz. I'll have a crack at the cache entry storage next. cake.i2p/view/1np26pHE10_RGppfzDvzYLxRIyE76Z6QMKwagrzv3_CPRrXEIH6b/1np26pHE10.txt
zzz
ip->hostname would disallow null hostnames; not sure if you want to store those as a kind of negative lookup cache mechanism
zzz
you'd have to switch to a separate negative lookup cache, or use something else like "" to indicate you already looked it up
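One way to read that suggestion: with a plain IP -> hostname map, a failed lookup can be recorded with an empty-string sentinel rather than a null value. A sketch under that assumption; doReverseLookup() is a hypothetical resolver call, and rdnsCache is the IP -> hostname map from the discussion:

    // "" marks an address we already tried and failed to resolve
    static String cachedLookup(String ip) {
        String hostname = rdnsCache.get(ip);
        if (hostname != null) {
            return hostname.isEmpty() ? null : hostname;   // "" == known-unresolvable
        }
        hostname = doReverseLookup(ip);                    // hypothetical resolver call
        rdnsCache.put(ip, hostname != null ? hostname : "");
        return hostname;
    }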
dr|z3d
ok, what about this? cake.i2p/view/Ru3HYf2KEC_WSQey62aX0ewHout1M0PFmPhzK0YsC_NNbeUkYvrk/Ru3HYf2KEC.txt
zzz
ja, though as I said if you're doing that, do the best practice of syncing on rdnsCache itself (and make it final), no need for a separate lock
zzz
but your way works fine
dr|z3d
that latest cake should address the ip/hostname issue in the mem cache.
dr|z3d
the lock stuff is working a.ok, no more concurrent mod errors in console.
dr|z3d
noted re syncing on the cache, thanks.
zzz
no, my suggestion is Map<String, String> rdnsCache, making it a map of IP -> hostname, and nuking the CacheEntry class completely
zzz
this would vastly simplify it but would be a big refactor
zzz
there should never be a need to store the key inside the value, that's what a Map does for you, it associates a key to a value
dr|z3d
yeah, right, so I've removed that, the ip is now the key.
zzz
ok, you're fast, but it's not in the last cake above
dr|z3d
oh, then I must be doing it wrong. I thought: rdnsCache.put(cacheEntry.getIpAddress(), cacheEntry); would address it.
dr|z3d
more work to do, thanks. will ping you again when I'm sure I've fixed it, though possibly not with a total refactor just yet.
zzz
I'm saying you don't need class CacheEntry at all. rdnsCache.put(ip, hostname); hostname = rdnsCache.get(ip)
zzz
but yes it's a big change to something you just got working, wouldn't blame you if you didn't, it's just not great code
zzz
the map "keeps" the key and value "together". You don't need to put them both in a CacheEntry class to do that
dr|z3d
hey, it works, we're not all java sexperts like you :)
dr|z3d
but thanks for the feedback, will look at a refactor maybe in the next release cycle.
Irc2PGuest60863
it might be useful to have a CacheEntry if you want to keep timestamp with the hostname
Irc2PGuest60863
(which is usually how caches work, remove the oldest or lru)
Irc2PGuest60863
(also pretty sure jdk has a built in lru cache)
dr|z3d
I don't want a timestamp with the hostname, just makes the cache bigger. We're storing the cache entries oldest first, so they get removed when the cache is full.
Irc2PGuest60863
what makes you think you're storing the oldest first?
Irc2PGuest60863
though admittedly I just glanced at the code
zzz
right
zzz
a HashMap doesn't do that
Irc2PGuest60863
I could've sworn java has a built in class that does this
zzz
LinkedHashMap
Irc2PGuest60863
yeah LinkedHashMap gets you 90% of the way there I think
dr|z3d
patches welcome, as is the common refrain :)
Irc2PGuest60863
what version of java do you use anyways? is the router still restricted to java8?
zzz
our LHMCache class gets you the other 10%
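For reference, a bounded LinkedHashMap along these lines: overriding removeEldestEntry() evicts the eldest (or, with accessOrder=true, least recently used) entry automatically on put(). A generic sketch, not I2P's LHMCache class:

    import java.util.LinkedHashMap;
    import java.util.Map;

    class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        BoundedCache(int maxEntries) {
            super(16, 0.75f, true);   // accessOrder = true -> LRU iteration order
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;   // evict automatically once the cap is exceeded
        }
        // note: LinkedHashMap is not thread-safe; shared use still needs external locking
    }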
dr|z3d
Java8 is what we're currently requiring.
dr|z3d
LinkedHashMap rework: cake.i2p/view/OeWflKR7pq_vHxjlY4nMau8ENK1FKYdiSyBYaw3aL_WpEqsyDjBf/OeWflKR7pq.txt
dr|z3d
I feel like you want to pull this code into canon, zzz :)
Irc2PGuest60863
how do you retype the code so fast hehe
zzz
we have a no nuts policy in canon
dr|z3d
you need to re-evaluate the code. this is lovely, ripe fruit :)
Irc2PGuest60863
dr|z3d: I worry about that writeRDNSCacheToFile method which calls isDuplicateEntry on every line, which re-reads the file?
Irc2PGuest60863
but the code seems fine. If I was really reviewing the code I might say it's not... OO enough. But some people don't like my OO style. Also obtaining a lock to write stuff to disk is rarely necessary
Irc2PGuest60863
in Java, 99% of the time you're better off making a copy of a data structure and then writing that, rather than synchronizing around the whole write
Irc2PGuest60863
then again the cost of synchronization is so low these days eh
Irc2PGuest60863
I'll also always wonder about the use of statics within the router codebase
Irc2PGuest60863
none of that code needs to be static afaict
dr|z3d
fire up git, pull the code, see if you can improve it.
Irc2PGuest60863
hehe he said fire up git
Irc2PGuest60863
git's an exe
dr|z3d
I'll push the latest revision once I've determined it's not hosing stuff.
Irc2PGuest60863
dr|z3d: I'll improve it if you can tell me why I haven't been able to load postman all day heh
dr|z3d
what version are you running?
dr|z3d
of i2p+?
Irc2PGuest60863
I'm actually genuinely curious what's going on. Before I thought it was a matter of router integration but it's been 27 hours
dr|z3d
58+ perchance?
Irc2PGuest60863
dr|z3d: yeah I'm running the latest version of i2p+ as usual. everything else seems to be working fine... except postman
dr|z3d
is that 58+ or something more recent?
Irc2PGuest60863
62+
dr|z3d
because 65+ is current right now.
Irc2PGuest60863
really I guess I could upgrade again. you think it might help
dr|z3d
regarding postman, I don't know. I'd wait a few if you are going to upgrade, I'll push a new update with the RDNS changes once I'm confident they're working as intended.
dr|z3d
I'm reverting the LinkedHashMap and key/value changes for now, not working as intended, file cache is getting overwritten. Need to look at that again later.
dr|z3d
ok, -66+ uploaded.
zzz
postman working for me on canon trunk
Irc2PGuest60863
dr|z3d: my code would probably look something like cake.i2p/view/kTjN0xzM5L_fE4jYPdWorY2UN4LyPsrURKWMtBJMr_deminNcWcT/kTjN0xzM5L.txt
Irc2PGuest60863
I will note that (1) you don't want to copy the tmp file and then delete it. Instead you want to use an atomic file move. That code as is is buggy and will break because copy isn't atomic.
Irc2PGuest60863
(2) you don't need to lock the map to write it to disk, just create a copy. Map<String,String> copyToWriteToDisk = new HashMap<>(ipToHostname) does the trick
Irc2PGuest60863
(3) I still don't understand what isDuplicateEntry is doing in that code. it's pretty strange.
Irc2PGuest60863
lastly, I'm never a fan of developers creating their own ad hoc file formats, hehe. just use xml. But that, dear friends, is a story for another time
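A sketch of points (1) and (2) together: snapshot the map under the lock, write the snapshot to a temp file, then atomically move it into place. File names, the ip,hostname line format, and the class/method names are illustrative, not the actual I2P+ code:

    import java.io.IOException;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.HashMap;
    import java.util.Map;

    class CacheWriterSketch {
        static void writeCache(Map<String, String> ipToHostname, Path target) throws IOException {
            // (2) copy under the lock so the disk write itself happens unlocked
            Map<String, String> copy;
            synchronized (ipToHostname) {
                copy = new HashMap<>(ipToHostname);
            }
            // (1) write a temp file, then atomically rename it over the real one
            Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
            try (Writer w = Files.newBufferedWriter(tmp, StandardCharsets.UTF_8)) {
                for (Map.Entry<String, String> e : copy.entrySet()) {
                    w.write(e.getKey() + "," + e.getValue() + "\n");   // ip,hostname lines
                }
            }
            // readers never see a half-written file; throws if the filesystem can't move atomically
            Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
        }
    }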
dr|z3d
ip,hostname
dr|z3d
it's pretty simple, keeps the file as small as possible, and is human readable.
Irc2PGuest60863
yeah probably, in this case, I'm just never a fan of people inventing file formats
dr|z3d
it's .txt, it's not a new format.
Irc2PGuest60863
there's some strange edge case you haven't thought of that would be handled by an existing file format
Irc2PGuest60863
even using a Java Properties Writer might be... safer
Irc2PGuest60863
also I might keep the CacheEntry class. The reason is you probably want to remember when you did the resolution and got the hostname. During initialization, when you read in the file, I would ignore any cache entries older than 30 days
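If timestamps were kept as suggested, the load path could discard stale lines roughly like this. A sketch under that assumption only; the format with a third epoch-millis field is hypothetical and not what I2P+ currently writes:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Map;

    class TimestampedLoadSketch {
        // assumed line format for this sketch only: ip,hostname,epochMillis
        static void loadCache(Path file, Map<String, String> rdnsCache) throws IOException {
            long cutoff = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;   // 30 days
            for (String line : Files.readAllLines(file, StandardCharsets.UTF_8)) {
                String[] parts = line.split(",", 3);
                if (parts.length < 3) continue;
                long stamp;
                try {
                    stamp = Long.parseLong(parts[2].trim());
                } catch (NumberFormatException nfe) {
                    continue;   // skip malformed lines
                }
                if (stamp >= cutoff) {
                    rdnsCache.put(parts[0].trim(), parts[1].trim());   // keep only fresh entries
                }
            }
        }
    }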
dr|z3d
as for isDuplicateEntry, it's meant to check for dupes.. the whole point of a simple format is to allow you to manually edit it, merge files if required.
Irc2PGuest60863
stuff like that happens when somebody installs i2p, stops running it, then comes back 2 years later and restarts the router and now the info in your cache might be wrong
Irc2PGuest60863
dr|z3d: why do you need to check for dupes in a map?
dr|z3d
there's sufficient churn to not need to care about the timestamp.
dr|z3d
you don't, you need to check for dupes in the file.
Irc2PGuest60863
dr|z3d: but if the only thing you ever write to the file is a map how could dupes get in the file?
dr|z3d
I just told you.
dr|z3d
you might manually merge files.
Irc2PGuest60863
hehe "sufficient churn to not need to care about the timestamp" == famous last words
Irc2PGuest60863
dr|z3d: when you read the file into a map in memory dupes will be eliminated
Irc2PGuest60863
when you write the file to disk there can be no dupes
Irc2PGuest60863
so all that code has to do is make sure to read the file and then write the file and use the ATOMIC_MOVE option not the copy operation
Irc2PGuest60863
and yes you've invented a new file format. just understand that's a risky thing to do. most file formats in i2p for example are accompanied by a formal spec. but yeah whatever
dr|z3d
open file, see format. :)
dr|z3d
which is the point. human readable. I don't want xml or anything else which obfuscates the information.
Irc2PGuest60863
xml is much more human readable than a custom format
dr|z3d
actually, no. not in this case.
Irc2PGuest60863
but I digress, my main point was that you shouldn't need to check for duplicates
Irc2PGuest60863
and a timestamp isn't a crazy idea
dr|z3d
ok, thanks, noted. I'll put that theory to the test later.
dr|z3d
I initially had timestamps, just found them to be unnecessary, first in, first out is how the cache _should_ function.
Irc2PGuest60863
I'd also suggest that storing cached data in a config directory is strange. It's at best tmp data, the sort of thing you don't really want to keep if you, for example, switch router versions. not sure of the router policy on this, but it's usually a good idea to distinguish between configuration data and runtime/"workspace" data and "tmp" data
dr|z3d
no, it's definitely meant to be in .i2p/
Irc2PGuest60863
dr|z3d: timestamps aren't for eviction. they're a form of sanity checking. if you read in cache data which is, say, 2 years old, then it should all be thrown away
Irc2PGuest60863
you're assuming ppl are regularly restarting their routers... which is an assumption that won't always hold
dr|z3d
first in, first out. like I said, there's enough churn to remove stale entries soon enough. the hostname data isn't critical.
dr|z3d
no assumption relating to router uptime.
dr|z3d
We just don't want to store cache data in a volatile location, it's in .i2p because it needs to be persistent.
zzz
'dupes impossible in a map' is not a theory
zzz
'why put the key on the value side of a key/value store' is neither a Java question nor a programming question
zzz
I doubt the file is in ~/.i2p but if it is it's not on purpose
zzz
the file format objection is silly but if you claim it's a csv file you might win the argument
zzz
we have his ATOMIC_MOVE in our utils as FileUtil.rename()
zzz
"xml is much more human-readable than csv" is an insane take
dr|z3d
did I miss part of this conversation? not sure.
dr|z3d
happy with the file format, that's not going to change.
zzz
no, just comments on the last half of the rdns discussion where ircguestxxx got involved
dr|z3d
location for the file is ~/.i2p/ in purpose.
dr|z3d
*on purpose
zzz
do me a favor and look if it's there
zzz
on your disk
dr|z3d
it is.
dr|z3d
always has been. by design.
zzz
not ~/i2p/ ?
dr|z3d
no
zzz
ah never mind, I see where you did it
zzz
missed it because it's not how I would have done it
zzz
see, told you it wouldn't be long before I was wrong
dr|z3d
:)
zzz
ofc ~/.i2p is the right place, as he said he doesn't know what the 'router policy' is
dr|z3d
you're right about xml, you'd have to be smoking crack to suggest that was more human readable than what I've implemented :)
zzz
no, not just more -- "much more"
zzz
you also presumably yanked that code from one of the 20 times I've done it, so I'll take your side on that one
dr|z3d
right. and /tmp/ is definitely the wrong place.
dr|z3d
I forget, but probably :)
dr|z3d
the main thing right now is that it works and doesn't cause console errors. as you've pointed out, it could be made much more efficient and I'll have another pass at it post release, probably.
dr|z3d
it's the difference between waiting 30s or more for one of the pages containing large peer tables to load and having the page load almost instantly. so I consider it a win, all things considered.
dr|z3d
oh, and if it's not obvious, ircguestxxx is mesh.
dr|z3d
who still hasn't reported back on the update notifications he specifically asked for.