@eyedeekay
&eche|on
&kytv
&zzz
+R4SAS
+RN
+RN_
+T3s|4
+dr|z3d
+hk
+lbt
+not_bob_afk
+orignal
+postman
+radakayot
+segfault
+weko
An0nm0n
Arch
BravoOreo
Danny
FreefallHeavens_
Irc2PGuest59134
Irc2PGuest83914
Irc2PGuest90017
Irc2PGuest99921
Irc2PGuest99941
Leopold
Nausicaa
Onn4l7h
Onn4|7h
Over
Sisyphus
Sleepy
SoniEx2
T3s|4_
acetone_
anon2
b3t4f4c3__
boonst
cumlord
dr4wd3_
eyedeekay_bnc
l337s
mareki2p_
poriori
profetikla
qend-irc2p
r3med1tz-
rapidash
shiver_
solidx66
u5657_1
uop23ip
w8rabbit
wodencafe2
x74a6
eyedeekay
Clearnet primarily, mostly via Tor exits and known VPN endpoints
lbt
eyedeekay: gitlab seems 502 again :(
eyedeekay
Yup just getting on to fix it again
eyedeekay
The backup should not have run, though, which is confusing...
eyedeekay
Oh, root was running the same backup script as the user, so disabling one did nothing to disable the other
eyedeekay
The backup path was hardcoded, so I don't get any free disk space back out of it, but at least I know why the backup was still running
eyedeekay
Should be back in about 5 minutes
dr|z3d
have you checked the size of your journal lately, eyedeekay?
dr|z3d
try: journalctl --disk-usage
eyedeekay
good question, no I haven't
eyedeekay
Kinda big, 3.5G, but not that much
eyedeekay
Once gitlab gets booted I've got a bunch of cached docker image layers I'll be able to ditch though
dr|z3d
journalctl --vacuum-time=1d will trim the journal down to the last day's worth.
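A minimal sketch of making that cap persistent rather than one-off, assuming systemd's standard journald config (the 500M value is an assumption, tune to taste):

# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=500M

sudo systemctl restart systemd-journald   # apply the new cap
journalctl --disk-usage                   # verify the journal size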
dr|z3d
it's also worth running du -h in /var/log/
dr|z3d
also check /var/cache/apt/archives/ to make sure you don't have a cache of all your package updates.
eyedeekay
I do `du -h -d 1 /var/log` every time, because every once in a while I get a log explosion on my laptop and it screws up my day. That hasn't happened on this server yet, but the server runs stable and my laptop runs sid
eyedeekay
but I have apt-get autoclean running with my unattended-upgrades so I never have any archived apt packages
dr|z3d
ok, good.
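For reference, a quick sketch of checking and clearing that apt cache by hand, using standard apt-get subcommands:

du -sh /var/cache/apt/archives   # size of the cached .deb files
sudo apt-get autoclean           # drop only packages no longer downloadable
sudo apt-get clean               # drop the entire cache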
eyedeekay
I just never clear the docker build cache while gitlab is down; I don't want to delete the wrong thing by accident, and having a running container protects it from deletion
dr|z3d
yeah, those docker caches can get huge in a very short period.
eyedeekay
Yeah looks like I've got a lot in it this time
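A sketch of sizing up and pruning that build cache with the standard docker CLI, once it's safe to do so:

docker system df        # summary of images, containers, volumes, and build cache
docker builder prune    # remove dangling build cache entries
docker image prune      # remove dangling image layers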
dr|z3d
on a local box/laptop, you might consider mounting /var/log via tmpfs if it's causing ongoing issues.
eyedeekay
Good thought, I'll look into that
dr|z3d
here's a selection of folders you can mount with tmpfs, tweak to taste:
dr|z3d
tmpfs /dev/shm tmpfs defaults,mode=1777 0 0
dr|z3d
tmpfs /run tmpfs defaults,mode=1777 0 0
dr|z3d
tmpfs /tmp tmpfs defaults,mode=1777 0 0
dr|z3d
tmpfs /var/cache/apt/archives tmpfs defaults,mode=1777 0 0
dr|z3d
tmpfs /var/spool tmpfs defaults,mode=1777 0 0
dr|z3d
tmpfs /var/log tmpfs defaults,mode=1777 0 0
dr|z3d
tmpfs /var/tmp tmpfs defaults,mode=1777 0 0
dr|z3d
you'd want to edit fstab, then rm -rf the contents of the folders you've chosen to mount, and then mount -a
dr|z3d
and then, ideally, reboot.
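Roughly, that sequence as shell, assuming the fstab entries above are in place (the cleared paths are an example subset; adjust to the folders you chose):

sudo rm -rf /var/log/* /var/tmp/*   # clear the folders being converted to tmpfs
sudo mount -a                       # mount everything in fstab, including the new entries
sudo reboot                         # ideally, finish with a clean reboot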
eyedeekay
thanks
dr|z3d
helps with wear and tear on solid state devices.
dr|z3d
aside from being faster generally.
eyedeekay
Seems like a good idea and I don't see anything stopping me, probably will do this today
eyedeekay
Thanks again
dr|z3d
you're welcome
dr|z3d
re /var/log/, anything that may require a persistent log file (e.g. fail2ban) should be configured to store the file elsewhere for persistence.
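For example, fail2ban's own log can be redirected via logtarget in /etc/fail2ban/fail2ban.conf (the destination path here is an assumption, any persistent location works):

[Definition]
logtarget = /var/lib/fail2ban/fail2ban.log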
zzz
apologies to eyedeekay and Digital Ocean; as he informed me, they are a good company, and we are not on some low-rent $10 plan as I had naively assumed ))
eyedeekay
They're OK. I would appreciate a more flexible plan layout from them, though; eventually (soon) I'll have to look for a provider who can offer that