@eyedeekay
&zzz
+R4SAS
+RN
+RN_
+StormyCloud
+T3s|4
+altonen
+dr|z3d
+eche|off
+hagen
+hk
+orignal
+postman
+qend-irc2p
+snex
+wodencafe
Arch
BubbRubb1
Daddy_1
Danny
DeltaOreo
FreeB
FreefallHeavens
HowardPlayzOfAdmin
Irc2PGuest38625
Irc2PGuest74288
Irc2PGuest89334
Leopold
Onn4l7h
Onn4|7h
Sisyphus_
Sleepy
SlippyJoe
T3s|4_
Teeed
acetone_
ardu
asdfasdf2
b3t4f4c3__
bak83_
cumlord
death
dr4wd3
duanin2
eyedeekay_bnc
hypnosis
mareki2p_
not_bob_afk
poriori
profetikla
r00tobo_BNC
r3med1tz-
rapidash
shiver_
solidx66_
u5657
uop23ip
w8rabbit
weko_
wew_
wew__
x74a6
zzz
eyedeekay, gitea down
eyedeekay
It was back by the time I looked at it
zzz
ok, the message was a little different than usual
zzz
remote:
zzz
remote: error:
zzz
remote: error: Internal Server Connection Error
zzz
remote: error:
zzz
error:
zzz
error: Failed to execute git command
zzz
error:
zzz
remote: . Processing 1 references
zzz
send-pack: unexpected disconnect while reading sideband packet
zzz
error: error in sideband demultiplexer
eyedeekay
Hm, that's new. All this stuff that's been happening seems to come down to a few locking issues in the interaction between git and gitea, and instinctively this looks like another manifestation of the same thing. I've been archiving the log every time it goes down to track it down, but most problems just go away after cleaning up the locks and restarting
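[The lock cleanup mentioned above typically means removing stale git lock files left behind by interrupted operations. A minimal sketch of how to list candidates, assuming a standard Gitea repository root; the path and age threshold are assumptions, not the actual setup:

# Sketch: list stale git lock files under an assumed Gitea repository root.
import glob
import os
import time

REPO_ROOT = "/var/lib/gitea/data/gitea-repositories"  # assumed location
MAX_AGE = 3600  # seconds; locks older than this are likely stale

now = time.time()
for lock in glob.glob(os.path.join(REPO_ROOT, "**", "*.lock"), recursive=True):
    if now - os.path.getmtime(lock) > MAX_AGE:
        print(lock)
]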
eyedeekay
The locking is also demonstrably the cause: malfunctions begin to occur when git operations compete for a lock
eyedeekay
So now we're automatically cleaning locks on every restart, and automatically restarting when lock contention causes a timeout. That takes about 2-5 minutes because it checks for false positives and takes a backup before killing/restarting, and restarts happen about 1-2 times a day. If you hit it in exactly that window, which I think you did, it probably acted weird
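[A minimal sketch of what such a watchdog could look like, assuming a local Gitea health endpoint, a standard repository root, and systemd; the paths, URL, and timings here are assumptions rather than the actual setup:

# Hypothetical watchdog sketch: back up, clean locks, and restart gitea when
# the health check keeps failing. Paths, URL, and timings are assumptions.
import glob
import os
import subprocess
import time
import urllib.request

HEALTH_URL = "http://127.0.0.1:3000/api/healthz"      # assumed Gitea health endpoint
REPO_ROOT = "/var/lib/gitea/data/gitea-repositories"  # assumed repository root

def healthy(retries=3, delay=60):
    # Re-check several times so a transient blip is not treated as an outage
    # (the "looks for false positives" step, stretched over a few minutes).
    for _ in range(retries):
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=30):
                return True
        except OSError:
            time.sleep(delay)
    return False

def clean_stale_locks(root):
    # Remove leftover git lock files from interrupted or competing operations.
    for lock in glob.glob(os.path.join(root, "**", "*.lock"), recursive=True):
        os.remove(lock)

if not healthy():
    # Take a backup before touching anything (gitea's built-in dump command,
    # assumed config path).
    subprocess.run(["gitea", "dump", "-c", "/etc/gitea/app.ini"], check=True)
    clean_stale_locks(REPO_ROOT)
    subprocess.run(["systemctl", "restart", "gitea"], check=True)
]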
eyedeekay
See if I can find you in the logs...
zzz
no need, I leave it to you
eyedeekay
Well I'm curious, and if I can find the event maybe it's another clue