A Deployment cannot actually attach the same persistent storage across
nodes (a ReadWriteOnce volume can only be mounted on one node at a time).
We have to switch to a StatefulSet for any program that needs persistent
storage to follow it across reschedules.
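A minimal sketch of what the StatefulSet equivalent might look like, using `volumeClaimTemplates` so the pod keeps the same PVC across restarts (names, image, and sizes here are illustrative, not from the actual config). Note that `volumeClaimTemplates` give each replica its own volume rather than one shared volume; with a single replica this simply pins the pod to its claim:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea-server          # hypothetical, matching the Deployment name
spec:
  serviceName: gitea-server   # headless Service the StatefulSet requires
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
        - name: gitea
          image: gitea/gitea  # illustrative image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:       # one PVC per replica, reattached on reschedule
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi     # illustrative size
```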
I tried applying this config, but it didn't attach the same volume/disk
as before, so I had to revert to the Deployment for now.
After the new pod could not be scheduled due to insufficient CPU
resources, I checked the actual usage, which was well below the
requested amount. Lowering the request and limit fixed the scheduling
issue.
It was previously set to extensions/v1beta1. I have discovered that when
the Deployment is created as an extensions/v1beta1 resource, the
existing pod is killed immediately when doing a rolling update. When
the Deployment is created as apps/v1, a rolling update behaves as
expected: a new pod is created, and the old one is terminated only once
the new pod is ready to serve traffic.
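A plausible explanation is the different rolling-update defaults: extensions/v1beta1 defaulted maxUnavailable to 1, while apps/v1 defaults to 25%, which rounds down to 0 for a single replica. Pinning the strategy explicitly avoids depending on those defaults (a sketch; values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-server
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never kill the old pod before the new one is ready
      maxSurge: 1         # allow one extra pod during the rollout
  # (selector and pod template omitted)
```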
The existing Deployment resource will need to be deleted and recreated:
kubectl delete deployment gitea-server
kubectl apply -f gitea-server.yaml
Applying the file without deleting the Deployment first will not fix the
issue with rolling updates. Note that the delete-and-recreate itself
causes a short downtime.
The Docker image is used in the initialization process to copy
everything in the custom folder into the Gitea data dir (mounted as a
persistent volume). It is built using Packer and is based on the busybox
image, so we can use its minimalist shell utilities to copy files and
set permissions.
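The init step presumably looks something like this (an illustrative initContainer sketch; the image name, paths, and UID/GID are assumptions based on the description above):

```yaml
initContainers:
  - name: init-custom
    # the Packer-built, busybox-based image (name hypothetical)
    image: registry.example.com/gitea-init
    command: ["sh", "-c"]
    args:
      # copy the baked-in custom files onto the persistent volume,
      # then set ownership (UID/GID 1000 assumed for the gitea user)
      - cp -r /custom/. /data/gitea/ && chown -R 1000:1000 /data/gitea
    volumeMounts:
      - name: data
        mountPath: /data
```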
These values are based on recent usage stats. If requests are not set,
the scheduler's capacity check has nothing to count against, and it will
place new pods on nodes that do not actually have enough free resources
for them.
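A sketch of the requests/limits block described above (the numbers are illustrative, not the actual values):

```yaml
resources:
  requests:
    cpu: 100m        # illustrative; should track recent actual usage
    memory: 256Mi
  limits:
    cpu: 250m
    memory: 512Mi
```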
For now only labels are customized, but adding anything supported will
work (robots.txt, public files, templates, etc.).
The content will be copied to the /data/gitea/ folder, which is a
mounted persistent volume.
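After the copy, the mounted layout might look like this (paths are illustrative; see the Gitea customization docs linked below for the exact locations):

```
/data/gitea/
└── options/
    └── label/
        └── Default    # custom issue-label set
```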
https://docs.gitea.io/en-us/customizing-gitea/