It was previously set to extensions/v1beta1. I have discovered that when
the Deployment is created as extensions/v1beta1, a rolling update kills
the existing pod immediately. When the Deployment is created as apps/v1,
a rolling update behaves as expected: a new pod is created, and the old
one is only terminated once the new pod is ready to serve traffic.
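The expected behavior corresponds to a manifest along these lines (a sketch only; the image, labels, and replica count are placeholders, not the actual manifest):

```yaml
apiVersion: apps/v1  # was extensions/v1beta1
kind: Deployment
metadata:
  name: gitea-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # keep the old pod until the new one is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: gitea-server
    spec:
      containers:
      - name: gitea
        image: gitea/gitea  # placeholder
```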
The existing Deployment resource will need to be deleted and recreated:
`kubectl delete deployment gitea-server`
`kubectl apply -f gitea-server.yaml`
Applying the file without deleting the Deployment first will not fix the
issue with rolling updates. Deleting and recreating it will cause a
short downtime.
The Docker image is used in the initialization process to copy
everything in the custom folder to the Gitea data directory (mounted as
a persistent volume). It is built using Packer and is based on the
busybox image, so we can use its minimalist shell system to copy files
and set permissions.
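This kind of initialization step can be sketched as an initContainer (the image name, paths, and ownership are illustrative assumptions, not the actual manifest):

```yaml
initContainers:
- name: init-custom-folder
  image: kosmos/gitea-custom  # placeholder name for the Packer-built image
  command:
  - sh
  - -c
  # copy the bundled custom folder into the persistent volume and fix ownership
  - cp -r /custom/* /data/gitea/ && chown -R 1000:1000 /data/gitea
  volumeMounts:
  - name: gitea-data
    mountPath: /data/gitea
```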
The resource requests are based on recent usage stats. If they are not
set, the scheduler's capacity check doesn't work, and it will place new
pods on nodes that don't actually have enough free capacity for them.
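Such requests are set per container; a sketch of the shape, with placeholder numbers rather than the actual values:

```yaml
resources:
  requests:
    cpu: 250m      # placeholder, derived from observed usage
    memory: 256Mi  # placeholder
  limits:
    cpu: 500m
    memory: 512Mi
```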
For now it is only labels, but adding anything supported will work
(robots.txt, public files, templates, etc.).
The content will be copied to the /data/gitea/ folder, which is a
mounted persistent volume.
https://docs.gitea.io/en-us/customizing-gitea/
This includes all the resources currently running on https://gitea.kosmos.org.
It sets up one persistent data volume for the MySQL database and one for
the Gitea data, which Gitea calls the custom folder (config,
attachments, avatars, logs, etc.). We mount that persistent data volume
as /data/gitea. It also creates a Let's Encrypt certificate for
gitea.kosmos.org, also saved to the custom folder.
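Each of the two data volumes would be declared as a PersistentVolumeClaim roughly like this (the name, size, and access mode are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-data  # a second, analogous claim would back the MySQL database
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi  # placeholder size
```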
This also includes two scripts:
* `./script/get_secrets` downloads the secrets to the local filesystem so
they can be edited
* `./script/replace_secrets` deletes the remote secrets and creates them
again from the local ones in kubernetes/config/*
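The scripts themselves are not shown here; a minimal sketch of what they might do with kubectl, assuming secrets are stored one per file under kubernetes/config/ (secret names and file layout are assumptions):

```shell
#!/bin/sh
# Hypothetical sketch of the two helper scripts; the actual secret
# names, namespace, and file layout may differ.
set -e

# ./script/get_secrets: download secrets to the local filesystem for editing
get_secrets() {
  for name in $(kubectl get secrets -o name); do
    # "$name" has the form secret/<name>; strip the prefix for the filename
    kubectl get "$name" -o yaml > "kubernetes/config/${name#secret/}.yaml"
  done
}

# ./script/replace_secrets: delete the remote secrets and recreate them
# from the local files in kubernetes/config/*
replace_secrets() {
  for file in kubernetes/config/*.yaml; do
    kubectl delete -f "$file" --ignore-not-found
    kubectl apply -f "$file"
  done
}
```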
Closes #6