WIP: StatefulSet config for gitea-server #41
A `Deployment` cannot actually attach the same persistent storage across nodes. We have to switch to a `StatefulSet` for all programs that need shared persistent storage. I tried applying this config, but it didn't attach the same volume/disk as before, so I had to revert to the `Deployment` for now.
Relevant docs: https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps
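For reference, this is roughly the shape of the change; it's only an illustrative sketch, not the exact manifest in this branch, and the image tag, sizes and storage class are placeholders:

```yaml
# Illustrative sketch of a StatefulSet with a volumeClaimTemplate.
# Not the actual manifest from this branch; names and sizes are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea-server
spec:
  serviceName: gitea-server
  replicas: 2
  selector:
    matchLabels:
      app: gitea-server
  template:
    metadata:
      labels:
        app: gitea-server
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:1.10
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: gitea-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: gitea-data
      spec:
        # Each replica gets its own disk from this template; as it turned
        # out, GCE PDs cannot be shared read-write across nodes anyway.
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
```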
I tried specifying the volume ID for the new claims explicitly, but that also didn't work. So I posted a question on StackOverflow, which Google apparently uses as the official support site for GKE:
https://stackoverflow.com/questions/59436416/migrate-from-deployment-to-statefulset-without-losing-persistent-volume-access
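What I mean by specifying the volume explicitly: the standard way to pre-bind a claim is a PersistentVolume that points at the existing GCE disk by name, plus a claim bound to it via `volumeName`. The disk name and sizes below are placeholders:

```yaml
# Sketch of binding a claim to a pre-existing GCE disk by hand.
# "gitea-server-disk" and the sizes are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  gcePersistentDisk:
    pdName: gitea-server-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # For a StatefulSet, the claim name has to follow the pattern
  # <volumeClaimTemplate name>-<statefulset name>-<ordinal>.
  name: gitea-data-gitea-server-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeName: gitea-data-pv
  resources:
    requests:
      storage: 10Gi
```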
OK, so I spent quite some time trying out all kinds of different configs until finally learning that GCE persistent disks simply don't support `ReadWriteMany` to begin with. The GKE docs go out of their way to deceive you about this, never explicitly mentioning that you cannot actually mount a normal GKE persistent volume on multiple pods/nodes. (Just for additional info: the same is the case with DigitalOcean's default storage class, which is based on their block storage.)
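To make it concrete, this is the kind of claim that cannot be satisfied by the default GKE storage class, because GCE PDs only support `ReadWriteOnce` and `ReadOnlyMany`:

```yaml
# A claim like this never becomes usable with the GCE PD provisioner,
# since the underlying disks don't support ReadWriteMany.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-shared-data
spec:
  accessModes:
    - ReadWriteMany   # not supported by GCE persistent disks
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```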
Apparently, the only way to get shared file storage between pods is to either deploy your own NFS/Gluster/etc. or cough up a bunch of money and use Google Cloud Filestore, for which there is a storage class, and which can indeed be mounted on multiple pods.
For us, that's not an option, as Filestore pricing starts at a minimum capacity of 1 TB, so the very cheapest option available costs around $205 per month.
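For completeness, the self-hosted route would look roughly like this: run an NFS server somewhere (its own pod or VM) and expose it through an `nfs` PersistentVolume, which does support `ReadWriteMany`. The server address, export path and size are placeholders for whatever we'd actually deploy:

```yaml
# Rough sketch of the "run your own NFS" option.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany        # NFS volumes can be mounted by many pods
  nfs:
    server: 10.0.0.2       # placeholder: IP of the NFS server
    path: /exports/gitea   # placeholder: export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # skip dynamic provisioning, bind to the PV above
  volumeName: gitea-nfs
  resources:
    requests:
      storage: 50Gi
```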
@gregkare So what this basically means is that Kubernetes is not a great place to run something like Gitea, which has to store files on disk and cannot use object storage instead. Unless the project is big enough to justify the $0.30/GB cost of Google Filestore and the minimum purchase of 1 TB of storage per month.
If we choose to keep our Gitea on GKE, then we just cannot have zero-downtime deployments. That may still be better than running it some other way, but it sure isn't what we're used to from the rest of our infrastructure.
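Concretely, staying on the `Deployment` with a single `ReadWriteOnce` disk means something like the sketch below: the old pod has to be killed (and the disk detached) before the new pod can attach it, so every deploy comes with a short outage. Again, this is illustrative, not our actual manifest:

```yaml
# Sketch: keep the Deployment but accept downtime on every deploy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-server
spec:
  replicas: 1
  strategy:
    type: Recreate   # stop the old pod first so the RWO disk can be reattached
  selector:
    matchLabels:
      app: gitea-server
  template:
    metadata:
      labels:
        app: gitea-server
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:1.10   # placeholder tag
          volumeMounts:
            - name: gitea-data
              mountPath: /data
      volumes:
        - name: gitea-data
          persistentVolumeClaim:
            claimName: gitea-data
```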
I just had one other idea that could make sense: use larger nodes and force-schedule all `gitea-server` pods/replicas on the same node. That way, at least rolling updates should work, and it would cost much less than the Filestore solution (rough sketch of the scheduling config below).

Good idea. I think scheduling the `gitea-server` pods/replicas on the same node should fit with our current nodes, but we'll have to try it to be sure.

I think if it fit right now, we wouldn't have had the upgrade issues, which were due to pods being scheduled on a different node.
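For concreteness, the co-scheduling I have in mind is required pod affinity on the hostname topology, roughly like this (only the relevant part of the pod template is shown, the label values are illustrative, and we'd want to verify that the first replica schedules cleanly with a self-matching affinity rule):

```yaml
# Sketch: force all gitea-server pods onto the same node.
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: gitea-server
              # "same node" = same value of the hostname label
              topologyKey: kubernetes.io/hostname
```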