WIP: StatefulSet config for gitea-server #41

Closed
raucao wants to merge 1 commit from feature/stateful-set into master
Owner

A `Deployment` cannot actually attach the same persistent storage across nodes. We have to switch to a `StatefulSet` for all programs that need shared persistent storage.

I tried applying this config, but it didn't attach the same volume/disk as before, so I had to revert to the Deployment for now.

Relevant docs: https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps
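
For reference, the change this PR attempts looks roughly like the sketch below: a `StatefulSet` that requests its storage through `volumeClaimTemplates` instead of a plain `Deployment` volume. Names, image tag and storage size are illustrative, not the exact manifest in this PR.

```yaml
# Minimal StatefulSet sketch; names, image tag and storage size are
# illustrative and not taken from the actual manifest in this PR.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea-server
spec:
  serviceName: gitea-server
  replicas: 1
  selector:
    matchLabels:
      app: gitea-server
  template:
    metadata:
      labels:
        app: gitea-server
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:1.12   # placeholder tag
          volumeMounts:
            - name: gitea-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: gitea-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```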

Author
Owner

I tried specifying the volume ID for the new claims explicitly, but that also didn't work. So I posted a question on StackOverflow, which Google apparently uses as the official support site for GKE:

https://stackoverflow.com/questions/59436416/migrate-from-deployment-to-statefulset-without-losing-persistent-volume-access
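
For context, "specifying the volume ID explicitly" means roughly the sketch below: a `PersistentVolume` pointing at the existing GCE disk by name, plus a claim pre-bound to it via `volumeName`, using the name a StatefulSet's claim template would generate. Disk name, sizes and claim name are placeholders.

```yaml
# Sketch only: pre-provision a PV for the existing GCE disk and bind a
# claim to it explicitly. Disk name, sizes and claim name are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-server-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  gcePersistentDisk:
    pdName: existing-gitea-disk   # placeholder for the real disk name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name a volumeClaimTemplate called "gitea-data" would generate
  # for the first replica of a StatefulSet called "gitea-server"
  name: gitea-data-gitea-server-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: gitea-server-pv
  resources:
    requests:
      storage: 10Gi
```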

Author
Owner

OK, so I spent quite some time trying out all kinds of different configs until finally learning that GCE persistent disks simply don't support `ReadWriteMany` to begin with.

The GKE docs go out of their way to deceive you about this and to not explicitly mention that you cannot actually mount *any* normal GKE persistent volume on multiple pods/nodes. (Just for additional info: the same is the case with Digital Ocean's default storage class, which is based on their block storage.)
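
In other words, a claim like the sketch below is never going to be satisfied by the default GCE PD storage class, because GCE persistent disks only support `ReadWriteOnce` and `ReadOnlyMany`. (Name and size are made up.)

```yaml
# Sketch: the gce-pd provisioner cannot satisfy ReadWriteMany, so a
# claim like this never becomes usable across multiple nodes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-shared-data   # made-up name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```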

Apparently, the only way to get shared file storage between pods is to either deploy your own NFS/Gluster/etc. or to cough up a bunch of money and [use Google Cloud Filestore](https://cloud.google.com/filestore/docs/accessing-fileshares), for which there is a storage class, and which can indeed be mounted on multiple pods.

For us, that's not an option, as Filestore pricing starts at a minimum capacity of 1TB, so the very cheapest option available costs around $205 per month.
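
For completeness, the self-hosted route mentioned above would look roughly like this: an NFS export (server deployed and managed separately) backing a `ReadWriteMany` volume. Server address and export path are placeholders.

```yaml
# Sketch of the self-hosted alternative: an NFS export can back a
# ReadWriteMany volume. Server address and export path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-shared-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local   # placeholder
    path: /exports/gitea                           # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-shared-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: gitea-shared-nfs
  resources:
    requests:
      storage: 10Gi
```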

Author
Owner

@gregkare So what this basically means is that Kubernetes is not a great place to run something like Gitea, which has to store files on disk and cannot use object storage instead. Unless the project is big enough to justify the $0.30/GB cost of Google Filestore and the minimum purchase of 1TB storage/month.

If we choose to keep our Gitea on GKE, then we just cannot have zero-downtime deployments. Which may still be better than running it in a different way, but it sure isn't what we're used to from the rest of our infrastructure.
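
Concretely, staying on the current single-disk `Deployment` probably means switching the update strategy to `Recreate`, so the old pod releases the disk before the replacement tries to mount it: a short downtime instead of a stuck rolling update. A sketch (image tag and claim name are placeholders):

```yaml
# Sketch: with a ReadWriteOnce disk, use the Recreate strategy so the
# old pod detaches the volume before the new pod mounts it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: gitea-server
  template:
    metadata:
      labels:
        app: gitea-server
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:1.12   # placeholder tag
          volumeMounts:
            - name: gitea-data
              mountPath: /data
      volumes:
        - name: gitea-data
          persistentVolumeClaim:
            claimName: gitea-data   # placeholder claim name
```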

Author
Owner

I just had one other idea that could make sense, which is using larger nodes and force-scheduling all `gitea-server` pods/replicas on the same node. This way, at least the rolling updates should work, and it would cost much less than the Filestore solution.
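
In pod spec terms, that would roughly mean a required pod affinity on the hostname topology key, so every replica is scheduled next to the existing ones (label name assumed):

```yaml
# Sketch: keep all gitea-server replicas on the same node via required
# pod affinity on the hostname topology key. Label name is assumed.
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: gitea-server
          topologyKey: kubernetes.io/hostname
```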

Owner

Good idea. I think scheduling `gitea-server` pods/replicas on the same node should fit on our current nodes, but we'll have to try it to be sure.

Author
Owner

I think if it fit right now, we wouldn't have had the upgrade issues, which were due to pods being scheduled on a different node.

raucao closed this pull request 2020-07-30 10:58:41 +00:00