OK, so I spent quite some time trying out all kinds of different configs until finally learning that GCE persistent disks simply don't support ReadWriteMany to begin with.
The GKE docs go out of their way to obscure this, never explicitly mentioning that you cannot actually mount a normal GKE persistent volume on multiple pods/nodes at once. (Just for additional info: the same is the case with Digital Ocean's default storage class, which is based on their block storage.)
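For reference, this is what the failure looks like in practice: a PVC like the sketch below (names are made up) against the default GCE-PD-backed storage class will never bind, because the provisioner only supports ReadWriteOnce.

```yaml
# Hypothetical sketch: requesting ReadWriteMany from the default
# GCE-PD-backed storage class leaves the claim stuck in Pending,
# since GCE persistent disks only support ReadWriteOnce.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-shared          # assumed name, for illustration
spec:
  accessModes:
    - ReadWriteMany           # not supported by GCE persistent disks
  storageClassName: standard  # GKE's default GCE-PD storage class
  resources:
    requests:
      storage: 10Gi
```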
Apparently, the only way to get shared file storage between pods is either to deploy your own NFS/Gluster/etc. or to cough up a bunch of money for Google Cloud Filestore, for which there is a storage class and which can indeed be mounted on multiple pods.
For us, that's not an option: Filestore pricing starts at a 1TB minimum capacity, so the cheapest option available costs around $205 per month.
@gregkare So what this basically means is that Kubernetes is not a great place to run something like Gitea, which has to store files on disk and cannot use object storage instead. Unless the project is big enough to justify the $0.30/GB cost of Google Filestore and the minimum purchase of 1TB storage/month.
If we choose to keep our Gitea on GKE, then we just cannot have zero-downtime deployments. Which may still be better than running it in a different way, but it sure isn't what we're used to from the rest of our infrastructure.
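Concretely, that means the Gitea Deployment has to use the Recreate strategy; with RollingUpdate, a replacement pod that lands on a different node would hang forever waiting for the ReadWriteOnce disk that's still attached to the old node. A minimal sketch:

```yaml
# Sketch: accept downtime during updates so the RWO disk can
# detach from the old pod's node and reattach for the new one.
spec:
  strategy:
    type: Recreate  # old pod is terminated before the new one starts
```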
I just had one other idea that could make sense, which is using larger nodes and force-scheduling all gitea-server pods/replicas on the same node. This way, at least the rolling updates should work, and it would cost much less than the Filestore solution.
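The co-scheduling idea could be sketched with required pod affinity, so every gitea-server replica lands on the same node as the existing replicas (the `app` label and image are assumptions, not our actual manifest). Since ReadWriteOnce is enforced per node, two pods on one node can share the disk during a rolling update:

```yaml
# Sketch: force all gitea-server replicas onto one node via pod
# self-affinity, so they can share the ReadWriteOnce GCE PD.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gitea-server       # assumed label
  template:
    metadata:
      labels:
        app: gitea-server
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: gitea-server
              # co-locate on the same node, not just the same zone
              topologyKey: kubernetes.io/hostname
      containers:
        - name: gitea
          image: gitea/gitea  # tag omitted; fill in the real version
```

Note the scheduler's special case: when no matching pods exist yet, the first replica can still be placed even though the required affinity matches the pod's own labels.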
Good idea! I think scheduling all gitea-server pods/replicas on the same node should fit on our current nodes, but we'll have to try it to be sure.
I think if it fit right now, we wouldn't have had the upgrade issues, which were caused by pods being scheduled on a different node.