Fix rolling upgrades on k18s #32

Manually merged
greg merged 1 commit from deployment_api_version into master 2019-08-07 09:20:51 +00:00
Owner

It was previously set to extensions/v1beta1. I have discovered that when the Deployment is created as extensions/v1beta1, the existing pod is killed immediately during a rolling update. When the Deployment is created as apps/v1, a rolling update behaves as expected: a new pod is created, and the old one is only terminated once the new pod is ready to serve traffic.
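For context, the fix itself is a one-line change at the top of gitea-server.yaml, plus the selector field that apps/v1 makes mandatory. A minimal sketch of what the manifest presumably looks like (labels, replica count, and image tag are illustrative assumptions, not the actual file):

    # sketch only, not the actual gitea-server.yaml
    apiVersion: apps/v1           # was: extensions/v1beta1
    kind: Deployment
    metadata:
      name: gitea-server
    spec:
      replicas: 1
      selector:                   # required in apps/v1
        matchLabels:
          app: gitea-server
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1             # create the replacement pod first
          maxUnavailable: 0       # keep the old pod until the new one is Ready
      template:
        metadata:
          labels:
            app: gitea-server
        spec:
          containers:
            - name: gitea
              image: gitea/gitea:1.9.0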

The existing Deployment resource will need to be deleted and recreated:

    kubectl delete deployment gitea-server
    kubectl apply -f gitea-server.yaml

Applying the file without deleting the Deployment first will not fix the issue with rolling updates. Note that the delete-and-recreate migration will cause a short downtime.
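To confirm the new behaviour after recreating the Deployment, something like the following should work (standard kubectl subcommands; the app=gitea-server label selector is an assumption about how the pods are labelled):

    # block until the replacement pod is rolled out and Ready
    kubectl rollout status deployment/gitea-server

    # or watch the pods directly during the migration to measure the gap
    kubectl get pods -l app=gitea-server --watch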

@raucao What is a good time for you to pair on this migration?

greg added the
kredits-1
ops
labels 2019-08-06 10:53:09 +00:00
Owner

The title here should be "Fix rolling upgrades on k18s" imo.

Please coordinate times and such on chat. This is not a chat platform. Thanks.

greg changed title from Bump the api version for the Deployment resource to apps/v1 to Fix rolling upgrades on k18s 2019-08-06 11:21:34 +00:00
greg self-assigned this 2019-08-07 09:17:34 +00:00
galfert was assigned by greg 2019-08-07 09:17:37 +00:00
Author
Owner

We have performed the migration; downtime was only a few seconds. We also tried a rolling update to a non-existent Docker image tag: the existing pod kept running, a new one was created and failed to pull the image. I then reverted the image to the 1.9.0 tag, and the failed pod was deleted, leaving the previous one in place.
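For the record, that test presumably amounted to something like this (the container name gitea and the bad tag are hypothetical; kubectl set image is standard):

    # point the Deployment at a tag that does not exist: the old pod keeps
    # serving while the new one sits in ImagePullBackOff
    kubectl set image deployment/gitea-server gitea=gitea/gitea:does-not-exist

    # revert to the known-good tag: the failed pod is removed
    kubectl set image deployment/gitea-server gitea=gitea/gitea:1.9.0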

While we were at it, I removed one of the worker nodes (see https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues/29#issuecomment-814): I first cordoned an unused node, then downsized the pool. We are running on two small nodes now.
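In kubectl terms the node removal was presumably along these lines (the node name is a placeholder; the pool resize itself happens on the provider side):

    # mark the unused node unschedulable so nothing lands on it
    kubectl cordon <node-name>

    # then shrink the worker pool via the cloud provider's console/CLI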

greg closed this pull request 2019-08-07 09:20:31 +00:00
greg reopened this pull request 2019-08-07 09:20:41 +00:00
greg closed this pull request 2019-08-07 09:20:51 +00:00
greg deleted branch deployment_api_version 2019-08-07 09:21:02 +00:00