Fix rolling upgrades on k18s #32
It was previously set to extensions/v1beta1. I have discovered that when the Deployment is created as extensions/v1beta1, the existing pod is killed immediately during a rolling update. When the Deployment is created as apps/v1, a rolling update behaves as expected: a new pod is created, and the old one is only terminated once the new pod is ready to serve traffic.
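This is likely down to the different rolling-update defaults (an assumption, not verified against our manifests): extensions/v1beta1 defaulted the strategy to maxSurge: 1 / maxUnavailable: 1, which with a single replica permits killing the old pod straight away, while apps/v1 defaults to 25% / 25%, which for one replica resolves to maxSurge: 1 / maxUnavailable: 0. The effective strategy can be inspected like this (myapp is a placeholder name):

```sh
# Print the rolling-update parameters the live Deployment actually uses.
kubectl get deployment myapp -o jsonpath='{.spec.strategy.rollingUpdate}'
```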
The existing Deployment resource will need to be deleted and recreated: applying the file without deleting it first will not fix the issue with rolling updates. Note that the recreation will cause a short downtime.
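A sketch of the migration, assuming the manifest lives at deploy/deployment.yaml and the Deployment is called myapp (both names are placeholders):

```sh
# Remove the object that was created under extensions/v1beta1.
kubectl delete deployment myapp
# Recreate it from the manifest, which now declares apps/v1.
kubectl apply -f deploy/deployment.yaml
```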
@raucao What is a good time for you to pair on this migration?
The title here should be "Fix rolling upgrades on k18s" imo.
Please coordinate times and such on chat. This is not a chat platform. Thanks.
Changed title from "Bump the api version for the Deployment resource to apps/v1" to "Fix rolling upgrades on k18s".

We have performed the migration; downtime was only a few seconds. We also tried a rolling update to a non-existent Docker image tag: the old pod kept running, while the new one was created and failed to pull the image. I then reverted the Docker image to the 1.9.0 tag and the failed pod was deleted, leaving the previous one in place.
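Roughly what the test looked like; the Deployment, container, and image names are placeholders, only the 1.9.0 tag is real:

```sh
# Roll out a tag that does not exist; the old pod keeps serving traffic.
kubectl set image deployment/myapp myapp=example/myapp:no-such-tag
kubectl rollout status deployment/myapp   # the new pod ends up in ImagePullBackOff
# Revert to the known-good tag; the failed pod is removed.
kubectl set image deployment/myapp myapp=example/myapp:1.9.0
```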
While we were at it, I removed one of the worker nodes (see #29 (comment)): I first cordoned an unused node, and then downsized the pool. We are running on two smalls now.
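For the record, the node removal amounts to something like this (the worker node name is a placeholder):

```sh
# Mark the unused node unschedulable so no new pods land on it.
kubectl cordon worker-3
# If it still ran any pods, they could be evicted first:
#   kubectl drain worker-3 --ignore-daemonsets
# The pool was then downsized via the provider, removing the cordoned node.
```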