
If labels are not specified on a Job, kubernetes defaults them to the labels of the underlying Pod template. Helm 3 injects metadata into all resources [0], including an `app.kubernetes.io/managed-by: Helm` label. As a result, kubernetes no longer sees the Job's labels as empty and does not default them to the Pod template's labels. This is a problem since Job labels are depended on by:

- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies

For each Job template this change therefore adds labels matching the underlying Pod template, retaining the same labels that were present with Helm 2.

[0]: https://github.com/helm/helm/pull/7649

Change-Id: Ib5a7eb494fb776d74e1edc767b9522b02453b19d
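A minimal sketch of the resulting Job template pattern, assuming the labels are rendered via helm-toolkit's kubernetes_metadata_labels snippet; the "placement"/"db-sync" names and the image value path are illustrative rather than copied from the actual templates:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: placement-db-sync
    labels:
      # Labels set explicitly on the Job; under Helm 2 these were
      # defaulted from the Pod template below.
  {{ tuple $envAll "placement" "db-sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
  spec:
    template:
      metadata:
        labels:
          # The same label tuple on the Pod template keeps both in sync.
  {{ tuple $envAll "placement" "db-sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
      spec:
        restartPolicy: OnFailure
        containers:
          - name: placement-db-sync
            # Illustrative value path, not necessarily the chart's real key.
            image: {{ .Values.images.tags.placement_db_sync }}

Rendering the same label tuple on both the Job and its Pod template keeps the Job's labels identical to what kubernetes defaulted to under Helm 2, so Armada's delete hooks and wait logic and kubernetes-entrypoint dependencies continue to match.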
---
placement:
  - 0.1.0 Initial Chart
  - 0.1.1 Change helm-toolkit dependency version to ">= 0.1.0"
  - 0.1.2 Establish Nova/Placement dependencies
  - 0.1.3 Use proper default placement image
  - 0.1.4 Add null check condition in placement deployment manifest
  - 0.1.5 Change Issuer to ClusterIssuer
  - 0.1.6 Revert - Change Issuer to ClusterIssuer
  - 0.1.7 Change Issuer to ClusterIssuer
  - 0.2.0 Remove support for releases before T
  - 0.2.1 Add Ussuri release support
  - 0.2.2 Add Victoria and Wallaby releases support
  - 0.2.3 Added helm.sh/hook annotations for Jobs
  - 0.2.4 Helm 3 - Fix Job Labels
...