
If labels are not specified on a Job, Kubernetes defaults them to include the labels of its underlying Pod template. Helm 3 injects metadata into all resources [0], including an `app.kubernetes.io/managed-by: Helm` label. As a result, when Kubernetes sees a Job's labels they are no longer empty, and so they do not get defaulted to the underlying Pod template's labels.

This is a problem since Job labels are depended on by:
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies

Thus, for each Job template, this change adds labels matching the underlying Pod template to retain the same labels that were present with Helm 2.

[0]: https://github.com/helm/helm/pull/7649

Change-Id: Ib5a7eb494fb776d74e1edc767b9522b02453b19d
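For illustration, a minimal sketch of the pattern this change applies to the Job templates, assuming the `helm-toolkit.snippets.kubernetes_metadata_labels` helper from openstack-helm-infra (the `db_sync` component, Job name, and image/command values are illustrative, not taken from this change): the same label tuple is rendered explicitly on the Job itself as well as on its Pod template, so the Job's labels no longer depend on Kubernetes defaulting them from the Pod template.

```yaml
{{- /* Sketch only: render the same labels on both the Job and its
   Pod template, restoring the Helm 2 behavior the commit describes. */ -}}
{{- $envAll := . }}
apiVersion: batch/v1
kind: Job
metadata:
  name: heat-db-sync
  labels:
{{ tuple $envAll "heat" "db_sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
spec:
  template:
    metadata:
      labels:
{{ tuple $envAll "heat" "db_sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
    spec:
      restartPolicy: OnFailure
      containers:
        - name: heat-db-sync
          image: {{ .Values.images.tags.heat_db_sync }}
          command:
            - heat-manage
            - db_sync
```

With the labels present on the Job itself, Armada's pre-upgrade delete hooks and wait logic, and kubernetes-entrypoint dependency checks, can keep selecting Jobs by the same labels they matched under Helm 2.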
---
heat:
  - 0.1.0 Initial Chart
  - 0.1.1 Change helm-toolkit dependency version to ">= 0.1.0"
  - 0.1.2 Remove tls values override for clients_heat
  - 0.1.3 Change Issuer to ClusterIssuer
  - 0.1.4 Revert - Change Issuer to ClusterIssuer
  - 0.1.5 Change Issuer to ClusterIssuer
  - 0.2.0 Remove support for releases before T
  - 0.2.1 Adding rabbitmq TLS logic
  - 0.2.2 Use policies in yaml format
  - 0.2.3 Mount rabbitmq TLS secret
  - 0.2.4 Add Ussuri release support
  - 0.2.5 Add Victoria and Wallaby releases support
  - 0.2.6 Added post-install and post-upgrade helm-hook for jobs
  - 0.2.7 Helm 3 - Fix Job Labels
...