Implement cinder-volume as a snap

This change adds cinder-volume and cinder-volume-ceph charms to manage the
cinder-volume service as a snap that can be configured with multiple
backends.

Change-Id: Id520fc95710c8516aed5eae08cb20c8e54808cc7
Signed-off-by: Guillaume Boutry <guillaume.boutry@canonical.com>
Guillaume Boutry 2025-02-19 18:08:59 +01:00
parent 4d4b4a41b0
commit 93eabbfa72
32 changed files with 3225 additions and 195 deletions


@@ -0,0 +1,10 @@
external-libraries:
- charms.rabbitmq_k8s.v0.rabbitmq
- charms.loki_k8s.v1.loki_push_api
- charms.tempo_k8s.v2.tracing
- charms.tempo_k8s.v1.charm_tracing
- charms.operator_libs_linux.v2.snap
internal-libraries:
- charms.keystone_k8s.v0.identity_credentials
- charms.cinder_volume.v0.cinder_volume
- charms.cinder_ceph_k8s.v0.ceph_access


@@ -0,0 +1,54 @@
# cinder-volume-ceph
## Developing
Create and activate a virtualenv with the development requirements:
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements-dev.txt
## Code overview
Get familiar with the [Charmed Operator Framework](https://juju.is/docs/sdk)
and the [Sunbeam documentation][sunbeam-docs].
The cinder-volume-ceph charm uses the ops_sunbeam library and extends
OSBaseOperatorCharm from that library.
The cinder-volume-ceph charm consumes the cinder-volume relation to plug into
the cinder-volume snap and the ceph relation to connect to an external Ceph
cluster.
The charm configures the cinder-volume service with Ceph as a storage
backend.
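As a rough sketch of that pattern (class and attribute names mirror the `src/charm.py` added in this change; `OSCinderVolumeDriverOperatorCharm` comes from ops_sunbeam and this skeleton is illustrative, not a complete implementation):

```python
import ops
import ops_sunbeam.charm as sunbeam_charm


class ExampleCephDriverCharm(sunbeam_charm.OSCinderVolumeDriverOperatorCharm):
    """Hypothetical driver charm skeleton, not the real implementation."""

    # Service this driver configures inside the cinder-volume snap.
    service_name = "cinder-volume-ceph"

    @property
    def backend_key(self) -> str:
        # Backend settings are namespaced per application in the snap,
        # e.g. "ceph.<application-name>".
        return "ceph." + self.model.app.name


if __name__ == "__main__":
    ops.main(ExampleCephDriverCharm)
```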
## Intended use case
The cinder-volume-ceph charm deploys and configures the OpenStack Block
Storage service with Ceph as the storage backend in a snap-based deployment.
## Roadmap
TODO
## Testing
The Python operator framework includes a very nice harness for testing
operator behaviour without a full deployment. Run the tests with:
tox -e py3
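For reference, a self-contained sketch of the harness pattern (the charm's
real tests under `tests/unit/` build on this through `ops_sunbeam.test_utils`;
the charm class here is a stand-in, not this charm):

```python
import ops
import ops.testing


class MinimalCharm(ops.CharmBase):
    """Tiny stand-in charm used only to illustrate the harness API."""

    def __init__(self, framework):
        super().__init__(framework)
        self.framework.observe(self.on.start, self._on_start)

    def _on_start(self, event):
        self.unit.status = ops.ActiveStatus("ready")


def test_start_sets_active():
    harness = ops.testing.Harness(MinimalCharm, meta="name: minimal")
    harness.begin()
    harness.charm.on.start.emit()
    assert isinstance(harness.charm.unit.status, ops.ActiveStatus)
    assert harness.charm.unit.status.message == "ready"
    harness.cleanup()
```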
## Deployment
This project uses tox for building and development tasks. To build the
charm, run:
tox -e build
To deploy the local test instance:
juju deploy ./cinder-volume-ceph.charm
<!-- LINKS -->
[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,58 @@
# cinder-volume-ceph
## Description
The cinder-volume-ceph charm is an operator that manages the Cinder volume
service's integration with a Ceph storage backend in a snap-based deployment.
## Usage
### Deployment
cinder-volume-ceph is deployed using the command below:
juju deploy cinder-volume-ceph --trust
Now connect the cinder-volume-ceph application to the cinder-volume and Ceph
services:
juju relate cinder-volume:cinder-volume cinder-volume-ceph:cinder-volume
juju relate ceph-mon:ceph cinder-volume-ceph:ceph
### Configuration
This section covers common and/or important configuration options. See file
`config.yaml` for the full list of options, along with their descriptions and
default values. See the [Juju documentation][juju-docs-config-apps] for details
on configuring applications.
### Actions
This section covers Juju [actions][juju-docs-actions] supported by the charm.
Actions allow specific operations to be performed on a per-unit basis. To
display action descriptions run `juju actions cinder-volume-ceph`. If the charm is not
deployed then see file `actions.yaml`.
## Relations
cinder-volume-ceph requires the following relations:
`cinder-volume`: To connect to Cinder service
`ceph`: To connect to Ceph storage backend
## Contributing
Please see the [Juju SDK docs](https://juju.is/docs/sdk) for guidelines
on enhancements to this charm following best practice guidelines, and
[CONTRIBUTING.md][contributors-guide] for developer guidance.
## Bugs
Please report bugs on [Launchpad][lp-bugs-charm-cinder-volume-ceph].
<!-- LINKS -->
[contributors-guide]: https://opendev.org/openstack/charm-cinder-volume-ceph/src/branch/main/CONTRIBUTING.md
[juju-docs-actions]: https://jaas.ai/docs/actions
[juju-docs-config-apps]: https://juju.is/docs/configuring-applications
[lp-bugs-charm-cinder-volume-ceph]: https://bugs.launchpad.net/sunbeam-charms


@@ -0,0 +1,311 @@
type: charm
name: cinder-volume-ceph
summary: OpenStack volume service - Ceph backend
description: |
Cinder is the OpenStack project that provides volume management for
instances. This charm provides integration with Ceph storage
backends.
assumes:
- juju >= 3.1
links:
source:
- https://opendev.org/openstack/sunbeam-charms
issues:
- https://bugs.launchpad.net/sunbeam-charms
base: ubuntu@24.04
platforms:
amd64:
subordinate: true
config:
options:
ceph-osd-replication-count:
default: 3
type: int
description: |
This value dictates the number of replicas ceph must make of any
object it stores within the cinder rbd pool. Of course, this only
applies if using Ceph as a backend store. Note that once the cinder
rbd pool has been created, changing this value will not have any
effect (although it can be changed in ceph by manually configuring
your ceph cluster).
ceph-pool-weight:
type: int
default: 20
description: |
Defines a relative weighting of the pool as a percentage of the total
amount of data in the Ceph cluster. This effectively weights the number
of placement groups for the pool created to be appropriately portioned
to the amount of data expected. For example, if the ephemeral volumes
for the OpenStack compute instances are expected to take up 20% of the
overall configuration then this value would be specified as 20. Note -
it is important to choose an appropriate value for the pool weight as
this directly affects the number of placement groups which will be
created for the pool. The number of placement groups for a pool can
only be increased, never decreased - so it is important to identify the
percent of data that will likely reside in the pool.
volume-backend-name:
default: null
type: string
description: |
Volume backend name for the backend. The default value is the
application name in the Juju model, e.g. "cinder-ceph-mybackend"
if it's deployed as `juju deploy cinder-ceph cinder-ceph-mybackend`.
A common backend name can be set to multiple backends with the
same characters so that those can be treated as a single virtual
backend associated with a single volume type.
backend-availability-zone:
default: null
type: string
description: |
Availability zone name of this volume backend. If set, it will
override the default availability zone. Supported for Pike or
newer releases.
restrict-ceph-pools:
default: false
type: boolean
description: |
Optionally restrict Ceph key permissions to access pools as required.
rbd-pool-name:
default: null
type: string
description: |
Optionally specify an existing rbd pool that cinder should map to.
rbd-flatten-volume-from-snapshot:
default: false
type: boolean
description: |
Flatten volumes created from snapshots to remove dependency from
volume to snapshot.
rbd-mirroring-mode:
type: string
default: pool
description: |
The RBD mirroring mode used for the Ceph pool. This option is only used
with 'replicated' pool type, as it's not supported for 'erasure-coded'
pool type - valid values: 'pool' and 'image'
pool-type:
type: string
default: replicated
description: |
Ceph pool type to use for storage - valid values include `replicated`
and `erasure-coded`.
ec-profile-name:
type: string
default: null
description: |
Name for the EC profile to be created for the EC pools. If not defined
a profile name will be generated based on the name of the pool used by
the application.
ec-rbd-metadata-pool:
type: string
default: null
description: |
Name of the metadata pool to be created (for RBD use-cases). If not
defined a metadata pool name will be generated based on the name of
the data pool used by the application. The metadata pool is always
replicated, not erasure coded.
ec-profile-k:
type: int
default: 1
description: |
Number of data chunks that will be used for EC data pool. K+M factors
should never be greater than the number of available zones (or hosts)
for balancing.
ec-profile-m:
type: int
default: 2
description: |
Number of coding chunks that will be used for EC data pool. K+M factors
should never be greater than the number of available zones (or hosts)
for balancing.
ec-profile-locality:
type: int
default: null
description: |
(lrc plugin - l) Group the coding and data chunks into sets of size l.
For instance, for k=4 and m=2, when l=3 two groups of three are created.
Each set can be recovered without reading chunks from another set. Note
that using the lrc plugin does incur more raw storage usage than isa or
jerasure in order to reduce the cost of recovery operations.
ec-profile-crush-locality:
type: string
default: null
description: |
(lrc plugin) The type of the crush bucket in which each set of chunks
defined by l will be stored. For instance, if it is set to rack, each
group of l chunks will be placed in a different rack. It is used to
create a CRUSH rule step such as step choose rack. If it is not set,
no such grouping is done.
ec-profile-durability-estimator:
type: int
default: null
description: |
(shec plugin - c) The number of parity chunks each of which includes
each data chunk in its calculation range. The number is used as a
durability estimator. For instance, if c=2, 2 OSDs can be down
without losing data.
ec-profile-helper-chunks:
type: int
default: null
description: |
(clay plugin - d) Number of OSDs requested to send data during
recovery of a single chunk. d needs to be chosen such that
k+1 <= d <= k+m-1. The larger the d, the better the savings.
ec-profile-scalar-mds:
type: string
default: null
description: |
(clay plugin) specifies the plugin that is used as a building
block in the layered construction. It can be one of jerasure,
isa, shec (defaults to jerasure).
ec-profile-plugin:
type: string
default: jerasure
description: |
EC plugin to use for this application's pool. The following plugins are
acceptable - jerasure, lrc, isa, shec, clay.
ec-profile-technique:
type: string
default: null
description: |
EC profile technique used for this application's pool - will be
validated based on the plugin configured via ec-profile-plugin.
Supported techniques are `reed_sol_van`, `reed_sol_r6_op`,
`cauchy_orig`, `cauchy_good`, `liber8tion` for jerasure,
`reed_sol_van`, `cauchy` for isa and `single`, `multiple`
for shec.
ec-profile-device-class:
type: string
default: null
description: |
Device class from CRUSH map to use for placement groups for
erasure profile - valid values: ssd, hdd or nvme (or leave
unset to not use a device class).
bluestore-compression-algorithm:
type: string
default: null
description: |
Compressor to use (if any) for pools requested by this charm.
.
NOTE: The ceph-osd charm sets a global default for this value (defaults
to 'lz4' unless configured by the end user) which will be used unless
specified for individual pools.
bluestore-compression-mode:
type: string
default: null
description: |
Policy for using compression on pools requested by this charm.
.
'none' means never use compression.
'passive' means use compression when clients hint that data is
compressible.
'aggressive' means use compression unless clients hint that
data is not compressible.
'force' means use compression under all circumstances even if the clients
hint that the data is not compressible.
bluestore-compression-required-ratio:
type: float
default: null
description: |
The ratio of the size of the data chunk after compression relative to the
original size must be at least this small in order to store the
compressed version on pools requested by this charm.
bluestore-compression-min-blob-size:
type: int
default: null
description: |
Chunks smaller than this are never compressed on pools requested by
this charm.
bluestore-compression-min-blob-size-hdd:
type: int
default: null
description: |
Value of bluestore compression min blob size for rotational media on
pools requested by this charm.
bluestore-compression-min-blob-size-ssd:
type: int
default: null
description: |
Value of bluestore compression min blob size for solid state media on
pools requested by this charm.
bluestore-compression-max-blob-size:
type: int
default: null
description: |
Chunks larger than this are broken into smaller blobs sizing bluestore
compression max blob size before being compressed on pools requested by
this charm.
bluestore-compression-max-blob-size-hdd:
type: int
default: null
description: |
Value of bluestore compression max blob size for rotational media on
pools requested by this charm.
bluestore-compression-max-blob-size-ssd:
type: int
default: null
description: |
Value of bluestore compression max blob size for solid state media on
pools requested by this charm.
image-volume-cache-enabled:
type: boolean
default: false
description: |
Enable the image volume cache.
image-volume-cache-max-size-gb:
type: int
default: 0
description: |
Max size of the image volume cache in GB. 0 means unlimited.
image-volume-cache-max-count:
type: int
default: 0
description: |
Max number of entries allowed in the image volume cache. 0 means
unlimited.
requires:
ceph:
interface: ceph-client
cinder-volume:
interface: cinder-volume
scope: container
limit: 1
tracing:
interface: tracing
optional: true
limit: 1
provides:
ceph-access:
interface: cinder-ceph-key
peers:
peers:
interface: cinder-peer
parts:
update-certificates:
plugin: nil
override-build: |
apt update
apt install -y ca-certificates
update-ca-certificates
charm:
after:
- update-certificates
build-packages:
- git
- libffi-dev
- libssl-dev
- pkg-config
- rustc
- cargo
charm-binary-python-packages:
- cryptography
- jsonschema
- pydantic
- jinja2
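One non-obvious interaction among the pool options above: with
`pool-type: erasure-coded`, the pool cinder actually addresses is the
replicated metadata pool. The charm's `CinderCephConfigurationContext`
(in `src/charm.py` below) derives it roughly like this sketch, assuming the
`erasure-coded` literal documented for `pool-type`:

```python
def derive_rbd_pool(config: dict, app_name: str) -> str:
    """Reproduce the pool-name derivation used by the Ceph config context."""
    data_pool = config.get("rbd-pool-name") or app_name
    if config.get("pool-type") == "erasure-coded":
        # RBD metadata lives in a replicated pool alongside the EC data pool.
        return config.get("ec-rbd-metadata-pool") or f"{data_pool}-metadata"
    return data_pool
```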


@@ -0,0 +1,3 @@
# This file is used to trigger a build.
# Change uuid to trigger a new build.
37af2d20-53dc-11ef-97a3-b37540f14c92


@@ -0,0 +1,27 @@
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos. See the 'global' dir contents for available
# choices of *requirements.txt files for OpenStack Charms:
# https://github.com/openstack-charmers/release-tools
#
cryptography
jinja2
pydantic
lightkube
lightkube-models
requests # Drop - not needed in storage backend interface.
ops
git+https://opendev.org/openstack/charm-ops-interface-tls-certificates#egg=interface_tls_certificates
# Note: Required for cinder-k8s, cinder-ceph-k8s, glance-k8s, nova-k8s
git+https://opendev.org/openstack/charm-ops-interface-ceph-client#egg=interface_ceph_client
# Charmhelpers is only present as interface_ceph_client uses it.
git+https://github.com/juju/charm-helpers.git#egg=charmhelpers
# TODO
netifaces # Drop when charmhelpers dependency is removed.
# From ops_sunbeam
tenacity


@@ -0,0 +1,297 @@
#!/usr/bin/env python3
#
# Copyright 2025 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Cinder Ceph Operator Charm.
This charm provides Cinder <-> Ceph integration as part
of an OpenStack deployment.
"""
import logging
import uuid
from typing import (
Callable,
Mapping,
)
import charms.cinder_ceph_k8s.v0.ceph_access as sunbeam_ceph_access # noqa
import ops
import ops.charm
import ops_sunbeam.charm as charm
import ops_sunbeam.config_contexts as config_contexts
import ops_sunbeam.guard as sunbeam_guard
import ops_sunbeam.relation_handlers as relation_handlers
import ops_sunbeam.relation_handlers as sunbeam_rhandlers
import ops_sunbeam.tracing as sunbeam_tracing
from ops.model import (
Relation,
SecretRotate,
)
logger = logging.getLogger(__name__)
@sunbeam_tracing.trace_type
class CinderCephConfigurationContext(config_contexts.ConfigContext):
"""Configuration context for cinder parameters."""
charm: "CinderVolumeCephOperatorCharm"
def context(self) -> dict:
"""Generate context information for cinder config."""
config = self.charm.model.config.get
data_pool_name = config("rbd-pool-name") or self.charm.app.name
if config("pool-type") == sunbeam_rhandlers.ERASURE_CODED:
pool_name = (
config("ec-rbd-metadata-pool") or f"{data_pool_name}-metadata"
)
else:
pool_name = data_pool_name
backend_name = config("volume-backend-name") or self.charm.app.name
return {
"rbd_pool": pool_name,
"rbd_user": self.charm.app.name,
"backend_name": backend_name,
"backend_availability_zone": config("backend-availability-zone"),
"secret_uuid": self.charm.get_secret_uuid() or "unknown",
}
@sunbeam_tracing.trace_type
class CephAccessProvidesHandler(sunbeam_rhandlers.RelationHandler):
"""Handler for identity service relation."""
interface: sunbeam_ceph_access.CephAccessProvides
def __init__(
self,
charm: charm.OSBaseOperatorCharm,
relation_name: str,
callback_f: Callable,
):
super().__init__(charm, relation_name, callback_f)
def setup_event_handler(self):
"""Configure event handlers for an Identity service relation."""
logger.debug("Setting up Ceph Access event handler")
ceph_access_svc = sunbeam_tracing.trace_type(
sunbeam_ceph_access.CephAccessProvides
)(
self.charm,
self.relation_name,
)
self.framework.observe(
ceph_access_svc.on.ready_ceph_access_clients,
self._on_ceph_access_ready,
)
return ceph_access_svc
def _on_ceph_access_ready(self, event) -> None:
"""Handles AMQP change events."""
# Ready is only emitted when the interface considers
# that the relation is complete.
self.callback_f(event)
@property
def ready(self) -> bool:
"""Report if relation is ready."""
return True
@sunbeam_tracing.trace_sunbeam_charm
class CinderVolumeCephOperatorCharm(charm.OSCinderVolumeDriverOperatorCharm):
"""Cinder/Ceph Operator charm."""
service_name = "cinder-volume-ceph"
client_secret_key = "secret-uuid"
ceph_access_relation_name = "ceph-access"
def configure_charm(self, event: ops.EventBase):
"""Catchall handler to configure charm services."""
super().configure_charm(event)
if self.has_ceph_relation() and self.ceph.ready:
logger.info("CONFIG changed and ceph ready: calling request pools")
self.ceph.request_pools(event)
@property
def backend_key(self) -> str:
"""Return the backend key."""
return "ceph." + self.model.app.name
def get_relation_handlers(
self, handlers: list[relation_handlers.RelationHandler] | None = None
) -> list[relation_handlers.RelationHandler]:
"""Relation handlers for the service."""
handlers = handlers or []
self.ceph = relation_handlers.CephClientHandler(
self,
"ceph",
self.configure_charm,
allow_ec_overwrites=True,
app_name="rbd",
mandatory="ceph" in self.mandatory_relations,
)
handlers.append(self.ceph)
self.ceph_access = CephAccessProvidesHandler(
self,
"ceph-access",
self.process_ceph_access_client_event,
) # type: ignore
handlers.append(self.ceph_access)
return super().get_relation_handlers(handlers)
def has_ceph_relation(self) -> bool:
"""Returns whether or not the application has been related to Ceph.
:return: True if the ceph relation has been made, False otherwise.
"""
return self.model.get_relation("ceph") is not None
def get_backend_configuration(self) -> Mapping:
"""Return the backend configuration."""
try:
contexts = self.contexts()
return {
"volume-backend-name": contexts.cinder_ceph.backend_name,
"backend-availability-zone": contexts.cinder_ceph.backend_availability_zone,
"mon-hosts": contexts.ceph.mon_hosts,
"rbd-pool": contexts.cinder_ceph.rbd_pool,
"rbd-user": contexts.cinder_ceph.rbd_user,
"rbd-secret-uuid": contexts.cinder_ceph.secret_uuid,
"rbd-key": contexts.ceph.key,
"auth": contexts.ceph.auth,
}
except AttributeError as e:
raise sunbeam_guard.WaitingExceptionError(
"Data missing: {}".format(e.name)
)
@property
def config_contexts(self) -> list[config_contexts.ConfigContext]:
"""Configuration contexts for the operator."""
contexts = super().config_contexts
contexts.append(CinderCephConfigurationContext(self, "cinder_ceph"))
return contexts
def _set_or_update_rbd_secret(
self,
ceph_key: str,
scope: dict = {},
rotate: SecretRotate = SecretRotate.NEVER,
) -> str:
"""Create ceph access secret or update it.
Create ceph access secret or if it already exists check the contents
and update them if needed.
"""
rbd_secret_uuid_id = self.peers.get_app_data(self.client_secret_key)
if rbd_secret_uuid_id:
secret = self.model.get_secret(id=rbd_secret_uuid_id)
secret_data = secret.get_content(refresh=True)
if secret_data.get("key") != ceph_key:
secret_data["key"] = ceph_key
secret.set_content(secret_data)
else:
secret = self.model.app.add_secret(
{
"uuid": str(uuid.uuid4()),
"key": ceph_key,
},
label=self.client_secret_key,
rotate=rotate,
)
self.peers.set_app_data(
{
self.client_secret_key: secret.id,
}
)
if "relation" in scope:
secret.grant(scope["relation"])
return secret.id
def get_secret_uuid(self) -> str | None:
"""Get the secret uuid."""
uuid = None
rbd_secret_uuid_id = self.peers.get_app_data(self.client_secret_key)
if rbd_secret_uuid_id:
secret = self.model.get_secret(id=rbd_secret_uuid_id)
secret_data = secret.get_content(refresh=True)
uuid = secret_data["uuid"]
return uuid
def configure_app_leader(self, event: ops.framework.EventBase):
"""Run global app setup.
These are tasks that should only be run once per application and only
the leader runs them.
"""
if self.ceph.ready:
self._set_or_update_rbd_secret(self.ceph.key)
self.set_leader_ready()
self.broadcast_ceph_access_credentials()
else:
raise sunbeam_guard.WaitingExceptionError(
"Ceph relation not ready"
)
def can_service_requests(self) -> bool:
"""Check if unit can process client requests."""
if self.bootstrapped() and self.unit.is_leader():
logger.debug("Can service client requests")
return True
else:
logger.debug(
"Cannot service client requests. "
"Bootstrapped: {} Leader {}".format(
self.bootstrapped(), self.unit.is_leader()
)
)
return False
def send_ceph_access_credentials(self, relation: Relation):
"""Send clients a link to the secret and grant them access."""
rbd_secret_uuid_id = self.peers.get_app_data(self.client_secret_key)
secret = self.model.get_secret(id=rbd_secret_uuid_id)
secret.grant(relation)
self.ceph_access.interface.set_ceph_access_credentials(
self.ceph_access_relation_name, relation.id, rbd_secret_uuid_id
)
def process_ceph_access_client_event(self, event: ops.framework.EventBase):
"""Inform a single client of the access data."""
self.broadcast_ceph_access_credentials(relation_id=event.relation.id)
def broadcast_ceph_access_credentials(
self, relation_id: str | None = None
) -> None:
"""Send ceph access data to clients."""
logger.debug("Checking for outstanding client requests")
if not self.can_service_requests():
return
for relation in self.framework.model.relations[
self.ceph_access_relation_name
]:
if relation_id and relation.id == relation_id:
self.send_ceph_access_credentials(relation)
elif not relation_id:
self.send_ceph_access_credentials(relation)
if __name__ == "__main__": # pragma: nocover
ops.main(CinderVolumeCephOperatorCharm)
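On the consuming side of `ceph-access`, the relation only carries a secret
URI (the unit tests below check an `access-credentials` key that matches
`secret:...`). A minimal sketch of dereferencing it with plain ops, assuming
those key names and a consumer charm written against the same relation:

```python
import ops


def read_ceph_access(charm: ops.CharmBase,
                     relation_name: str = "ceph-access") -> dict | None:
    """Resolve the rbd secret published over the ceph-access relation.

    Sketch only: assumes the provider stores the secret URI under the
    "access-credentials" application-data key and that the secret content
    holds "uuid" and "key", as set up by the charm above.
    """
    relation = charm.model.get_relation(relation_name)
    if relation is None or relation.app is None:
        return None
    secret_uri = relation.data[relation.app].get("access-credentials")
    if not secret_uri:
        return None
    secret = charm.model.get_secret(id=secret_uri)
    # Content includes the cephx key and the libvirt secret uuid.
    return secret.get_content(refresh=True)
```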


@@ -0,0 +1,16 @@
#
# Copyright 2025 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit testing module for Cinder Volume Ceph operator."""


@@ -0,0 +1,134 @@
#!/usr/bin/env python3
# Copyright 2025 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for Cinder Ceph operator charm class."""
from unittest.mock import (
MagicMock,
Mock,
patch,
)
import charm
import ops.testing
import ops_sunbeam.test_utils as test_utils
class _CinderVolumeCephOperatorCharm(charm.CinderVolumeCephOperatorCharm):
"""Charm wrapper for test usage."""
openstack_release = "wallaby"
def __init__(self, framework):
self.seen_events = []
super().__init__(framework)
def _log_event(self, event):
self.seen_events.append(type(event).__name__)
def add_complete_cinder_volume_relation(harness: ops.testing.Harness) -> int:
"""Add a complete cinder-volume relation to the charm."""
return harness.add_relation(
"cinder-volume",
"cinder-volume",
unit_data={
"snap-name": "cinder-volume",
},
)
class TestCinderCephOperatorCharm(test_utils.CharmTestCase):
"""Test cases for CinderCephOperatorCharm class."""
PATCHES = []
def setUp(self):
"""Setup fixtures ready for testing."""
super().setUp(charm, self.PATCHES)
self.mock_event = MagicMock()
self.snap = Mock()
snap_patch = patch.object(
_CinderVolumeCephOperatorCharm,
"_import_snap",
Mock(return_value=self.snap),
)
snap_patch.start()
self.harness = test_utils.get_harness(
_CinderVolumeCephOperatorCharm,
container_calls=self.container_calls,
)
mock_get_platform = patch(
"charmhelpers.osplatform.get_platform", return_value="ubuntu"
)
mock_get_platform.start()
self.addCleanup(mock_get_platform.stop)
self.addCleanup(snap_patch.stop)
self.addCleanup(self.harness.cleanup)
def test_all_relations(self):
"""Test charm in context of full set of relations."""
self.harness.begin_with_initial_hooks()
test_utils.add_complete_ceph_relation(self.harness)
add_complete_cinder_volume_relation(self.harness)
self.assertSetEqual(
self.harness.charm.get_mandatory_relations_not_ready(
self.mock_event
),
set(),
)
def test_ceph_access(self):
"""Test charm provides secret via ceph-access."""
cinder_volume_snap_mock = MagicMock()
cinder_volume_snap_mock.present = False
self.snap.SnapState.Latest = "latest"
self.snap.SnapCache.return_value = {
"cinder-volume": cinder_volume_snap_mock
}
self.harness.begin_with_initial_hooks()
self.harness.set_leader()
test_utils.add_complete_ceph_relation(self.harness)
add_complete_cinder_volume_relation(self.harness)
access_rel = self.harness.add_relation(
"ceph-access", "openstack-hypervisor", unit_data={"oui": "non"}
)
self.assertSetEqual(
self.harness.charm.get_mandatory_relations_not_ready(
self.mock_event
),
set(),
)
expect_settings = {
"ceph.cinder-volume-ceph": {
"volume-backend-name": "cinder-volume-ceph",
"backend-availability-zone": None,
"mon-hosts": "192.0.2.2",
"rbd-pool": "cinder-volume-ceph",
"rbd-user": "cinder-volume-ceph",
"rbd-secret-uuid": "unknown",
"rbd-key": "AQBUfpVeNl7CHxAA8/f6WTcYFxW2dJ5VyvWmJg==",
"auth": "cephx",
}
}
cinder_volume_snap_mock.set.assert_any_call(
expect_settings, typed=True
)
rel_data = self.harness.get_relation_data(
access_rel, self.harness.charm.unit.app.name
)
self.assertRegex(rel_data["access-credentials"], "^secret:.*")
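The `expect_settings` fixture above captures the contract with the snap: each
backend is namespaced under `ceph.<application-name>` and written with
`Snap.set(..., typed=True)`. A standalone sketch of that write path using
charms.operator_libs_linux.v2.snap (the charm itself performs this through its
ops_sunbeam base class; the helper below is illustrative only):

```python
from charms.operator_libs_linux.v2 import snap


def push_backend_config(app_name: str, backend_config: dict) -> None:
    """Write a Ceph backend's settings into the cinder-volume snap.

    Sketch only: mirrors the expectations encoded in the unit test above.
    """
    cinder_volume = snap.SnapCache()["cinder-volume"]
    if not cinder_volume.present:
        # Nothing to configure until the principal has installed the snap.
        return
    # Keys are namespaced per backend, e.g. "ceph.cinder-volume-ceph".
    cinder_volume.set({f"ceph.{app_name}": backend_config}, typed=True)
```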

charms/cinder-volume/.gitignore

@@ -0,0 +1 @@
!lib/charms/cinder_volume/


@@ -0,0 +1,11 @@
external-libraries:
- charms.operator_libs_linux.v2.snap
- charms.data_platform_libs.v0.data_interfaces
- charms.rabbitmq_k8s.v0.rabbitmq
- charms.loki_k8s.v1.loki_push_api
- charms.tempo_k8s.v2.tracing
- charms.tempo_k8s.v1.charm_tracing
internal-libraries:
- charms.keystone_k8s.v0.identity_credentials
- charms.cinder_k8s.v0.storage_backend
templates: []


@@ -0,0 +1,52 @@
# cinder-volume
## Developing
Create and activate a virtualenv with the development requirements:
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements-dev.txt
## Code overview
Get familiar with the [Charmed Operator Framework](https://juju.is/docs/sdk)
and the [Sunbeam documentation][sunbeam-docs].
The cinder-volume charm uses the ops_sunbeam library and extends
OSBaseOperatorCharm from that library.
The cinder-volume charm consumes the database relation to connect to the
database, the amqp relation to connect to RabbitMQ, and the
identity-credentials relation to connect to Keystone.
The charm installs and manages the cinder-volume snap, which runs the
cinder-volume service.
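A minimal sketch of the snap-handling side, using
charms.operator_libs_linux.v2.snap and the `snap-name`/`snap-channel` options
defined in charmcraft.yaml (the real install path lives in the ops_sunbeam
base charm; this helper is illustrative only):

```python
from charms.operator_libs_linux.v2 import snap


def ensure_cinder_volume_snap(name: str = "cinder-volume",
                              channel: str = "2024.1/edge") -> snap.Snap:
    """Install or refresh the cinder-volume snap.

    Sketch only: defaults mirror the charm's snap-name/snap-channel options.
    """
    cinder_volume = snap.SnapCache()[name]
    if not cinder_volume.present:
        cinder_volume.ensure(snap.SnapState.Latest, channel=channel)
    return cinder_volume
```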
## Intended use case
cinder-volume charm deploys and configures OpenStack Block storage service.
## Roadmap
TODO
## Testing
The Python operator framework includes a very nice harness for testing
operator behaviour without a full deployment. Run the tests with:
tox -e py3
## Deployment
This project uses tox for building and development tasks. To build the
charm, run:
tox -e build
To deploy the local test instance:
juju deploy ./cinder-volume.charm
<!-- LINKS -->
[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,55 @@
# cinder-volume
## Description
The cinder-volume charm is an operator that manages the cinder-volume
service in a snap-based deployment.
## Usage
### Deployment
cinder-volume is deployed using the command below:
juju deploy cinder-volume
Now connect the cinder-volume application to the database, messaging,
identity and Cinder services:
juju relate mysql:database cinder-volume:database
juju relate rabbitmq:amqp cinder-volume:amqp
juju relate keystone:identity-credentials cinder-volume:identity-credentials
juju relate cinder:storage-backend cinder-volume:storage-backend
### Configuration
This section covers common and/or important configuration options. See file
`config.yaml` for the full list of options, along with their descriptions and
default values. See the [Juju documentation][juju-docs-config-apps] for details
on configuring applications.
### Actions
This section covers Juju [actions][juju-docs-actions] supported by the charm.
Actions allow specific operations to be performed on a per-unit basis. To
display action descriptions run `juju actions cinder-volume`. If the charm is not
deployed then see file `actions.yaml`.
## Relations
cinder-volume requires the following relations:
`amqp`: To connect to RabbitMQ
`database`: To connect to MySQL
`identity-credentials`: To connect to Keystone
## Contributing
Please see the [Juju SDK docs](https://juju.is/docs/sdk) for guidelines
on enhancements to this charm following best practice guidelines, and
[CONTRIBUTING.md](contributors-guide) for developer guidance.
<!-- LINKS -->
[juju-docs-actions]: https://jaas.ai/docs/actions
[juju-docs-config-apps]: https://juju.is/docs/configuring-applications


@@ -0,0 +1,106 @@
type: charm
name: cinder-volume
summary: OpenStack volume service
description: |
Cinder is the OpenStack project that provides volume management for
instances. This charm provides Cinder Volume service.
assumes:
- juju >= 3.1
links:
source:
- https://opendev.org/openstack/sunbeam-charms
issues:
- https://bugs.launchpad.net/sunbeam-charms
base: ubuntu@24.04
platforms:
amd64:
config:
options:
debug:
type: boolean
default: false
description: Enable debug logging.
snap-name:
default: cinder-volume
type: string
description: Name of the snap to install.
snap-channel:
default: 2024.1/edge
type: string
rabbit-user:
type: string
default: null
description: Username to request access on rabbitmq-server.
rabbit-vhost:
type: string
default: null
description: RabbitMQ virtual host to request access on rabbitmq-server.
enable-telemetry-notifications:
type: boolean
default: false
description: Enable notifications to send to telemetry.
image-volume-cache-enabled:
type: boolean
default: false
description: |
Enable the image volume cache.
image-volume-cache-max-size-gb:
type: int
default: 0
description: |
Max size of the image volume cache in GB. 0 means unlimited.
image-volume-cache-max-count:
type: int
default: 0
description: |
Max number of entries allowed in the image volume cache. 0 means
unlimited.
default-volume-type:
type: string
default: null
description: |
Default volume type to use when creating volumes.
requires:
amqp:
interface: rabbitmq
database:
interface: mysql_client
limit: 1
identity-credentials:
interface: keystone-credentials
tracing:
interface: tracing
optional: true
limit: 1
provides:
storage-backend:
interface: cinder-backend
cinder-volume:
interface: cinder-volume
parts:
update-certificates:
plugin: nil
override-build: |
apt update
apt install -y ca-certificates
update-ca-certificates
charm:
after:
- update-certificates
build-packages:
- git
- libffi-dev
- libssl-dev
- pkg-config
- rustc
- cargo
charm-binary-python-packages:
- cryptography
- jsonschema
- pydantic
- jinja2


@@ -0,0 +1,270 @@
"""CinderVolume Provides and Requires module.
This library contains the Requires and Provides classes for handling
the cinder-volume interface.
Import `CinderVolumeRequires` in your charm, with the charm object and the
relation name:
- self
- "cinder-volume"
- backend_key
Three events are also available to respond to:
- connected
- ready
- goneaway
A basic example showing the usage of this relation follows:
```
from charms.cinder_volume.v0.cinder_volume import CinderVolumeRequires
class CinderVolumeDriver(CharmBase):
def __init__(self, *args):
super().__init__(*args)
# CinderVolume Requires
self.cinder_volume = CinderVolumeRequires(
self,
relation_name="cinder-volume",
backend_key="ceph.monoceph",
)
self.framework.observe(
self.cinder_volume.on.connected, self._on_cinder_volume_connected
)
self.framework.observe(
self.cinder_volume.on.ready, self._on_cinder_volume_ready
)
self.framework.observe(
self.cinder_volume.on.goneaway, self._on_cinder_volume_goneaway
)
def _on_cinder_volume_connected(self, event):
'''React to the CinderVolume connected event.
This event happens when CinderVolume relation is added to the
model before credentials etc have been provided.
'''
# Do something before the relation is complete
pass
def _on_cinder_volume_ready(self, event):
'''React to the CinderVolume ready event.
This event happens when a CinderVolume relation is ready.
'''
# CinderVolume Relation is ready. Configure services or suchlike
pass
def _on_cinder_volume_goneaway(self, event):
'''React to the CinderVolume goneaway event.
This event happens when a CinderVolume relation is broken.
'''
# CinderVolume relation has gone away. Shut down services or suchlike
pass
```
"""
import logging
import ops
# The unique Charmhub library identifier, never change it
LIBID = "9aa142db811f4f8588a257d7dc6dff86"
# Increment this major API version when introducing breaking changes
LIBAPI = 0
# Increment this PATCH version before using `charmcraft publish-lib` or reset
# to 0 if you are raising the major API version
LIBPATCH = 1
logger = logging.getLogger(__name__)
BACKEND_KEY = "backend"
SNAP_KEY = "snap-name"
class CinderVolumeConnectedEvent(ops.RelationJoinedEvent):
"""CinderVolume connected Event."""
pass
class CinderVolumeReadyEvent(ops.RelationChangedEvent):
"""CinderVolume ready for use Event."""
pass
class CinderVolumeGoneAwayEvent(ops.RelationBrokenEvent):
"""CinderVolume relation has gone-away Event"""
pass
class CinderVolumeRequiresEvents(ops.ObjectEvents):
"""Events class for `on`"""
connected = ops.EventSource(CinderVolumeConnectedEvent)
ready = ops.EventSource(CinderVolumeReadyEvent)
goneaway = ops.EventSource(CinderVolumeGoneAwayEvent)
def remote_unit(relation: ops.Relation) -> ops.Unit | None:
if len(relation.units) == 0:
return None
return list(relation.units)[0]
class CinderVolumeRequires(ops.Object):
"""
CinderVolumeRequires class
"""
on = CinderVolumeRequiresEvents() # type: ignore
def __init__(
self, charm: ops.CharmBase, relation_name: str, backend_key: str
):
super().__init__(charm, relation_name)
self.charm = charm
self.relation_name = relation_name
self.backend_key = backend_key
rel_observer = self.charm.on[relation_name]
self.framework.observe(
rel_observer.relation_joined,
self._on_cinder_volume_relation_joined,
)
self.framework.observe(
rel_observer.relation_changed,
self._on_cinder_volume_relation_changed,
)
self.framework.observe(
rel_observer.relation_departed,
self._on_cinder_volume_relation_changed,
)
self.framework.observe(
rel_observer.relation_broken,
self._on_cinder_volume_relation_broken,
)
def _on_cinder_volume_relation_joined(self, event):
"""CinderVolume relation joined."""
logging.debug("CinderVolumeRequires on_joined")
self.on.connected.emit(event.relation)
def _on_cinder_volume_relation_changed(self, event):
"""CinderVolume relation changed."""
logging.debug("CinderVolumeRequires on_changed")
if self.provider_ready():
self.on.ready.emit(event.relation)
def _on_cinder_volume_relation_broken(self, event):
"""CinderVolume relation broken."""
logging.debug("CinderVolumeRequires on_broken")
self.on.goneaway.emit(event.relation)
def snap_name(self) -> str | None:
"""Return the snap name."""
relation = self.model.get_relation(self.relation_name)
if relation is None:
return None
sub_unit = remote_unit(relation)
if sub_unit is None:
logger.debug("No remote unit yet")
return None
return relation.data[sub_unit].get(SNAP_KEY)
def provider_ready(self) -> bool:
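"""Report whether the provider has published its snap name."""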
return self.snap_name() is not None
def set_ready(self) -> None:
"""Communicate Cinder backend is ready."""
logging.debug("Signaling backend has been configured")
relation = self.model.get_relation(self.relation_name)
if relation is not None:
relation.data[self.model.unit][BACKEND_KEY] = self.backend_key
class DriverReadyEvent(ops.RelationChangedEvent):
"""Driver Ready Event."""
class DriverGoneEvent(ops.RelationBrokenEvent):
"""Driver Gone Event."""
class CinderVolumeClientEvents(ops.ObjectEvents):
"""Events class for `on`"""
driver_ready = ops.EventSource(DriverReadyEvent)
driver_gone = ops.EventSource(DriverGoneEvent)
class CinderVolumeProvides(ops.Object):
"""
CinderVolumeProvides class
"""
on = CinderVolumeClientEvents() # type: ignore
def __init__(
self, charm: ops.CharmBase, relation_name: str, snap_name: str
):
super().__init__(charm, relation_name)
self.charm = charm
self.relation_name = relation_name
self.snap_name = snap_name
rel_observer = self.charm.on[relation_name]
self.framework.observe(
rel_observer.relation_joined,
self._on_cinder_volume_relation_joined,
)
self.framework.observe(
rel_observer.relation_changed,
self._on_cinder_volume_relation_changed,
)
self.framework.observe(
rel_observer.relation_broken,
self._on_cinder_volume_relation_broken,
)
def _on_cinder_volume_relation_joined(
self, event: ops.RelationJoinedEvent
):
"""Handle CinderVolume joined."""
logging.debug("CinderVolumeProvides on_joined")
self.publish_snap(event.relation)
def _on_cinder_volume_relation_changed(
self, event: ops.RelationChangedEvent
):
"""Handle CinderVolume changed."""
logging.debug("CinderVolumeProvides on_changed")
if self.requirer_ready(event.relation):
self.on.driver_ready.emit(event.relation)
def _on_cinder_volume_relation_broken(
self, event: ops.RelationBrokenEvent
):
"""Handle CinderVolume broken."""
logging.debug("CinderVolumeProvides on_departed")
self.on.driver_gone.emit(event.relation)
def requirer_backend(self, relation: ops.Relation) -> str | None:
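"""Return the backend key published by the remote requirer unit, if any."""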
sub_unit = remote_unit(relation)
if sub_unit is None:
logger.debug("No remote unit yet")
return None
return relation.data[sub_unit].get(BACKEND_KEY)
def requirer_ready(self, relation: ops.Relation) -> bool:
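"""Report whether the requirer has published its backend key."""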
return self.requirer_backend(relation) is not None
def publish_snap(self, relation: ops.Relation):
"""Publish snap name to relation."""
relation.data[self.model.unit][SNAP_KEY] = self.snap_name

View File

@ -0,0 +1,3 @@
# This file is used to trigger a build.
# Change uuid to trigger a new build.
37af2d20-53dc-11ef-97a3-b37540f14c92

View File

@ -0,0 +1,21 @@
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos. See the 'global' dir contents for available
# choices of *requirements.txt files for OpenStack Charms:
# https://github.com/openstack-charmers/release-tools
#
cryptography
jinja2
pydantic
lightkube
lightkube-models
requests # Drop - not needed in storage backend interface.
ops
git+https://opendev.org/openstack/charm-ops-interface-tls-certificates#egg=interface_tls_certificates
# From ops_sunbeam
tenacity

285
charms/cinder-volume/src/charm.py Executable file
View File

@ -0,0 +1,285 @@
#!/usr/bin/env python3
#
# Copyright 2025 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Cinder Volume Operator Charm.
This charm provides Cinder Volume capabilities for OpenStack.
This charm is responsible for managing the cinder-volume snap; the actual
backend configurations are managed by the subordinate charms.
"""
import logging
import typing
from typing import (
Mapping,
)
import charms.cinder_k8s.v0.storage_backend as sunbeam_storage_backend # noqa
import charms.cinder_volume.v0.cinder_volume as sunbeam_cinder_volume # noqa
import charms.operator_libs_linux.v2.snap as snap
import ops
import ops.charm
import ops_sunbeam.charm as charm
import ops_sunbeam.guard as sunbeam_guard
import ops_sunbeam.relation_handlers as relation_handlers
import ops_sunbeam.relation_handlers as sunbeam_rhandlers
import ops_sunbeam.tracing as sunbeam_tracing
from ops_sunbeam import (
compound_status,
)
logger = logging.getLogger(__name__)
@sunbeam_tracing.trace_type
class StorageBackendProvidesHandler(sunbeam_rhandlers.RelationHandler):
"""Relation handler for storage-backend interface type."""
interface: sunbeam_storage_backend.StorageBackendProvides
def setup_event_handler(self):
"""Configure event handlers for an storage-backend relation."""
logger.debug("Setting up Identity Service event handler")
sb_svc = sunbeam_tracing.trace_type(
sunbeam_storage_backend.StorageBackendProvides
)(
self.charm,
self.relation_name,
)
self.framework.observe(sb_svc.on.api_ready, self._on_ready)
return sb_svc
def _on_ready(self, event) -> None:
"""Handles AMQP change events."""
# Ready is only emitted when the interface considers
# that the relation is complete (indicated by a password)
self.callback_f(event)
@property
def ready(self) -> bool:
"""Check whether storage-backend interface is ready for use."""
return self.interface.remote_ready()
class CinderVolumeProviderHandler(sunbeam_rhandlers.RelationHandler):
"""Relation handler for cinder-volume interface type."""
interface: sunbeam_cinder_volume.CinderVolumeProvides
def __init__(
self,
charm: "CinderVolumeOperatorCharm",
relation_name: str,
snap: str,
callback_f: typing.Callable,
mandatory: bool = False,
) -> None:
self._snap = snap
super().__init__(charm, relation_name, callback_f, mandatory)
def setup_event_handler(self):
"""Configure event handlers for an cinder-volume relation."""
logger.debug("Setting up Identity Service event handler")
cinder_volume = sunbeam_tracing.trace_type(
sunbeam_cinder_volume.CinderVolumeProvides
)(
self.charm,
self.relation_name,
self._snap,
)
self.framework.observe(cinder_volume.on.driver_ready, self._on_event)
self.framework.observe(cinder_volume.on.driver_gone, self._on_event)
return cinder_volume
def _on_event(self, event: ops.RelationEvent) -> None:
"""Handles cinder-volume change events."""
self.callback_f(event)
def update_relation_data(self):
"""Publish snap name to all related cinder-volume interfaces."""
for relation in self.model.relations[self.relation_name]:
self.interface.publish_snap(relation)
@property
def ready(self) -> bool:
"""Check whether cinder-volume interface is ready for use."""
relations = self.model.relations[self.relation_name]
if not relations:
return False
for relation in relations:
if not self.interface.requirer_ready(relation):
return False
return True
def backends(self) -> typing.Sequence[str]:
"""Return a list of backends."""
backends = []
for relation in self.model.relations[self.relation_name]:
if backend := self.interface.requirer_backend(relation):
backends.append(backend)
return backends
@sunbeam_tracing.trace_sunbeam_charm
class CinderVolumeOperatorCharm(charm.OSBaseOperatorCharmSnap):
"""Cinder Volume Operator charm."""
service_name = "cinder-volume"
mandatory_relations = {
"storage-backend",
}
def __init__(self, framework):
super().__init__(framework)
self._state.set_default(api_ready=False, backends=[])
self._backend_status = compound_status.Status("backends", priority=10)
self.status_pool.add(self._backend_status)
@property
def snap_name(self) -> str:
"""Return snap name."""
return str(self.model.config["snap-name"])
@property
def snap_channel(self) -> str:
"""Return snap channel."""
return str(self.model.config["snap-channel"])
def get_relation_handlers(
self, handlers: list[relation_handlers.RelationHandler] | None = None
) -> list[relation_handlers.RelationHandler]:
"""Relation handlers for the service."""
handlers = super().get_relation_handlers()
self.sb_svc = StorageBackendProvidesHandler(
self,
"storage-backend",
self.api_ready,
"storage-backend" in self.mandatory_relations,
)
handlers.append(self.sb_svc)
self.cinder_volume = CinderVolumeProviderHandler(
self,
"cinder-volume",
str(self.model.config["snap-name"]),
self.backend_changes,
"cinder-volume" in self.mandatory_relations,
)
handlers.append(self.cinder_volume)
return handlers
def api_ready(self, event) -> None:
"""Event handler for bootstrap of service when api services are ready."""
self._state.api_ready = True
self.configure_charm(event)
def _find_duplicates(self, backends: typing.Sequence[str]) -> set[str]:
"""Find duplicates in a list of backends."""
seen = set()
duplicates = set()
for backend in backends:
if backend in seen:
duplicates.add(backend)
seen.add(backend)
return duplicates
def backend_changes(self, event: ops.RelationEvent) -> None:
"""Event handler for backend changes."""
relation_backends = self.cinder_volume.backends()
if duplicates := self._find_duplicates(relation_backends):
logger.warning(
"Same instance of `cinder-volume` cannot"
" serve the same backend multiple times."
)
raise sunbeam_guard.BlockedExceptionError(
f"Duplicate backends: {duplicates}"
)
state_backends: set[str] = set(self._state.backends) # type: ignore
if leftovers := state_backends.difference(relation_backends):
logger.debug(
"Removing backends %s from state",
leftovers,
)
for backend in leftovers:
self.remove_backend(backend)
state_backends.remove(backend)
self._state.backends = sorted(state_backends.union(relation_backends))
self.configure_charm(event)
@property
def databases(self) -> Mapping[str, str]:
"""Provide database name for cinder services."""
return {"database": "cinder"}
def configure_snap(self, event) -> None:
"""Run configuration on snap."""
config = self.model.config.get
try:
contexts = self.contexts()
snap_data = {
"rabbitmq.url": contexts.amqp.transport_url,
"database.url": contexts.database.connection,
"cinder.project-id": contexts.identity_credentials.project_id,
"cinder.user-id": contexts.identity_credentials.username,
"cinder.cluster": self.app.name,
"cinder.image-volume-cache-enabled": config(
"image-volume-cache-enabled"
),
"cinder.image-volume-cache-max-size-gb": config(
"image-volume-cache-max-size-gb"
),
"cinder.image-volume-cache-max-count": config(
"image-volume-cache-max-count"
),
"cinder.default-volume-type": config("default-volume-type"),
"settings.debug": self.model.config["debug"],
"settings.enable-telemetry-notifications": self.model.config[
"enable-telemetry-notifications"
],
}
except AttributeError as e:
raise sunbeam_guard.WaitingExceptionError(
"Data missing: {}".format(e.name)
)
self.set_snap_data(snap_data)
self.check_serving_backends()
def check_serving_backends(self):
"""Check if backends are ready to serve."""
if not self.cinder_volume.backends():
msg = "Waiting for backends"
self._backend_status.set(ops.WaitingStatus(msg))
raise sunbeam_guard.WaitingExceptionError(msg)
self._backend_status.set(ops.ActiveStatus())
def remove_backend(self, backend: str):
"""Remove backend from snap."""
cinder_volume = self.get_snap()
try:
cinder_volume.unset(backend)
except snap.SnapError as e:
logger.debug(
"Failed to remove backend %s from snap: %s",
backend,
e,
)
if __name__ == "__main__": # pragma: nocover
ops.main(CinderVolumeOperatorCharm)

View File

@ -0,0 +1,16 @@
#
# Copyright 2025 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit testing module for Cinder Volume operator."""

View File

@ -0,0 +1,202 @@
# Copyright 2025 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for Openstack hypervisor charm."""
from unittest.mock import (
MagicMock,
Mock,
patch,
)
import charm
import ops_sunbeam.test_utils as test_utils
class _CinderVolumeOperatorCharm(charm.CinderVolumeOperatorCharm):
"""Neutron test charm."""
def __init__(self, framework):
"""Setup event logging."""
self.seen_events = []
super().__init__(framework)
class TestCharm(test_utils.CharmTestCase):
"""Test charm to test relations."""
PATCHES = []
def setUp(self):
"""Setup OpenStack Hypervisor tests."""
super().setUp(charm, self.PATCHES)
self.snap = Mock()
snap_patch = patch.object(
_CinderVolumeOperatorCharm,
"_import_snap",
Mock(return_value=self.snap),
)
snap_patch.start()
self.harness = test_utils.get_harness(
_CinderVolumeOperatorCharm,
container_calls=self.container_calls,
)
# clean up events that were dynamically defined,
# otherwise we get issues because they'll be redefined,
# which is not allowed.
from charms.data_platform_libs.v0.data_interfaces import (
DatabaseRequiresEvents,
)
for attr in (
"database_database_created",
"database_endpoints_changed",
"database_read_only_endpoints_changed",
):
try:
delattr(DatabaseRequiresEvents, attr)
except AttributeError:
pass
self.addCleanup(snap_patch.stop)
self.addCleanup(self.harness.cleanup)
def initial_setup(self):
"""Setting up relations."""
self.harness.update_config({"snap-channel": "essex/stable"})
self.harness.begin_with_initial_hooks()
def all_required_relations_setup(self):
"""Setting up all the required relations."""
self.initial_setup()
test_utils.add_complete_amqp_relation(self.harness)
test_utils.add_complete_identity_credentials_relation(self.harness)
test_utils.add_complete_db_relation(self.harness)
self.harness.add_relation(
"storage-backend",
"cinder",
app_data={
"ready": "true",
},
)
def test_mandatory_relations(self):
"""Test all the charms relations."""
cinder_volume_snap_mock = MagicMock()
cinder_volume_snap_mock.present = False
self.snap.SnapState.Latest = "latest"
self.snap.SnapCache.return_value = {
"cinder-volume": cinder_volume_snap_mock
}
self.initial_setup()
self.harness.set_leader()
test_utils.add_complete_amqp_relation(self.harness)
test_utils.add_complete_identity_credentials_relation(self.harness)
test_utils.add_complete_db_relation(self.harness)
# Add storage-backend relation
self.harness.add_relation(
"storage-backend",
"cinder",
app_data={
"ready": "true",
},
)
cinder_volume_snap_mock.ensure.assert_any_call(
"latest", channel="essex/stable"
)
expect_settings = {
"rabbitmq.url": "rabbit://cinder-volume:rabbit.pass@rabbithost1.local:5672/openstack",
"database.url": "mysql+pymysql://foo:hardpassword@10.0.0.10/cinder",
"cinder.project-id": "uproj-id",
"cinder.user-id": "username",
"cinder.image-volume-cache-enabled": False,
"cinder.image-volume-cache-max-size-gb": 0,
"cinder.image-volume-cache-max-count": 0,
"cinder.default-volume-type": None,
"cinder.cluster": "cinder-volume",
"settings.debug": False,
"settings.enable-telemetry-notifications": False,
}
cinder_volume_snap_mock.set.assert_any_call(
expect_settings, typed=True
)
self.assertEqual(
self.harness.charm.status.message(), "Waiting for backends"
)
self.assertEqual(self.harness.charm.status.status.name, "waiting")
def test_all_relations(self):
"""Test all the charms relations."""
cinder_volume_snap_mock = MagicMock()
cinder_volume_snap_mock.present = False
self.snap.SnapState.Latest = "latest"
self.snap.SnapCache.return_value = {
"cinder-volume": cinder_volume_snap_mock
}
self.all_required_relations_setup()
self.assertEqual(self.harness.charm._state.backends, [])
self.harness.add_relation(
"cinder-volume",
"cinder-volume-ceph",
unit_data={"backend": "ceph.monostack"},
)
self.assertEqual(self.harness.charm.status.message(), "")
self.assertEqual(self.harness.charm.status.status.name, "active")
self.assertEqual(
self.harness.charm._state.backends, ["ceph.monostack"]
)
def test_backend_leaving(self):
"""Ensure correct behavior when a backend leaves."""
cinder_volume_snap_mock = MagicMock()
cinder_volume_snap_mock.present = False
self.snap.SnapState.Latest = "latest"
self.snap.SnapCache.return_value = {
"cinder-volume": cinder_volume_snap_mock
}
self.all_required_relations_setup()
slow_id = self.harness.add_relation(
"cinder-volume",
"cinder-volume-ceph-slow",
unit_data={"backend": "ceph.slow"},
)
fast_id = self.harness.add_relation(
"cinder-volume",
"cinder-volume-ceph-fast",
unit_data={"backend": "ceph.fast"},
)
self.assertEqual(self.harness.charm.status.message(), "")
self.assertEqual(self.harness.charm.status.status.name, "active")
self.assertEqual(
self.harness.charm._state.backends,
sorted(["ceph.slow", "ceph.fast"]),
)
self.harness.remove_relation(fast_id)
self.assertEqual(self.harness.charm._state.backends, ["ceph.slow"])
cinder_volume_snap_mock.unset.assert_any_call("ceph.fast")
self.assertEqual(self.harness.charm.status.message(), "")
self.assertEqual(self.harness.charm.status.status.name, "active")
self.harness.remove_relation(slow_id)
self.assertEqual(self.harness.charm._state.backends, [])
cinder_volume_snap_mock.unset.assert_any_call("ceph.slow")
self.assertEqual(
self.harness.charm.status.message(), "Waiting for backends"
)
self.assertEqual(self.harness.charm.status.status.name, "waiting")

View File

@ -331,10 +331,14 @@ LIBAPI = 0
# Increment this PATCH version before using `charmcraft publish-lib` or reset
# to 0 if you are raising the major API version
LIBPATCH = 37
LIBPATCH = 41
PYDEPS = ["ops>=2.0.0"]
# Starting from what LIBPATCH number to apply legacy solutions
# v0.17 was the last version without secrets
LEGACY_SUPPORT_FROM = 17
logger = logging.getLogger(__name__)
Diff = namedtuple("Diff", "added changed deleted")
@ -351,36 +355,16 @@ REQ_SECRET_FIELDS = "requested-secrets"
GROUP_MAPPING_FIELD = "secret_group_mapping"
GROUP_SEPARATOR = "@"
class SecretGroup(str):
"""Secret groups specific type."""
MODEL_ERRORS = {
"not_leader": "this unit is not the leader",
"no_label_and_uri": "ERROR either URI or label should be used for getting an owned secret but not both",
"owner_no_refresh": "ERROR secret owner cannot use --refresh",
}
class SecretGroupsAggregate(str):
"""Secret groups with option to extend with additional constants."""
def __init__(self):
self.USER = SecretGroup("user")
self.TLS = SecretGroup("tls")
self.EXTRA = SecretGroup("extra")
def __setattr__(self, name, value):
"""Setting internal constants."""
if name in self.__dict__:
raise RuntimeError("Can't set constant!")
else:
super().__setattr__(name, SecretGroup(value))
def groups(self) -> list:
"""Return the list of stored SecretGroups."""
return list(self.__dict__.values())
def get_group(self, group: str) -> Optional[SecretGroup]:
"""If the input str translates to a group name, return that."""
return SecretGroup(group) if group in self.groups() else None
SECRET_GROUPS = SecretGroupsAggregate()
##############################################################################
# Exceptions
##############################################################################
class DataInterfacesError(Exception):
@ -407,6 +391,19 @@ class IllegalOperationError(DataInterfacesError):
"""To be used when an operation is not allowed to be performed."""
class PrematureDataAccessError(DataInterfacesError):
"""To be raised when the Relation Data may be accessed (written) before protocol init complete."""
##############################################################################
# Global helpers / utilities
##############################################################################
##############################################################################
# Databag handling and comparison methods
##############################################################################
def get_encoded_dict(
relation: Relation, member: Union[Unit, Application], field: str
) -> Optional[Dict[str, str]]:
@ -482,6 +479,11 @@ def diff(event: RelationChangedEvent, bucket: Optional[Union[Unit, Application]]
return Diff(added, changed, deleted)
##############################################################################
# Module decorators
##############################################################################
def leader_only(f):
"""Decorator to ensure that only leader can perform given operation."""
@ -536,6 +538,36 @@ def either_static_or_dynamic_secrets(f):
return wrapper
def legacy_apply_from_version(version: int) -> Callable:
"""Decorator to decide whether to apply a legacy function or not.
Based on LEGACY_SUPPORT_FROM module variable value, the importer charm may only want
to apply legacy solutions starting from a specific LIBPATCH.
NOTE: All 'legacy' functions have to be defined and called in a way that they return `None`.
This results in cleaner and more secure execution flows in case the function may be disabled.
This requirement implicitly means that legacy functions change the internal state strictly,
don't return information.
"""
def decorator(f: Callable[..., None]):
"""Signature is ensuring None return value."""
f.legacy_version = version
def wrapper(self, *args, **kwargs) -> None:
if version >= LEGACY_SUPPORT_FROM:
return f(self, *args, **kwargs)
return wrapper
return decorator
##############################################################################
# Helper classes
##############################################################################
class Scope(Enum):
"""Peer relations scope."""
@ -543,17 +575,45 @@ class Scope(Enum):
UNIT = "unit"
################################################################################
# Secrets internal caching
################################################################################
class SecretGroup(str):
"""Secret groups specific type."""
class SecretGroupsAggregate(str):
"""Secret groups with option to extend with additional constants."""
def __init__(self):
self.USER = SecretGroup("user")
self.TLS = SecretGroup("tls")
self.EXTRA = SecretGroup("extra")
def __setattr__(self, name, value):
"""Setting internal constants."""
if name in self.__dict__:
raise RuntimeError("Can't set constant!")
else:
super().__setattr__(name, SecretGroup(value))
def groups(self) -> list:
"""Return the list of stored SecretGroups."""
return list(self.__dict__.values())
def get_group(self, group: str) -> Optional[SecretGroup]:
"""If the input str translates to a group name, return that."""
return SecretGroup(group) if group in self.groups() else None
SECRET_GROUPS = SecretGroupsAggregate()
class CachedSecret:
"""Locally cache a secret.
The data structure is precisely re-using/simulating as in the actual Secret Storage
The data structure is precisely reusing/simulating as in the actual Secret Storage
"""
KNOWN_MODEL_ERRORS = [MODEL_ERRORS["no_label_and_uri"], MODEL_ERRORS["owner_no_refresh"]]
def __init__(
self,
model: Model,
@ -571,6 +631,95 @@ class CachedSecret:
self.legacy_labels = legacy_labels
self.current_label = None
@property
def meta(self) -> Optional[Secret]:
"""Getting cached secret meta-information."""
if not self._secret_meta:
if not (self._secret_uri or self.label):
return
try:
self._secret_meta = self._model.get_secret(label=self.label)
except SecretNotFoundError:
# Falling back to seeking for potential legacy labels
self._legacy_compat_find_secret_by_old_label()
# If still not found, to be checked by URI, to be labelled with the proposed label
if not self._secret_meta and self._secret_uri:
self._secret_meta = self._model.get_secret(id=self._secret_uri, label=self.label)
return self._secret_meta
##########################################################################
# Backwards compatibility / Upgrades
##########################################################################
# These functions are used to keep backwards compatibility on rolling upgrades
# Policy:
# All data is kept intact until the first write operation. (This allows a minimal
# grace period during which rollbacks are fully safe. For more info see the spec.)
# All data involves:
# - databag contents
# - secrets content
# - secret labels (!!!)
# Legacy functions must return None, and leave an equally consistent state whether
# they are executed or skipped (as a high enough versioned execution environment may
# not require so)
# Compatibility
@legacy_apply_from_version(34)
def _legacy_compat_find_secret_by_old_label(self) -> None:
"""Compatibility function, allowing to find a secret by a legacy label.
This functionality is typically needed when secret labels changed over an upgrade.
Until the first write operation, we need to maintain data as it was, including keeping
the old secret label. In order to keep track of the old label currently used to access
the secret, an additional 'current_label' field is defined.
"""
for label in self.legacy_labels:
try:
self._secret_meta = self._model.get_secret(label=label)
except SecretNotFoundError:
pass
else:
if label != self.label:
self.current_label = label
return
# Migrations
@legacy_apply_from_version(34)
def _legacy_migration_to_new_label_if_needed(self) -> None:
"""Helper function to re-create the secret with a different label.
Juju does not provide a way to change secret labels.
Thus whenever moving from secrets version that involves secret label changes,
we "re-create" the existing secret, and attach the new label to the new
secret, to be used from then on.
Note: we replace the old secret with a new one "in place", as we can't
easily switch the containing SecretCache structure to point to a new secret.
Instead we are changing the 'self' (CachedSecret) object to point to the
new instance.
"""
if not self.current_label or not (self.meta and self._secret_meta):
return
# Create a new secret with the new label
content = self._secret_meta.get_content()
self._secret_uri = None
# It will be nice to have the possibility to check if we are the owners of the secret...
try:
self._secret_meta = self.add_secret(content, label=self.label)
except ModelError as err:
if MODEL_ERRORS["not_leader"] not in str(err):
raise
self.current_label = None
##########################################################################
# Public functions
##########################################################################
def add_secret(
self,
content: Dict[str, str],
@ -593,28 +742,6 @@ class CachedSecret:
self._secret_meta = secret
return self._secret_meta
@property
def meta(self) -> Optional[Secret]:
"""Getting cached secret meta-information."""
if not self._secret_meta:
if not (self._secret_uri or self.label):
return
for label in [self.label] + self.legacy_labels:
try:
self._secret_meta = self._model.get_secret(label=label)
except SecretNotFoundError:
pass
else:
if label != self.label:
self.current_label = label
break
# If still not found, to be checked by URI, to be labelled with the proposed label
if not self._secret_meta and self._secret_uri:
self._secret_meta = self._model.get_secret(id=self._secret_uri, label=self.label)
return self._secret_meta
def get_content(self) -> Dict[str, str]:
"""Getting cached secret content."""
if not self._secret_content:
@ -624,35 +751,14 @@ class CachedSecret:
except (ValueError, ModelError) as err:
# https://bugs.launchpad.net/juju/+bug/2042596
# Only triggered when 'refresh' is set
known_model_errors = [
"ERROR either URI or label should be used for getting an owned secret but not both",
"ERROR secret owner cannot use --refresh",
]
if isinstance(err, ModelError) and not any(
msg in str(err) for msg in known_model_errors
msg in str(err) for msg in self.KNOWN_MODEL_ERRORS
):
raise
# Due to: ValueError: Secret owner cannot use refresh=True
self._secret_content = self.meta.get_content()
return self._secret_content
def _move_to_new_label_if_needed(self):
"""Helper function to re-create the secret with a different label."""
if not self.current_label or not (self.meta and self._secret_meta):
return
# Create a new secret with the new label
content = self._secret_meta.get_content()
self._secret_uri = None
# I wish we could just check if we are the owners of the secret...
try:
self._secret_meta = self.add_secret(content, label=self.label)
except ModelError as err:
if "this unit is not the leader" not in str(err):
raise
self.current_label = None
def set_content(self, content: Dict[str, str]) -> None:
"""Setting cached secret content."""
if not self.meta:
@ -663,7 +769,7 @@ class CachedSecret:
return
if content:
self._move_to_new_label_if_needed()
self._legacy_migration_to_new_label_if_needed()
self.meta.set_content(content)
self._secret_content = content
else:
@ -926,6 +1032,23 @@ class Data(ABC):
"""Delete data available (directily or indirectly -- i.e. secrets) from the relation for owner/this_app."""
raise NotImplementedError
# Optional overrides
def _legacy_apply_on_fetch(self) -> None:
"""This function should provide a list of compatibility functions to be applied when fetching (legacy) data."""
pass
def _legacy_apply_on_update(self, fields: List[str]) -> None:
"""This function should provide a list of compatibility functions to be applied when writing data.
Since data may be at a legacy version, migration may be mandatory.
"""
pass
def _legacy_apply_on_delete(self, fields: List[str]) -> None:
"""This function should provide a list of compatibility functions to be applied when deleting (legacy) data."""
pass
# Internal helper methods
@staticmethod
@ -1178,6 +1301,16 @@ class Data(ABC):
return relation
def get_secret_uri(self, relation: Relation, group: SecretGroup) -> Optional[str]:
"""Get the secret URI for the corresponding group."""
secret_field = self._generate_secret_field_name(group)
return relation.data[self.component].get(secret_field)
def set_secret_uri(self, relation: Relation, group: SecretGroup, secret_uri: str) -> None:
"""Set the secret URI for the corresponding group."""
secret_field = self._generate_secret_field_name(group)
relation.data[self.component][secret_field] = secret_uri
def fetch_relation_data(
self,
relation_ids: Optional[List[int]] = None,
@ -1194,6 +1327,8 @@ class Data(ABC):
a dict of the values stored in the relation data bag
for all relation instances (indexed by the relation ID).
"""
self._legacy_apply_on_fetch()
if not relation_name:
relation_name = self.relation_name
@ -1232,6 +1367,8 @@ class Data(ABC):
NOTE: Since only the leader can read the relation's 'this_app'-side
Application databag, the functionality is limited to leaders
"""
self._legacy_apply_on_fetch()
if not relation_name:
relation_name = self.relation_name
@ -1263,6 +1400,8 @@ class Data(ABC):
@leader_only
def update_relation_data(self, relation_id: int, data: dict) -> None:
"""Update the data within the relation."""
self._legacy_apply_on_update(list(data.keys()))
relation_name = self.relation_name
relation = self.get_relation(relation_name, relation_id)
return self._update_relation_data(relation, data)
@ -1270,6 +1409,8 @@ class Data(ABC):
@leader_only
def delete_relation_data(self, relation_id: int, fields: List[str]) -> None:
"""Remove field from the relation."""
self._legacy_apply_on_delete(fields)
relation_name = self.relation_name
relation = self.get_relation(relation_name, relation_id)
return self._delete_relation_data(relation, fields)
@ -1316,6 +1457,8 @@ class EventHandlers(Object):
class ProviderData(Data):
"""Base provides-side of the data products relation."""
RESOURCE_FIELD = "database"
def __init__(
self,
model: Model,
@ -1336,8 +1479,7 @@ class ProviderData(Data):
uri_to_databag=True,
) -> bool:
"""Add a new Juju Secret that will be registered in the relation databag."""
secret_field = self._generate_secret_field_name(group_mapping)
if uri_to_databag and relation.data[self.component].get(secret_field):
if uri_to_databag and self.get_secret_uri(relation, group_mapping):
logging.error("Secret for relation %s already exists, not adding again", relation.id)
return False
@ -1348,7 +1490,7 @@ class ProviderData(Data):
# According to lint we may not have a Secret ID
if uri_to_databag and secret.meta and secret.meta.id:
relation.data[self.component][secret_field] = secret.meta.id
self.set_secret_uri(relation, group_mapping, secret.meta.id)
# Return the content that was added
return True
@ -1449,8 +1591,7 @@ class ProviderData(Data):
if not relation:
return
secret_field = self._generate_secret_field_name(group_mapping)
if secret_uri := relation.data[self.local_app].get(secret_field):
if secret_uri := self.get_secret_uri(relation, group_mapping):
return self.secrets.get(label, secret_uri)
def _fetch_specific_relation_data(
@ -1483,6 +1624,15 @@ class ProviderData(Data):
def _update_relation_data(self, relation: Relation, data: Dict[str, str]) -> None:
"""Set values for fields not caring whether it's a secret or not."""
req_secret_fields = []
keys = set(data.keys())
if self.fetch_relation_field(relation.id, self.RESOURCE_FIELD) is None and (
keys - {"endpoints", "read-only-endpoints", "replset"}
):
raise PrematureDataAccessError(
"Premature access to relation data, update is forbidden before the connection is initialized."
)
if relation.app:
req_secret_fields = get_encoded_list(relation, relation.app, REQ_SECRET_FIELDS)
@ -1603,11 +1753,10 @@ class RequirerData(Data):
for group in SECRET_GROUPS.groups():
secret_field = self._generate_secret_field_name(group)
if secret_field in params_name_list:
if secret_uri := relation.data[relation.app].get(secret_field):
self._register_secret_to_relation(
relation.name, relation.id, secret_uri, group
)
if secret_field in params_name_list and (
secret_uri := self.get_secret_uri(relation, group)
):
self._register_secret_to_relation(relation.name, relation.id, secret_uri, group)
def _is_resource_created_for_relation(self, relation: Relation) -> bool:
if not relation.app:
@ -1618,6 +1767,17 @@ class RequirerData(Data):
)
return bool(data.get("username")) and bool(data.get("password"))
# Public functions
def get_secret_uri(self, relation: Relation, group: SecretGroup) -> Optional[str]:
"""Getting relation secret URI for the corresponding Secret Group."""
secret_field = self._generate_secret_field_name(group)
return relation.data[relation.app].get(secret_field)
def set_secret_uri(self, relation: Relation, group: SecretGroup, uri: str) -> None:
"""Setting relation secret URI is not possible for a Requirer."""
raise NotImplementedError("Requirer can not change the relation secret URI.")
def is_resource_created(self, relation_id: Optional[int] = None) -> bool:
"""Check if the resource has been created.
@ -1768,7 +1928,6 @@ class DataPeerData(RequirerData, ProviderData):
secret_field_name: Optional[str] = None,
deleted_label: Optional[str] = None,
):
"""Manager of base client relations."""
RequirerData.__init__(
self,
model,
@ -1779,6 +1938,11 @@ class DataPeerData(RequirerData, ProviderData):
self.secret_field_name = secret_field_name if secret_field_name else self.SECRET_FIELD_NAME
self.deleted_label = deleted_label
self._secret_label_map = {}
# Legacy information holders
self._legacy_labels = []
self._legacy_secret_uri = None
# Secrets that are being dynamically added within the scope of this event handler run
self._new_secrets = []
self._additional_secret_group_mapping = additional_secret_group_mapping
@ -1853,10 +2017,12 @@ class DataPeerData(RequirerData, ProviderData):
value: The string value of the secret
group_mapping: The name of the "secret group", in case the field is to be added to an existing secret
"""
self._legacy_apply_on_update([field])
full_field = self._field_to_internal_name(field, group_mapping)
if self.secrets_enabled and full_field not in self.current_secret_fields:
self._new_secrets.append(full_field)
if self._no_group_with_databag(field, full_field):
if self.valid_field_pattern(field, full_field):
self.update_relation_data(relation_id, {full_field: value})
# Unlike for set_secret(), there's no harm using this operation with static secrets
@ -1869,6 +2035,8 @@ class DataPeerData(RequirerData, ProviderData):
group_mapping: Optional[SecretGroup] = None,
) -> Optional[str]:
"""Public interface method to fetch secrets only."""
self._legacy_apply_on_fetch()
full_field = self._field_to_internal_name(field, group_mapping)
if (
self.secrets_enabled
@ -1876,7 +2044,7 @@ class DataPeerData(RequirerData, ProviderData):
and field not in self.current_secret_fields
):
return
if self._no_group_with_databag(field, full_field):
if self.valid_field_pattern(field, full_field):
return self.fetch_my_relation_field(relation_id, full_field)
@dynamic_secrets_only
@ -1887,14 +2055,19 @@ class DataPeerData(RequirerData, ProviderData):
group_mapping: Optional[SecretGroup] = None,
) -> Optional[str]:
"""Public interface method to delete secrets only."""
self._legacy_apply_on_delete([field])
full_field = self._field_to_internal_name(field, group_mapping)
if self.secrets_enabled and full_field not in self.current_secret_fields:
logger.warning(f"Secret {field} from group {group_mapping} was not found")
return
if self._no_group_with_databag(field, full_field):
if self.valid_field_pattern(field, full_field):
self.delete_relation_data(relation_id, [full_field])
##########################################################################
# Helpers
##########################################################################
@staticmethod
def _field_to_internal_name(field: str, group: Optional[SecretGroup]) -> str:
@ -1936,10 +2109,69 @@ class DataPeerData(RequirerData, ProviderData):
if k in self.secret_fields
}
# Backwards compatibility
def valid_field_pattern(self, field: str, full_field: str) -> bool:
"""Check that no secret group is attempted to be used together without secrets being enabled.
Secrets groups are impossible to use with versions that are not yet supporting secrets.
"""
if not self.secrets_enabled and full_field != field:
logger.error(
f"Can't access {full_field}: no secrets available (i.e. no secret groups either)."
)
return False
return True
##########################################################################
# Backwards compatibility / Upgrades
##########################################################################
# These functions are used to keep backwards compatibility on upgrades
# Policy:
# All data is kept intact until the first write operation. (This allows a minimal
# grace period during which rollbacks are fully safe. For more info see spec.)
# All data involves:
# - databag
# - secrets content
# - secret labels (!!!)
# Legacy functions must return None, and leave an equally consistent state whether
# they are executed or skipped (as a high enough versioned execution environment may
# not require so)
# Full legacy stack for each operation
def _legacy_apply_on_fetch(self) -> None:
"""All legacy functions to be applied on fetch."""
relation = self._model.relations[self.relation_name][0]
self._legacy_compat_generate_prev_labels()
self._legacy_compat_secret_uri_from_databag(relation)
def _legacy_apply_on_update(self, fields) -> None:
"""All legacy functions to be applied on update."""
relation = self._model.relations[self.relation_name][0]
self._legacy_compat_generate_prev_labels()
self._legacy_compat_secret_uri_from_databag(relation)
self._legacy_migration_remove_secret_from_databag(relation, fields)
self._legacy_migration_remove_secret_field_name_from_databag(relation)
def _legacy_apply_on_delete(self, fields) -> None:
"""All legacy functions to be applied on delete."""
relation = self._model.relations[self.relation_name][0]
self._legacy_compat_generate_prev_labels()
self._legacy_compat_secret_uri_from_databag(relation)
self._legacy_compat_check_deleted_label(relation, fields)
# Compatibility
@legacy_apply_from_version(18)
def _legacy_compat_check_deleted_label(self, relation, fields) -> None:
"""Helper function for legacy behavior.
As long as https://bugs.launchpad.net/juju/+bug/2028094 wasn't fixed,
we did not delete fields but rather kept them in the secret with a string value
expressing invalidity. This function is maintaining that behavior when needed.
"""
if not self.deleted_label:
return
def _check_deleted_label(self, relation, fields) -> None:
"""Helper function for legacy behavior."""
current_data = self.fetch_my_relation_data([relation.id], fields)
if current_data is not None:
# Check if the secret we wanna delete actually exists
@ -1952,7 +2184,43 @@ class DataPeerData(RequirerData, ProviderData):
", ".join(non_existent),
)
def _remove_secret_from_databag(self, relation, fields: List[str]) -> None:
@legacy_apply_from_version(18)
def _legacy_compat_secret_uri_from_databag(self, relation) -> None:
"""Fetching the secret URI from the databag, in case stored there."""
self._legacy_secret_uri = relation.data[self.component].get(
self._generate_secret_field_name(), None
)
@legacy_apply_from_version(34)
def _legacy_compat_generate_prev_labels(self) -> None:
"""Generator for legacy secret label names, for backwards compatibility.
Secret label is part of the data that MUST be maintained across rolling upgrades.
In case there may be a change on a secret label, the old label must be recognized
after upgrades, and left intact until the first write operation -- when we roll over
to the new label.
This function keeps "memory" of previously used secret labels.
NOTE: Return value takes decorator into account -- all 'legacy' functions may return `None`
v0.34 (rev69): Fixing issue https://github.com/canonical/data-platform-libs/issues/155
meant moving from '<app_name>.<scope>' (i.e. 'mysql.app', 'mysql.unit')
to labels '<relation_name>.<app_name>.<scope>' (like 'peer.mysql.app')
"""
if self._legacy_labels:
return
result = []
members = [self._model.app.name]
if self.scope:
members.append(self.scope.value)
result.append(f"{'.'.join(members)}")
self._legacy_labels = result
# Migration
@legacy_apply_from_version(18)
def _legacy_migration_remove_secret_from_databag(self, relation, fields: List[str]) -> None:
"""For Rolling Upgrades -- when moving from databag to secrets usage.
Practically what happens here is to remove stuff from the databag that is
@ -1966,10 +2234,16 @@ class DataPeerData(RequirerData, ProviderData):
if self._fetch_relation_data_without_secrets(self.component, relation, [field]):
self._delete_relation_data_without_secrets(self.component, relation, [field])
def _remove_secret_field_name_from_databag(self, relation) -> None:
@legacy_apply_from_version(18)
def _legacy_migration_remove_secret_field_name_from_databag(self, relation) -> None:
"""Making sure that the old databag URI is gone.
This action should not be executed more than once.
There was a phase (before moving secrets usage to libs) when charms saved the peer
secret URI to the databag, and used this URI from then on to retrieve their secret.
When upgrading to charm versions using this library, we need to add a label to the
secret and access it via label from then on, and remove the old traces from the databag.
"""
# Nothing to do if 'internal-secret' is not in the databag
if not (relation.data[self.component].get(self._generate_secret_field_name())):
@ -1985,25 +2259,9 @@ class DataPeerData(RequirerData, ProviderData):
# Databag reference to the secret URI can be removed, now that it's labelled
relation.data[self.component].pop(self._generate_secret_field_name(), None)
def _previous_labels(self) -> List[str]:
"""Generator for legacy secret label names, for backwards compatibility."""
result = []
members = [self._model.app.name]
if self.scope:
members.append(self.scope.value)
result.append(f"{'.'.join(members)}")
return result
def _no_group_with_databag(self, field: str, full_field: str) -> bool:
"""Check that no secret group is attempted to be used together with databag."""
if not self.secrets_enabled and full_field != field:
logger.error(
f"Can't access {full_field}: no secrets available (i.e. no secret groups either)."
)
return False
return True
##########################################################################
# Event handlers
##########################################################################
def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
"""Event emitted when the relation has changed."""
@ -2013,7 +2271,9 @@ class DataPeerData(RequirerData, ProviderData):
"""Event emitted when the secret has changed."""
pass
##########################################################################
# Overrides of Relation Data handling functions
##########################################################################
def _generate_secret_label(
self, relation_name: str, relation_id: int, group_mapping: SecretGroup
@ -2050,13 +2310,14 @@ class DataPeerData(RequirerData, ProviderData):
return
label = self._generate_secret_label(relation_name, relation_id, group_mapping)
secret_uri = relation.data[self.component].get(self._generate_secret_field_name(), None)
# URI or legacy label is only applied when moving a single legacy secret to a (new) label
if group_mapping == SECRET_GROUPS.EXTRA:
# Fetching the secret with fallback to URI (in case label is not yet known)
# Label would be "stuck" on the secret in case it is found
return self.secrets.get(label, secret_uri, legacy_labels=self._previous_labels())
return self.secrets.get(
label, self._legacy_secret_uri, legacy_labels=self._legacy_labels
)
return self.secrets.get(label)
def _get_group_secret_contents(
@ -2086,7 +2347,6 @@ class DataPeerData(RequirerData, ProviderData):
@either_static_or_dynamic_secrets
def _update_relation_data(self, relation: Relation, data: Dict[str, str]) -> None:
"""Update data available (directily or indirectly -- i.e. secrets) from the relation for owner/this_app."""
self._remove_secret_from_databag(relation, list(data.keys()))
_, normal_fields = self._process_secret_fields(
relation,
self.secret_fields,
@ -2095,7 +2355,6 @@ class DataPeerData(RequirerData, ProviderData):
data=data,
uri_to_databag=False,
)
self._remove_secret_field_name_from_databag(relation)
normal_content = {k: v for k, v in data.items() if k in normal_fields}
self._update_relation_data_without_secrets(self.component, relation, normal_content)
@ -2104,9 +2363,6 @@ class DataPeerData(RequirerData, ProviderData):
def _delete_relation_data(self, relation: Relation, fields: List[str]) -> None:
"""Delete data available (directily or indirectly -- i.e. secrets) from the relation for owner/this_app."""
if self.secret_fields and self.deleted_label:
# Legacy, backwards compatibility
self._check_deleted_label(relation, fields)
_, normal_fields = self._process_secret_fields(
relation,
self.secret_fields,
@ -2141,7 +2397,9 @@ class DataPeerData(RequirerData, ProviderData):
"fetch_my_relation_data() and fetch_my_relation_field()"
)
##########################################################################
# Public functions -- inherited
##########################################################################
fetch_my_relation_data = Data.fetch_my_relation_data
fetch_my_relation_field = Data.fetch_my_relation_field
@ -2606,6 +2864,14 @@ class DatabaseProviderData(ProviderData):
"""
self.update_relation_data(relation_id, {"version": version})
def set_subordinated(self, relation_id: int) -> None:
"""Raises the subordinated flag in the application relation databag.
Args:
relation_id: the identifier for a particular relation.
"""
self.update_relation_data(relation_id, {"subordinated": "true"})
class DatabaseProviderEventHandlers(EventHandlers):
"""Provider-side of the database relation handlers."""
@ -2842,6 +3108,21 @@ class DatabaseRequirerEventHandlers(RequirerEventHandlers):
def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
"""Event emitted when the database relation has changed."""
is_subordinate = False
remote_unit_data = None
for key in event.relation.data.keys():
if isinstance(key, Unit) and not key.name.startswith(self.charm.app.name):
remote_unit_data = event.relation.data[key]
elif isinstance(key, Application) and key.name != self.charm.app.name:
is_subordinate = event.relation.data[key].get("subordinated") == "true"
if is_subordinate:
if not remote_unit_data:
return
if remote_unit_data.get("state") != "ready":
return
# Check which data has changed to emit customs events.
diff = self._diff(event)
@ -3023,6 +3304,8 @@ class KafkaRequiresEvents(CharmEvents):
class KafkaProviderData(ProviderData):
"""Provider-side of the Kafka relation."""
RESOURCE_FIELD = "topic"
def __init__(self, model: Model, relation_name: str) -> None:
super().__init__(model, relation_name)
@ -3272,6 +3555,8 @@ class OpenSearchRequiresEvents(CharmEvents):
class OpenSearchProvidesData(ProviderData):
"""Provider-side of the OpenSearch relation."""
RESOURCE_FIELD = "index"
def __init__(self, model: Model, relation_name: str) -> None:
super().__init__(model, relation_name)

View File

@ -29,11 +29,13 @@ defines the pebble layers, manages pushing configuration to the
containers and managing the service running in the container.
"""
import functools
import ipaddress
import logging
import urllib
import urllib.parse
from typing import (
TYPE_CHECKING,
List,
Mapping,
Optional,
@ -70,6 +72,9 @@ from ops.model import (
MaintenanceStatus,
)
if TYPE_CHECKING:
import charms.operator_libs_linux.v2.snap as snap
logger = logging.getLogger(__name__)
@ -182,6 +187,7 @@ class OSBaseOperatorCharm(
self.configure_charm,
database_name,
relation_name in self.mandatory_relations,
external_access=self.remote_external_access,
)
self.dbs[relation_name] = db
handlers.append(db)
@ -459,7 +465,11 @@ class OSBaseOperatorCharm(
)
if isinstance(event, RelationBrokenEvent):
_is_broken = True
_is_broken = event.relation.name in (
"database",
"api-database",
"cell-database",
)
case "ingress-public" | "ingress-internal":
from charms.traefik_k8s.v2.ingress import (
IngressPerAppRevokedEvent,
@ -1120,3 +1130,186 @@ class OSBaseOperatorAPICharm(OSBaseOperatorCharmK8S):
url.fragment,
)
)
class OSBaseOperatorCharmSnap(OSBaseOperatorCharm):
"""Base charm class for snap based charms."""
def __init__(self, framework):
super().__init__(framework)
self.snap_module = self._import_snap()
self.framework.observe(
self.on.install,
self._on_install,
)
def _import_snap(self):
import charms.operator_libs_linux.v2.snap as snap
return snap
def _on_install(self, _: ops.InstallEvent):
"""Run install on this unit."""
self.ensure_snap_present()
@functools.cache
def get_snap(self) -> "snap.Snap":
"""Return snap object."""
return self.snap_module.SnapCache()[self.snap_name]
@property
def snap_name(self) -> str:
"""Return snap name."""
raise NotImplementedError
@property
def snap_channel(self) -> str:
"""Return snap channel."""
raise NotImplementedError
def ensure_snap_present(self):
"""Install snap if it is not already present."""
try:
snap_svc = self.get_snap()
if not snap_svc.present:
snap_svc.ensure(
self.snap_module.SnapState.Latest,
channel=self.snap_channel,
)
except self.snap_module.SnapError as e:
logger.error(
"An exception occurred when installing %s. Reason: %s",
self.snap_name,
e.message,
)
def ensure_services_running(self, enable: bool = True) -> None:
"""Ensure snap services are up."""
snap_svc = self.get_snap()
snap_svc.start(enable=enable)
def stop_services(self, relation: set[str] | None = None) -> None:
"""Stop snap services."""
snap_svc = self.get_snap()
snap_svc.stop(disable=True)
def set_snap_data(self, snap_data: Mapping, namespace: str | None = None):
"""Set snap data on local snap.
Setting keys with 3 levels or more of indentation is not yet supported.
`namespace` offers the possibility to work as if it were supported.
"""
snap_svc = self.get_snap()
new_settings = {}
try:
old_settings = snap_svc.get(namespace, typed=True)
except self.snap_module.SnapError:
old_settings = {}
for key, new_value in snap_data.items():
key_split = key.split(".")
if len(key_split) == 2:
group, subkey = key_split
old_value = old_settings.get(group, {}).get(subkey)
else:
old_value = old_settings.get(key)
if old_value is not None and old_value != new_value:
new_settings[key] = new_value
# Setting a value to None will unset the value from the snap,
# which will fail if the value was never set.
elif new_value is not None:
new_settings[key] = new_value
if new_settings:
if namespace is not None:
new_settings = {namespace: new_settings}
logger.debug(f"Applying new snap settings {new_settings}")
snap_svc.set(new_settings, typed=True)
else:
logger.debug("Snap settings do not need updating")
def configure_snap(self, event: ops.EventBase) -> None:
"""Run configuration on managed snap."""
def configure_unit(self, event: ops.EventBase) -> None:
"""Run configuration on this unit."""
self.ensure_snap_present()
self.check_leader_ready()
self.check_relation_handlers_ready(event)
self.configure_snap(event)
self.ensure_services_running()
self._state.unit_bootstrapped = True
class OSCinderVolumeDriverOperatorCharm(OSBaseOperatorCharmSnap):
"""Base class charms for Cinder volume drivers.
Operators implementing this class are subordinates charm that are not
responsible for installing / managing the snap.
Their only duty is to provide a backend configuration to the
snap managed by the principal unit.
"""
def __init__(self, framework: ops.Framework):
super().__init__(framework)
self._state.set_default(volume_ready=False)
@property
def backend_key(self) -> str:
"""Key for backend configuration."""
raise NotImplementedError
def ensure_snap_present(self):
"""No-op."""
def ensure_services_running(self, enable: bool = True) -> None:
"""No-op."""
def stop_services(self, relation: set[str] | None = None) -> None:
"""No-op."""
@property
def snap_name(self) -> str:
"""Return snap name."""
snap_name = self.cinder_volume.interface.snap_name()
if snap_name is None:
raise sunbeam_guard.WaitingExceptionError(
"Waiting for snap name from cinder-volume relation"
)
return snap_name
def get_relation_handlers(
self, handlers: list[sunbeam_rhandlers.RelationHandler] | None = None
) -> list[sunbeam_rhandlers.RelationHandler]:
"""Relation handlers for the service."""
handlers = handlers or []
self.cinder_volume = sunbeam_rhandlers.CinderVolumeRequiresHandler(
self,
"cinder-volume",
self.backend_key,
self.volume_ready,
mandatory="cinder-volume" in self.mandatory_relations,
)
handlers.append(self.cinder_volume)
return super().get_relation_handlers(handlers)
def volume_ready(self, event) -> None:
"""Event handler for bootstrap of service when api services are ready."""
self._state.volume_ready = True
self.configure_charm(event)
def configure_snap(self, event: ops.EventBase) -> None:
"""Configure backend for cinder volume driver."""
if not bool(self._state.volume_ready):
raise sunbeam_guard.WaitingExceptionError("Volume not ready")
backend_context = self.get_backend_configuration()
self.set_snap_data(backend_context, namespace=self.backend_key)
self.cinder_volume.interface.set_ready()
def get_backend_configuration(self) -> Mapping:
"""Get backend configuration."""
raise NotImplementedError
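For reference, a subordinate driver built on this base class only needs to
supply `backend_key` and `get_backend_configuration()`. A minimal sketch
follows; the class name, backend key and configuration keys are hypothetical
and only illustrate the contract, not a real driver:
```
from typing import Mapping

import ops
import ops_sunbeam.charm as sunbeam_charm


class MyCephDriverCharm(sunbeam_charm.OSCinderVolumeDriverOperatorCharm):
    """Hypothetical subordinate charm serving a single backend."""

    service_name = "my-ceph-driver"

    @property
    def backend_key(self) -> str:
        # Namespace under which this backend's settings are set in the snap.
        return "ceph.mycluster"

    def get_backend_configuration(self) -> Mapping:
        # Flat "<group>.<key>" settings forwarded to the principal's snap.
        return {
            "volume-backend-name": "ceph.mycluster",
            "rbd-pool": "cinder-volumes",
        }


if __name__ == "__main__":  # pragma: nocover
    ops.main(MyCephDriverCharm)
```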

View File

@ -53,6 +53,7 @@ if typing.TYPE_CHECKING:
import charms.ceilometer_k8s.v0.ceilometer_service as ceilometer_service
import charms.certificate_transfer_interface.v0.certificate_transfer as certificate_transfer
import charms.cinder_ceph_k8s.v0.ceph_access as ceph_access
import charms.cinder_volume.v0.cinder_volume as sunbeam_cinder_volume
import charms.data_platform_libs.v0.data_interfaces as data_interfaces
import charms.gnocchi_k8s.v0.gnocchi_service as gnocchi_service
import charms.keystone_k8s.v0.identity_credentials as identity_credentials
@ -302,11 +303,13 @@ class DBHandler(RelationHandler):
callback_f: Callable,
database: str,
mandatory: bool = False,
external_access: bool = False,
) -> None:
"""Run constructor."""
# a database name as requested by the charm.
super().__init__(charm, relation_name, callback_f, mandatory)
self.database_name = database
self.external_access = external_access
def setup_event_handler(self) -> ops.framework.Object:
"""Configure event handlers for a MySQL relation."""
@ -331,6 +334,7 @@ class DBHandler(RelationHandler):
self.relation_name,
self.database_name,
relations_aliases=[alias],
external_node_connectivity=self.external_access,
)
self.framework.observe(
# db.on[f"{alias}_database_created"], # this doesn't work because:
@ -2388,3 +2392,53 @@ class ServiceReadinessProviderHandler(RelationHandler):
def ready(self) -> bool:
"""Report if relation is ready."""
return True
@sunbeam_tracing.trace_type
class CinderVolumeRequiresHandler(RelationHandler):
"""Handler for Cinder Volume relation."""
interface: "sunbeam_cinder_volume.CinderVolumeRequires"
def __init__(
self,
charm: "OSBaseOperatorCharm",
relation_name: str,
backend_key: str,
callback_f: Callable,
mandatory: bool = True,
):
self.backend_key = backend_key
super().__init__(charm, relation_name, callback_f, mandatory=mandatory)
def setup_event_handler(self):
"""Configure event handlers for Cinder Volume relation."""
import charms.cinder_volume.v0.cinder_volume as sunbeam_cinder_volume
logger.debug("Setting up Cinder Volume event handler")
cinder_volume = sunbeam_tracing.trace_type(
sunbeam_cinder_volume.CinderVolumeRequires
)(
self.charm,
self.relation_name,
backend_key=self.backend_key,
)
self.framework.observe(
cinder_volume.on.ready,
self._on_cinder_volume_ready,
)
return cinder_volume
def _on_cinder_volume_ready(self, event: ops.RelationEvent) -> None:
"""Handles Cinder Volume change events."""
self.callback_f(event)
@property
def ready(self) -> bool:
"""Report if relation is ready."""
return self.interface.provider_ready()
def snap(self) -> str | None:
"""Return snap name."""
return self.interface.snap_name()
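
For context, a requiring charm normally constructs this handler inside
get_relation_handlers, as the Cinder volume driver base class above does; the
standalone sketch below only illustrates the accessors it exposes (the backend key
and callback are placeholders):

    handler = CinderVolumeRequiresHandler(
        charm,
        "cinder-volume",
        backend_key="ceph.monostack",
        callback_f=charm.configure_charm,
        mandatory=True,
    )
    if handler.ready:
        # The provider has published which snap it manages, so the driver
        # knows which snap to write its backend settings into.
        managed_snap = handler.snap()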

View File

@ -22,6 +22,9 @@ import tempfile
from typing import (
TYPE_CHECKING,
)
from unittest.mock import (
Mock,
)
if TYPE_CHECKING:
import ops.framework
@ -363,3 +366,43 @@ class TestMultiSvcCharm(MyAPICharm):
self.configure_charm,
)
]
class MySnapCharm(sunbeam_charm.OSBaseOperatorCharmSnap):
"""Test charm for testing OSBaseOperatorCharmSnap."""
service_name = "mysnap"
def __init__(self, framework: "ops.framework.Framework") -> None:
"""Run constructor."""
self.seen_events = []
self.mock_snap = Mock()
super().__init__(framework)
def _log_event(self, event: "ops.framework.EventBase") -> None:
"""Log events."""
self.seen_events.append(type(event).__name__)
def _on_config_changed(self, event: "ops.framework.EventBase") -> None:
"""Log config changed event."""
self._log_event(event)
super()._on_config_changed(event)
def configure_charm(self, event: "ops.framework.EventBase") -> None:
"""Log configure_charm call."""
self._log_event(event)
super().configure_charm(event)
def get_snap(self):
"""Return mocked snap."""
return self.mock_snap
@property
def snap_name(self) -> str:
"""Return snap name."""
return "mysnap"
@property
def snap_channel(self) -> str:
"""Return snap channel."""
return "latest/stable"

View File

@ -489,7 +489,7 @@ class TestOSBaseOperatorMultiSVCAPICharm(_TestOSBaseOperatorAPICharm):
"""Test Charm with multiple services."""
def setUp(self) -> None:
"""Charm test class setip."""
"""Charm test class setup."""
super().setUp(test_charms.TestMultiSvcCharm)
def test_start_services(self) -> None:
@ -506,3 +506,50 @@ class TestOSBaseOperatorMultiSVCAPICharm(_TestOSBaseOperatorAPICharm):
sorted(self.container_calls.started_services("my-service")),
sorted(["apache forwarder", "my-service"]),
)
class TestOSBaseOperatorCharmSnap(test_utils.CharmTestCase):
"""Test snap based charm."""
PATCHES = []
def setUp(self) -> None:
"""Charm test class setup."""
super().setUp(sunbeam_charm, self.PATCHES)
self.harness = test_utils.get_harness(
test_charms.MySnapCharm,
test_charms.CHARM_METADATA,
None,
charm_config=test_charms.CHARM_CONFIG,
initial_charm_config=test_charms.INITIAL_CHARM_CONFIG,
)
self.mock_event = MagicMock()
self.harness.begin()
self.addCleanup(self.harness.cleanup)
def test_set_snap_data(self) -> None:
"""Test snap set data."""
charm = self.harness.charm
snap = charm.mock_snap
snap.get.return_value = {
"settings.debug": False,
"settings.region": "RegionOne",
}
charm.set_snap_data({"settings.debug": True})
snap.set.assert_called_once_with({"settings.debug": True}, typed=True)
def test_set_snap_data_namespace(self) -> None:
"""Test snap set data under namespace."""
charm = self.harness.charm
snap = charm.mock_snap
namespace = "ceph.monostack"
snap.get.return_value = {
"auth": "cephx",
}
        # A None value for a key that was never set is dropped rather than
        # forwarded to snap.set().
new_data = {"key": "abc", "value": None}
charm.set_snap_data(new_data, namespace=namespace)
snap.get.assert_called_once_with(namespace, typed=True)
snap.set.assert_called_once_with(
{namespace: {"key": "abc"}}, typed=True
)
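
For reference, the settings written through set_snap_data are ordinary snap
configuration on the host in a real deployment, so they can be inspected or tweaked
by hand; the key names below follow the tests, and the snap name refers to the
cinder-volume snap this change targets, both as illustrative assumptions:

    # Dump the "settings" namespace of the cinder-volume snap as JSON.
    sudo snap get -d cinder-volume settings

    # Roughly what set_snap_data({"settings.debug": True}) results in.
    sudo snap set cinder-volume settings.debug=true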

View File

@ -106,6 +106,30 @@
- rebuild
vars:
charm: cinder-k8s
- job:
name: charm-build-cinder-volume
description: Build sunbeam cinder-volume charm
run: playbooks/charm/build.yaml
timeout: 3600
match-on-config-updates: false
files:
- ops-sunbeam/ops_sunbeam/*
- charms/cinder-volume/*
- rebuild
vars:
charm: cinder-volume
- job:
name: charm-build-cinder-volume-ceph
description: Build sunbeam cinder-volume-ceph charm
run: playbooks/charm/build.yaml
timeout: 3600
match-on-config-updates: false
files:
- ops-sunbeam/ops_sunbeam/*
- charms/cinder-volume-ceph/*
- rebuild
vars:
charm: cinder-volume-ceph
- job:
name: charm-build-cinder-ceph-k8s
description: Build sunbeam cinder-ceph-k8s charm
@ -655,6 +679,32 @@
- charmhub_token
timeout: 3600
- job:
name: publish-charm-cinder-volume
description: |
Publish cinder-volume built in gate pipeline.
run: playbooks/charm/publish.yaml
files:
- ops-sunbeam/ops_sunbeam/*
- charms/cinder-volume/*
- rebuild
secrets:
- charmhub_token
timeout: 3600
- job:
name: publish-charm-cinder-volume-ceph
description: |
Publish cinder-volume-ceph built in gate pipeline.
run: playbooks/charm/publish.yaml
files:
- ops-sunbeam/ops_sunbeam/*
- charms/cinder-volume-ceph/*
- rebuild
secrets:
- charmhub_token
timeout: 3600
- job:
name: publish-charm-designate-bind-k8s
description: |

View File

@ -56,6 +56,10 @@
nodeset: ubuntu-jammy
- charm-build-cinder-ceph-k8s:
nodeset: ubuntu-jammy
- charm-build-cinder-volume:
nodeset: ubuntu-jammy
- charm-build-cinder-volume-ceph:
nodeset: ubuntu-jammy
- charm-build-horizon-k8s:
nodeset: ubuntu-jammy
- charm-build-heat-k8s:
@ -115,6 +119,8 @@
nodeset: ubuntu-jammy
- charm-build-cinder-ceph-k8s:
nodeset: ubuntu-jammy
- charm-build-cinder-volume:
nodeset: ubuntu-jammy
- charm-build-horizon-k8s:
nodeset: ubuntu-jammy
- charm-build-heat-k8s:
@ -178,6 +184,10 @@
nodeset: ubuntu-jammy
- publish-charm-cinder-ceph-k8s:
nodeset: ubuntu-jammy
- publish-charm-cinder-volume:
nodeset: ubuntu-jammy
- publish-charm-cinder-volume-ceph:
nodeset: ubuntu-jammy
- publish-charm-horizon-k8s:
nodeset: ubuntu-jammy
- publish-charm-heat-k8s:

View File

@ -1,75 +1,85 @@
- secret:
name: charmhub_token
data:
# Generated on 2025-01-28T08:30:54+00:00 with 90 days ttl
# Generated on 2025-02-19T21:38:12+00:00 with 90 days ttl
value: !encrypted/pkcs1-oaep
- qpr+j+NZd98jyRq/cgqeVwe16LtssUTK6pnIGtm+Cqs1BZExr4pUsxBcDdmzxWqWqc/kf
29RNFuJRin2rOa+JZdDF1tft78zzEB/soX7xf+1DRvEDv+L29zcTcozMpAZhtQX/NfuPf
qSPnlX+PZc8keAuoeHmIbsHo4E2I4KGre3KX0HuPgWqmYf5np9/FJBe7KUsuO3WzX3Hb0
KCR/ls5nt5nIi5EANuoz3rS5PPBnjn2ELFnv/qusDiSDS3LfxHqpPsmJeT/+q6G5rTUNr
nVjoF7V2eV7sIicWnwpezMUd6Q3AUvtkWIoK0Z+7PlBGT8QRPWln8YkzoC9Xv5ojES7W9
KfcfLeTGrDHmWQF3ReDk/lvzty8BUCtLgA+z5YmI9EjWuLSlhNRJXGNf7/8fiBmnUjbhh
mhAoGH5yRXet/lhI5bHt3KPEi59vuAoRI82OhHhPFlk1Tqfrn0tg03TlPXyFxvZ8xaEDT
nQTdWGsCwfXfG9owN/Iwsu9x9MzhL4FgW/aS4MhbYf/xcYCYF8Zn/APMvf/ycv3EG7C//
YmrumEPpB2R5G7lD9B3Yj1bZtbrIOJ90iT+1BPHVbRE4QnMDWy0mVIanUYtMI1WKEu7IH
2vNOiVJyS48XEtLSlV6qA/1JlM1DkieLwNWuh/su7ZvNbxT0qY2OdUxpN3xobE=
- IcHh4XjtdY8WcWaRN68kPmd7ivrQS9aBCdCaJ3uZO7LzEvEYbyEgo71x+NH3lsqyFgaZQ
WoZ0ri+MHU0QpH4by2od5c8FSFRb9kx78wVxcLp4O/pR7ffrnvQjfP8EFLD14vImL1gp2
vIP6entbbS3A7pX7w2hXJLcOIOgmQe64kPmYcscFtKO17rQByZ1GHWePPo7mqmoR0METT
/W++BgJKuIYplET7tP6WvY3s1CW9Ej0n9HFdkiR2IOBdGg6LD1DwbzyiFNlT5lk4KmKie
aVqNJejxdUwYdpbkPKErR6HGbGrTePsYvhy6YDOXbT0ohol/uYfssT7Ur6Qw2p9JdOI7l
Pwponch4YOxVF0pje7pHDuDriIRHgZakb8No1yEJemQqg0yKJx8IZFFVYNgQVQkXNe+gC
Tu1xMV/CkwOB94z2lYHC6hzOarilijjgvynL00z4KQ7MKUV43MFRk3LwVaqF1BwOAQtzJ
c5HbklpMrZZRDzX2WMiD8DqGhFKRI0/YT8yjvzRDCy/Hcwtq+ktKbnLioLaSBVsInC/UO
+h9o8JLnENEw6PYoTjDigDoFXxEbXraOj2KxMkgqWm33ytRtwy57UpW9U8yIkimlMqZHx
Vn2NtZMqa4EBKMA5Ql/Ae/BgoLJjNgO4bewQQzgnq2aoOV/91/0WV7sQ0jyvDA=
- HlEMXBGU/d6VzDpJ6RCvUn6BGtnCodtLGc8fUi9liCgA2rTiitcltF5D/wqetVYMfsaYr
w/siy6BXg2HC/je9qubN2N+o0tzm2CIe1hZ1nS0t+m/2PuksLtpNKt7MpDtzenix69K4t
TCpp+DWb1+faBd+CqEAoqHC97qPnRMDlIbWTPRnOIk7gJYivC8pUKQd8MEKIxtpMiCE3T
qBWCeiEl1sCiSGPHDV0Y9ufDojin00mHXABfU0LURk8MyxGUMWismy29T2z6YJlYeE37V
xSKgbFvM4auaVSt+plcgTGaJ7QYPjGWafRObqgARd8aL1DgkpVn3CwOgvtuJMKc2JzYJ6
AQl0/eBRejg911C4MZsRwpum+3RsRMi1RP9Bo4VpaymNNDNFyzs6JwZNpSaKbudL4qFD1
F5vBQBFQT1mRYdCqCghgfjSRB4XMwYJkzVlkAQBAXrBLx+GjdnkkrwEwhYozd8jncf77A
j+/SHYz3+YqvMZ5yB0eZDhfvuDl+V8CXeMNZGqdkcif6uCMugIud923ZSHLfLe4gWipaw
E6kufi1IOPoA/hbFfoNYpVQ+MBROGz7GAedQvstmDG0p6NiB4pjfEuOxwRSgauBphMxrR
6gxP94He+o/rdKx0IL6iwalUHuFZF6MlPN0/C2cBh/zOpjeg7N6lRYTIakg9LY=
- kvA6r9US0THLPxDT/keMryaGdxaRHYFFsH7LLUUQkCBioULfOeAhBKOx3zbA5S7scyBhk
BbMzj1mGSSGOXQkGqmrzfN9RmYo391FNs+RHHD9p9SfVfcoayrVZi5wlMUT6eYpYA/F9O
se8ghFUDmiOlYphCXpAqabphWHtnB+HSDk3yn/KbGkY24oa4Tp8nFxYbRTAL7kpA4Q8c4
N7xCyTi5QTDK6FYvQfN0g6hl4JRpvGAiDisAOJJP9wk4ux+6t71KnrsCxAOqZIGtvkblx
8+U24D324uEIa28mdSpNGZ8wCMRKPQ9ClpDCLk4+Kmemnhj7BfuoOSz3BlGgRZTzY066q
KGnCgYC/if5e8u7mD4cPWV+eaOrMluYDY+mt+u8JMMtsuZ+lG0f4N+RCZgxtYPK/LLa27
BGlLkfpuDTSeKCjTkmuYvEWAFPnAVR7AbiIYfhPg6RIAG4bagMW5ORG9Z9NgrvCvr8GxD
6idAnvZaNrr8+Csi47zD4hoxxZfHsZPRXfIOjx0F+DaRlKO63LTxDsNFOVHRdegkDksb6
SOw66eTBwKbyc+e+wW7PC/ASU45fRIxyTTAhW+152qOQ1S9lf++8VtCSTfsdOGmm8ysTk
vInv8UDnI8xFk7CJqIthEvtG0CCut9Gjunx+kTFiO4aK0ZXUgNIq1budz8U9bs=
- ez5yF3P+b+ACgM2xbTvAa86aXMoT5JlsA8uXgvTKd3aWffYkfZF2eBlmFOcA7u5PUBA1f
Lgv+3E5MGPgtP4/7VZ/AyIosNtac7TzQ/4/yPUmXoG3zW0jWHNGvtRW5Aqf/dMaEt4ESL
rbl8D79tjK2M5F/ad5xL4JaW9FaIIBDbPmbqnwcWDuOzO/P5SL8oJ9k5HCoA94CsRjARb
Al0qAlsw1sWGRdoI9e2Uigd4FRjHcIpZKcVORyvokU3SjDP8fcHMqB92wlk0rcgF0y7MV
LlpkhkcQ1tDCdLRwwu5i3B6Z4yKBe68mr/v1xQ5zpo1pX5bKgyXEVGpE90qhmv1Q1q+nq
1s2htwNbtx+1bCeF6ywmARV8MVVrCkQa/T0WOoOpHh/Svw0f1mPwu3VOWVI3Ftc0cBRTq
/NWhrNdHS344tbsvV+vj3zUqadZcQkR3vIU5eoPhamMRh4NM6/uEQ/8cG6zXh445eCY4w
fQLyHL2g17p1dQXt/kBM+jf4SPIXcoWRS3Lggcb8yRA3I7iZ9TJ1a66R/wNPSCoPBGyF/
MATyH4yT6/aAL/jAutNzvT+4w+PKqJ3KewE8n36YLkFIBZmsSyIi4mH2z1GP+8/tceyKS
NRehqsrY6fGe9gvpxQWdoWC4S7snCubgFd6K9dCxjVK1BK1wnQLxUaqVeDjNzI=
- PUIzfkNyaFlWn9b+h7N0vdEylOdK8hGd176KMH4gzA+zL0SyFWtnb5ZVlmp6ANnBQBN1o
wm8Hha39jVzhdKgir0d/u3suOHQtuK3dzxdBY9xHSubascy17Ago9NPGczoW+4vsh8+Kk
aHXeuChSRopOYi8T/4Y7rJ8zzpIvLCN6RRFXKlzISclxX+iEo9jiKo5181Jzc1IrReG0v
ryB5aROzASAwbhu1iDyKMUI/uxsX9hhEYN0YzgCfdDhrryEUZAChyByg5z9pwHreyx6ax
jyHWItkhLiKp5BYMyDp5MXY7MfKXR8Z28G4LtK9YqJJpOHPPbXhFF80gZbe4mtKhaxR1U
QDYGgqdPU0uYgXBZfktyps33U3ERufooD9bh1JRPk/DpXnMno0PHJ1j7fiDoNBiN5m+oe
A1CmAfSevtPfRW17hJBy5LA1yXbWuybYwtSO5FQbydm87q+TD6qdZFn2Dkdrsm9uhZ9Dr
F0KJpAOvrVzl2bEvXw1YlSSXQPpJCiC9T1Xjr41Lz00jMl9pwUXNWjnNSlImihFo5rsfm
peuAXa5TC3Ysd/5aR3vc1tlqbp4bz6R3GeVb1eNc7suBXbQ+clNzvjl/vsXvlM/ZsjiA0
doGbhkhPAkUZ2tz6/nW5ZSrlsKA5f00uNwl48pEiPZwrvL6R+I9UGpJQuqSLms=
- av4xuaxs8uuQ+8IDIQzwqA86eRb/+d0TDj3/tjeNsKFFoTJD0aV7W8I+aqCIgUss3iGKn
tSu7F9OhJ8y+YduGp8VYck8b3q7Ahp/Kf5FtAupVQKINvldRbS9iPg/ahXlPaI7xFo50z
UhlKqCXKmOI9FAEnTodvoQoJGbrhbVgpWJNnaMhl6U+cERL7XYcpoH4kzQxyf8tQ9zOCL
YGoCQNLm4MO9CO7Yj+BJ1yH0ygUMI1w6BzDUOtH+CwEaJY/++63U/gpV6uenJoV9erKh5
8AY55wBuKUWR4dyfkDlnMN9oVK51r67HRnwhO7GCh3lsUuMGJX62TufmRSYd0V1xNaK62
iD5kxwNHxh8qxFYRFuINia1k5Bx/Oj3QlRCTR/W27tE8djfKsUB3IljCF/3uQt02zoA3o
oDAM568NsSvNJOS0aEOqTqrOgaJFIu1sstRLJsEZtn7DLwD/oFVAfvs/fS0E+0sIFmdEe
NHWsCZzII0sS+U20XDA9hDLnxPJobRt1k4ASID7M1SCTjhHn8pNC7ydTiYa7VEVk76QY7
JldIV6DgjvelwJKR0jPT8cHYd+sC6o8ZZt/XDRG5CXVqn+0URWyBypQmoMlaGdVNvCe42
3wUYmFK2xzThOEbS4odhUFcc8z9BYavkOEZZDdWtwJkwK9ODhlDmSls7rRc1t0=
- BIgkSqJGLe+QXgOKAssWB1sF6jNDc/LL/YKWApbg9/O9vzIl+yPvv8XIHTynbfSCGOS8B
VovhGuct3Q9PfF/fSfEz15NSibTNKQX92lrG85ZW0HarZNWU5SvpeQD/JbZGMfdQTE0o7
F7xSU5VeYygkHU1JUBcGcgd/hHAqmaHqaNWZzUwwb9WOCBf/dkRLW8qTTp1so+5o845Hd
tcSa/ErxPxatwiCc4zzZSFBQx/iS2pPC16UFQMxdAL085f5BETwhmZAyL8HQqbMHUUW1O
qSw5kswg4lANpZzv7nim4I52Fy8tqKteDeBldSI6JvGt+5DU6vZBs6Pp/Iz1S4uaV3Nzr
9KTCUD454kfRkGCalXPquCPaIctCh/fhBQZtEJRN0dRajNbJq31ynqmvzzUELHVXbndc/
F8ImGYN5a2v34bCv4WOR9OhZboMht2O+KmOGFB6G3IUBZ2PTQWWEtVYOkwDnEhcPkinU4
GMv0A70qS/4hZYtMt6YnStG/cjFVgnC8Ir21XQVVCvwOxTT7pQa7aVBFi3PdjbRoz6LTy
Knc38m0QntKdla1ft5abdZZVjE7UuAXSyRKBFlSgk50YA99Pq2+KU8MWp7TpZtEvs63Yg
Lyxt2cBRBSa9VRXY95tp85NMvguajbL+ydb80c91uTqWVW9uec5joeH/17akic=
- XNpS95W1dryylM3JIia5ibazTIayLMB//GkhD3Pzy/k+zgZqcDW12XL5nA33v1J7aZd2X
ErAuHyJGoWCPul4l4uQ7HnsqMZzOdOq/XEcK8ThGsw9Oax9morX2M/Pgqu4f5VPXIObiV
xmBRFa2NrmhNZcOgVuyOENWGWzcQ6E8tUxC9/NC/uADZlVfIdCVqHsUcRGeQ4vIFiaefY
R0ey6jbm6nhwPWpBpHBTvYhGWbCQDM9+qi9G9wG+uwy9TEv7Iby3jxm1GsT9sCBzXbuao
KY3xX9ztnzGPodAhrDpjrWoKWpmAeHLORxHhi4jUjXKc+Lvhe3KMKAth+tiEm5v5xnF8b
48GzjbpUJZ3dAGJrAdxmtS7Gu+5uheUCFEs7XNlxlVLXe1Lpl0GQZEq2Ykbim9+JVyPuW
IQhAQ75vtrmy2Bg72hYW1CPq/olrDD54U2xBYISi8UdzfTNH8e6V3Y6vReMxgejOC+XDV
B7AWMvySq9d3lZ9cz3TGI2xApQBCbKMZ57uQhtcAzzbcUr8RDzGwUX/XJSLKqaH8SC6W5
erdMXpL65wj6DQ8Sy2NTbO7A+3PvbU7PVNN39fk9uA3JwJWQtj3MGTRZBI3RdGKNUAwOr
u39aRqOu6quypP+TmSMYvD8KRQgHYN6szSmmQebeJR+AuwobqB8FeW6U7rvFso=
- ShgY47ifcEVBlIiHQUL321ubqunxDvu+GsnfcE7hMVgUvObXozwnghk/P8IpjHMILPkIX
1KULsTJ9fNCFLvvvDzlQHZZyeLlGycog0RSj9rUyuTrHSpAf+tn5ADD4v1KnwENtNSygZ
60Q8lJzP4Y3+kz72onGErZQaaNpE+8FZBqbF9tJhh9CYlKecjTwde+x8l1kV7eHYW7iJn
+mY26er/y4jt9cs7KZJ8bkm3wm8G9PzHtIKXDSxZyOJVaua93wQLp5cmQ/3b50qFKth/P
8kFFbmmszhi0dottYgt4e0jHbaAiuu/8CAurlugk+Xscz/rzDYUTQ4LP+FMR1dLdU3ODE
molot1734FYMxSO9FzopEVI2IZczrvEZ59J73U8l8pSiDI2lHoNX4s9nVsBY0f+0b198z
mL3mCKO+Ur6uFjpBuHJN215d/WbQGk4E2LQddUP42sSxe8PD2jNXjQr1SzgDwb429j/JW
ir/4GWigpJNil0jwunDWgr1o9GWn1ZARQfKDPxsaR0ClNUVCLUhn85wrakIZ3S3mJ9zhi
VuSvtIO66wg/517JFmtz3CErFLuUwCcMskLp6miYzfRN3VmGpH1ozw9sVGTlRkhueO0t/
WAJmMU7ywI4HST7MFZWGP3ByyrUELsoXVnnRFHgMcuAcCrr3OMHssLZj/tAKpA=
- EZg99may0cG2UrIWeq+6OD3ptVdrUcmZFufNFjMqdqv//UkSYMj2sZ4xNXbjTgTJYnj+e
AD+wzVcUKDVS6ZBa1sGyPltrpTRHWlVSc1l1j06HQ9aq1gPzANGJAHsG06F3Op48NZWm9
Dl8kE+zsRhMIUeYxYJFpgLNM3z634AvucjWX8Dkb+K1LlDvXk2oBF2CalzMdWbMRbFzK0
GERvPDyMjbV+5L00gPFTtBb3S9EIPkwA4EaAxiqe6P0aZ++8cIctbSGtEauxXdd80WlCO
I/XgwLfxjutltohGkZo5S1bgbya3JAiELO7BqpS8vX3+6FFpgeKsG/9fehwL9J8YElwNg
uWzqyh25oJaM0mNwBL1Edn6JNi1vAiPSrL+XAPqqffWdRkulbAANmIbBz/lGZechNxmmT
2HfvoUHdauCKBYxIM9zi0ZIsXBAdD0Az6tvyK+0lQPt/IfAab32t+4ewGc9dxxfx51eZq
SaXkoMzDeKH3dcPjychgYBJlDAE6CmrLlZEFifGvYTS1LNW6/NBUSzsMQ1lwYWVedkCCj
FqkImtPcfq7N5VXYEhaskAi2C2A6/oVdmhon5mAn/jUamZUiq8wVGSdjy6+PmM+atuzh5
Xav1LrGZsQmZVO/+1x756gykAy2qTnh+XQuYVpNwfqzQvl63LJ1XK4catZYn98=
- oggyA9fKFwQCI0ZlHWLZzuV1eqO7kkqz4vZGTFjGHZNbaNjj2QO09BaiURbT4UsSI0QAd
a6mHDEUvalNah7XujpOtOtMW2Ll1UVwcoIsBPVzvB5JcQSl0GbYFYsxJC8WprMWY02bOu
9wBxu5HO59/1jtULHTqd3mpRj8YXNXsZcz2cfblOIi4JJD+hlgZ6xXxNg9i2TjESoOvla
UrG82E5kKaC0s3+/l+DJN9I/wnnxSUEUophTwkI5Dkl2uIOjTbxjcNyhtrezgKbuUm/6/
X/8uq3iRJY3eMeaukZtEf6SXnzCkFB2QFQQUJBPqFEeXdEfjGzRGhoxrlmnAGABsJXIB6
pIxL/Ng/Lw8dplQWUi80jDu/RsWWYS9tSlAQX8tiszUws8Mw+82d+YcEBoyo8TpnUF+Bs
xttfJB0SAl87eiYS4fvTk9rhKdzROgDXNIYpN2FvNo7QEBPC3fj8Rm+TAZnYyNyD48goe
gLcNQNY253NTX1vo6t7Lj3sP64EZwaHWoPOTDWIZoAgmVIfA6RCMkRmC5BQlGZ6/lxoSj
Co0Pz/4Xum2SheIrlI7SgTLBnghQeqTz4VBeNdOZmd9N9cxLPm6yvpHzs1MebRR7He8wJ
qgSctWewyunRRKt+NY/R4JJpYwNZ4hrUkY3Upk6+H8EqUWPJaSmtzA7dTa+MFs=
- jZU1QYjchqDNQWgt6dYgia/3FbiZPazLTq6h6xZHNJBnoJ9kbWBGvmZN41rgnkpsNjeLA
YlL8fZ7TtBGtztqm6bYUlheg9pW0d+CRu9kPt+NTt2kYvjkAmF2tqbnUT4ONmae6bWztE
v/T54WSq7HRC/AlKCjUZ4R+p3iHp8qZD4cEOhXKb1CtlapIjNAUXY0ScWdO1ugoPri0tP
pNJA7C6AfS2xLYMh+CRVgvhsvSYsRwg8fz1gDq3Rl3ffIN1LzEAXO3qpT6saqh7MTR4on
AKm8a//zsIFHj1wPFqcgbtIMheRy05FSYSxRp2bkfhv3b4dN0tbYI+dAn+JUbk77XaR1A
S0c1KLGFoTrjxLPRaCSStiCr3/c/wP8OliXQB9HQP5myCp/gBNfgGlUa5tGqFI13SVLNK
rVTd3MPOjyv8iKEMF5l+0TnHafjIOajtdXnwIPSWY19ZDA/Wuot4y8PlfPbTqRidWUmg+
ZF/meNniuSEwJZ06wKiMH6lrzPugfmO0ntOLkKxMovllRIWDt2uGZBRXpoMl0XiUgdlts
KLvB9vyebOozAEj+BcB2zGDGbnv/6m4Qznw+4/oPuCvfy4bQMafF7PVXeXSlJarDJM5Qo
0f6HoIHJwucJntUYLu+RjbmwWJemu1SME1kc96hM7kXMUkrQPO33bxzUyECHTA=
- hfJVnPf0Eka/fd2QlU5FkfJe9Ox69RJf/hzyu0GMaVB5o7/ZzLjFTYSRsVitFFA2PsYHS
66/8f+SK7ctRMADOOifMhb76s8xtmpiNB1RsWv+4du9G1Su8xdt44Y7HPeH5SSbBXKmQ8
pHODdH1svWeiyRcXgZvLaECZkwe3tbih4nR7xupKzdT4Rh1gQYQ7pJOoVPjYflckMx4c8
PTOzqIUSSEp+smIQ4A+qUqABJVDZPv8zg85rI/40vPz3e30tl3rJjlP5AwFlL47dUfZFy
QB9aUrawCWWcvOjCgymM6s99dph2WxakN8Xve1V7t2P3BvnbH5n+AHEn13mlGkcbwpNQo
R1H7VkAL0MY5sbigELl5aJ3TyLiXAiXysEgH1g3GYsQdci6/j7oL+bPPbePQL2pbcxw5U
xwAr+ikMWrjbTjcz7NW9BIYuMak+bzzHgXt9VyfkNBPwqX0FXJ1Zt5zDjqbzHWP49KZgs
gTWCppYCP6iQz0NjlBkt42mBWIBylAvVYN4fDxHQAsp4wYtFJz6QQAbNwf3DpbSQSNIqC
rb/J/wjS+GKRQNGlpRD9m9UbCfuJUagXdvZNriEN1KCL09z5iCREiCMDzfrE1Iuev5qpT
GihIgC/RbRMsfwf2hgAVk7Pqp5dg7EYs4C+Yp58XJkyF6MC0rkkfHnN+UfSDFA=
- pIuwYVcufJQIfTfYEawfVLYrFJozehDq6yy2kZ04rRPkhAWEc6oeWuKi88xHjQtqdJwan
QDOS+V5WknXWz5sGncUpYwTgnoWhP0rK8Af+lseIQc+s0oxWwmkC34w2pUoy1pQGAe45Z
/jdB+gEP4+r/sN2j9s0YKAbsgDgeiM7OAD2KYVQnLo5QOHTBC16tHHkgJP6u2M6T+ripr
SbhFjJeb9Hc18Pa1Y91qHBKuJi3u8bvYz9j8qrbmcioOjzgiNGAqrZwvJN4Y17VzM7OKM
nk5mVSzX766/EdPg+VNe/hSDFmE7JzSVdeHHTA/mSYS5uLHCDmPqF7sbalJ0N/h0oGV8g
fat+TKeBtxoLzOxtNVFHcUA2LjkpuZQlIg66qp3fiZlDJhwB8iDVRA6+5ZmfQ8/GyhlIX
bFd1WrW31zlRBaE+Jhk7AM/+wa9r3EYTpFPhYqobo63XRpZZZUArTZrByoaXdckB+4UMq
wXDK8pwhZY3kOGOn40emXp2BaQ9e5TdbqfS5LWxSd40LTJE4XJZdCpwlnuXB39RYMjDMT
emtQzkqaVO4c2VgFU/T/A3jcckI0/HPHlpJpnQ3p76weQQ1GkJ6GCrnprXz+r3AQtL5pD
bjCDO8kJG5CljhHTR5Y31I0GFhrgLvsGbKvVeezsyZaIQyZvUHaPla3glKHQhE=

View File

@ -28,6 +28,8 @@
ovn-central-k8s: 24.03/edge
ovn-relay-k8s: 24.03/edge
cinder-k8s: 2024.1/edge
cinder-volume: 2024.1/edge
cinder-volume-ceph: 2024.1/edge
cinder-ceph-k8s: 2024.1/edge
horizon-k8s: 2024.1/edge
heat-k8s: 2024.1/edge