Remove cookiecutter code

* Remove cookiecutter code
* Remove references to charm-ops-sunbeam
* Remove .zuul.yaml, fetch-libs in ops-sunbeam

Change-Id: Ie147f490bcaf81452a07003ec166fe15fa2e56df
Hemanth Nakkina 2025-03-05 16:44:44 +05:30
parent 01f83d6f81
commit ba98902b6c
GPG Key ID: 2E4970F7B143168E
48 changed files with 15 additions and 851 deletions

@@ -51,4 +51,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -51,4 +51,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -51,4 +51,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -49,4 +49,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -46,6 +46,6 @@ To deploy the local test instance:
 <!-- LINKS -->
 [designate-bind-k8s-libs-docs]: https://charmhub.io/sunbeam-designate-bind-operator/libraries/identity_service
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -50,5 +50,5 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -51,4 +51,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -49,4 +49,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -57,4 +57,4 @@ To deploy the local test instance:
 <!-- LINKS -->
 [keystone-k8s-libs-docs]: https://charmhub.io/sunbeam-keystone-operator/libraries/identity_service
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -57,4 +57,4 @@ To deploy the local test instance:
 <!-- LINKS -->
 [keystone-k8s-libs-docs]: https://charmhub.io/sunbeam-keystone-operator/libraries/identity_service
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -52,4 +52,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -54,4 +54,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -52,4 +52,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -49,4 +49,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -50,4 +50,4 @@ To deploy the local test instance:
 <!-- LINKS -->
-[sunbeam-docs]: https://opendev.org/openstack/charm-ops-sunbeam/src/branch/main/README.rst
+[sunbeam-docs]: https://opendev.org/openstack/sunbeam-charms/src/branch/main/README.md

@@ -1,5 +0,0 @@
[gerrit]
host=review.opendev.org
port=29418
project=openstack/charm-ops-sunbeam.git
defaultbranch=main

@@ -1,4 +0,0 @@
- project:
    templates:
      - openstack-python3-charm-jobs
      - openstack-cover-jobs

@@ -6,7 +6,6 @@ Tutorials
 ---------
 * `Deploying Sunbeam Charms <doc/deploy-sunbeam-charms.rst>`_
-* `Writing an OpenStack API charm with Sunbeam <doc/writing-OS-API-charm.rst>`_
 How-Tos
 -------

@@ -1 +0,0 @@
cookiecutter

@@ -1,157 +0,0 @@
=============
New API Charm
=============

The example below will walk through the creation of a basic API charm for the
OpenStack `Ironic <https://wiki.openstack.org/wiki/Ironic>`__ service designed
to run on Kubernetes.

Create the skeleton charm
=========================

Prerequisite
~~~~~~~~~~~~

Build a base generic charm with the `charmcraft` tool.

.. code:: bash

   mkdir charm-ironic-k8s
   cd charm-ironic-k8s
   charmcraft init --author $USER --name ironic-k8s

Add the ASO common files to the new charm. The script will ask a few basic
questions:

.. code:: bash

   git clone https://opendev.org/openstack/charm-ops-sunbeam
   cd charm-ops-sunbeam
   ./sunbeam-charm-init.sh ~/charm-ironic-k8s
   This tool is designed to be used after 'charmcraft init' was initially run
   service_name [ironic]: ironic
   charm_name [ironic-k8s]: ironic-k8s
   ingress_port []: 6385
   db_sync_command []: ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema

Fetch the interface libs corresponding to the charm's `requires` interfaces:

.. code:: bash

   cd charm-ironic-k8s
   charmcraft login --export ~/secrets.auth
   export CHARMCRAFT_AUTH=$(cat ~/secrets.auth)
   charmcraft fetch-lib charms.nginx_ingress_integrator.v0.ingress
   charmcraft fetch-lib charms.data_platform_libs.v0.data_interfaces
   charmcraft fetch-lib charms.keystone_k8s.v1.identity_service
   charmcraft fetch-lib charms.rabbitmq_k8s.v0.rabbitmq
   charmcraft fetch-lib charms.traefik_k8s.v1.ingress

Templates
=========

Much of the service configuration is covered by common templates which were
copied into the charm in the previous step. The only additional template for
this charm is for `ironic.conf`. Add the following into
`./src/templates/ironic.conf.j2`:

.. code::

   [DEFAULT]
   debug = {{ options.debug }}
   auth_strategy=keystone
   transport_url = {{ amqp.transport_url }}

   [keystone_authtoken]
   {% include "parts/identity-data" %}

   [database]
   {% include "parts/database-connection" %}

   [neutron]
   {% include "parts/identity-data" %}

   [glance]
   {% include "parts/identity-data" %}

   [cinder]
   {% include "parts/identity-data" %}

   [service_catalog]
   {% include "parts/identity-data" %}

Make charm deployable
=====================

The next step is to pack the charm into a deployable format.

.. code:: bash

   cd charm-ironic-k8s
   charmcraft pack

Deploy Charm
============

The charm can now be deployed. The Kolla project has images that can be used
to run the service. Juju can pull the image directly from dockerhub.

.. code:: bash

   juju deploy ./ironic-k8s_ubuntu-20.04-amd64.charm --resource ironic-api-image=kolla/ubuntu-binary-ironic-api:yoga ironic
   juju relate ironic mysql
   juju relate ironic keystone
   juju relate ironic rabbitmq
   juju relate ironic:ingress-internal traefik:ingress
   juju relate ironic:ingress-public traefik:ingress

Test Service
============

Check that juju status shows the charm is active and that no error messages
are present. Then check that the ironic API service is responding.

.. code:: bash

   $ juju status ironic
   Model  Controller  Cloud/Region        Version  SLA          Timestamp
   ks     micro       microk8s/localhost  2.9.22   unsupported  13:31:41Z

   App     Version  Status  Scale  Charm       Store  Channel  Rev  OS          Address        Message
   ironic           active      1  ironic-k8s  local             0  kubernetes  10.152.183.73

   Unit       Workload  Agent  Address       Ports  Message
   ironic/0*  active    idle   10.1.155.106

   $ curl http://10.1.155.106:6385 | jq '.'
   {
     "name": "OpenStack Ironic API",
     "description": "Ironic is an OpenStack project which aims to provision baremetal machines.",
     "default_version": {
       "id": "v1",
       "links": [
         {
           "href": "http://10.1.155.106:6385/v1/",
           "rel": "self"
         }
       ],
       "status": "CURRENT",
       "min_version": "1.1",
       "version": "1.72"
     },
     "versions": [
       {
         "id": "v1",
         "links": [
           {
             "href": "http://10.1.155.106:6385/v1/",
             "rel": "self"
           }
         ],
         "status": "CURRENT",
         "min_version": "1.1",
         "version": "1.72"
       }
     ]
   }

@@ -1,18 +0,0 @@
#!/bin/bash
# NOTE: this only fetches libs for use in unit tests here.
# Charms that depend on this library should fetch these libs themselves.
echo "WARNING: Charm interface libs are excluded from ASO python package."
charmcraft fetch-lib charms.nginx_ingress_integrator.v0.ingress
charmcraft fetch-lib charms.data_platform_libs.v0.data_interfaces
charmcraft fetch-lib charms.keystone_k8s.v1.identity_service
charmcraft fetch-lib charms.keystone_k8s.v0.identity_credentials
charmcraft fetch-lib charms.keystone_k8s.v0.identity_resource
charmcraft fetch-lib charms.rabbitmq_k8s.v0.rabbitmq
charmcraft fetch-lib charms.ovn_central_k8s.v0.ovsdb
charmcraft fetch-lib charms.traefik_k8s.v2.ingress
charmcraft fetch-lib charms.ceilometer_k8s.v0.ceilometer_service
charmcraft fetch-lib charms.cinder_ceph_k8s.v0.ceph_access
echo "Copying libs to to unit_test dir"
rsync --recursive --delete lib/ tests/lib/

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=./unit_tests
top_dir=./

@@ -1,8 +0,0 @@
debug:
  default: False
  description: Enable debug logging.
  type: boolean
region:
  default: RegionOne
  description: Name of the OpenStack region
  type: string

@@ -1,230 +0,0 @@
ceph-osd-replication-count:
  default: 3
  type: int
  description: |
    This value dictates the number of replicas ceph must make of any
    object it stores within the cinder rbd pool. Of course, this only
    applies if using Ceph as a backend store. Note that once the cinder
    rbd pool has been created, changing this value will not have any
    effect (although it can be changed in ceph by manually configuring
    your ceph cluster).
ceph-pool-weight:
  type: int
  default: 40
  description: |
    Defines a relative weighting of the pool as a percentage of the total
    amount of data in the Ceph cluster. This effectively weights the number
    of placement groups for the pool created to be appropriately portioned
    to the amount of data expected. For example, if the ephemeral volumes
    for the OpenStack compute instances are expected to take up 20% of the
    overall configuration then this value would be specified as 20. Note -
    it is important to choose an appropriate value for the pool weight as
    this directly affects the number of placement groups which will be
    created for the pool. The number of placement groups for a pool can
    only be increased, never decreased - so it is important to identify the
    percent of data that will likely reside in the pool.
volume-backend-name:
  default:
  type: string
  description: |
    Volume backend name for the backend. The default value is the
    application name in the Juju model, e.g. "cinder-ceph-mybackend"
    if it's deployed as `juju deploy cinder-ceph cinder-ceph-mybackend`.
    A common backend name can be set to multiple backends with the
    same characters so that those can be treated as a single virtual
    backend associated with a single volume type.
backend-availability-zone:
  default:
  type: string
  description: |
    Availability zone name of this volume backend. If set, it will
    override the default availability zone. Supported for Pike or
    newer releases.
restrict-ceph-pools:
  default: False
  type: boolean
  description: |
    Optionally restrict Ceph key permissions to access pools as required.
rbd-pool-name:
  default:
  type: string
  description: |
    Optionally specify an existing rbd pool that cinder should map to.
rbd-flatten-volume-from-snapshot:
  default:
  type: boolean
  default: False
  description: |
    Flatten volumes created from snapshots to remove dependency from
    volume to snapshot. Supported on Queens+
rbd-mirroring-mode:
  type: string
  default: pool
  description: |
    The RBD mirroring mode used for the Ceph pool. This option is only used
    with 'replicated' pool type, as it's not supported for 'erasure-coded'
    pool type - valid values: 'pool' and 'image'
pool-type:
  type: string
  default: replicated
  description: |
    Ceph pool type to use for storage - valid values include `replicated`
    and `erasure-coded`.
ec-profile-name:
  type: string
  default:
  description: |
    Name for the EC profile to be created for the EC pools. If not defined
    a profile name will be generated based on the name of the pool used by
    the application.
ec-rbd-metadata-pool:
  type: string
  default:
  description: |
    Name of the metadata pool to be created (for RBD use-cases). If not
    defined a metadata pool name will be generated based on the name of
    the data pool used by the application. The metadata pool is always
    replicated, not erasure coded.
ec-profile-k:
  type: int
  default: 1
  description: |
    Number of data chunks that will be used for EC data pool. K+M factors
    should never be greater than the number of available zones (or hosts)
    for balancing.
ec-profile-m:
  type: int
  default: 2
  description: |
    Number of coding chunks that will be used for EC data pool. K+M factors
    should never be greater than the number of available zones (or hosts)
    for balancing.
ec-profile-locality:
  type: int
  default:
  description: |
    (lrc plugin - l) Group the coding and data chunks into sets of size l.
    For instance, for k=4 and m=2, when l=3 two groups of three are created.
    Each set can be recovered without reading chunks from another set. Note
    that using the lrc plugin does incur more raw storage usage than isa or
    jerasure in order to reduce the cost of recovery operations.
ec-profile-crush-locality:
  type: string
  default:
  description: |
    (lrc plugin) The type of the crush bucket in which each set of chunks
    defined by l will be stored. For instance, if it is set to rack, each
    group of l chunks will be placed in a different rack. It is used to
    create a CRUSH rule step such as step choose rack. If it is not set,
    no such grouping is done.
ec-profile-durability-estimator:
  type: int
  default:
  description: |
    (shec plugin - c) The number of parity chunks each of which includes
    each data chunk in its calculation range. The number is used as a
    durability estimator. For instance, if c=2, 2 OSDs can be down
    without losing data.
ec-profile-helper-chunks:
  type: int
  default:
  description: |
    (clay plugin - d) Number of OSDs requested to send data during
    recovery of a single chunk. d needs to be chosen such that
    k+1 <= d <= k+m-1. Larger the d, the better the savings.
ec-profile-scalar-mds:
  type: string
  default:
  description: |
    (clay plugin) specifies the plugin that is used as a building
    block in the layered construction. It can be one of jerasure,
    isa, shec (defaults to jerasure).
ec-profile-plugin:
  type: string
  default: jerasure
  description: |
    EC plugin to use for this applications pool. The following list of
    plugins acceptable - jerasure, lrc, isa, shec, clay.
ec-profile-technique:
  type: string
  default:
  description: |
    EC profile technique used for this applications pool - will be
    validated based on the plugin configured via ec-profile-plugin.
    Supported techniques are `reed_sol_van`, `reed_sol_r6_op`,
    `cauchy_orig`, `cauchy_good`, `liber8tion` for jerasure,
    `reed_sol_van`, `cauchy` for isa and `single`, `multiple`
    for shec.
ec-profile-device-class:
  type: string
  default:
  description: |
    Device class from CRUSH map to use for placement groups for
    erasure profile - valid values: ssd, hdd or nvme (or leave
    unset to not use a device class).
bluestore-compression-algorithm:
  type: string
  default:
  description: |
    Compressor to use (if any) for pools requested by this charm.
    .
    NOTE: The ceph-osd charm sets a global default for this value (defaults
    to 'lz4' unless configured by the end user) which will be used unless
    specified for individual pools.
bluestore-compression-mode:
  type: string
  default:
  description: |
    Policy for using compression on pools requested by this charm.
    .
    'none' means never use compression.
    'passive' means use compression when clients hint that data is
    compressible.
    'aggressive' means use compression unless clients hint that
    data is not compressible.
    'force' means use compression under all circumstances even if the clients
    hint that the data is not compressible.
bluestore-compression-required-ratio:
  type: float
  default:
  description: |
    The ratio of the size of the data chunk after compression relative to the
    original size must be at least this small in order to store the
    compressed version on pools requested by this charm.
bluestore-compression-min-blob-size:
  type: int
  default:
  description: |
    Chunks smaller than this are never compressed on pools requested by
    this charm.
bluestore-compression-min-blob-size-hdd:
  type: int
  default:
  description: |
    Value of bluestore compression min blob size for rotational media on
    pools requested by this charm.
bluestore-compression-min-blob-size-ssd:
  type: int
  default:
  description: |
    Value of bluestore compression min blob size for solid state media on
    pools requested by this charm.
bluestore-compression-max-blob-size:
  type: int
  default:
  description: |
    Chunks larger than this are broken into smaller blobs sizing bluestore
    compression max blob size before being compressed on pools requested by
    this charm.
bluestore-compression-max-blob-size-hdd:
  type: int
  default:
  description: |
    Value of bluestore compression max blob size for rotational media on
    pools requested by this charm.
bluestore-compression-max-blob-size-ssd:
  type: int
  default:
  description: |
    Value of bluestore compression max blob size for solid state media on
    pools requested by this charm.

@@ -1,77 +0,0 @@
#!/usr/bin/env python3

import shutil
import yaml
import argparse
import tempfile
import os
import glob
from cookiecutter.main import cookiecutter
import subprocess
from datetime import datetime
import sys


def start_msg():
    print("This tool is designed to be used after 'charmcraft init' was initially run")


def cookie(output_dir, extra_context):
    cookiecutter(
        'sunbeam_charm/',
        extra_context=extra_context,
        output_dir=output_dir)


def arg_parser():
    parser = argparse.ArgumentParser(description='Process some integers.')
    parser.add_argument('charm_path', help='path to charm')
    return parser.parse_args()


def read_metadata_file(charm_dir):
    with open(f'{charm_dir}/metadata.yaml', 'r') as f:
        metadata = yaml.load(f, Loader=yaml.FullLoader)
    return metadata


def switch_dir():
    abspath = os.path.abspath(__file__)
    dname = os.path.dirname(abspath)
    os.chdir(dname)


def get_extra_context(charm_dir):
    metadata = read_metadata_file(charm_dir)
    charm_name = metadata['name']
    service_name = charm_name.replace('sunbeam-', '')
    service_name = service_name.replace('-operator', '')
    service_name = service_name.replace('-k8s', '')
    ctxt = {
        'service_name': service_name,
        'charm_name': charm_name}
    # XXX REMOVE
    ctxt['db_sync_command'] = 'ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema'
    ctxt['ingress_port'] = 6385
    return ctxt


def sync_code(src_dir, target_dir):
    cmd = ['rsync', '-r', '-v', f'{src_dir}/', target_dir]
    subprocess.check_call(cmd)


def main() -> int:
    """Render the cookiecutter template and sync the result into the charm."""
    start_msg()
    args = arg_parser()
    charm_dir = args.charm_path
    switch_dir()
    with tempfile.TemporaryDirectory() as tmpdirname:
        extra_context = get_extra_context(charm_dir)
        service_name = extra_context['service_name']
        cookie(
            tmpdirname,
            extra_context)
        src_dir = f"{tmpdirname}/{service_name}"
        shutil.copyfile(
            f'{src_dir}/src/templates/wsgi-template.conf.j2',
            f'{src_dir}/src/templates/wsgi-{service_name}-api.conf')
        sync_code(src_dir, charm_dir)
    return 0


if __name__ == '__main__':
    sys.exit(main())

@@ -1,9 +0,0 @@
{
  "service_name": "",
  "charm_name": "",
  "ingress_port": "",
  "db_sync_command": "",
  "_copy_without_render": [
    "src/templates"
  ]
}

@@ -1,11 +0,0 @@
venv/
build/
*.charm
.tox/
.coverage
__pycache__/
*.py[cod]
.idea
.vscode/
*.swp
.stestr/

@@ -1,5 +0,0 @@
[gerrit]
host=review.opendev.org
port=29418
project=openstack/charm-{{ cookiecutter.service_name }}-k8s.git
defaultbranch=main

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=./tests/unit
top_dir=./tests

@@ -1,11 +0,0 @@
- project:
    templates:
      - openstack-python3-charm-jobs
      - openstack-cover-jobs
      - microk8s-func-test
    vars:
      charm_build_name: {{ cookiecutter.service_name }}-k8s
      juju_channel: 3.2/stable
      juju_classic_mode: false
      microk8s_channel: 1.26-strict/stable
      microk8s_classic_mode: false

@@ -1,2 +0,0 @@
# NOTE: no actions yet!
{ }

@@ -1,31 +0,0 @@
type: "charm"
bases:
  - build-on:
      - name: "ubuntu"
        channel: "22.04"
    run-on:
      - name: "ubuntu"
        channel: "22.04"
parts:
  update-certificates:
    plugin: nil
    override-build: |
      apt update
      apt install -y ca-certificates
      update-ca-certificates
  charm:
    after: [update-certificates]
    build-packages:
      - git
      - libffi-dev
      - libssl-dev
      - rustc
      - cargo
      - pkg-config
    charm-binary-python-packages:
      - cryptography
      - jsonschema
      - pydantic<2.0
      - jinja2
      - git+https://opendev.org/openstack/charm-ops-sunbeam#egg=ops_sunbeam

@@ -1,9 +0,0 @@
options:
  debug:
    default: False
    description: Enable debug logging.
    type: boolean
  region:
    default: RegionOne
    description: Name of the OpenStack region
    type: string

@@ -1,52 +0,0 @@
name: {{ cookiecutter.charm_name }}
summary: OpenStack {{ cookiecutter.service_name }} service
maintainer: OpenStack Charmers <openstack-charmers@lists.ubuntu.com>
description: |
  OpenStack {{ cookiecutter.service_name }} provides an HTTP service for managing, selecting,
  and claiming providers of classes of inventory representing available
  resources in a cloud.
version: 3
bases:
  - name: ubuntu
    channel: 22.04/stable
assumes:
  - k8s-api
  - juju >= 3.2
tags:
  - openstack
source: https://opendev.org/openstack/charm-{{ cookiecutter.service_name }}-k8s
issues: https://bugs.launchpad.net/charm-{{ cookiecutter.service_name }}-k8s
containers:
  {{ cookiecutter.service_name }}-api:
    resource: {{ cookiecutter.service_name }}-api-image
resources:
  {{ cookiecutter.service_name }}-api-image:
    type: oci-image
    description: OCI image for OpenStack {{ cookiecutter.service_name }}
requires:
  database:
    interface: mysql_client
    limit: 1
  identity-service:
    interface: keystone
  ingress-internal:
    interface: ingress
    optional: true
    limit: 1
  ingress-public:
    interface: ingress
    limit: 1
  amqp:
    interface: rabbitmq
provides:
  {{ cookiecutter.service_name }}:
    interface: {{ cookiecutter.service_name }}
peers:
  peers:
    interface: {{ cookiecutter.service_name }}-peer

@@ -1,9 +0,0 @@
ops
jinja2
git+https://github.com/openstack/charm-ops-sunbeam#egg=ops_sunbeam
lightkube
pydantic<2.0
# Uncomment below if charm relates to ceph
# git+https://github.com/openstack/charm-ops-interface-ceph-client#egg=interface_ceph_client
# git+https://github.com/juju/charm-helpers.git#egg=charmhelpers

@@ -1,63 +0,0 @@
#!/usr/bin/env python3
"""{{ cookiecutter.service_name[0]|upper}}{{cookiecutter.service_name[1:] }} Operator Charm.

This charm provide {{ cookiecutter.service_name[0]|upper}}{{cookiecutter.service_name[1:] }} services as part of an OpenStack deployment
"""

import logging

import ops
from ops.framework import StoredState

import ops_sunbeam.charm as sunbeam_charm

logger = logging.getLogger(__name__)


class {{ cookiecutter.service_name[0]|upper}}{{cookiecutter.service_name[1:] }}OperatorCharm(sunbeam_charm.OSBaseOperatorAPICharm):
    """Charm the service."""

    _state = StoredState()
    service_name = "{{ cookiecutter.service_name }}-api"
    wsgi_admin_script = '/usr/bin/{{ cookiecutter.service_name }}-api-wsgi'
    wsgi_public_script = '/usr/bin/{{ cookiecutter.service_name }}-api-wsgi'

    db_sync_cmds = [
        {{ cookiecutter.db_sync_command.split() }}
    ]

    @property
    def service_conf(self) -> str:
        """Service default configuration file."""
        return "/etc/{{ cookiecutter.service_name }}/{{ cookiecutter.service_name }}.conf"

    @property
    def service_user(self) -> str:
        """Service user file and directory ownership."""
        return '{{ cookiecutter.service_name }}'

    @property
    def service_group(self) -> str:
        """Service group file and directory ownership."""
        return '{{ cookiecutter.service_name }}'

    @property
    def service_endpoints(self):
        """Return service endpoints for the service."""
        return [
            {
                'service_name': '{{ cookiecutter.service_name }}',
                'type': '{{ cookiecutter.service_name }}',
                'description': "OpenStack {{ cookiecutter.service_name[0]|upper}}{{cookiecutter.service_name[1:] }} API",
                'internal_url': f'{self.internal_url}',
                'public_url': f'{self.public_url}',
                'admin_url': f'{self.admin_url}'}]

    @property
    def default_public_ingress_port(self):
        """Ingress Port for API service."""
        return {{ cookiecutter.ingress_port }}


if __name__ == "__main__":  # pragma: nocover
    ops.main({{ cookiecutter.service_name[0]|upper}}{{cookiecutter.service_name[1:] }}OperatorCharm)
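
When the cookiecutter variables above are filled in with the ironic values
used in the removed "New API Charm" guide (service name ironic, ingress port
6385, ironic-dbsync as the db sync command), the template expands to roughly
the following. This is an abridged, illustrative reconstruction, not a file
from the tree; note how `db_sync_command.split()` renders as a list of argv
strings.

.. code:: python

   # Illustrative expansion of the cookiecutter charm.py template for the
   # "ironic" example; only the per-service values are shown.
   import ops
   from ops.framework import StoredState

   import ops_sunbeam.charm as sunbeam_charm


   class IronicOperatorCharm(sunbeam_charm.OSBaseOperatorAPICharm):
       """Charm the service."""

       _state = StoredState()
       service_name = "ironic-api"
       wsgi_admin_script = "/usr/bin/ironic-api-wsgi"
       wsgi_public_script = "/usr/bin/ironic-api-wsgi"

       # "{{ cookiecutter.db_sync_command.split() }}" becomes a list of
       # argv elements for the schema sync command.
       db_sync_cmds = [
           ["ironic-dbsync", "--config-file", "/etc/ironic/ironic.conf",
            "create_schema"]
       ]

       @property
       def default_public_ingress_port(self):
           """Ingress port for the API service."""
           return 6385


   if __name__ == "__main__":  # pragma: nocover
       ops.main(IronicOperatorCharm)
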

@@ -1,22 +0,0 @@
###############################################################################
# [ WARNING ]
# ceph configuration file maintained in aso
# local changes may be overwritten.
###############################################################################
[global]
{% if ceph.auth -%}
auth_supported = {{ ceph.auth }}
mon host = {{ ceph.mon_hosts }}
{% endif -%}
keyring = /etc/ceph/$cluster.$name.keyring
log to syslog = false
err to syslog = false
clog to syslog = false
{% if ceph.rbd_features %}
rbd default features = {{ ceph.rbd_features }}
{% endif %}
[client]
{% if ceph_config.rbd_default_data_pool -%}
rbd default data pool = {{ ceph_config.rbd_default_data_pool }}
{% endif %}

@@ -1,3 +0,0 @@
{% if database.connection -%}
connection = {{ database.connection }}
{% endif -%}

@@ -1,23 +0,0 @@
{% if identity_service.admin_auth_url -%}
auth_url = {{ identity_service.admin_auth_url }}
interface = admin
{% elif identity_service.internal_auth_url -%}
auth_url = {{ identity_service.internal_auth_url }}
interface = internal
{% elif identity_service.internal_host -%}
auth_url = {{ identity_service.internal_protocol }}://{{ identity_service.internal_host }}:{{ identity_service.internal_port }}
interface = internal
{% endif -%}
{% if identity_service.public_auth_url -%}
www_authenticate_uri = {{ identity_service.public_auth_url }}
{% elif identity_service.internal_host -%}
www_authenticate_uri = {{ identity_service.internal_protocol }}://{{ identity_service.internal_host }}:{{ identity_service.internal_port }}
{% endif -%}
auth_type = password
project_domain_name = {{ identity_service.service_domain_name }}
user_domain_name = {{ identity_service.service_domain_name }}
project_name = {{ identity_service.service_project_name }}
username = {{ identity_service.service_user_name }}
password = {{ identity_service.service_password }}
service_token_roles = {{ identity_service.admin_role }}
service_token_roles_required = True

@@ -1,3 +0,0 @@
[database]
{% include "parts/database-connection" %}
connection_recycle_time = 200

@@ -1,10 +0,0 @@
{% if trusted_dashboards %}
[federation]
{% for dashboard_url in trusted_dashboards -%}
trusted_dashboard = {{ dashboard_url }}
{% endfor -%}
{% endif %}
{% for sp in fid_sps -%}
[{{ sp['protocol-name'] }}]
remote_id_attribute = {{ sp['remote-id-attribute'] }}
{% endfor -%}

@@ -1,2 +0,0 @@
[keystone_authtoken]
{% include "parts/identity-data" %}

@@ -1,6 +0,0 @@
{% for section in sections -%}
[{{section}}]
{% for key, value in sections[section].items() -%}
{{ key }} = {{ value }}
{% endfor %}
{%- endfor %}
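
The template above turns a nested `sections` mapping from the charm context
into arbitrary extra INI sections. A standalone sketch of the behaviour,
assuming only Jinja2 and using illustrative section and key names:

.. code:: python

   # Standalone sketch of the generic "sections" template; the section and
   # key names are illustrative only.
   from jinja2 import Template

   SECTIONS_TEMPLATE = (
       "{% for section in sections -%}\n"
       "[{{section}}]\n"
       "{% for key, value in sections[section].items() -%}\n"
       "{{ key }} = {{ value }}\n"
       "{% endfor %}\n"
       "{%- endfor %}\n"
   )

   sections = {
       "oslo_messaging_notifications": {"driver": "messagingv2"},
       "oslo_concurrency": {"lock_path": "/var/lock/ironic"},
   }
   print(Template(SECTIONS_TEMPLATE).render(sections=sections))
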

@@ -1,15 +0,0 @@
{% if enable_signing -%}
[signing]
{% if certfile -%}
certfile = {{ certfile }}
{% endif -%}
{% if keyfile -%}
keyfile = {{ keyfile }}
{% endif -%}
{% if ca_certs -%}
ca_certs = {{ ca_certs }}
{% endif -%}
{% if ca_key -%}
ca_key = {{ ca_key }}
{% endif -%}
{% endif -%}

@@ -1,28 +0,0 @@
Listen {{ wsgi_config.public_port }}
<VirtualHost *:{{ wsgi_config.public_port }}>
    WSGIDaemonProcess {{ wsgi_config.group }} processes=3 threads=1 user={{ wsgi_config.user }} group={{ wsgi_config.group }} \
                      display-name=%{GROUP}
    WSGIProcessGroup {{ wsgi_config.group }}
    {% if ingress_public.ingress_path -%}
    WSGIScriptAlias {{ ingress_public.ingress_path }} {{ wsgi_config.wsgi_public_script }}
    {% endif -%}
    WSGIScriptAlias / {{ wsgi_config.wsgi_public_script }}
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog {{ wsgi_config.error_log }}
    CustomLog {{ wsgi_config.custom_log }} combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

@@ -1,9 +0,0 @@
# This file is managed centrally. If you find the need to modify this as a
# one-off, please don't. Instead, consult #openstack-charms and ask about
# requirements management in charms via bot-control. Thank you.
coverage
mock
flake8
stestr
ops

@@ -1 +0,0 @@
aso_charm/{{cookiecutter.service_name}}/src/templates

@@ -1,5 +0,0 @@
#!/usr/bin/env bash
[ -e .tox/cookie/bin/activate ] || tox -e cookie
source .tox/cookie/bin/activate
shared_code/sunbeam-charm-init.py $@