This completes the "policy-in-code" work for Cinder by removing
the old policy.json handling. It also adds new in-code checks to
the legacy consistency group code for completeness.
Change-Id: I810b6cb6bba2d95cc5bb477d6e2968ac1734c96b
Depends-On: I364e401227fe43e2bacf8a799e10286ee445f835
Implements: bp policy-in-code
When running the NetApp cDOT driver with nas_secure_file_operations=false,
managing a volume always fails if the file permissions are not set
to 0777.
This patch changes the Python call from shutil.move to _execute(),
which is run with elevated permissions.
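A minimal sketch of the change (illustrative only; the method and
variable names are assumptions, not the exact driver code):

def _move_file(self, src, dst):
    # Previously: shutil.move(src, dst), which fails when the backing
    # file is not world-writable (permissions stricter than 0777).
    # Now: run the move through the rootwrap-enabled executor so it
    # succeeds regardless of the file permissions.
    self._execute('mv', src, dst, run_as_root=True)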
Change-Id: I41484ced6269d0d4b7553e84c734ef22ab46fcd9
Closes-bug: #1691771
This patch adds documentation and a sample
file for the default policy-in-code feature.
Change-Id: I597971a29ec61a1bf8c991b2715ec7644b2e2692
Partial-Implements: blueprint policy-in-code
This patch adds policy in code support for volume
and volume type resources.
Change-Id: I47d11a2f6423a76ca053abf075791ed70ee84b07
Partial-Implements: blueprint policy-in-code
This patch adds policy in code support for the capabilities,
hosts, services, and limits resources, and depends on the quota patch [1].
[1]: https://review.openstack.org/#/c/508091/
Change-Id: Ib2bac2d28d950c0d8b734a54e300dd4185d98ca9
Partial-Implements: blueprint policy-in-code
This patch adds policy in code support for the qos, quota, and
quota class resources, and depends on the group and group
snapshot patch [1].
[1]: https://review.openstack.org/#/c/507812/
Change-Id: Idf27d5fd09365374330ad0d4c0448f68f3cc03e8
Partial-Implements: blueprint policy-in-code
This patch adds policy in code support for the group and group
snapshot resources, and depends on the backup patch [1].
[1]: https://review.openstack.org/#/c/507015/
Change-Id: If95a8aaa70614902a06420d1afa487827f8a3f03
Partial-Implements: blueprint policy-in-code
This patch adds policy in code support for backup
resources and depends on the basic patch [1].
[1]: https://review.openstack.org/#/c/506976/
Change-Id: I9a79b5ececc587e80129cc980930e168e805b139
Partial-Implements: blueprint policy-in-code
This patch adds policy in code support for snapshot
resources and depends on the basic patch [1].
[1]: https://review.openstack.org/#/c/506976/
Change-Id: I8e1b544f510c1a0af30a5a0b672578226c9fd315
Partial-Implements: blueprint policy-in-code
This patch adds policy in code support for message, worker
and cluster resources and depends on the basic patch [1].
[1]: https://review.openstack.org/#/c/506976/
Change-Id: I04c0b79175c69d25ca6fcb50ec604123f3f09933
Partial-Implements: blueprint policy-in-code
This is the basic patch, which consists of the framework
code for the default policy-in-code feature as well as
support for the attachment resource.
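As an illustration only (the rule name and operation are assumptions,
not the exact rules added here), in-code policy defaults are
registered with oslo.policy roughly like this:

from oslo_policy import policy

attachment_policies = [
    policy.DocumentedRuleDefault(
        name="volume:attachment_create",   # hypothetical rule name
        check_str="",                      # default: any authenticated user
        description="Create attachment.",
        operations=[{'path': '/attachments', 'method': 'POST'}]),
]
# The list is then registered with the policy enforcer so policy.json
# entries become optional overrides rather than required configuration.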
Change-Id: Ie3ff068e61ea8e0e8fff78deb732e183e036a10c
Partial-Implements: blueprint policy-in-code
Users who want to create volumes from snapshots in a specified
availability_zone first need to know which snapshots are available
in that zone, so they want to filter the snapshot list by
"availability_zone".
This patch adds the availability_zone filter to the snapshot list API.
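For example (illustrative request; the query parameter follows the
filtering syntax of the list API):
GET /v3/{project_id}/snapshots?availability_zone=nova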
Change-Id: I8953eca5f535c1399dc882c4a232fbeef9bb2959
The v1 API has been deprecated for many releases now. We have not
been able to remove it due to SDKs and tooling being slow to
update. This is the latest attempt to see if it has been long
enough.
Change-Id: I03bf2db5bd7e2fdfb4f6032758ccaf2b348a82ba
After introducing the generalized filter feature, some filters that
were previously supported by default are no longer supported.
This patch updates resource_filter.json so that those filters
remain backward compatible.
Closes-bug: #1708060
Change-Id: Idf3c4c5f03af81918369c15d05f856cc6eff7596
Change VolumeTypeExtraSpecsController to pass the action
to authorize() for create, delete, index, show, and update.
Change the policy files to include rules for
types_extra_specs create, delete, index, show, and update.
Change-Id: If1e2288ba5b3cdae60c26d330342a994583a55b9
Closes-Bug: #1703933
This change adds the ability to extend an 'in-use' volume.
Once the volume size is extended, Nova is informed of the size change
through the external-event extension so the virt driver can perform
the appropriate actions for the host and guest to detect the new volume size.
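Example usage once the required new API microversion is negotiated
(illustrative; the size is in GiB and the volume stays attached):
cinder extend <in-use volume> <new_size>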
Tempest related patches:
1. https://review.openstack.org/#/c/480746/
2. https://review.openstack.org/#/c/480778/
Depends-On: If10cffd0dc4c9879f6754ce39bee5fae1d04f474
Blueprint: extend-attached-volume
Co-Authored-By: TommyLike <tommylikehu@gmail.com>
APIImpact
Change-Id: I60c8ea9eb0bbcfe41f5f0a30ed8dc67bdcab3ebc
Add granularity to the volume_extension:qos_specs_manage
policy by adding separate actions for Create/Get/Update/Delete,
and add unit tests to cover authorization accordingly.
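For illustration, the granular rules in the policy file look roughly
like this (the exact rule names follow the Create/Get/Update/Delete
split described above):
"volume_extension:qos_specs_manage:create": "rule:admin_api",
"volume_extension:qos_specs_manage:get": "rule:admin_api",
"volume_extension:qos_specs_manage:update": "rule:admin_api",
"volume_extension:qos_specs_manage:delete": "rule:admin_api",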
Change-Id: I1ca996e968a273b989bea0bf3c54b47349ca47fe
Closes-bug: #1623575
This patch implements the spec for reverting a volume to
its latest snapshot.
Related tempest and client patches:
[1] https://review.openstack.org/#/c/463906/
[2] https://review.openstack.org/#/c/464903/
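Illustrative action request (the body shape is an assumption based on
the spec; it requires the microversion introduced by this change):
POST /v3/{project_id}/volumes/{volume_id}/action
{"revert": {"snapshot_id": "<id of the volume's latest snapshot>"}}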
APIImpact
DocImpact
Partial-Implements: blueprint revert-volume-to-snapshot
Change-Id: Ib20d749c2118c350b5fa0361ed1811296d518a17
This patch adds support for replication groups.
It is built upon generic volume groups.
It supports enabling replication, disabling replication,
failing over replication, and listing replication targets.
Client side patch is here:
https://review.openstack.org/#/c/352229/
To test this server side patch using the client side patch:
export OS_VOLUME_API_VERSION=3.38
Make sure the group type has group_replication_enabled or
consistent_group_replication_enabled set in group specs,
and the volume types have replication_enabled set in extra specs
(to be compatible with Cheesecake).
cinder group-type-show my_group_type
+-------------+---------------------------------------+
| Property | Value |
+-------------+---------------------------------------+
| description | None |
| group_specs | group_replication_enabled : <is> True |
| id | 66462b5c-38e5-4a1a-88d6-7a7889ffec55 |
| is_public | True |
| name | my_group_type |
+-------------+---------------------------------------+
cinder type-show my_volume_type
+---------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------+--------------------------------------+
| description | None |
| extra_specs | replication_enabled : <is> True |
| id | 09c1ce01-87d5-489e-82c6-9f084107dc5c |
| is_public | True |
| name | my_volume_type |
| os-volume-type-access:is_public | True |
| qos_specs_id | None |
+---------------------------------+--------------------------------------+
Create a group:
cinder group-create --name my_group my_group_type my_volume_type
cinder group-show my_group
Enable replication group on the primary storage:
cinder group-enable-replication my_group
Expected results: replication_status becomes “enabled”.
Fail over the replication group to the secondary storage.
If secondary-backend-id is not specified, it will go to the
secondary-backend-id configured in cinder.conf:
cinder group-failover-replication my_group
If secondary-backend-id is specified (not “default”), it will go to
the specified backend id:
cinder group-failover-replication my_group
--secondary-backend-id <backend_id>
Expected results: replication_status becomes “failed-over”.
Run failover replication group again to fail the group back to
the primary storage:
cinder group-failover-replication my_group
--secondary-backend-id default
Expected results: replication_status becomes “enabled”.
Disable replication group:
cinder group-disable-replication my_group
Expected results: replication_status becomes “disabled”.
APIImpact
DocImpact
Implements: blueprint replication-cg
Change-Id: I4d488252bd670b3ebabbcc9f5e29e0e4e913765a
This patch fixes force delete for a consistency group
when the backend object (fileset) doesn't exist.
Change-Id: I81c4fc8fd913be11d88dcbcdd38dde88144af8bd
Closes-bug: #1694189
Add a feature that lets administrators get back-end storage pools
filtered by volume type; Cinder will return the pools filtered
by the volume type's extra specs. This depends on the generalized
resource filtering feature.
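Illustrative request (the query parameter name is an assumption based
on the blueprint):
GET /v3/{project_id}/scheduler-stats/get_pools?volume_type=my_volume_type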
APIImpact
Depends-On: ff3d41b15abb2915de87830980147be51e5da971
Implements: blueprint add-volume-type-filter-to-get-pool
Change-Id: If2ae4616340d061db833cbbdffc77f3e976d8254
Because the GPFS path ('/usr/lpp/mmfs/bin/') may not be exported,
the GPFSNFS and GPFSRemote driver configurations are unable to
find the GPFS commands, causing initialization failures.
With this patch, the driver is able to run all GPFS commands
regardless of whether the GPFS path is exported.
Change-Id: Ia44cc1495320ebd1ee6f442f5d94c45f06383dd3
Closes-bug: #1691739
This patch adds generalized filtering support for these list APIs:
1. list volume
2. list backup
3. list snapshot
4. list group
5. list group-snapshot
6. list attachment
7. list message
8. get pools
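Illustrative request using the generalized filter syntax (which
filters are accepted per resource is controlled by the
operator-editable resource_filter.json):
GET /v3/{project_id}/volumes?status=available&name=vol1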
DocImpact
APIImpact
Co-Authored-By: TommyLike <tommylikehu@gmail.com>
Change-Id: Icee6c22621489f93614f4adf071329d8d2115637
Partial: blueprint generalized-filtering-for-cinder-list-resource
After image cloning, the NFS client cache needs to be refreshed.
This can be accomplished by touching the directory hosting the
cached image file.
See also: https://bugs.launchpad.net/nova/+bug/1617299
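A minimal sketch of the refresh (illustrative; the helper and variable
names are assumptions, not the exact driver code):

import os

def _refresh_nfs_attr_cache(execute, cached_image_path):
    # Touch the directory holding the cached image so the NFS client
    # drops its stale attribute cache and sees the newly cloned file.
    execute('touch', os.path.dirname(cached_image_path),
            run_as_root=True)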
Co-Authored-By: Sebastian Schee <sebastian.schee@sap.com>
Co-Authored-By: Goutham Pacha Ravi <gouthampravi@gmail.com>
Closes-bug: #1679716
Change-Id: If392f41f65978721668b53cfab94393f074d24e9
This patch adds policy checks to the attachment APIs and adds
related test cases to the API unit tests.
Closes-Bug: #1680836
Change-Id: I310fec39719ead39f26d97ee4ba95187e1fb2069
This is an implementation of a Cinder driver for Veritas HyperScale,
which is a high-performance block storage provider for OpenStack.
This implementation features all necessary Cinder functionality,
including volume attach, detach, and snapshot management.
DocImpact
Change-Id: Ie1af5f5d54b0115974a4024a1756e4e0aa07399a
Implements: blueprint veritas-hyperscale-cinder-driver
Replaces reading the quobyte.info xattr with a range
of checks on a given mount point. Adds several
unit tests to verify the different check results.
Partial-Bug: 1659328
Change-Id: I7674848d4464205c5fd0f7669e74154961319f1a
The replication promote and re-enable APIs were removed
in Mitaka, so these two policies are now useless.
Change-Id: Idc693c3e298f9adff01ea63147f67205811cfc57
When deploying Cinder as an SDS without Glance, we have no way to
prevent volume creation from images even when we know it will not
succeed.
This patch adds a specific policy so we can prevent this particular
creation action from being accepted. By doing so, the user will know
immediately that this is not possible, instead of having to look
through the logs to discover that it is not an option.
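For illustration, the new rule can then be tightened in the policy
file to deny the action outright (the rule name shown is an
assumption):
"volume:create_from_image": "!"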
TrivialFix
Change-Id: Iabc10a1927eea6419dd677a632cfc7d32dc08091
Adds rootwrap filters for getfattr and mount.quobyte, which are
run as root by the Quobyte driver when
nas_secure_file_operations is set to false.
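The corresponding entries look roughly like this (illustrative; see
the rootwrap volume.filters file for the exact lines added):
getfattr: CommandFilter, getfattr, root
mount.quobyte: CommandFilter, mount.quobyte, root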
Closes-Bug: #1586953
Change-Id: I135d96afb460e94f4396beb01894f731b562e614
Now that we support having multiple c-vol services using the same
storage backend under one cluster, a service no longer cleans up all
resources in the backend that have ongoing statuses in the DB, only
those from its own host, because those are failed operations that were
left "in the air" when the service was stopped. So we need a way to
trigger cleanup of resources that were being processed by another
c-vol service in the same cluster that failed.
This patch adds a new API endpoint (/workers/cleanup) that will trigger
cleanup for c-vol services as microversion 3.19.
The cleanup will be performed by other services that share the same
cluster, so at least one of them must be up to be able to do the
cleanup.
Cleanup cannot be triggered during a cloud upgrade, but a restarted
service will still clean up its own resources during an upgrade.
If no arguments are provided cleanup will try to issue a clean message
for all nodes that are down, but we can restrict which nodes we want to
be cleaned using parameters `service_id`, `cluster_name`, `host`,
`binary`, and `disabled`.
Cleaning specific resources is also possible using `resource_type` and
`resource_id` parameters.
We can even force cleanup on nodes that are up with `is_up`, but that's
not recommended and should only be used if you know what you are doing:
for example, if you know a specific cinder-volume service is down even
though it is not yet reported as down when listing the services, and
you know the cluster has at least one other service to do the cleanup.
The API will return a dictionary with two lists: one with services that
have been issued a cleanup request (`cleaning` key) and another with
services that cannot be cleaned right now because there is no
alternative service to do the cleanup in that cluster (`unavailable`
key).
Data returned for each service element in these two lists consists of
the `id`, `host`, `binary`, and `cluster_name`. These are not the
services that will perform the cleanup, but the services that will be
cleaned up or couldn't be cleaned up.
Specs: https://specs.openstack.org/openstack/cinder-specs/specs/newton/ha-aa-cleanup.html
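Illustrative request (field names as listed above; requires
microversion 3.19 or later):
POST /v3/{project_id}/workers/cleanup
{"cluster_name": "<cluster>", "binary": "cinder-volume"}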
APIImpact: New /workers/cleanup entry
Implements: blueprint cinder-volume-active-active-support
Change-Id: If336b6569b171846954ed6eb73f5a4314c6c7e2e
Allow volume delete to take the parameters "cascade"
or "force", or both.
A new policy field, "volume:force_delete", is added
with a default of "rule:admin_api".
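Illustrative request (parameter names as described above):
DELETE /v3/{project_id}/volumes/{volume_id}?cascade=true&force=true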
Implements: blueprint volume-delete-parameters
APIImpact: New parameters to volume delete
Change-Id: Ic47cfcf1cc7d172d7f9d5b093233035f797982f5
Currently the administrator can only reset the generic group
status via a DB operation; this change adds new admin
actions to achieve this.
The patch list:
1. group API(this).
2. group snapshot API(https://review.openstack.org/#/c/389577/).
3. cinder client(https://review.openstack.org/390169/).
4. documentation(https://review.openstack.org/#/c/395464).
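Illustrative admin action request (the body shape follows the existing
reset-status admin actions; exact field names are an assumption):
POST /v3/{project_id}/groups/{group_id}/action
{"reset_status": {"status": "available"}}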
APIImpact
DocImpact
Partial-Implements: blueprint reset-cg-and-cgs-status
Change-Id: Ib8bffb806f878c67bb12fd5ef7ed8cc15606d1c5
Currently the administrator can only reset the group snapshot
status via a DB operation; this change adds a new admin
action to achieve this.
The patch list:
1. group API(https://review.openstack.org/#/c/389091/).
2. group snapshot API(this).
3. cinder client(https://review.openstack.org/390169/).
4. documentation(https://review.openstack.org/#/c/395464/).
APIImpact
DocImpact
Partial-Implements: blueprint reset-cg-and-cgs-status
Change-Id: I9e3a26950c435038cf40bea4b27aea1bd5049e95
Cinder already supports querying the project id of volumes
and snapshots; for consistency, this introduces the backup
project attribute in query operations.
APIImpact
Adds 'os-backup-project-attr:project_id: xxxx' to the query
response.
Implements: blueprint backup-tenant-attribute-support
Change-Id: I6fde17baffe88ab4d4e69dcc2fefdbcb8d7a4dc5
When using LVM, the various commands have a number of potential formats
due to options like LVM_SUPPRESS_FD_WARNINGS and LVM_SYSTEM_DIR.
The volume.filters file was set up to provide each
combination by having a second, third, and fourth version of the filter:
# LVM related show commands
pvs: EnvFilter, env, root, LC_ALL=C, pvs
vgs: EnvFilter, env, root, LC_ALL=C, vgs
lvs: EnvFilter, env, root, LC_ALL=C, lvs
lvdisplay: EnvFilter, env, root, LC_ALL=C, lvdisplay
# LVM related show commands with fd warnings suppressed
pvs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, pvs
vgs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, vgs
lvs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvs
lvdisplay_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvdisplay
This no longer works: the first pvs/vgs/lvs filters will always get picked up,
and the commands for any special configs will fail. We used to use regexes for
this sort of thing, but then we switched to just adding unique filters. I'm not
sure how this was working or why it doesn't seem to work as of Newton, but
regardless, replacing the _xxxx suffix with just an integer seems to work fine;
rootwrap apparently just doesn't like '_' or '-' in the filter name, so we'll
change it to just use a digit appended to the filter name. There are some other
filters that may need to be checked here as well.
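With the rename, the extra variants keep the same command but use a
numeric suffix, roughly like this (illustrative; see volume.filters
for the exact names):
pvs2: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, pvs
vgs2: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, vgs
lvs2: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvs
lvdisplay2: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvdisplay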
Change-Id: I1a7c3048841c095a8e92795d7dfa0cb5c2a96645
Closes-Bug: #1646053
In order for a user with the admin role to be able to perform
administrative actions, the role must be assigned to a project
that is deemed the "admin" project in the Keystone server. This
prevents someone being assigned admin on some random project
from being admin everywhere.
Change-Id: Ic4294cc1746702c345259c64bad1e20675a7d9ab
Closes-Bug: 968696
For some reason the leaked file descriptor warning message coming
from LVM is causing Cinder to fail at startup, and it appears to be
masking out the vg response in vgs calls.
We typically don't hit this, but due to the nature of Kolla and,
I guess, going through the different processes via the containers,
this gets logged every time vgs is called. Eric Harney rightly
pointed out that rather than use exception handling and the like,
we should use the LVM env variable mechanism we already have
in place in Cinder.
For now this patch adds a new config option to the LVM driver:
lvm_suppress_fd_warnings=True|False
This is useful for Kubernetes (K8s) deployments that have an indirect
call to the LVM commands, which results in failures.
For those that are interested, this can also be done outside of
cinder by setting the silence_logs variable in lvm.conf
This is made optional as a config flag to avoid any breakage for
existing deployments during upgrade.
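Example cinder.conf snippet (the backend section name is illustrative):
[lvm-backend]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
lvm_suppress_fd_warnings = True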
Change-Id: I85612fa49475beea58d30330c8fe8352a2f91123
Closes-Bug: #1619701