[metadata]
name = cinder
description = OpenStack Block Storage
long_description = file: README.rst
author = OpenStack
author_email = openstack-discuss@lists.openstack.org
url = https://docs.openstack.org/cinder/latest/
python_requires = >=3.8
classifiers =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: Implementation :: CPython
    Programming Language :: Python :: 3 :: Only
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.8
    Programming Language :: Python :: 3.9
    Programming Language :: Python :: 3.10
    Programming Language :: Python :: 3.11
project_urls =
    Source = https://opendev.org/openstack/cinder
    Tracker = https://bugs.launchpad.net/cinder
[files]
data_files =
    etc/cinder =
        etc/cinder/api-paste.ini
        etc/cinder/rootwrap.conf
        etc/cinder/resource_filters.json
    etc/cinder/rootwrap.d = etc/cinder/rootwrap.d/*
packages =
    cinder
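# pbr maps each data_files target directory to the source files indented
# beneath it and installs them relative to the installation prefix (for
# example $prefix/etc/cinder).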
[entry_points]
cinder.scheduler.filters =
    AvailabilityZoneFilter = cinder.scheduler.filters.availability_zone_filter:AvailabilityZoneFilter
    CapabilitiesFilter = cinder.scheduler.filters.capabilities_filter:CapabilitiesFilter
    CapacityFilter = cinder.scheduler.filters.capacity_filter:CapacityFilter
    DifferentBackendFilter = cinder.scheduler.filters.affinity_filter:DifferentBackendFilter
    DriverFilter = cinder.scheduler.filters.driver_filter:DriverFilter
    JsonFilter = cinder.scheduler.filters.json_filter:JsonFilter
    RetryFilter = cinder.scheduler.filters.ignore_attempted_hosts_filter:IgnoreAttemptedHostsFilter
    SameBackendFilter = cinder.scheduler.filters.affinity_filter:SameBackendFilter
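    # SameBackendFilter and DifferentBackendFilter act on scheduler hints
    # passed at volume-create time, using an existing volume's UUID as the
    # reference, e.g.:
    #   cinder create --hint same_host=<volume-uuid> <size>
    #   cinder create --hint different_host=<volume-uuid> <size>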
    InstanceLocalityFilter = cinder.scheduler.filters.instance_locality_filter:InstanceLocalityFilter
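    # InstanceLocalityFilter lets a user request a volume on the same
    # physical host as an existing instance without naming the hypervisor;
    # it requires at least one host running both nova-compute and
    # cinder-volume:
    #   cinder create --hint local_to_instance=<instance-uuid> <size>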
cinder.scheduler.weights =
    AllocatedCapacityWeigher = cinder.scheduler.weights.capacity:AllocatedCapacityWeigher
    CapacityWeigher = cinder.scheduler.weights.capacity:CapacityWeigher
    ChanceWeigher = cinder.scheduler.weights.chance:ChanceWeigher
    GoodnessWeigher = cinder.scheduler.weights.goodness:GoodnessWeigher
    VolumeNumberWeigher = cinder.scheduler.weights.volume_number:VolumeNumberWeigher
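    # AllocatedCapacityWeigher ranks backends by allocated capacity, i.e. the
    # sum of the sizes of all volumes already placed on them, tracked by the
    # volume manager as the allocated_capacity_gb stat.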
oslo.config.opts =
    cinder = cinder.opts:list_opts
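    # cinder.opts is a generated module that aggregates the option lists from
    # across the tree; oslo-config-generator consumes this entry point to
    # build cinder.conf.sample.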
oslo.config.opts.defaults =
    cinder = cinder.common.config:set_external_library_defaults
oslo.policy.enforcer =
    cinder = cinder.policy:get_enforcer
oslo.policy.policies =
    # The sample policies will be ordered by entry point and then by the list
    # returned from that entry point. If more control is desired, split out
    # each list_rules method into a separate entry point rather than using
    # the aggregate method.
    cinder = cinder.policies:list_rules
console_scripts =
    cinder-api = cinder.cmd.api:main
    cinder-backup = cinder.cmd.backup:main
    cinder-manage = cinder.cmd.manage:main
    cinder-rootwrap = oslo_rootwrap.cmd:main
    cinder-rtstool = cinder.cmd.rtstool:main
    cinder-scheduler = cinder.cmd.scheduler:main
    cinder-status = cinder.cmd.status:main
    cinder-volume = cinder.cmd.volume:main
    cinder-volume-usage-audit = cinder.cmd.volume_usage_audit:main
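    # Each console_scripts entry is installed as an executable wrapping the
    # referenced main(); e.g. cinder-manage (cinder.cmd.manage:main) is what
    # runs commands such as `cinder-manage db sync`.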
wsgi_scripts =
    cinder-wsgi = cinder.wsgi.wsgi:initialize_application
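    # pbr generates cinder-wsgi as a WSGI script so the API can be served by
    # a WSGI server such as uwsgi or Apache mod_wsgi instead of the
    # standalone cinder-api process.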
[extras]
all =
    websocket-client>=1.3.2 # LGPLv2+
    pyOpenSSL>=17.5.0 # Apache-2.0
    storops>=0.5.10 # Apache-2.0
    pywbem>=0.7.0 # LGPLv2.1+
    python-3parclient>=4.2.10 # Apache-2.0
    krest>=1.3.0 # Apache-2.0
    infinisdk>=103.0.1 # BSD-3
    purestorage>=1.17.0 # BSD
    rsd-lib>=1.1.0 # Apache-2.0
    storpool>=4.0.0 # Apache-2.0
    storpool.spopenstack>=2.2.1 # Apache-2.0
    dfs-sdk>=1.2.25 # Apache-2.0
    rbd-iscsi-client>=0.1.8 # Apache-2.0
    python-linstor>=1.7.0 # LGPLv3
datacore =
    websocket-client>=1.3.2 # LGPLv2+
powermax =
    pyOpenSSL>=17.5.0 # Apache-2.0
vnx =
    storops>=0.5.10 # Apache-2.0
unity =
    storops>=0.5.10 # Apache-2.0
fujitsu =
    pywbem>=0.7.0 # LGPLv2.1+
hpe3par =
    python-3parclient>=4.2.10 # Apache-2.0
kaminario =
    krest>=1.3.0 # Apache-2.0
ds8k =
    pyOpenSSL>=17.5.0 # Apache-2.0
infinidat =
    infinisdk>=103.0.1 # BSD-3
pure =
    purestorage>=1.17.0 # BSD
rsd =
    rsd-lib>=1.1.0 # Apache-2.0
storpool =
    storpool>=4.0.0 # Apache-2.0
    storpool.spopenstack>=2.2.1 # Apache-2.0
datera =
    dfs-sdk>=1.2.25 # Apache-2.0
rbd_iscsi =
    rbd-iscsi-client>=0.1.8 # Apache-2.0
linstor =
    python-linstor>=1.7.0 # LGPLv3
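# Extras are selected with pip's extras syntax, e.g. to pull in a single
# driver's optional dependencies or every optional backend dependency at
# once:
#   pip install cinder[pure]
#   pip install cinder[all]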
[mypy]
show_column_numbers = true
show_error_context = true
ignore_missing_imports = true
follow_imports = skip
incremental = true
check_untyped_defs = true
warn_unused_ignores = true
show_error_codes = true
pretty = true
html_report = mypy-report
no_implicit_optional = true
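# mypy picks this section up automatically when run from the source tree
# (setup.cfg is on its config search path); the html_report option
# additionally requires lxml, typically installed via mypy[reports].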
[options]
packages = cinder
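# pbr reads the package list from [files] above; this setuptools-native
# [options] entry is presumably kept for tools that consume declarative
# setuptools config directly.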