mirror of https://github.com/gryf/openstack.git

Added fixes and readme for the feature

This commit is contained in:
Roman Dobosz
2018-02-23 13:36:12 +01:00
parent b0cd31de21
commit 7c2f00d09d
10 changed files with 546 additions and 12 deletions

affinity_ocata/README.rst

@@ -0,0 +1,228 @@
Aggregate affinity
==================
This series of patches adds the ability to create aggregates of Ironic nodes in
Nova. The work is based on the `Jay Pipes series`_, backported to Ocata, with
some additional fixes.
After applying these patches to the Ocata trees of nova and novaclient, it is
possible to create aggregates that contain Ironic nodes, and a server group
with one of two new policies:
* aggregate-affinity
* aggregate-anti-affinity
Note that if openstackclient is used, the ``OS_COMPUTE_API_VERSION``
environment variable has to be set to ``2.43``.
Assuming a devstack environment with four Ironic nodes available, the basic
flow to test the feature is as follows:
.. code:: shell-session
$ export OS_COMPUTE_API_VERSION=2.43
$ openstack aggregate create rack1
$ openstack aggregate create rack2
$ openstack aggregate add host rack1 $(openstack baremetal node list|grep node-0|awk '{print $2}')
$ openstack aggregate add host rack1 $(openstack baremetal node list|grep node-1|awk '{print $2}')
$ openstack aggregate add host rack2 $(openstack baremetal node list|grep node-2|awk '{print $2}')
$ openstack aggregate add host rack2 $(openstack baremetal node list|grep node-3|awk '{print $2}')
$ openstack server group create --policy aggregate-anti-affinity group1
$ openstack server create \
--image=$(openstack image list|grep x86_64-disk| awk '{print $2}') \
--flavor=1 \
--nic net-id=$(openstack network list |grep private | awk '{print $2}') \
--hint group=$(openstack server group list | grep group1 | awk '{print $2}') \
instance1
$ openstack server create \
--image=$(openstack image list|grep x86_64-disk| awk '{print $2}') \
--flavor=1 \
--nic net-id=$(openstack network list |grep private | awk '{print $2}') \
--hint group=$(openstack server group list | grep group1 | awk '{print $2}') \
instance2
This should place the two Ironic instances in two different ``rack`` aggregates.
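To double-check the placement, one can compare each instance's hypervisor
hostname (for Ironic instances this is the node UUID) with the aggregates'
host lists. Below is a minimal, hypothetical sketch using python-novaclient;
the auth URL and credentials are placeholders for whatever the local devstack
uses, not part of this work.
.. code:: python
# Hypothetical verification sketch -- not part of the patches. Credentials
# and the auth URL are placeholders for the local devstack environment.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://127.0.0.1/identity',
                                username='admin', password='secret',
                                project_name='admin',
                                user_domain_id='default',
                                project_domain_id='default')
nova = client.Client('2.43', session=session.Session(auth=auth))

for name in ('instance1', 'instance2'):
    server = nova.servers.find(name=name)
    # For an Ironic instance the hypervisor hostname is the node UUID,
    # which is also what the aggregates list as their "hosts".
    print(name, getattr(server, 'OS-EXT-SRV-ATTR:hypervisor_hostname'))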
Creating instances in bulk
==========================
Unfortunately, creating instances in bulk isn't possible with these policies.
Here is a full explanation.
Currently, if we schedule a bulk creation of Ironic instances (or any bulk
creation of instances), the filter scheduler runs the filters over all the
available hosts once per requested instance, consuming the selected host's
resources between iterations.
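To illustrate what filtering "once per requested instance" means, here is a
schematic, simplified sketch of that loop (assumed names and data structures,
not Nova's actual implementation):
.. code:: python
# Schematic sketch of the per-instance scheduling loop described above.
# The names and structures are simplified assumptions, not Nova code.
def select_destinations(hosts, filters, num_instances):
    selected = []
    for _ in range(num_instances):
        candidates = list(hosts)
        for host_passes in filters:  # RetryFilter, RamFilter, ...
            candidates = [h for h in candidates if host_passes(h, selected)]
        if not candidates:
            raise RuntimeError('Filtering removed all hosts for the request')
        chosen = candidates[0]       # weighing step omitted for brevity
        chosen['ram_mb'] = 0         # an Ironic node is consumed whole
        selected.append(chosen)
    return selected

nodes = [{'name': 'node-%d' % i, 'ram_mb': 1280} for i in range(4)]
ram_filter = lambda host, _selected: host['ram_mb'] >= 512
print([h['name'] for h in select_destinations(nodes, [ram_filter], 2)])
# ['node-0', 'node-1']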
Let's take an example: we have four available Ironic hosts, divided into two
aggregates, and a server group with the *aggregate-affinity* policy:
.. code:: shell-session
ubuntu@ubuntu ~/devstack ◆ (stable/ocata) $ openstack baremetal node list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| 959734ed-8dda-4878-9d5c-ddd9a95b65ec | node-0 | None | power off | available | False |
| c105d862-2eca-4845-901e-cd8194a39248 | node-1 | None | power off | available | False |
| a204e33f-6803-4d92-ad47-5b6928e3cede | node-2 | None | power off | available | False |
| 6ee27372-884d-4db4-af27-f697fffcb7c0 | node-3 | None | power off | available | False |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
ubuntu@ubuntu ~/devstack ◆ (stable/ocata) $ openstack server group list
+--------------------------------------+--------+--------------------+
| ID | Name | Policies |
+--------------------------------------+--------+--------------------+
| 0b96ffc0-8e96-4613-b9a8-ea4e6c7ff0e8 | group1 | aggregate-affinity |
+--------------------------------------+--------+--------------------+
ubuntu@ubuntu ~/devstack ◆ (stable/ocata) $ openstack aggregate list
+----+-------+-------------------+
| ID | Name | Availability Zone |
+----+-------+-------------------+
| 1 | rack1 | None |
| 2 | rack2 | None |
+----+-------+-------------------+
ubuntu@ubuntu ~/devstack ◆ (stable/ocata) $ openstack aggregate show rack1
+-------------------+------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------------------------------------+
| availability_zone | None |
| created_at | 2018-02-21T08:10:35.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [u'959734ed-8dda-4878-9d5c-ddd9a95b65ec', u'c105d862-2eca-4845-901e-cd8194a39248'] |
| id | 1 |
| name | rack1 |
| properties | |
| updated_at | None |
| uuid | bf7a251a-edff-4688-81d7-d6cf8b201847 |
+-------------------+------------------------------------------------------------------------------------+
ubuntu@ubuntu ~/devstack ◆ (stable/ocata) $ openstack aggregate show rack2
+-------------------+------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------------------------------------+
| availability_zone | None |
| created_at | 2018-02-21T08:10:37.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [u'a204e33f-6803-4d92-ad47-5b6928e3cede', u'6ee27372-884d-4db4-af27-f697fffcb7c0'] |
| id | 2 |
| name | rack2 |
| properties | |
| updated_at | None |
| uuid | 7ca81b0e-2a87-4d41-af1b-b688aedc7b25 |
+-------------------+------------------------------------------------------------------------------------+
Next, given that there are only two nodes in each aggregate, let's create two
instances in bulk:
.. code:: shell-session
ubuntu@ubuntu ~/devstack ◆ (stable/ocata) $ openstack server create \
--image=$(openstack image list|grep x86_64-disk|awk '{print $2}') \
--flavor=1 \
--nic net-id=$(openstack network list|grep private|awk '{print $2}') \
--hint group=$(openstack server group list|grep group1|awk '{print $2}') \
--min 2 --max 2 instance
which results in the filters being run, as seen in the scheduler logs:
.. code:: shell-session
:number-lines:
2018-02-21 09:16:53.303 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter RetryFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.304 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter AvailabilityZoneFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.304 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter RamFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.304 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter DiskFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.305 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ComputeFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.305 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ComputeCapabilitiesFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.305 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ImagePropertiesFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.305 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ServerGroupAntiAffinityFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.306 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ServerGroupAffinityFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.306 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter SameHostFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.306 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter DifferentHostFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.306 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ServerGroupAggregateAffinityFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.307 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ServerGroupAggregateAntiAffinityFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.307 DEBUG nova.scheduler.filter_scheduler [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filtered [(ubuntu, c105d862-2eca-4845-901e-cd8194a39248) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, (ubuntu, a204e33f-6803-4d92-ad47-5b6928e3cede) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, (ubuntu, 6ee27372-884d-4db4-af27-f697fffcb7c0) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, (ubuntu, 959734ed-8dda-4878-9d5c-ddd9a95b65ec) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0] from (pid=11395) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:115
2018-02-21 09:16:53.307 DEBUG nova.scheduler.filter_scheduler [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Weighed [WeighedHost [host: (ubuntu, c105d862-2eca-4845-901e-cd8194a39248) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, weight: 2.0], WeighedHost [host: (ubuntu, a204e33f-6803-4d92-ad47-5b6928e3cede) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, weight: 2.0], WeighedHost [host: (ubuntu, 6ee27372-884d-4db4-af27-f697fffcb7c0) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, weight: 2.0], WeighedHost [host: (ubuntu, 959734ed-8dda-4878-9d5c-ddd9a95b65ec) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, weight: 2.0]] from (pid=11395) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:120
2018-02-21 09:16:53.308 DEBUG nova.scheduler.filter_scheduler [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Selected host: WeighedHost [host: (ubuntu, a204e33f-6803-4d92-ad47-5b6928e3cede) ram: 1280MB disk: 10240MB io_ops: 0 instances: 0, weight: 2.0] from (pid=11395) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:127
2018-02-21 09:16:53.308 DEBUG oslo_concurrency.lockutils [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Lock "(u'ubuntu', u'a204e33f-6803-4d92-ad47-5b6928e3cede')" acquired by "nova.scheduler.host_manager._locked" :: waited 0.000s from (pid=11395) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2018-02-21 09:16:53.308 DEBUG oslo_concurrency.lockutils [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Lock "(u'ubuntu', u'a204e33f-6803-4d92-ad47-5b6928e3cede')" released by "nova.scheduler.host_manager._locked" :: held 0.000s from (pid=11395) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2018-02-21 09:16:53.308 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Starting with 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:70
So, for the first iteration, the filters return all four nodes (the new
aggregate filters are on lines 12 and 13), any of which can be used to fulfill
the request. Next, the second iteration is performed:
.. code:: shell-session
:number-lines:
2018-02-21 09:16:53.310 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter RetryFilter returned 4 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.310 DEBUG nova.scheduler.filters.ram_filter [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] (ubuntu, a204e33f-6803-4d92-ad47-5b6928e3cede) ram: 0MB disk: 0MB io_ops: 0 instances: 0 does not have 512 MB usable ram, it only has 0.0 MB usable ram. from (pid=11395) host_passes /opt/stack/nova/nova/scheduler/filters/ram_filter.py:61
2018-02-21 09:16:53.310 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter RamFilter returned 3 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.310 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter DiskFilter returned 3 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.310 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ServerGroupAntiAffinityFilter returned 3 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.311 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ServerGroupAffinityFilter returned 3 host(s) from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:104
2018-02-21 09:16:53.311 DEBUG nova.scheduler.filters.affinity_filter [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] aggregate-affinity: check if set([1]) is a subset of set([]),host nodes: set([u'ubuntu']) from (pid=11395) host_passes /opt/stack/nova/nova/scheduler/filters/affinity_filter.py:213
2018-02-21 09:16:53.311 DEBUG nova.scheduler.filters.affinity_filter [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] aggregate-affinity: check if set([2]) is a subset of set([]),host nodes: set([u'ubuntu']) from (pid=11395) host_passes /opt/stack/nova/nova/scheduler/filters/affinity_filter.py:213
2018-02-21 09:16:53.311 DEBUG nova.scheduler.filters.affinity_filter [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] aggregate-affinity: check if set([1]) is a subset of set([]),host nodes: set([u'ubuntu']) from (pid=11395) host_passes /opt/stack/nova/nova/scheduler/filters/affinity_filter.py:213
2018-02-21 09:16:53.312 INFO nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filter ServerGroupAggregateAffinityFilter returned 0 hosts
2018-02-21 09:16:53.312 DEBUG nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filtering removed all hosts for the request with instance ID '9a7f787c-5074-4af3-80a2-38eaecf882a2'. Filter results: [('RetryFilter', [(u'ubuntu', u'c105d862-2eca-4845-901e-cd8194a39248'), (u'ubuntu', u'a204e33f-6803-4d92-ad47-5b6928e3cede'), (u'ubuntu', u'6ee27372-884d-4db4-af27-f697fffcb7c0'), (u'ubuntu', u'959734ed-8dda-4878-9d5c-ddd9a95b65ec')]), ('RamFilter', [(u'ubuntu', u'c105d862-2eca-4845-901e-cd8194a39248'), (u'ubuntu', u'6ee27372-884d-4db4-af27-f697fffcb7c0'), (u'ubuntu', u'959734ed-8dda-4878-9d5c-ddd9a95b65ec')]), ('DiskFilter', [(u'ubuntu', u'c105d862-2eca-4845-901e-cd8194a39248'), (u'ubuntu', u'6ee27372-884d-4db4-af27-f697fffcb7c0'), (u'ubuntu', u'959734ed-8dda-4878-9d5c-ddd9a95b65ec')]), ('ServerGroupAntiAffinityFilter', [(u'ubuntu', u'c105d862-2eca-4845-901e-cd8194a39248'), (u'ubuntu', u'6ee27372-884d-4db4-af27-f697fffcb7c0'), (u'ubuntu', u'959734ed-8dda-4878-9d5c-ddd9a95b65ec')]), ('ServerGroupAffinityFilter', [(u'ubuntu', u'c105d862-2eca-4845-901e-cd8194a39248'), (u'ubuntu', u'6ee27372-884d-4db4-af27-f697fffcb7c0'), (u'ubuntu', u'959734ed-8dda-4878-9d5c-ddd9a95b65ec')]), ('ServerGroupAggregateAffinityFilter', None)] from (pid=11395) get_filtered_objects /opt/stack/nova/nova/filters.py:129
2018-02-21 09:16:53.312 INFO nova.filters [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Filtering removed all hosts for the request with instance ID '9a7f787c-5074-4af3-80a2-38eaecf882a2'. Filter results: ['RetryFilter: (start: 4, end: 4)', 'RamFilter: (start: 4, end: 3)', 'DiskFilter: (start: 3, end: 3)', 'ServerGroupAntiAffinityFilter: (start: 3, end: 3)', 'ServerGroupAffinityFilter: (start: 3, end: 3)', 'ServerGroupAggregateAffinityFilter: (start: 3, end: 0)']
2018-02-21 09:16:53.312 DEBUG nova.scheduler.filter_scheduler [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] There are 1 hosts available but 2 instances requested to build. from (pid=11395) select_destinations /opt/stack/nova/nova/scheduler/filter_scheduler.py:76
2018-02-21 09:16:53.312 DEBUG oslo_messaging.rpc.server [req-6b671371-ea58-4b1d-8657-a6376d2d1d88 admin admin] Expected exception during message handling () from (pid=11395) _process_incoming /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py:158
This time, as we can see in line 10, *ServerGroupAggregateAffinityFilter*
returns 0 hosts. Log lines 7-9 give us a hint that none of the candidates
fulfills the requirement. The filter looks like this (I've removed some
comments and uninteresting parts for readability):
.. code:: python
:number-lines:
def host_passes(self, host_state, spec_obj):
# ...
host_aggs = set(agg.id for agg in host_state.aggregates)
if not host_aggs:
return self.REVERSE_CHECK
# Take all of the group's hypervisor nodenames and hostnames
host_nodes = set(spec_obj.instance_group.nodes +
spec_obj.instance_group.hosts)
if not host_nodes:
# There are no members of the server group yet
return True
# Grab all aggregates for all hosts in the server group and ensure we
# have an intersection with this host's aggregates
group_aggs = set()
for node in host_nodes:
group_aggs |= self.host_manager.host_aggregates_map[node]
LOG.debug(...)
if self.REVERSE_CHECK:
return host_aggs.isdisjoint(group_aggs)
return host_aggs.issubset(group_aggs)
In this filter we first check whether the host belongs to any aggregate and
store the aggregate IDs as a set. If the set is empty, the node either cannot
satisfy the constraint (in case of the *aggregate-affinity* policy) or
trivially satisfies it (in case of *aggregate-anti-affinity*).
Next, the ``instance_group`` hosts and nodes are collected (the ``nodes`` field
is added for the Ironic case; without it we only have the hostname that
originates from the compute service, not the Ironic node names). If that set is
empty, no instance has been created in the group yet, so the current host
passes.
If there are some nodenames/hostnames in the set, the host's aggregates are
matched against the aggregates gathered for those nodenames/hostnames (line
20). And here is the issue: the ``instance_group`` provided by the request spec
object (``spec_obj``) has its ``hosts`` field filled out during scheduling, but
the ``nodes`` field is only populated once **an instance has actually been
created**. This is why instances can be created one by one, but not in bulk.
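To make the failure mode concrete, here is a standalone sketch (assumed names,
not the actual filter code) of the set logic the check boils down to:
.. code:: python
# Standalone sketch of the aggregate (anti-)affinity decision -- assumed
# names, not the actual Nova filter.
def aggregate_affinity_passes(host_aggs, group_aggs, anti=False):
    """host_aggs: aggregate ids of the candidate node;
    group_aggs: aggregate ids resolved from the group's hosts/nodes."""
    if not host_aggs:
        # A node outside any aggregate can never satisfy affinity,
        # but trivially satisfies anti-affinity.
        return anti
    if anti:
        return host_aggs.isdisjoint(group_aggs)
    return host_aggs.issubset(group_aggs)

# Second iteration of the bulk request: the group's ``hosts`` holds only the
# compute hostname ('ubuntu'), which is not an aggregate member, and ``nodes``
# is still empty because no instance exists yet -- so group_aggs resolves to
# the empty set and every node that sits in an aggregate is rejected.
print(aggregate_affinity_passes({1}, set()))   # False
print(aggregate_affinity_passes({2}, set()))   # False

# One-by-one creation: when instance2 is scheduled, instance1 already exists
# and its node resolves to aggregate 1, so affinity works as expected.
print(aggregate_affinity_passes({1}, {1}))     # True  (same rack)
print(aggregate_affinity_passes({2}, {1}))     # False (other rack)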
.. _Jay Pipes series: https://review.openstack.org/#/q/topic:bp/aggregate-affinity


@@ -1,7 +1,7 @@
From 0f820a60994586debef47a59ebf8d9eef225b69c Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Wed, 27 Dec 2017 13:51:25 +0100
Subject: [PATCH 1/4] allow compute nodes to be associated with host agg
Subject: [PATCH 1/8] allow compute nodes to be associated with host agg
This is basically an Ocata backport patch from Jay Pipes:
https://review.openstack.org/#/c/526753
@@ -211,5 +211,5 @@ index 0000000000..7946fddcfe
+ self.agg_api.remove_host_from_aggregate(self.ctxt, agg_id,
+ nodename)
--
2.13.6
2.16.1


@@ -1,7 +1,7 @@
From f5e23e436d341a44dafe5a18876cfcadc809b46b Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Mon, 8 Jan 2018 14:33:45 +0100
Subject: [PATCH 2/4] Remove server group sched filter support caching
Subject: [PATCH 2/8] Remove server group sched filter support caching
Backport of https://review.openstack.org/#/c/529200 by Jay Pipes to
Ocata.
@@ -237,5 +237,5 @@ index 1893a7e212..63035e742a 100644
def _create_server_group(self, policy='anti-affinity'):
--
2.13.6
2.16.1


@@ -1,7 +1,7 @@
From 69d0e023edfc2edc123fd5ed29b79ebbd3abe97f Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Wed, 10 Jan 2018 10:37:54 +0100
Subject: [PATCH 3/4] get instance group's aggregate associations
Subject: [PATCH 3/8] get instance group's aggregate associations
Ocata backport for patch from Jay Pipes:
https://review.openstack.org/#/c/531243/
@@ -34,10 +34,11 @@ index 670813b77e..2be47278b2 100644
fields = {
'id': fields.IntegerField(),
@@ -455,6 +457,38 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
@@ -454,6 +456,38 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
return list(set([instance.host for instance in instances
if instance.host]))
@base.remotable
+ @base.remotable
+ def get_aggregate_uuids(self, exclude=None):
+ """Returns a set of aggregate UUIDs associated with all compute nodes
+ that are housing all non-deleted instances in the group
@@ -69,10 +70,9 @@ index 670813b77e..2be47278b2 100644
+ agg_uuids = [r[0] for r in res]
+ return set(agg_uuids)
+
+ @base.remotable
@base.remotable
def count_members_by_user(self, user_id):
"""Count the number of instances in a group belonging to a user."""
filter_uuids = self.members
diff --git a/nova/tests/functional/db/test_instance_group.py b/nova/tests/functional/db/test_instance_group.py
index 4c4f627fe2..b4c7ef3fd8 100644
--- a/nova/tests/functional/db/test_instance_group.py
@@ -250,5 +250,5 @@ index 71b919597f..a577820d0c 100644
'InstanceInfoCache': '1.5-cd8b96fefe0fc8d4d337243ba0bf0e1e',
'InstanceList': '2.2-ff71772c7bf6d72f6ef6eee0199fb1c9',
--
2.13.6
2.16.1


@@ -1,7 +1,7 @@
From f69827ff3502552a45a19a50ef2cfad30c41af2d Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Thu, 18 Jan 2018 09:17:04 +0100
Subject: [PATCH 4/4] Support aggregate affinity filters
Subject: [PATCH 4/8] Support aggregate affinity filters
Jay patch for two new policies: aggregate-affinity and
aggregate-antiaffinity backported to Ocata.
@@ -465,5 +465,5 @@ index 5e52088c14..52af7688bb 100644
+ self.assertNotEqual(_host_from_instance(inst1),
+ _host_from_instance(inst2))
--
2.13.6
2.16.1


@@ -0,0 +1,170 @@
From 9014195f11d981da4dc158ab9b9b6bb594c8ea0d Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Fri, 23 Feb 2018 07:26:05 +0100
Subject: [PATCH 5/8] Added node field for InstanceGroup objects
Currently, there is only a way to get the information about which hosts belong
to a certain instance group. By 'hosts' it means the hostnames on which the
compute service is running. In case of bare metal instances, there is no way to
find out from the instance group object which Ironic nodes belong to such a
group. This patch adds the ability to fetch that information.
The InstanceGroup class now has a new field - nodes - and a corresponding
method get_nodes, which gathers node information from the instance objects. The
request spec object was also updated to reset the new InstanceGroup nodes field
during group population.
---
nova/objects/instance_group.py | 34 ++++++++++++++++++++-----
nova/objects/request_spec.py | 5 ++--
nova/tests/functional/db/test_instance_group.py | 2 +-
nova/tests/unit/objects/test_instance_group.py | 6 +++--
nova/tests/unit/objects/test_objects.py | 2 +-
5 files changed, 37 insertions(+), 12 deletions(-)
diff --git a/nova/objects/instance_group.py b/nova/objects/instance_group.py
index 2be47278b2..142fff6128 100644
--- a/nova/objects/instance_group.py
+++ b/nova/objects/instance_group.py
@@ -32,7 +32,7 @@ from nova.objects import base
from nova.objects import fields
-LAZY_LOAD_FIELDS = ['hosts']
+LAZY_LOAD_FIELDS = ['hosts', 'nodes']
def _instance_group_get_query(context, id_field=None, id=None):
@@ -124,7 +124,8 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
# Version 1.9: Add get_by_instance_uuid()
# Version 1.10: Add hosts field
# Version 1.11: Add get_aggregate_uuids()
- VERSION = '1.11'
+ # Version 1.12: Add nodes field
+ VERSION = '1.12'
fields = {
'id': fields.IntegerField(),
@@ -138,6 +139,7 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
'policies': fields.ListOfStringsField(nullable=True),
'members': fields.ListOfStringsField(nullable=True),
'hosts': fields.ListOfStringsField(nullable=True),
+ 'nodes': fields.ListOfStringsField(nullable=True),
}
def obj_make_compatible(self, primitive, target_version):
@@ -283,12 +285,13 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
def obj_load_attr(self, attrname):
# NOTE(sbauza): Only hosts could be lazy-loaded right now
- if attrname != 'hosts':
+ if attrname not in LAZY_LOAD_FIELDS:
raise exception.ObjectActionError(
action='obj_load_attr', reason='unable to load %s' % attrname)
self.hosts = self.get_hosts()
- self.obj_reset_changes(['hosts'])
+ self.nodes = self.get_nodes()
+ self.obj_reset_changes(LAZY_LOAD_FIELDS)
@base.remotable_classmethod
def get_by_uuid(cls, context, uuid):
@@ -348,8 +351,9 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
# field explicitly, we prefer to raise an Exception so the developer
# knows he has to call obj_reset_changes(['hosts']) right after setting
# the field.
- if 'hosts' in updates:
- raise exception.InstanceGroupSaveException(field='hosts')
+ for attribute in LAZY_LOAD_FIELDS:
+ if attribute in updates:
+ raise exception.InstanceGroupSaveException(field=attribute)
if not updates:
return
@@ -456,6 +460,24 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
return list(set([instance.host for instance in instances
if instance.host]))
+ @base.remotable
+ def get_nodes(self, exclude=None):
+ """Get a list of nodes for non-deleted instances in the group
+
+ This method allows you to get a list of the (ironic) hosts where
+ instances in this group are currently running. There's also an option
+ to exclude certain instance UUIDs from this calculation.
+
+ """
+ filter_uuids = self.members
+ if exclude:
+ filter_uuids = set(filter_uuids) - set(exclude)
+ filters = {'uuid': filter_uuids, 'deleted': False}
+ instances = objects.InstanceList.get_by_filters(self._context,
+ filters=filters)
+ return list(set([instance.node for instance in instances
+ if instance.node]))
+
@base.remotable
def get_aggregate_uuids(self, exclude=None):
"""Returns a set of aggregate UUIDs associated with all compute nodes
diff --git a/nova/objects/request_spec.py b/nova/objects/request_spec.py
index 9040735153..24eaef9327 100644
--- a/nova/objects/request_spec.py
+++ b/nova/objects/request_spec.py
@@ -200,8 +200,9 @@ class RequestSpec(base.NovaObject):
self.instance_group = objects.InstanceGroup(policies=policies,
hosts=hosts,
members=members)
- # hosts has to be not part of the updates for saving the object
- self.instance_group.obj_reset_changes(['hosts'])
+ # hosts and nodes cannot be a part of the updates for saving the
+ # object
+ self.instance_group.obj_reset_changes(['hosts', 'nodes'])
else:
# Set the value anyway to avoid any call to obj_attr_is_set for it
self.instance_group = None
diff --git a/nova/tests/functional/db/test_instance_group.py b/nova/tests/functional/db/test_instance_group.py
index b4c7ef3fd8..3c608b929f 100644
--- a/nova/tests/functional/db/test_instance_group.py
+++ b/nova/tests/functional/db/test_instance_group.py
@@ -221,7 +221,7 @@ class InstanceGroupObjectTestCase(test.TestCase):
api_models = sorted(api_models, key=key_func)
orig_main_models = sorted(orig_main_models, key=key_func)
ignore_fields = ('id', 'hosts', 'deleted', 'deleted_at', 'created_at',
- 'updated_at')
+ 'updated_at', 'nodes')
for i in range(len(api_models)):
for field in instance_group.InstanceGroup.fields:
if field not in ignore_fields:
diff --git a/nova/tests/unit/objects/test_instance_group.py b/nova/tests/unit/objects/test_instance_group.py
index 8da6712f6e..37a71b57ce 100644
--- a/nova/tests/unit/objects/test_instance_group.py
+++ b/nova/tests/unit/objects/test_instance_group.py
@@ -271,8 +271,10 @@ class _TestInstanceGroupObject(object):
@mock.patch.object(objects.InstanceList, 'get_by_filters')
def test_load_hosts(self, mock_get_by_filt):
- mock_get_by_filt.return_value = [objects.Instance(host='host1'),
- objects.Instance(host='host2')]
+ mock_get_by_filt.return_value = [objects.Instance(host='host1',
+ node='node1'),
+ objects.Instance(host='host2',
+ node='node2')]
obj = objects.InstanceGroup(self.context, members=['uuid1'])
self.assertEqual(2, len(obj.hosts))
diff --git a/nova/tests/unit/objects/test_objects.py b/nova/tests/unit/objects/test_objects.py
index a577820d0c..f80182357c 100644
--- a/nova/tests/unit/objects/test_objects.py
+++ b/nova/tests/unit/objects/test_objects.py
@@ -1106,7 +1106,7 @@ object_data = {
'InstanceExternalEvent': '1.1-6e446ceaae5f475ead255946dd443417',
'InstanceFault': '1.2-7ef01f16f1084ad1304a513d6d410a38',
'InstanceFaultList': '1.2-6bb72de2872fe49ded5eb937a93f2451',
- 'InstanceGroup': '1.11-bdd9fa6ab3c80e92fd43b3ba5393e368',
+ 'InstanceGroup': '1.12-4eaaffc4d20d0901cd0cfaef9e8a41cd',
'InstanceGroupList': '1.7-be18078220513316abd0ae1b2d916873',
'InstanceInfoCache': '1.5-cd8b96fefe0fc8d4d337243ba0bf0e1e',
'InstanceList': '2.2-ff71772c7bf6d72f6ef6eee0199fb1c9',
--
2.16.1


@@ -0,0 +1,57 @@
From 3e4ef01cb6f3fa5545cd3be31d84295d65f73fa7 Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Fri, 23 Feb 2018 09:19:54 +0000
Subject: [PATCH 6/8] Add ability to search aggregate map via ironic node as a
key in HostManager
With this change it is now possible to map nodes to aggregates. The signature
of _get_aggregates_info in the scheduler HostManager class was changed to
accept a compute object as a parameter, so that in HostManager (the base class)
the aggregate map is searched by host, while in IronicHostManager (the
subclass) it is searched by hypervisor_hostname - which is the UUID of the
node, and which is stored as a member of the aggregate.
---
nova/scheduler/host_manager.py | 6 +++---
nova/scheduler/ironic_host_manager.py | 4 ++++
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/nova/scheduler/host_manager.py b/nova/scheduler/host_manager.py
index 7347722a94..8612a36328 100644
--- a/nova/scheduler/host_manager.py
+++ b/nova/scheduler/host_manager.py
@@ -631,7 +631,7 @@ class HostManager(object):
# happening after setting this field for the first time
host_state.update(compute,
dict(service),
- self._get_aggregates_info(host),
+ self._get_aggregates_info(compute),
self._get_instance_info(context, compute))
seen_nodes.add(state_key)
@@ -652,9 +652,9 @@ class HostManager(object):
return (self.host_state_map[host] for host in seen_nodes
if host in self.host_state_map)
- def _get_aggregates_info(self, host):
+ def _get_aggregates_info(self, compute):
return [self.aggs_by_id[agg_id] for agg_id in
- self.host_aggregates_map[host]]
+ self.host_aggregates_map[compute.host]]
def _get_instance_info(self, context, compute):
"""Gets the host instance info from the compute host.
diff --git a/nova/scheduler/ironic_host_manager.py b/nova/scheduler/ironic_host_manager.py
index 5156ed6df9..c703a810a9 100644
--- a/nova/scheduler/ironic_host_manager.py
+++ b/nova/scheduler/ironic_host_manager.py
@@ -123,3 +123,7 @@ class IronicHostManager(host_manager.HostManager):
else:
return super(IronicHostManager, self)._get_instance_info(context,
compute)
+
+ def _get_aggregates_info(self, compute):
+ return [self.aggs_by_id[agg_id] for agg_id in
+ self.host_aggregates_map[compute.hypervisor_hostname]]
--
2.16.1


@@ -0,0 +1,32 @@
From 6f8af77366402aca0555005abe469b29509d0eb3 Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Fri, 23 Feb 2018 11:28:52 +0000
Subject: [PATCH 7/8] Add nodes to group hosts to be checked against
aggregation
Currently, only hostnames (which originate from the machine on which the
compute service is running and which belong to the requested group) were
checked against host aggregates. This patch also adds instance_group.nodes to
the set of keys used as criteria for the aggregate search.
---
nova/scheduler/filters/affinity_filter.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/nova/scheduler/filters/affinity_filter.py b/nova/scheduler/filters/affinity_filter.py
index f025df45df..587293f832 100644
--- a/nova/scheduler/filters/affinity_filter.py
+++ b/nova/scheduler/filters/affinity_filter.py
@@ -177,8 +177,8 @@ class ServerGroupAggregateAffinityFilter(filters.BaseHostFilter):
# constraint
return True
- group_hosts = (spec_obj.instance_group.hosts
- if spec_obj.instance_group else [])
+ group_hosts = set(spec_obj.instance_group.nodes +
+ spec_obj.instance_group.hosts)
if not group_hosts:
# There are no members of the server group yet, so this host meets
# the aggregate affinity (or anti-affinity) constraint
--
2.16.1


@@ -0,0 +1,25 @@
From 72af3e6b58c3a732549b40fbb24067a41c7065ac Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Fri, 23 Feb 2018 11:37:16 +0000
Subject: [PATCH 8/8] Fix for checking policies in non existing instance_group
---
nova/scheduler/filters/affinity_filter.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/nova/scheduler/filters/affinity_filter.py b/nova/scheduler/filters/affinity_filter.py
index 587293f832..a316aafbcb 100644
--- a/nova/scheduler/filters/affinity_filter.py
+++ b/nova/scheduler/filters/affinity_filter.py
@@ -158,7 +158,7 @@ class ServerGroupAggregateAffinityFilter(filters.BaseHostFilter):
REVERSE_CHECK = False
def host_passes(self, host_state, spec_obj):
- if not spec_obj.instance_group.policies:
+ if not (spec_obj.instance_group and spec_obj.instance_group.policies):
return True
policy = spec_obj.instance_group.policies[0]
if self.POLICY_NAME != policy:
--
2.16.1


@@ -0,0 +1,22 @@
From 908c71544de1323e109cfec66f146ea68a71d91f Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Fri, 23 Feb 2018 12:45:01 +0100
Subject: [PATCH] Bump novaclient API version to 2.43
---
novaclient/__init__.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/novaclient/__init__.py b/novaclient/__init__.py
index 0816b4f6..61377cca 100644
--- a/novaclient/__init__.py
+++ b/novaclient/__init__.py
@@ -25,4 +25,4 @@ API_MIN_VERSION = api_versions.APIVersion("2.1")
# when client supported the max version, and bumped sequentially, otherwise
# the client may break due to server side new version may include some
# backward incompatible change.
-API_MAX_VERSION = api_versions.APIVersion("2.41")
+API_MAX_VERSION = api_versions.APIVersion("2.43")
--
2.16.1