
Added soft anti/affinity weight.

This commit is contained in:
Roman Dobosz
2018-03-21 12:56:37 +01:00
parent 7c2f00d09d
commit 04bfd51bf0
10 changed files with 405 additions and 12 deletions


@@ -1,7 +1,7 @@
 Aggregate affinity
 ==================
-This series of patches add ability for creating aggregation of ironic nodes in
+This series of patches adds the ability to create aggregations of Ironic nodes in
 Nova. This work is based on work of `Jay Pipes series`_ back ported to Ocata,
 with some additional fixes.
@@ -15,8 +15,9 @@ of two new policies:
 Note, that if openstackclient is used, it is needed to overwrite
 ``OS_COMPUTE_API_VERSION`` environment variable to value ``2.43``.
-Given, that we are working on devstack, and have available four Ironic nodes,
-basic flow to test it is as follows:
+Given that we are working on devstack and have four Ironic nodes available
+(set the ``IRONIC_VM_COUNT`` variable to ``4`` in devstack's ``local.conf``),
+the basic flow to test it is as follows:

 .. code:: shell-session
@@ -41,7 +42,84 @@ basic flow to test it is as follows:
   --hint group=$(openstack server group list | grep group1 | awk '{print $2}') \
   instance2

-this should place two ironic instances on two different `rack` aggregates.
+this should place two Ironic instances on two different *rack* aggregates. In
+a similar fashion, a group can be created with the ``aggregate-affinity``
+policy.
+
+Soft aggregate affinity
+=======================
+
+This feature is similar to the `soft (anti) affinity feature`_ implemented
+for compute hosts. Two new weighers are introduced:
+
+* aggregate-soft-affinity
+* aggregate-soft-anti-affinity
+
+They can be used to distribute the instances of an instance group between
+aggregates under two policies: keep the instances within one aggregate
+(affinity), or spread them across different aggregates (anti-affinity). If
+it is not possible to place an instance on the same aggregate (affinity) or
+on a different one (anti-affinity), it will still be placed in the specified
+group.
+
+Simple usage is as follows, using the environment described above for the
+*aggregate-affinity* feature:
+
+.. code:: shell-session
+
+   $ export OS_COMPUTE_API_VERSION=2.43
+   $ openstack aggregate create rack1
+   $ openstack aggregate create rack2
+   $ openstack aggregate add host rack1 $(openstack baremetal node list|grep node-0|awk '{print $2}')
+   $ openstack aggregate add host rack1 $(openstack baremetal node list|grep node-1|awk '{print $2}')
+   $ openstack aggregate add host rack2 $(openstack baremetal node list|grep node-2|awk '{print $2}')
+   $ openstack aggregate add host rack2 $(openstack baremetal node list|grep node-3|awk '{print $2}')
+   $ openstack server group create --policy aggregate-soft-anti-affinity group1
+   $ openstack server create \
+       --image=$(openstack image list|grep x86_64-disk| awk '{print $2}') \
+       --flavor=1 \
+       --nic net-id=$(openstack network list |grep private | awk '{print $2}') \
+       --hint group=$(openstack server group list | grep group1 | awk '{print $2}') \
+       instance1
+   $ openstack server create \
+       --image=$(openstack image list|grep x86_64-disk| awk '{print $2}') \
+       --flavor=1 \
+       --nic net-id=$(openstack network list |grep private | awk '{print $2}') \
+       --hint group=$(openstack server group list | grep group1 | awk '{print $2}') \
+       instance2
+   $ openstack server create \
+       --image=$(openstack image list|grep x86_64-disk| awk '{print $2}') \
+       --flavor=1 \
+       --nic net-id=$(openstack network list |grep private | awk '{print $2}') \
+       --hint group=$(openstack server group list | grep group1 | awk '{print $2}') \
+       instance3
+
+Unlike with the ``aggregate-anti-affinity`` policy, creating ``instance3``
+will succeed: even though no aggregate free of group members is available,
+the instance will still be placed in the group, on one of the available
+hosts within the group.
+
+Configuration
+-------------
+
+For soft aggregate (anti) affinity there is one more limitation, which comes
+from how weights currently work in Nova. Because of `this commit`_, the way
+the scheduler selects hosts has changed. This concerns all of the
+affinity/anti-affinity weighers, not only the ones newly added for
+aggregates.
+
+That change introduced a blind selection of the host from a subset of the
+weighed hosts, which are originally sorted from best fitting. A weigher
+always returns the full list of hosts (weighers are not filters), ordered
+from best to worst. There is therefore a high chance that ``nova.conf`` will
+need the scheduler filter option ``host_subset_size`` set to ``1``, like:
+
+.. code:: ini
+
+   [filter_scheduler]
+   host_subset_size = 1
+
 Creation of instances in a bulk
@@ -226,3 +304,5 @@ instances one by one, but not in the bulk.
 .. _Jay Pipes series: https://review.openstack.org/#/q/topic:bp/aggregate-affinity
+.. _this commit: https://review.openstack.org/#/c/19823/
+.. _soft (anti) affinity feature: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/soft-affinity-for-server-group.html
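
To make the ``host_subset_size`` caveat above concrete: the filter scheduler
picks a host at random from the best N weighed hosts, so any value greater
than ``1`` can defeat the ordering produced by the soft (anti) affinity
weighers. A minimal sketch of that selection step (simplified for
illustration; not the literal Nova implementation):

.. code:: python

   import random

   def pick_host(weighed_hosts, host_subset_size=1):
       # weighed_hosts is sorted best-first by the weighers. With
       # host_subset_size > 1 the scheduler chooses blindly within the
       # subset, ignoring the soft (anti) affinity ordering there;
       # host_subset_size = 1 always takes the best-weighed host.
       subset = weighed_hosts[:max(1, host_subset_size)]
       return random.choice(subset)

   pick_host(['iron3', 'iron4', 'iron1', 'iron2'])  # always 'iron3'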


@@ -1,7 +1,7 @@
 From 0f820a60994586debef47a59ebf8d9eef225b69c Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Wed, 27 Dec 2017 13:51:25 +0100
-Subject: [PATCH 1/8] allow compute nodes to be associated with host agg
+Subject: [PATCH 1/9] allow compute nodes to be associated with host agg

 This is basically an Ocata backport patch from Jay Pipes:
 https://review.openstack.org/#/c/526753


@@ -1,7 +1,7 @@
 From f5e23e436d341a44dafe5a18876cfcadc809b46b Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Mon, 8 Jan 2018 14:33:45 +0100
-Subject: [PATCH 2/8] Remove server group sched filter support caching
+Subject: [PATCH 2/9] Remove server group sched filter support caching

 Backport of https://review.openstack.org/#/c/529200 by Jay Pipes to
 Ocata.


@@ -1,7 +1,7 @@
 From 69d0e023edfc2edc123fd5ed29b79ebbd3abe97f Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Wed, 10 Jan 2018 10:37:54 +0100
-Subject: [PATCH 3/8] get instance group's aggregate associations
+Subject: [PATCH 3/9] get instance group's aggregate associations

 Ocata backport for patch from Jay Pipes:
 https://review.openstack.org/#/c/531243/


@@ -1,7 +1,7 @@
 From f69827ff3502552a45a19a50ef2cfad30c41af2d Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Thu, 18 Jan 2018 09:17:04 +0100
-Subject: [PATCH 4/8] Support aggregate affinity filters
+Subject: [PATCH 4/9] Support aggregate affinity filters

 Jay patch for two new policies: aggregate-affinity and
 aggregate-antiaffinity backported to Ocata.


@@ -1,7 +1,7 @@
 From 9014195f11d981da4dc158ab9b9b6bb594c8ea0d Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Fri, 23 Feb 2018 07:26:05 +0100
-Subject: [PATCH 5/8] Added node field for InstanceGroup objects
+Subject: [PATCH 5/9] Added node field for InstanceGroup objects

 Currently, there is only a way for getting the information which hosts
 belongs for certain instance group. By 'hosts' it means a hostname, on


@@ -1,7 +1,7 @@
 From 3e4ef01cb6f3fa5545cd3be31d84295d65f73fa7 Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Fri, 23 Feb 2018 09:19:54 +0000
-Subject: [PATCH 6/8] Add ability to search aggregate map via ironic node as a
+Subject: [PATCH 6/9] Add ability to search aggregate map via ironic node as a
  key in HostManager

 With this change now it will be possible for mapping nodes with aggregate.


@@ -1,7 +1,7 @@
 From 6f8af77366402aca0555005abe469b29509d0eb3 Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Fri, 23 Feb 2018 11:28:52 +0000
-Subject: [PATCH 7/8] Add nodes to group hosts to be checked against
+Subject: [PATCH 7/9] Add nodes to group hosts to be checked against
  aggregation

 Currently, only hostnames (which origin from machine, on which compute service


@@ -1,7 +1,7 @@
 From 72af3e6b58c3a732549b40fbb24067a41c7065ac Mon Sep 17 00:00:00 2001
 From: Roman Dobosz <roman.dobosz@intel.com>
 Date: Fri, 23 Feb 2018 11:37:16 +0000
-Subject: [PATCH 8/8] Fix for checking policies in non existing instance_group
+Subject: [PATCH 8/9] Fix for checking policies in non existing instance_group

 ---
  nova/scheduler/filters/affinity_filter.py | 2 +-


@@ -0,0 +1,313 @@
From 85c5a788ebe71089d06bc82a57a5a4b10dd72fe8 Mon Sep 17 00:00:00 2001
From: Roman Dobosz <roman.dobosz@intel.com>
Date: Wed, 14 Mar 2018 14:01:55 +0100
Subject: [PATCH 9/9] Added weight for aggregate soft (anti) affinity.
This is a feature similar to the soft (anti) affinity feature[1], which was
implemented for compute hosts. This commit introduces two new weighers:

- aggregate-soft-affinity
- aggregate-soft-anti-affinity

They can be used to distribute the instances of an instance group between
aggregates under two policies: keep the instances within one aggregate
(affinity), or spread them across different aggregates (anti-affinity). If
it is not possible to place an instance on the same aggregate (affinity) or
on a different one (anti-affinity), it will still be placed in the specified
group.
[1] http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/soft-affinity-for-server-group.html
---
.../api/openstack/compute/schemas/server_groups.py | 4 +-
nova/compute/manager.py | 2 +-
nova/conf/scheduler.py | 24 ++++
nova/scheduler/utils.py | 6 +-
nova/scheduler/weights/affinity.py | 66 +++++++++++
.../scheduler/weights/test_weights_affinity.py | 123 +++++++++++++++++++++
6 files changed, 222 insertions(+), 3 deletions(-)
diff --git a/nova/api/openstack/compute/schemas/server_groups.py b/nova/api/openstack/compute/schemas/server_groups.py
index 4b274e3251..408a559d99 100644
--- a/nova/api/openstack/compute/schemas/server_groups.py
+++ b/nova/api/openstack/compute/schemas/server_groups.py
@@ -47,4 +47,6 @@ policies['items'][0]['enum'].extend(['soft-anti-affinity', 'soft-affinity'])
create_v243 = copy.deepcopy(create_v215)
policies = create_v243['properties']['server_group']['properties']['policies']
policies['items'][0]['enum'].extend(['aggregate-anti-affinity',
- 'aggregate-affinity'])
+ 'aggregate-affinity',
+ 'aggregate-soft-anti-affinity',
+ 'aggregate-soft-affinity'])
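
For reference, after this extension the ``policies`` enum for microversion
2.43 contains the following values (a reconstruction from the ``extend``
calls visible here plus the base schema; the exact base entries are assumed
from the standard Nova schema):

.. code:: python

   # Reconstructed for illustration only:
   policies_enum = [
       'anti-affinity', 'affinity',                      # base schema
       'soft-anti-affinity', 'soft-affinity',            # microversion 2.15
       'aggregate-anti-affinity', 'aggregate-affinity',  # earlier patches in this series
       'aggregate-soft-anti-affinity', 'aggregate-soft-affinity',  # this patch
   ]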
diff --git a/nova/compute/manager.py b/nova/compute/manager.py
index 10ed9d3df0..8040d2fa7c 100644
--- a/nova/compute/manager.py
+++ b/nova/compute/manager.py
@@ -1328,7 +1328,7 @@ class ComputeManager(manager.Manager):
raise exception.RescheduledException(
instance_uuid=instance.uuid,
reason=msg)
- else:
+ elif 'aggregate-anti-affinity' == group_policy:
group_aggs = group.get_aggregate_uuids(
exclude=[instance.uuid])
if not node_aggs.isdisjoint(group_aggs):
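
The late re-check above is plain set arithmetic: for
``aggregate-anti-affinity`` the chosen node's aggregates must not intersect
the aggregates already occupied by the other members of the group. Restated
standalone (a hypothetical helper, not part of the patch):

.. code:: python

   def violates_aggregate_anti_affinity(node_aggs, group_aggs):
       # node_aggs: aggregate UUIDs of the node picked for this instance
       # group_aggs: aggregate UUIDs used by the other group members
       return not set(node_aggs).isdisjoint(group_aggs)

   violates_aggregate_anti_affinity({'agg1'}, {'agg1', 'agg2'})  # True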
diff --git a/nova/conf/scheduler.py b/nova/conf/scheduler.py
index 6b69f9d1a2..710eebcad6 100644
--- a/nova/conf/scheduler.py
+++ b/nova/conf/scheduler.py
@@ -462,6 +462,30 @@ Multiplier used for weighing hosts for group soft-anti-affinity.
Possible values:
+* An integer or float value, where the value corresponds to weight multiplier
+ for hosts with group soft anti-affinity. Only a positive value are
+ meaningful, as negative values would make this behave as a soft affinity
+ weigher.
+"""),
+ cfg.FloatOpt("aggregate_soft_affinity_weight_multiplier",
+ default=1.0,
+ help="""
+Multiplier used for weighing hosts for group soft-affinity.
+
+Possible values:
+
+* An integer or float value, where the value corresponds to weight multiplier
+  for hosts with group soft affinity. Only positive values are meaningful, as
+  negative values would make this behave as a soft anti-affinity weigher.
+"""),
+ cfg.FloatOpt(
+ "aggregate_soft_anti_affinity_weight_multiplier",
+ default=1.0,
+ help="""
+Multiplier used for weighing hosts for group soft-anti-affinity.
+
+Possible values:
+
* An integer or float value, where the value corresponds to weight multiplier
for hosts with group soft anti-affinity. Only a positive value are
meaningful, as negative values would make this behave as a soft affinity
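
For context on how these multipliers take effect: Nova normalizes each
weigher's raw weights to the ``0.0``..``1.0`` range across all candidate
hosts and only then applies the configured multiplier, before summing the
results of all enabled weighers. A rough sketch of that combination
(simplified from ``nova.weights``, not the literal implementation):

.. code:: python

   def scaled_weight(raw, all_raw, multiplier=1.0):
       # Normalize this weigher's raw value against the minimum and
       # maximum seen across all hosts, then scale by the multiplier.
       lo, hi = min(all_raw), max(all_raw)
       if hi == lo:
           return 0.0
       return multiplier * (raw - lo) / (hi - lo)

   # Raw weights of 2.0, 1.0 and 0.0 normalize to 1.0, 0.5 and 0.0
   # before the multiplier is applied.
   scaled_weight(2.0, [2.0, 1.0, 0.0])  # 1.0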
diff --git a/nova/scheduler/utils.py b/nova/scheduler/utils.py
index 57a306e07a..57f8cf343f 100644
--- a/nova/scheduler/utils.py
+++ b/nova/scheduler/utils.py
@@ -311,7 +311,11 @@ def _get_group_details(context, instance_uuid, user_group_hosts=None):
'aggregate-affinity': (
_validate_filter, 'ServerGroupAggregateAffinityFilter'),
'aggregate-anti-affinity': (
- _validate_filter, 'ServerGroupAggregateAntiAffinityFilter')
+ _validate_filter, 'ServerGroupAggregateAntiAffinityFilter'),
+ 'aggregate-soft-affinity': (
+ _validate_weigher, 'ServerGroupAggregateSoftAffinityWeigher'),
+ 'aggregate-soft-anti-affinity': (
+ _validate_weigher, 'ServerGroupAggregateSoftAntiAffinityWeigher')
}
check_fn, class_name = checks[group_policy]
diff --git a/nova/scheduler/weights/affinity.py b/nova/scheduler/weights/affinity.py
index 1a9a277b86..9f98c9a510 100644
--- a/nova/scheduler/weights/affinity.py
+++ b/nova/scheduler/weights/affinity.py
@@ -95,3 +95,69 @@ class ServerGroupSoftAntiAffinityWeigher(_SoftAffinityWeigherBase):
weight = super(ServerGroupSoftAntiAffinityWeigher, self)._weigh_object(
host_state, request_spec)
return -1 * weight
+
+
+class ServerGroupAggregateSoftAffinityWeigher(weights.BaseHostWeigher):
+ """ServerGroupAggregateSoftAffinityWeigher implements the soft-affinity
+    policy for server groups by preferring the aggregates that have more
+ instances from the given group.
+ """
+
+ POLICY_NAME = 'aggregate-soft-affinity'
+ CONF = CONF.filter_scheduler.aggregate_soft_affinity_weight_multiplier
+
+ def _pre_checks(self, host_state, request_spec):
+ if not (request_spec.instance_group and
+ request_spec.instance_group.policies):
+ return 0
+
+ policy = request_spec.instance_group.policies[0]
+ if self.POLICY_NAME != policy:
+ return 0
+
+ self.group_hosts = set(request_spec.instance_group.nodes +
+ request_spec.instance_group.hosts)
+
+ if not self.group_hosts:
+ # There are no members of the server group yet, so this host meets
+ # the aggregate affinity (or anti-affinity) constraint
+ return 0
+
+ return 1
+
+ def _weigh_object(self, host_state, request_spec):
+ """Higher weights win."""
+ if not self._pre_checks(host_state, request_spec):
+ return 0
+
+ weight = []
+ for aggregate in host_state.aggregates:
+ aggregate_weight = 0
+ for hostname in aggregate.hosts:
+ if hostname in self.group_hosts:
+ aggregate_weight += 1
+ weight.append(aggregate_weight)
+
+ if not weight:
+ return 0
+
+ return float(sum(weight)) / len(weight)
+
+ def weight_multiplier(self):
+ """How weighted this weigher should be."""
+ return self.CONF
+
+
+class ServerGroupAggregateSoftAntiAffinityWeigher(
+ ServerGroupAggregateSoftAffinityWeigher):
+ """ServerGroupAggregateSoftAntiAffinityWeigher implements the
+    soft-anti-affinity policy for server groups by preferring the aggregates
+    that have fewer instances from the given group.
+ """
+
+ POLICY_NAME = 'aggregate-soft-anti-affinity'
+ CONF = CONF.filter_scheduler.aggregate_soft_anti_affinity_weight_multiplier
+
+ def _weigh_object(self, host_state, request_spec):
+ return -1 * super(ServerGroupAggregateSoftAntiAffinityWeigher,
+ self)._weigh_object(host_state, request_spec)
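
A worked example of the weighing logic above, using the same two-rack
topology as the tests that follow. This is just the arithmetic from
``_weigh_object`` restated standalone: count group members per aggregate the
host belongs to, then average over those aggregates (the anti-affinity
weigher negates the result):

.. code:: python

   def aggregate_weight(host_aggregates, group_members):
       # host_aggregates: list of host lists, one per aggregate the
       # candidate host belongs to; group_members: hosts/nodes already
       # holding instances of the server group.
       counts = [sum(1 for h in hosts if h in group_members)
                 for hosts in host_aggregates]
       return float(sum(counts)) / len(counts) if counts else 0.0

   rack1, rack2 = ['iron1', 'iron2'], ['iron3', 'iron4']
   members = {'iron1'}  # one group instance already on iron1
   aggregate_weight([rack1], members)  # 1.0 -> wins under soft affinity
   aggregate_weight([rack2], members)  # 0.0 -> wins under soft anti-affinity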
diff --git a/nova/tests/unit/scheduler/weights/test_weights_affinity.py b/nova/tests/unit/scheduler/weights/test_weights_affinity.py
index 21dbc19c9f..f5b898228a 100644
--- a/nova/tests/unit/scheduler/weights/test_weights_affinity.py
+++ b/nova/tests/unit/scheduler/weights/test_weights_affinity.py
@@ -157,3 +157,126 @@ class SoftAntiAffinityWeigherTestCase(SoftWeigherTestBase):
expected_weight=0.0,
expected_host='host2')
self.assertEqual(1, mock_log.warning.call_count)
+
+
+class _FakeAggregate(object):
+ def __init__(self, hosts):
+ self.hosts = hosts
+
+
+class AggregateSoftWeigherTestBase(test.NoDBTestCase):
+
+ def setUp(self):
+ super(AggregateSoftWeigherTestBase, self).setUp()
+ hosts = (('host1', 'iron1',
+ {'aggregates': [_FakeAggregate(['iron1',
+ 'iron2'])],
+ 'instances': {'i1': mock.sentinel,
+ 'i2': mock.sentinel}}),
+ ('host1', 'iron2',
+ {'aggregates': [_FakeAggregate(['iron1',
+ 'iron2'])],
+ 'instances': {'i3': mock.sentinel}}),
+ ('host1', 'iron3',
+ {'aggregates': [_FakeAggregate(['iron3',
+ 'iron4'])],
+ 'instances': {'i3': mock.sentinel}}),
+ ('host1', 'iron4',
+ {'aggregates': [_FakeAggregate(['iron3',
+ 'iron4'])],
+ 'instances': {'i3': mock.sentinel}}))
+
+ self.hs_list = []
+ for host in hosts:
+ self.hs_list.append(fakes.FakeHostState(*host))
+
+
+class TestAggregateSoftAntiAffinityWeigher(AggregateSoftWeigherTestBase):
+
+ def setUp(self):
+ super(TestAggregateSoftAntiAffinityWeigher, self).setUp()
+ self.weighers = [affinity.
+ ServerGroupAggregateSoftAntiAffinityWeigher()]
+ self.weight_handler = weights.HostWeightHandler()
+
+ def test_no_instances(self):
+
+ ig = objects.InstanceGroup(policies=['aggregate-soft-anti-affinity'],
+ hosts=[],
+ nodes=[])
+
+ req_spec = objects.RequestSpec(instance_group=ig)
+
+ res = self.weight_handler.get_weighed_objects(self.weighers,
+ self.hs_list, req_spec)
+ self.assertIn(res[0].obj.nodename,
+ ('iron1', 'iron2', 'iron3', 'iron4'))
+
+ def test_instance_in_first_aggregate(self):
+
+ ig = objects.InstanceGroup(policies=['aggregate-soft-anti-affinity'],
+ hosts=['host1'],
+ nodes=['iron1'])
+
+ req_spec = objects.RequestSpec(instance_group=ig)
+
+ res = self.weight_handler.get_weighed_objects(self.weighers,
+ self.hs_list, req_spec)
+ self.assertIn(res[0].obj.nodename, ('iron3', 'iron4'))
+
+ def test_two_instances_in_first_aggregate(self):
+
+ ig = objects.InstanceGroup(policies=['aggregate-soft-anti-affinity'],
+ hosts=['host1'],
+ nodes=['iron1', 'iron2'])
+
+ req_spec = objects.RequestSpec(instance_group=ig)
+
+ res = self.weight_handler.get_weighed_objects(self.weighers,
+ self.hs_list, req_spec)
+ self.assertIn(res[0].obj.nodename, ('iron3', 'iron4'))
+
+
+class TestAggregateSoftAffinityWeigher(AggregateSoftWeigherTestBase):
+
+ def setUp(self):
+ super(TestAggregateSoftAffinityWeigher, self).setUp()
+ self.weight_handler = weights.HostWeightHandler()
+ self.weighers = [affinity.ServerGroupAggregateSoftAffinityWeigher()]
+
+ def test_no_instances(self):
+
+        ig = objects.InstanceGroup(policies=['aggregate-soft-affinity'],
+ hosts=[],
+ nodes=[])
+
+ req_spec = objects.RequestSpec(instance_group=ig)
+
+ res = self.weight_handler.get_weighed_objects(self.weighers,
+ self.hs_list, req_spec)
+ self.assertIn(res[0].obj.nodename,
+ ('iron1', 'iron2', 'iron3', 'iron4'))
+
+ def test_instance_in_first_aggregate(self):
+
+        ig = objects.InstanceGroup(policies=['aggregate-soft-affinity'],
+ hosts=['host1'],
+ nodes=['iron1'])
+
+ req_spec = objects.RequestSpec(instance_group=ig)
+
+ res = self.weight_handler.get_weighed_objects(self.weighers,
+ self.hs_list, req_spec)
+ self.assertIn(res[0].obj.nodename, ('iron1', 'iron2'))
+
+ def test_two_instances_in_first_aggregate(self):
+
+        ig = objects.InstanceGroup(policies=['aggregate-soft-affinity'],
+ hosts=['host1'],
+ nodes=['iron1', 'iron2'])
+
+ req_spec = objects.RequestSpec(instance_group=ig)
+
+ res = self.weight_handler.get_weighed_objects(self.weighers,
+ self.hs_list, req_spec)
+ self.assertIn(res[0].obj.nodename, ('iron1', 'iron2'))
--
2.16.1