Comment 15 for bug 1364040

Dennis Dmitriev (ddmitriev) wrote : Re: Deployment ha_flat_scalability finished with errors in puppet log

This scalability test consists of several steps.
In the first step, an HA cluster is deployed with only one controller.
When the cluster is ready, in the second step two more controllers are added to the cluster and the deployment starts again.
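
A rough sketch of that flow, for illustration only; the environment object and helper names (create_cluster, add_controllers, deploy) are hypothetical stand-ins, not the actual fuel-qa API:

def ha_flat_scalability(env):
    # Step 1: deploy an HA cluster with a single controller.
    cluster = env.create_cluster(mode="ha_flat")
    cluster.add_controllers(count=1)
    cluster.deploy()  # astute marks this node (node-2) as 'primary-controller'

    # Step 2: add two more controllers and run the deployment again.
    cluster.add_controllers(count=2)
    cluster.deploy()  # here a second 'primary-controller' (node-1) appears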

It looks like in the second step 'astute' configures one of the two additional controllers as another primary controller, which possibly causes a resource collision in corosync.

According to the astute.log on the master node, the first 'primary' controller was 'node-2':
======= astute.log
2014-11-19T12:44:09 debug: [417] Process message from worker queue:
...
\"nodes\": [{\"swift_zone\": \"2\", \"uid\": \"2\", \"public_address\": \"10.108.100.3\", \"internal_netmask\": \"255.255.255.0\", \"fqdn\": \"node-2.test.domain.local\", \"role\": \"primary-controller\"

And then the second 'primary' controller 'node-1' was added:
======= astute.log
2014-11-19T13:03:43 debug: [397] Process message from worker queue:
...
\"nodes\": [{\"swift_zone\": \"1\", \"uid\": \"1\", \"public_address\": \"10.108.100.4\", \"internal_netmask\": \"255.255.255.0\", \"fqdn\": \"node-1.test.domain.local\", \"role\": \"primary-controller\"