After a restart of all nodes in the cluster, the cinder-volume services on the compute/storage nodes do not start correctly because of a connection error to the MySQL server:
http://paste.openstack.org/show/172778/
Steps to reproduce:
1. Create new cluster: Ubuntu, HA, NeutronGre
2. Add 1 controller and 2 compute+cinder nodes
3. Deploy changes. Everything works fine after deployment; the cluster passes health checks.
4. Shut down all nodes in the cluster (/sbin/shutdown -Ph now); a minimal sketch of this step is shown after the list.
5. Start all nodes simultaneously.
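A minimal sketch of how steps 4-5 could be scripted from a single host (the node names, SSH access and the way the nodes are powered back on are assumptions, not part of the original report):
# Assumed node names; the real list can be taken e.g. from the Fuel master.
for node in node-1 node-2 node-3; do
    # Step 4: power off every node in the cluster
    ssh "$node" '/sbin/shutdown -Ph now'
done
# Step 5: all nodes are then powered back on at the same time (manually or via BMC).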
Expected result:
- all nodes boot correctly; the cluster passes health checks
Actual:
- cinder-volume services are down
Here is the output of the cinder service-list command on the controller:
+------------------+--------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+--------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node-2 | nova | enabled | up | 2015-02-13T09:42:52.000000 | None |
| cinder-volume | node-1 | nova | enabled | down | 2015-02-13T06:32:46.000000 | None |
| cinder-volume | node-3 | nova | enabled | down | 2015-02-13T06:32:54.000000 | None |
+------------------+--------+------+---------+-------+----------------------------+-----------------+
On the compute/cinder nodes the cinder-volume services were down:
root@node-1:~# /etc/init.d/cinder-volume status
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service cinder-volume status
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the status(8) utility, e.g. status cinder-volume
cinder-volume stop/waiting
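A quick way to confirm the cause on an affected node is to look at the database connection string cinder-volume uses and check whether MySQL answers at all; once it does, a restart brings the service back. A rough sketch, assuming the stock Ubuntu/Upstart layout (the option may be named connection or sql_connection depending on the release, and the MySQL host is a placeholder):
# Show the MySQL connection string cinder-volume is configured with
grep -E '^(sql_)?connection' /etc/cinder/cinder.conf
# Check that the MySQL endpoint from that string is reachable
mysql -h <mysql_host> -u cinder -p -e 'SELECT 1;'
# Once MySQL answers, restart the service and verify its state
service cinder-volume restart
status cinder-volume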
After the 'cinder-volume' services were restarted they began to work fine. A diagnostic snapshot is attached. Fuel version info: http://paste.openstack.org/show/172789/
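A possible mitigation until the root cause is fixed would be to let Upstart keep retrying the job instead of leaving it in stop/waiting; a sketch of the idea only (untested, and the override path and stanzas are assumptions about the stock Upstart job):
# /etc/init/cinder-volume.override
# Restart cinder-volume automatically if it exits (e.g. while MySQL is still starting)
respawn
# Allow up to 10 restarts within 60 seconds before Upstart gives up
respawn limit 10 60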