I would actually expect "juju status" to show a public address if possible,
rather than an internal address. That doesn't mean it is the address that a
given unit should be advertising to its peers (that would be "juju run
--unit ceph-mon/0 network-get --primary-address BINDING").
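For example, a minimal sketch of comparing the two (assuming ceph-mon defines a
binding named "cluster"; substitute whatever endpoint or extra-binding your charm
actually uses):

    # Address the unit should advertise on the "cluster" binding:
    juju run --unit ceph-mon/0 'network-get --primary-address cluster'

    # Public address as reported by status, which may be on a different subnet:
    juju status ceph-mon/0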
Did you actually deploy the services bound into spaces, or did you just
deploy them on a system that has more than one network interface and hope
it would always pick what is "right" for you?
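If they were not bound, here is a sketch of doing so explicitly (assuming spaces
named "internal-api" and "public-api" have already been defined over the two
subnets; the "public" endpoint name is only illustrative and depends on the charm):

    # Bind all endpoints to internal-api by default, with one override onto public-api:
    juju deploy ceph-mon --bind "internal-api public=public-api"

With explicit bindings, network-get returns the address from the bound space even
if "juju status" chooses to display a different, public-facing one.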
On Thu, Jan 26, 2017 at 9:18 PM, Narinder Gupta <email address hidden>
wrote:
> Here is one example where the juju machines show a different subnet for the
> public address. I have two networks, 10.200.2.x and 10.200.5.x, and both
> are routable. The PXE boot network is 10.200.5.x, but the nodes' public
> address is on 10.200.2.x.
>
> Unit                     Workload  Agent       Machine  Public address  Ports  Message
> aodh/0                   waiting   allocating  0/lxd/0                         waiting for machine
> ceilometer/0             waiting   allocating  2/lxd/0                         waiting for machine
> ceph-mon/0               waiting   allocating  2/lxd/1                         waiting for machine
> ceph-mon/1               waiting   allocating  1/lxd/0                         waiting for machine
> ceph-mon/2               waiting   allocating  0/lxd/1                         waiting for machine
> ceph-osd/0               waiting   allocating  0        10.200.2.15            waiting for machine
> ceph-osd/1               waiting   allocating  1        10.200.2.13            waiting for machine
> ceph-osd/2               waiting   allocating  2        10.200.2.14            waiting for machine
> ceph-radosgw/0           waiting   allocating  0/lxd/2                         waiting for machine
> cinder/0                 waiting   allocating  1/lxd/1                         waiting for machine
> glance/0                 waiting   allocating  0/lxd/3                         waiting for machine
> heat/0                   waiting   allocating  0/lxd/4                         waiting for machine
> keystone/0               waiting   allocating  2/lxd/2                         waiting for machine
> mongodb/0                waiting   allocating  0/lxd/5                         waiting for machine
> mysql/0                  waiting   allocating  0/lxd/6                         waiting for machine
> neutron-api/0            waiting   allocating  1/lxd/2                         waiting for machine
> neutron-gateway/0        waiting   allocating  0        10.200.2.15            waiting for machine
> nodes/0                  waiting   allocating  0        10.200.2.15            waiting for machine
> nodes/1                  waiting   allocating  1        10.200.2.13            waiting for machine
> nodes/2                  waiting   allocating  2        10.200.2.14            waiting for machine
> nova-cloud-controller/0  waiting   allocating  1/lxd/3                         waiting for machine
> nova-compute/0           waiting   allocating  1        10.200.2.13            waiting for machine
> nova-compute/1           waiting   allocating  2        10.200.2.14            waiting for machine
> openstack-dashboard/0    waiting   allocating  1/lxd/4                         waiting for machine
> opnfv-promise/0          waiting   allocating  0/lxd/7                         waiting for machine
> rabbitmq-server/0        waiting   allocating  2/lxd/3                         waiting for machine
>
> --
> You received this bug notification because you are subscribed to juju-core.
> https://bugs.launchpad.net/bugs/1659102
>
> Title:
> juju status shows ip address from public-api space rather than
> internal-api space
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/juju-core/+bug/1659102/+subscriptions
>