Forgot to include that detail: Trusty.
Sent from my iPhone
> On Apr 25, 2014, at 2:58 PM, Andreas Hasenack <email address hidden> wrote:
>
> As a counterpoint, it worked for me with juju 1.18.1:
>
> $ juju status
> environment: scapestack
> machines:
> "0":
> agent-state: started
> agent-version: 1.18.1
> dns-name: some.node
> instance-id: /MAAS/api/1.0/nodes/node-some-uuid
> series: precise
> services: {}
>
> $ juju ssh 0 ifconfig br0
> br0 Link encap:Ethernet HWaddr aa:aa:aa:aa:aa:aa
> inet addr:1.2.3.4 Bcast:1.2.3.255 Mask:255.255.255.0
> (...)
>
> Is your bootstrap node on precise or trusty?
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/1271144
>
> Title:
> br0 not brought up by cloud-init script with MAAS provider
>
> Status in juju-core:
> Fix Released
> Status in “juju-core” package in Ubuntu:
> Fix Released
> Status in “juju-core” source package in Trusty:
> Fix Released
>
> Bug description:
> Setup: a virtual OpenStack deployment on the cabeiri host in qalab.
>
> There are three KVM VMs:
>
> - virtmaas is the MAAS controller;
> - virtjuju is the Juju bootstrap node (machine 0);
> - virtstack is the OpenStack deployment target (machine 1).
>
> Running the command:
>
> test@virtmaas:~$ juju deploy --config=openstack.cfg ceph --to lxc:1
>
> the following error appears after a while:
>
> test@virtmaas:~$ juju status
> environment: maas
> machines:
> "0":
> agent-state: started
> agent-version: 1.17.0.1
> dns-name: virtjuju.master
> instance-id: /MAAS/api/1.0/nodes/node-6df73c0a-7ed2-11e3-bac3-5254006e0119/
> series: precise
> "1":
> agent-state: started
> agent-version: 1.17.0.1
> dns-name: virtstack.master
> instance-id: /MAAS/api/1.0/nodes/node-ef9619e0-7f84-11e3-b750-5254006e0119/
> series: precise
> containers:
> 1/lxc/0:
> agent-state-info: '(error: error executing "lxc-start": command get_init_pid
> failed to receive response)'
> instance-id: pending
> series: precise
> services:
> ceph:
> charm: cs:precise/ceph-19
> exposed: false
> relations:
> mon:
> - ceph
> units:
> ceph/0:
> agent-state: pending
> machine: 1/lxc/0
>
> On virtstack:
>
> /var/log/juju/machine-1.log
>
> 2014-01-21 11:09:50 INFO juju runner.go:262 worker: start "lxc-provisioner"
> 2014-01-21 11:09:50 INFO juju.provisioner provisioner_task.go:114 Starting up provisioner task machine-1
> 2014-01-21 11:09:50 INFO juju.provisioner provisioner_task.go:298 found machine "1/lxc/0" pending provisioning
> 2014-01-21 11:09:50 INFO juju.provisioner.lxc lxc-broker.go:54 starting lxc container for machineId: 1/lxc/0
> 2014-01-21 11:10:22 ERROR juju.container.lxc lxc.go:129 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
> 2014-01-21 11:10:22 ERROR juju.provisioner.lxc lxc-broker.go:85 failed to start container: error executing "lxc-start": command get_init_pid failed to receive response
> 2014-01-21 11:10:22 ERROR juju.provisioner provisioner_task.go:399 cannot start instance for machine "1/lxc/0": error executing "lxc-start": command get_init_pid failed to receive response
>
> /var/log/lxc/juju-machine-1-lxc-0.log: empty
>
> Apparently br0, needed by MAAS, is not brought up by Juju's cloud-init
> script:
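>
> For reference, the bridge definition that MAAS-deployed nodes normally
> carry in /etc/network/interfaces looks roughly like the stanza below (an
> illustrative ifupdown sketch, not the actual file from this node):
>
> # eth0 is enslaved to the bridge; br0 carries the node's address
> auto eth0
> iface eth0 inet manual
>
> auto br0
> iface br0 inet dhcp
>     bridge_ports eth0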
>
> ubuntu@virtstack:~$ ifconfig
> eth0 Link encap:Ethernet HWaddr 52:54:00:6c:c6:c1
> inet addr:192.168.100.152 Bcast:192.168.100.255 Mask:255.255.255.0
> inet6 addr: fe80::5054:ff:fe6c:c6c1/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:84169 errors:0 dropped:0 overruns:0 frame:0
> TX packets:26339 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:307004880 (307.0 MB) TX bytes:2763330 (2.7 MB)
>
> lo Link encap:Local Loopback
> inet addr:127.0.0.1 Mask:255.0.0.0
> inet6 addr: ::1/128 Scope:Host
> UP LOOPBACK RUNNING MTU:16436 Metric:1
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>
> lxcbr0 Link encap:Ethernet HWaddr 22:14:e3:a6:66:d0
> inet addr:10.0.3.1 Bcast:10.0.3.255 Mask:255.255.255.0
> inet6 addr: fe80::2014:e3ff:fea6:66d0/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:0 (0.0 B) TX bytes:2016 (2.0 KB)
>
> Bringing it up manually:
>
> ubuntu@virtstack:~$ sudo bash -c "ifdown eth0; ifup eth0; ifup br0"
> * Disconnecting iSCSI targets
> ...done.
> * Stopping iSCSI initiator service
> ...done.
> * Starting iSCSI initiator service iscsid
> ...done.
> * Setting up iSCSI targets
> ...done.
> ssh stop/waiting
> ssh start/running, process 1369
>
> Waiting for br0 to get ready (MAXWAIT is 32 seconds).
> * Setting up iSCSI targets
> ...done.
> ssh stop/waiting
> ssh start/running, process 1486
>
> ubuntu@virtstack:~$ ifconfig
> br0 Link encap:Ethernet HWaddr 52:54:00:6c:c6:c1
> inet addr:192.168.100.152 Bcast:192.168.100.255 Mask:255.255.255.0
> inet6 addr: fe80::5054:ff:fe6c:c6c1/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:66 errors:0 dropped:0 overruns:0 frame:0
> TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:5632 (5.6 KB) TX bytes:5539 (5.5 KB)
>
> eth0 Link encap:Ethernet HWaddr 52:54:00:6c:c6:c1
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:84338 errors:0 dropped:0 overruns:0 frame:0
> TX packets:26466 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:307020587 (307.0 MB) TX bytes:2778824 (2.7 MB)
>
> lo Link encap:Local Loopback
> inet addr:127.0.0.1 Mask:255.0.0.0
> inet6 addr: ::1/128 Scope:Host
> UP LOOPBACK RUNNING MTU:16436 Metric:1
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>
> lxcbr0 Link encap:Ethernet HWaddr 22:14:e3:a6:66:d0
> inet addr:10.0.3.1 Bcast:10.0.3.255 Mask:255.255.255.0
> inet6 addr: fe80::2014:e3ff:fea6:66d0/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:0 (0.0 B) TX bytes:2016 (2.0 KB)
>
> Deploying ceph again now works.
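>
> A quick way to confirm whether cloud-init wrote the bridge stanza at all
> is to search the standard ifupdown configuration for it (shown as a
> suggestion; the exact file layout may differ):
>
> ubuntu@virtstack:~$ grep -B1 -A3 br0 /etc/network/interfaces
>
> If nothing matches, the cloud-init script never configured the bridge.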
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/juju-core/+bug/1271144/+subscriptions