Is there something that needs to be tweaked inside the instances to be able
to resolve '*.internal' addresses? They certainly are generally available.
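For anyone reproducing this, a quick way to check from inside an affected instance whether the proxy name resolves is sketched below. This is a diagnostic/config sketch, not the confirmed fix; `squid.internal` is taken from the log further down, and the search domain and nameserver address in the netplan fragment are hypothetical placeholders:

```shell
# Inside the affected instance: see which DNS servers and search
# domains systemd-resolved is actually using
resolvectl status

# Try resolving the proxy name directly
resolvectl query squid.internal

# If that fails, the instance may be missing a search domain or an
# internal DNS server. With netplan, that would look roughly like
# (placeholder values):
#
#   network:
#     ethernets:
#       eth0:
#         nameservers:
#           search: [internal]
#           addresses: [10.0.0.2]
#
# then apply with: sudo netplan apply
```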
On Mon, Oct 3, 2022 at 11:20 AM Bas de Bruijne <email address hidden>
wrote:
> Public bug reported:
>
> In test run
>
> https://solutions.qa.canonical.com/testruns/testRun/6d96629c-3db5-442f-a603-b7f0e08b1d1b,
> openstack-dashboard/0 fails to install with the following messages in
> the log:
>
> ```
> unit-openstack-dashboard-0: 22:42:59 TRACE juju.worker.uniter.relation no
> create relation operation to run
> unit-openstack-dashboard-0: 22:42:59 DEBUG juju.worker.uniter.operation
> running operation run install hook for openstack-dashboard/0
> unit-openstack-dashboard-0: 22:42:59 DEBUG juju.worker.uniter.operation
> preparing operation "run install hook" for openstack-dashboard/0
> unit-openstack-dashboard-0: 22:42:59 DEBUG juju.worker.uniter.operation
> executing operation "run install hook" for openstack-dashboard/0
> unit-openstack-dashboard-0: 22:43:06 WARNING
> unit.openstack-dashboard/0.install E: Failed to fetch
> http://archive.ubuntu.com/ubuntu/pool/main/i/ieee-data/ieee-data_20210605.1_all.deb
> Temporary failure resolving 'squid.internal'
> unit-openstack-dashboard-0: 22:43:06 WARNING
> unit.openstack-dashboard/0.install E: Failed to fetch
> http://archive.ubuntu.com/ubuntu/pool/main/p/python-netaddr/python3-netaddr_0.8.0-2_all.deb
> Temporary failure resolving 'squid.internal'
> unit-openstack-dashboard-0: 22:43:06 WARNING
> unit.openstack-dashboard/0.install E: Unable to fetch some archives, maybe
> run apt-get update or try with --fix-missing?
> unit-openstack-dashboard-0: 22:43:06 ERROR juju.worker.uniter.operation
> hook "install" (via explicit, bespoke hook script) failed: exit status 100
> unit-openstack-dashboard-0: 22:43:06 DEBUG juju.worker.uniter.operation
> lock released for openstack-dashboard/0
> unit-openstack-dashboard-0: 22:43:06 TRACE juju.worker.uniter.relation
> create relation resolver next op for new remote relations
> map[int]remotestate.RelationSnapshot{
> ```
>
> It could be a squid outage, but since we started testing OpenStack
> on Jammy we have been seeing this error frequently across many
> different charms. The proxy connection appears to be unstable on
> Jammy. It is possible that this is a bug in LXD or OpenStack rather
> than in Juju.
>
> Crashdumps and logs for this run can be found here:
>
> https://oil-jenkins.canonical.com/artifacts/6d96629c-3db5-442f-a603-b7f0e08b1d1b/index.html
>
> ** Affects: juju
> Importance: Undecided
> Status: New
>
> --
> You received this bug notification because you are subscribed to juju.
> Matching subscriptions: juju bugs
> https://bugs.launchpad.net/bugs/1991552
>
> Title:
> Jammy machines can't connect to proxy
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/juju/+bug/1991552/+subscriptions
>
>