Host reboot doesn't restart instances because libvirt lifecycle events change the instance's power_state to shutdown

Bug #1293480 reported by ChangBo Guo(gcb)
This bug affects 7 people
Affects                    Status        Importance  Assigned to      Milestone
OpenStack Compute (nova)   Fix Released  Medium      Thomas Bechtold
Juno                       Fix Released  Undecided   Unassigned

Bug Description

1. The libvirt driver receives libvirt lifecycle events (registered in https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1004) and handles them in https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L969. That means shutting down a domain emits a shutdown lifecycle event, and nova-compute then tries to sync the instance's power_state (a minimal sketch of this event path follows the description below).

2. When the compute service is restarted (e.g. after a host reboot), it tries to restart the instances that were running before the reboot:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911. The compute service only checks the power_state stored in the database, and that value can be changed by the sequence described in 3. As a result, after a host reboot some instances that were running before the reboot are not restarted (see the second sketch after this description).

3. When the host is rebooted, the code path is: 1) libvirt-guests shuts down all the domains, 2) libvirt sends out the lifecycle events, 3) nova-compute receives them, 4) saves power_state 'shutoff' in the DB, and 5) then tries to stop the instance. The compute service may be killed at any of these steps. In my test environment with two running instances, only one was restarted successfully; the other had its power_state set to 'shutoff' and its task_state set to 'power off' in step 4), so it cannot pass the check in https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911 and is not restarted.
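
A minimal standalone sketch of the event path from point 1 (illustrative only, not nova's actual handler; it assumes the libvirt-python bindings and a local libvirtd reachable at qemu:///system):

    # Illustrative only: register a libvirt lifecycle callback and react to a
    # domain shutdown the way nova-compute conceptually does (by syncing the
    # instance's power_state). Assumes libvirt-python and qemu:///system.
    import libvirt


    def lifecycle_callback(conn, dom, event, detail, opaque):
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            # nova's handler would turn this into a power_state sync and,
            # with vm_state ACTIVE, a call to the stop API.
            print("domain %s stopped; would sync power_state to SHUTDOWN"
                  % dom.name())


    def main():
        # The default event loop must be registered before opening the
        # connection, and then driven from a loop (or a separate thread).
        libvirt.virEventRegisterDefaultImpl()
        conn = libvirt.open("qemu:///system")
        conn.domainEventRegisterAny(
            None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
            lifecycle_callback, None)
        while True:
            libvirt.virEventRunDefaultImpl()


    if __name__ == "__main__":
        main()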

I'm not sure whether this is a bug; I wonder if there is a solution for this.
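
And a rough sketch of why the database check from point 2 blocks the restart (simplified, hypothetical logic, not the actual manager code; the real check lives around manager.py#L911):

    # Simplified, hypothetical version of the restart-on-boot decision: the
    # power_state recorded in the DB, not the hypervisor, decides whether an
    # instance is resumed after a host reboot.
    RUNNING = 'running'
    SHUTDOWN = 'shutdown'


    def should_resume_on_host_boot(db_power_state, resume_guests_on_boot=True):
        # If step 4) above already recorded 'shutoff' in the database,
        # this returns False and the instance stays down even though it
        # was running before the host was rebooted.
        return resume_guests_on_boot and db_power_state == RUNNING


    print(should_resume_on_host_boot(RUNNING))   # True: instance restarted
    print(should_resume_on_host_boot(SHUTDOWN))  # False: instance left down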

Revision history for this message
ChangBo Guo(gcb) (glongwave) wrote :

I'm not familiar with how libvirt events are detected and delivered; I just think that stopping the compute service before rebooting the host can avoid this.

Tracy Jones (tjones-i)
tags: added: libvirt
Revision history for this message
Solly Ross (sross-7) wrote :

Can you please indicate what platform you are using, as well as which versions of libvirt and OpenStack?

Changed in nova:
status: New → Incomplete
Revision history for this message
Eric Xie (mark-xiett) wrote :

I have hit this issue several times. Nova compute may sync the VM's power_state while the host is rebooting; if libvirt has already shut down the instances at that point, vm_state can end up synced to STOPPED:
        elif vm_state == vm_states.ACTIVE:
            # The only rational power state should be RUNNING
            if vm_power_state in (power_state.SHUTDOWN,
                                  power_state.CRASHED):
                LOG.warn(_("Instance shutdown by itself. Calling "
                           "the stop API."), instance=db_instance)
                try:
                    # Note(maoy): here we call the API instead of
                    # brutally updating the vm_state in the database
                    # to allow all the hooks and checks to be performed.
                    self.compute_api.stop(context, db_instance)
                except Exception:
                    # Exception handling was trimmed from this excerpt;
                    # nova logs the error here rather than re-raising it.
                    LOG.exception(_("error during stop() in "
                                    "sync_power_state."),
                                  instance=db_instance)
After the host reboot, instances that libvirt has already started can then be shut down again because their vm_state is STOPPED:
        elif vm_state == vm_states.STOPPED:
            if vm_power_state not in (power_state.NOSTATE,
                                      power_state.SHUTDOWN,
                                      power_state.CRASHED):
                LOG.warn(_("Instance is not stopped. Calling "
                           "the stop API."), instance=db_instance)
                try:
                    # NOTE(russellb) Force the stop, because normally the
                    # compute API would not allow an attempt to stop a stopped
                    # instance.
                    self.compute_api.force_stop(context, db_instance)
                except Exception:
                    # Exception handling was trimmed from this excerpt;
                    # nova logs the error here as in the branch above.
                    LOG.exception(_("error during stop() in "
                                    "sync_power_state."),
                                  instance=db_instance)

Nova version: Nova 2014.1 release

Changed in nova:
assignee: nobody → Thomas Bechtold (toabctl)
Revision history for this message
Thomas Bechtold (toabctl) wrote :

I have the same problem using OpenStack Icehouse and Xen. It seems that a KVM VM doesn't emit lifecycle events during a reboot (reboot started just with the "reboot" command inside the VM), which is why a reboot of a KVM VM just works. A VM using Xen emits a VIR_DOMAIN_EVENT_STOPPED event, which leads nova to shut down the VM (see https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L5533 ). The nova-compute.log looks like this:

2014-08-06 21:02:47.628 12222 INFO nova.compute.manager [-] Lifecycle event 1 on VM 6fab54d3-3131-460a-a025-66b937f71255
2014-08-06 21:02:47.816 12222 WARNING nova.compute.manager [-] [instance: 6fab54d3-3131-460a-a025-66b937f71255] Instance shutdown by itself. Calling the stop API.
2014-08-06 21:02:48.092 12222 INFO nova.virt.libvirt.driver [-] [instance: 6fab54d3-3131-460a-a025-66b937f71255] Instance destroyed successfully.

Changed in nova:
status: Incomplete → Confirmed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/112946

Changed in nova:
status: Confirmed → In Progress
tags: added: icehouse-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/112946
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=bd8329b34098436d18441a8129f3f20af53c2b91
Submitter: Jenkins
Branch: master

commit bd8329b34098436d18441a8129f3f20af53c2b91
Author: Thomas Bechtold <email address hidden>
Date: Tue Aug 19 17:41:57 2014 +0200

    Delay STOPPED lifecycle event for Xen domains

    When using libvirt, a reboot from inside of a kvm VM doesn't trigger
    any libvirt lifecycle event. That's fine. But rebooting a Xen VM leads
    to the events VIR_DOMAIN_EVENT_STOPPED and VIR_DOMAIN_EVENT_STARTED.
    Nova compute manager catches these events and tries to sync the power
    state of the VM with the power state in the database. In the case the VM
    state is ACTIVE but the power state is something that doesn't fit, the
    stop API call is executed to trigger all stop hooks. This leads to the
    problem that a reboot of a Xen VM without using the API isn't possible.
    To fix it, delay the emission of the STOPPED lifecycle event a couple of
    seconds. If a VIR_DOMAIN_EVENT_STARTED event is received while the STOPPED
    event is pending, cancel the pending STOPPED lifecycle event so the VM
    can continue to run.

    Closes-Bug: #1293480
    Change-Id: I690d3d700ab4d057554350da143ff77d78b509c6
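
The delay-and-cancel approach described in the commit reads roughly like the following sketch (a simplified standalone illustration with a plain threading.Timer; the actual change lives in nova's libvirt code and the names here are made up):

    # Simplified illustration (not the nova implementation) of delaying a
    # STOPPED lifecycle event and cancelling it if a STARTED event arrives
    # while the STOPPED event is still pending.
    import threading

    EVENT_STOPPED = 'stopped'
    EVENT_STARTED = 'started'
    DELAY_SECONDS = 15  # illustrative value only


    class DelayedLifecycleEmitter(object):
        def __init__(self, emit):
            self._emit = emit          # callback that delivers the event
            self._pending_stop = None  # timer for a not-yet-emitted STOPPED

        def handle(self, uuid, event):
            if event == EVENT_STOPPED:
                # Hold the STOPPED event back instead of emitting it now.
                self._pending_stop = threading.Timer(
                    DELAY_SECONDS, self._emit, args=(uuid, EVENT_STOPPED))
                self._pending_stop.start()
            elif event == EVENT_STARTED and self._pending_stop is not None:
                # The domain came straight back (e.g. a Xen reboot): drop the
                # pending STOPPED event so the VM is not stopped via the API.
                self._pending_stop.cancel()
                self._pending_stop = None
            else:
                self._emit(uuid, event)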

Changed in nova:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/icehouse)

Fix proposed to branch: stable/icehouse
Review: https://review.openstack.org/131982

Thierry Carrez (ttx)
Changed in nova:
milestone: none → kilo-1
status: Fix Committed → Fix Released
Matt Riedemann (mriedem)
tags: added: juno-backport-potential
removed: icehouse-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/juno)

Fix proposed to branch: stable/juno
Review: https://review.openstack.org/163378

Revision history for this message
Matt Riedemann (mriedem) wrote :

We also have a report of this on icehouse, and a recreate in juno with better logging. It shows that during a soft reboot (a non-xen domain in this case) the soft reboot succeeds, and shortly afterwards (8 seconds) the libvirt driver sends the stopped lifecycle event. Since the instance doesn't have a task_state set, that triggers _sync_instance_power_state to call the force_stop() API and power off the instance. One second after the stopped event we see the started event in the libvirt driver, but because we're already in the process of stopping the instance from the previous event, we're left in a shutdown state.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Attaching a juno log with a recreate, the instance uuid is 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78.

Revision history for this message
Matt Riedemann (mriedem) wrote :

There was a bug introduced with the original fix, so if we backport the first fix we also need this:

https://review.openstack.org/#/c/166184/

Revision history for this message
Matt Riedemann (mriedem) wrote :

Opened related bug 1443186 for kvm.

Matt Riedemann (mriedem)
tags: removed: juno-backport-potential
Revision history for this message
Yang Zhang (bjzyang) wrote :

Hi, I hotfixed my Juno environment with the latest code change in https://review.openstack.org/163378, but the issue can still be reproduced.

Revision history for this message
Matt Riedemann (mriedem) wrote :

@Yang, it didn't fix the problem because (1) the xen patches don't handle kvm, and (2) yours is not a host-reboot case, so the compute service is running while libvirt is sending the lifecycle events. I have a patch for kvm here:

https://review.openstack.org/#/c/172775/

The original problem reported in this bug is the compute host being rebooted from under nova. Libvirt and nova race during shutdown: libvirt sends events as it shuts down the guests, and nova tries to handle them while the compute service itself is going down, so the instances can be left in an inconsistent state and init_host might not clean them up properly on host reboot.

One issue here is I think we need to stop handling lifecycle events from the driver when the compute service (host) is going down, just like how we don't handle incoming neutron events on compute service shutdown.
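
Conceptually that would amount to something like this sketch (hypothetical names, not the real ComputeManager code; nova's virt drivers do expose a register_event_listener() hook for delivering these events):

    # Sketch only: drop virt lifecycle events once the service starts
    # shutting down, the same way neutron VIF-plugging events are ignored.
    class FakeDriver(object):
        """Stand-in for a virt driver that can deliver lifecycle events."""
        def __init__(self):
            self._listener = None

        def register_event_listener(self, callback):
            self._listener = callback

        def emit(self, event):
            if self._listener:
                self._listener(event)


    class ComputeManagerSketch(object):
        def __init__(self, driver):
            self.driver = driver
            self._events_enabled = True
            self.driver.register_event_listener(self.handle_lifecycle_event)

        def cleanup_host(self):
            # Called while the compute service is stopping: from now on,
            # ignore lifecycle events instead of acting on them.
            self._events_enabled = False

        def handle_lifecycle_event(self, event):
            if not self._events_enabled:
                return
            print("would sync power state for event: %s" % event)


    driver = FakeDriver()
    manager = ComputeManagerSketch(driver)
    driver.emit("stopped")   # handled: power state would be synced
    manager.cleanup_host()
    driver.emit("stopped")   # dropped: the service is shutting down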

Changed in nova:
importance: Undecided → Medium
Revision history for this message
Matt Riedemann (mriedem) wrote :

Reported bug 1444630 for comment 14.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/174069

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/174069
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0
Submitter: Jenkins
Branch: master

commit d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0
Author: Matt Riedemann <email address hidden>
Date: Wed Apr 15 11:51:26 2015 -0700

    compute: stop handling virt lifecycle events in cleanup_host()

    When rebooting a compute host, guest VMs can be getting shutdown
    automatically by the hypervisor and the virt driver is sending events to
    the compute manager to handle them. If the compute service is still up
    while this happens it will try to call the stop API to power off the
    instance and update the database to show the instance as stopped.

    When the compute service comes back up and events come in from the virt
    driver that the guest VMs are running, nova will see that the vm_state
    on the instance in the nova database is STOPPED and shut down the
    instance by calling the stop API (basically ignoring what the virt
    driver / hypervisor tells nova is the state of the guest VM).

    Alternatively, if the compute service shuts down after changing the
    instance task_state to 'powering-off' but before the stop API cast is
    complete, the instance can be in a strange vm_state/task_state
    combination that requires the admin to manually reset the task_state to
    recover the instance.

    Let's just try to avoid some of this mess by disconnecting the event
    handling when the compute service is shutting down like we do for
    neutron VIF plugging events. There could still be races here if the
    compute service is shutting down after the hypervisor (e.g. libvirtd),
    but this is at least a best attempt to mitigate the potential
    damage.

    Closes-Bug: #1444630
    Related-Bug: #1293480
    Related-Bug: #1408176

    Change-Id: I1a321371dff7933cdd11d31d9f9c2a2f850fd8d9

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/kilo)

Related fix proposed to branch: stable/kilo
Review: https://review.openstack.org/174477

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/juno)

Reviewed: https://review.openstack.org/163378
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=f60214b4fdbfb56c8428808a29b9cf911c0b1aa2
Submitter: Jenkins
Branch: stable/juno

commit f60214b4fdbfb56c8428808a29b9cf911c0b1aa2
Author: Thomas Bechtold <email address hidden>
Date: Tue Aug 19 17:41:57 2014 +0200

    Delay STOPPED lifecycle event for Xen domains

    When using libvirt, a reboot from inside of a kvm VM doesn't trigger
    any libvirt lifecycle event. That's fine. But rebooting a Xen VM leads
    to the events VIR_DOMAIN_EVENT_STOPPED and VIR_DOMAIN_EVENT_STARTED.
    Nova compute manager catches these events and tries to sync the power
    state of the VM with the power state in the database. In the case the VM
    state is ACTIVE but the power state is something that doesn't fit, the
    stop API call is executed to trigger all stop hooks. This leads to the
    problem that a reboot of a Xen VM without using the API isn't possible.
    To fix it, delay the emission of the STOPPED lifecycle event a couple of
    seconds. If a VIR_DOMAIN_EVENT_STARTED event is received while the STOPPED
    event is pending, cancel the pending STOPPED lifecycle event so the VM
    can continue to run.

    Closes-Bug: #1293480
    (cherry picked from commit bd8329b34098436d18441a8129f3f20af53c2b91)

    ----
    NOTE(mriedem): The fix for bug 1293480 introduced bug 1433049 so we
    have to backport both together, hence the squashed commits.
    ----

    libvirt: Delay only STOPPED event for Xen domain.

    This fixes change bd8329b34098436d18441a8129f3f20af53c2b91 (Delay STOPPED
    lifecycle event for Xen domains).

    Without this patch, a STOPPED event could be ignored if it followed a
    STARTED event.

    A scenario that hits the issue in tempest is
    ServerActionsTestJSON:test_resize_server_confirm_from_stopped, and it
    happens as follows:
    - instance is stopped
    nova start instance
    - libvirt STARTED event received and delayed
    nova stop instance
    - libvirt STOPPED event received and ignored as there is a delayed event
    nova resize instance 42
    - resize finished
    - the delayed STARTED event is emitted
    nova confirm-resize instance
    nova show instance
    - instance is shown as ACTIVE, but should be SHUTOFF

    Also fix unit tests.

    Conflicts:
            nova/tests/unit/virt/libvirt/test_host.py
            nova/virt/libvirt/host.py

    NOTE(mriedem): The conflicts are due to the code being moved to the
    nova.virt.libvirt.host module in Kilo.

    Closes-Bug: #1433049
    Change-Id: If340f9b849b930c34238c5681018a29bc826798d
    (cherry picked from commit b5a9c4e4d04d011c59fca5306be651906792f411)

    --

    Change-Id: I690d3d700ab4d057554350da143ff77d78b509c6

tags: added: in-stable-juno
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/kilo)

Reviewed: https://review.openstack.org/174477
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b19764d2c6a8160102a806c1d6811c4182a8bac8
Submitter: Jenkins
Branch: stable/kilo

commit b19764d2c6a8160102a806c1d6811c4182a8bac8
Author: Matt Riedemann <email address hidden>
Date: Wed Apr 15 11:51:26 2015 -0700

    compute: stop handling virt lifecycle events in cleanup_host()

    When rebooting a compute host, guest VMs can be getting shutdown
    automatically by the hypervisor and the virt driver is sending events to
    the compute manager to handle them. If the compute service is still up
    while this happens it will try to call the stop API to power off the
    instance and update the database to show the instance as stopped.

    When the compute service comes back up and events come in from the virt
    driver that the guest VMs are running, nova will see that the vm_state
    on the instance in the nova database is STOPPED and shut down the
    instance by calling the stop API (basically ignoring what the virt
    driver / hypervisor tells nova is the state of the guest VM).

    Alternatively, if the compute service shuts down after changing the
    instance task_state to 'powering-off' but before the stop API cast is
    complete, the instance can be in a strange vm_state/task_state
    combination that requires the admin to manually reset the task_state to
    recover the instance.

    Let's just try to avoid some of this mess by disconnecting the event
    handling when the compute service is shutting down like we do for
    neutron VIF plugging events. There could still be races here if the
    compute service is shutting down after the hypervisor (e.g. libvirtd),
    but this is at least a best attempt to mitigate the potential
    damage.

    Closes-Bug: #1444630
    Related-Bug: #1293480
    Related-Bug: #1408176

    Change-Id: I1a321371dff7933cdd11d31d9f9c2a2f850fd8d9
    (cherry picked from commit d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0)

tags: added: in-stable-kilo
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/159275
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d09785b97a282e8538642f6f8bcdd8491197ed74
Submitter: Jenkins
Branch: master

commit d09785b97a282e8538642f6f8bcdd8491197ed74
Author: Matt Riedemann <email address hidden>
Date: Wed Feb 25 14:13:45 2015 -0800

    Add config option to disable handling virt lifecycle events

    Historically the _sync_power_states periodic task has had the potential
    for race conditions and several changes have been made to try and
    tighten up this code:

    cc5388bbe81aba635fb757e202d860aeed98f3e8
    aa1792eb4c1d10e9a192142ce7e20d37871d916a
    baabab45e0ae0e9e35872cae77eb04bdb5ee0545
    bd8329b34098436d18441a8129f3f20af53c2b91

    The handle_lifecycle_events method which gets power state change events
    from the compute driver (currently only implemented by the libvirt
    driver) and calls _sync_instance_power_state - the same method that the
    _sync_power_states periodic task uses, except the periodic task at least
    locks when it's running - expands the scope for race problems in the
    compute manager so cloud providers should be able to turn it off. It is
    also known to have races with reboot, where rebooted instances are
    automatically shut down because of delayed lifecycle events reporting
    that the instance is stopped even though it's running.

    This is consistent with the view that Nova should manage its own state
    and not rely on external events telling it what to do about state
    changes. For example, in _sync_instance_power_state, if the Nova
    database thinks an instance is stopped but the hypervisor says it's
    running, the compute manager issues a force-stop on the instance.

    Also, although not documented (at least from what I can find), Nova has
    historically held a stance that it does not support out-of-band
    discovery and management of instances, so allowing external events to
    change state somewhat contradicts that stance and should be at least a
    configurable deployment option.

    DocImpact: New config option "handle_virt_lifecycle_events" in the
               DEFAULT group of nova.conf. By default the value is True
               so there is no upgrade impact or change in functionality.

    Related-Bug: #1293480
    Partial-Bug: #1443186
    Partial-Bug: #1444630

    Change-Id: I26a1bc70939fb40dc38e9c5c43bf58ed1378bcc7
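
For operators who want to opt out of this event handling entirely, the option from the DocImpact note above can be set in nova.conf (illustrative snippet; the default remains True, so nothing changes unless it is explicitly disabled):

    [DEFAULT]
    # Disable compute handling of virt lifecycle events (default: True).
    handle_virt_lifecycle_events = False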

Thierry Carrez (ttx)
Changed in nova:
milestone: kilo-1 → 2015.1.0
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/179284

Revision history for this message
iain MacDonnell (dseven) wrote :

I'm finding that the delay patch is ineffective for larger VMs. On my servers, when the VM has more than about 16G of memory, it can take more than 15 seconds for the new one to start up on reboot.

Turning off lifecycle event management doesn't work either, because then if the VM is shut down from inside (the user types "shutdown", "halt", etc.), nova doesn't know and thinks it's still up and active.

Still not sure what the right solution is... maybe not to try to destroy the VM when it has already shut down by itself?

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/179284
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=5228d4e418734164ffa5ccd91d2865d9cc659c00
Submitter: Jenkins
Branch: master

commit 906ab9d6522b3559b4ad36d40dec3af20397f223
Author: He Jie Xu <email address hidden>
Date: Thu Apr 16 07:09:34 2015 +0800

    Update rpc version aliases for kilo

    Update all of the rpc client API classes to include a version alias
    for the latest version implemented in Kilo. This alias is needed when
    doing rolling upgrades from Kilo to Liberty. With this in place, you can
    ensure all services only send messages that both Kilo and Liberty will
    understand.

    Closes-Bug: #1444745

    Conflicts:
     nova/conductor/rpcapi.py

    NOTE(alex_xu): The conflict is due to there are some logs already added
    into the master.

    Change-Id: I2952aec9aae747639aa519af55fb5fa25b8f3ab4
    (cherry picked from commit 78a8b5802ca148dcf37c5651f75f2126d261266e)

commit f191a2147a21c7e50926b288768a96900cf4c629
Author: Hans Lindgren <email address hidden>
Date: Fri Apr 24 13:10:39 2015 +0200

    Add security group calls missing from latest compute rpc api version bump

    The recent compute rpc api version bump missed out on the security group
    related calls that are part of the api.

    One possible reason is that both compute and security group client side
    rpc api:s share a single target, which is of little value and only cause
    mistakes like this.

    This change eliminates future problems like this by combining them into
    one to get a 1:1 relationship between client and server api:s.

    Change-Id: I9207592a87fab862c04d210450cbac47af6a3fd7
    Closes-Bug: #1448075
    (cherry picked from commit bebd00b117c68097203adc2e56e972d74254fc59)

commit a2872a9262985bd0ee2c6df4f7593947e0516406
Author: Dan Smith <email address hidden>
Date: Wed Apr 22 09:02:03 2015 -0700

    Fix migrate_flavor_data() to catch instances with no instance_extra rows

    The way the query was being performed previously, we would not see any
    instances that didn't have a row in instance_extra. This could happen if
    an instance hasn't been touched for several releases, or if the data
    set is old.

    The fix is a simple change to use outerjoin instead of join. This patch
    includes a test that ensures that instances with no instance_extra rows
    are included in the migration. If we query an instance without such a
    row, we create it before doing a save on the instance.

    Closes-Bug: #1447132
    Change-Id: I2620a8a4338f5c493350f26cdba3e41f3cb28de7
    (cherry picked from commit 92714accc49e85579f406de10ef8b3b510277037)

commit e3a7b83834d1ae2064094e9613df75e3b07d77cd
Author: OpenStack Proposal Bot <email address hidden>
Date: Thu Apr 23 02:18:41 2015 +0000

    Updated from global requirements

    Change-Id: I5d4acd36329fe2dccb5772fed3ec55b442597150

commit 8c9b5e620eef3233677b64cd234ed2551e6aa182
Author: Divya <email address hidden>
Date: Tue Apr 21 08:26:29 2015 +0200

    Control create/delete flavor api permissions using policy.json

    The permissions of ...

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/icehouse)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: stable/icehouse
Review: https://review.openstack.org/131982

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/juno)

Related fix proposed to branch: stable/juno
Review: https://review.openstack.org/192244

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/juno)

Reviewed: https://review.openstack.org/192244
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=7bc4be781564c6b9e7a519aecea84ddbee6bd935
Submitter: Jenkins
Branch: stable/juno

commit 7bc4be781564c6b9e7a519aecea84ddbee6bd935
Author: Matt Riedemann <email address hidden>
Date: Wed Apr 15 11:51:26 2015 -0700

    compute: stop handling virt lifecycle events in cleanup_host()

    When rebooting a compute host, guest VMs can be getting shutdown
    automatically by the hypervisor and the virt driver is sending events to
    the compute manager to handle them. If the compute service is still up
    while this happens it will try to call the stop API to power off the
    instance and update the database to show the instance as stopped.

    When the compute service comes back up and events come in from the virt
    driver that the guest VMs are running, nova will see that the vm_state
    on the instance in the nova database is STOPPED and shut down the
    instance by calling the stop API (basically ignoring what the virt
    driver / hypervisor tells nova is the state of the guest VM).

    Alternatively, if the compute service shuts down after changing the
     instance task_state to 'powering-off' but before the stop API cast is
    complete, the instance can be in a strange vm_state/task_state
    combination that requires the admin to manually reset the task_state to
    recover the instance.

    Let's just try to avoid some of this mess by disconnecting the event
    handling when the compute service is shutting down like we do for
    neutron VIF plugging events. There could still be races here if the
    compute service is shutting down after the hypervisor (e.g. libvirtd),
     but this is at least a best attempt to mitigate the potential
    damage.

    Closes-Bug: #1444630
    Related-Bug: #1293480
    Related-Bug: #1408176

    Conflicts:
     nova/compute/manager.py
     nova/tests/unit/compute/test_compute_mgr.py

    Change-Id: I1a321371dff7933cdd11d31d9f9c2a2f850fd8d9
    (cherry picked from commit d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0)
