nagios monitoring of package imports needed
Bug #589521 reported by Robert Collins
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ubuntu Distributed Development | Triaged | Critical | Unassigned |
Bug Description
The mass importer can fail from time to time, in various ways. We need alerting when this happens.
Specific known failures:
- jobs lock up one by one until they are all hung and the importer is
halted (see the check sketch after this list)
- ..
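As a sketch of the kind of automated check that could catch the
hung-jobs case: assuming the importer records per-job state and start
times in its status database (the path, table, and column names below
are illustrative guesses, not the importer's actual schema), a nagios
plugin might look like:

    #!/usr/bin/env python
    """Hypothetical nagios check for hung import jobs.

    Assumes the importer's sqlite status db has a 'jobs' table with
    'status' and 'started_at' (epoch seconds) columns -- these names
    are illustrative, not the importer's actual schema.
    """
    import sqlite3
    import sys
    import time

    DB_PATH = "/srv/package-import/status.db"  # hypothetical path
    MAX_JOB_AGE = 6 * 3600  # treat a job running > 6 hours as hung

    def main():
        conn = sqlite3.connect(DB_PATH)
        cutoff = time.time() - MAX_JOB_AGE
        stuck = conn.execute(
            "SELECT COUNT(*) FROM jobs WHERE status = 'running'"
            " AND started_at < ?", (cutoff,)).fetchone()[0]
        if stuck:
            print("CRITICAL: %d import job(s) running for over 6 hours" % stuck)
            sys.exit(2)
        print("OK: no import jobs look hung")
        sys.exit(0)

    if __name__ == "__main__":
        main()

Nagios treats exit code 0 as OK, 1 as WARNING, and 2 as CRITICAL, so
wiring a script like this in is just a matter of adding a command
definition on the nagios side.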
The original RT #39614 related to this was about nagios integration, but it grew into moving the package importer under LOSA control. So I'm posting a bit of "why this can and cannot work today" to this bug, though really there are probably several bugs that should be split out of my post.
We talked about this a fair amount at the recent sprint. I wasn't aware of this RT, though I was aware of the effort to get the importer under LOSA control.
For a quick summary:
I think we can migrate to another machine with minimal fuss.
We'll still need direct login to the new machine for the foreseeable
future, because most maintenance tasks (e.g. restarting a failing
import) require manual intervention.
I would like to see at least a little nagios integration, so that
checking the state of the import can move from manual polling to
automated monitoring.
At the moment, there are a few aspects of this which I think are relevant.
1) package-import is currently monitored manually, which prior to this
week basically meant whenever James Westby got around to checking
on it (or someone complained sufficiently about a failure).
It would be nice to get some level of nagios warning/critical so
that we don't have to manually poll the service.
Since the imports aren't perfect yet, we can't just alert on "there
are failing imports", but we could alert on "we normally have around
500 failed imports, and now we have 1000". That would help catch the
"can no longer reach archive.debian.org through Canonical's firewall"
cases. (A sketch of such a check follows this point.)
As we improve the UDD workflow, this sort of infrastructure
eventually either becomes critical or becomes obsolete. (People
start depending on the branches existing, but they may also start
creating the branches directly, rather than having the importer do
the work.)
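A minimal sketch of that threshold check, again assuming a status
database whose table name, path, and thresholds are placeholders
rather than the importer's real values:

    #!/usr/bin/env python
    """Hypothetical nagios check: alert when failed imports exceed a baseline.

    The 'failures' table and both thresholds are placeholders; a real
    check would read whatever count the importer actually records.
    """
    import sqlite3
    import sys

    DB_PATH = "/srv/package-import/status.db"  # hypothetical path
    WARNING_AT = 750    # normal background level is ~500 failures
    CRITICAL_AT = 1000  # a jump to ~1000 suggests a systemic problem

    def main():
        conn = sqlite3.connect(DB_PATH)
        failed = conn.execute("SELECT COUNT(*) FROM failures").fetchone()[0]
        if failed >= CRITICAL_AT:
            print("CRITICAL: %d failed imports" % failed)
            sys.exit(2)
        if failed >= WARNING_AT:
            print("WARNING: %d failed imports" % failed)
            sys.exit(1)
        print("OK: %d failed imports" % failed)
        sys.exit(0)

    if __name__ == "__main__":
        main()

The point of the two thresholds is exactly the "baseline vs. spike"
distinction above: a steady background of known failures stays OK,
while a sudden jump pages someone.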
2) Jubany is a powerful server which is meant to be assigned to another task.
a) We shouldn't need this much hardware. It really depends on the QoS
we want to provide after major updates. Most of the time there
aren't huge numbers of packages getting .deb updates, except when
we open up a new release series, etc. Also notable is the case where
we fix a major bug and suddenly 600 packages need to be re-scanned.
b) Load on the system can probably be tuned easily via the number of
parallel imports we run; on Jubany it is 8. This relates to how
many CPUs and how much peak memory we have, etc. (A sketch of this
knob follows at the end of this comment.)
c) The code isn't particularly optimized for low load per import yet.
It depends on whether it is better to tweak that, or just spend
money on more hardware.
d) The system doesn't scale to multiple machines particularly well.
It currently uses an sqlite database for tracking its state. We
could probably migrate it to a postgres db, etc, and then have a
clearer way to scale it horizontally. (Ideally you could run it as
a cloud-ish service, and then on a new release just fire up 20
instances to churn through the queue.)
e) Anyway, no real blockers *today* to just hosting the service on a
new machine, as long as the state gets copied over correctly.
(just copying the /srv/package-imp...
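Regarding (b) above: the load knob is essentially just the size of
the worker pool. An illustrative sketch, where run_import is a
stand-in for the real per-package work rather than the importer's
actual code:

    """Illustrative only: bounding importer load with a fixed-size pool.

    run_import stands in for the real per-package import; the pool size
    (8 on Jubany today) is what caps CPU and peak memory use.
    """
    from concurrent.futures import ThreadPoolExecutor

    MAX_PARALLEL_IMPORTS = 8  # Jubany's current setting

    def run_import(package):
        # Placeholder for the real import of one package.
        print("importing %s" % package)

    def run_queue(packages):
        # Never more than MAX_PARALLEL_IMPORTS jobs in flight at once,
        # so machine load tracks this constant, not the queue depth.
        with ThreadPoolExecutor(max_workers=MAX_PARALLEL_IMPORTS) as pool:
            list(pool.map(run_import, packages))

    if __name__ == "__main__":
        run_queue(["pkg-a", "pkg-b", "pkg-c"])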