MemoryError on update_db script caused by a 196M oops report
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| python-oops-tools | Triaged | Low | Unassigned | |
Bug Description
The update_db.py script is failing with the following traceback:
```
Traceback (most recent call last):
  File "bin/update_db", line 41, in <module>
    oopstools.
  File "/srv/lp-
    for oops in oops_store.
  File "/srv/lp-
    oops = self._load_
  File "/srv/lp-
    os.path.
  File "/srv/lp-
    data, reqvars, statements, traceback = _parse_msg(msg)
  File "/srv/lp-
    exception_type, msg.getheader(
  File "/srv/lp-
    evalue = replace_
  File "/srv/lp-
    s = re.sub(
  File "/usr/lib/
    return _compile(pattern, 0).sub(repl, string, count)
MemoryError
```
This is caused by a 196M OOPS report which contains a huge SQL statement. (The oops file can be found at: devpad:
One workaround is to move that oops out of the way so that update_db can continue to do its job.
description: updated

Changed in oops-tools:
status: New → Triaged
importance: Undecided → Critical
affects: oops-tools → python-oops-tools
What does that regex expect to replace? If it's expecting e.g. per-line input, then running it on each line of the SQL might avoid whatever pathological behaviour is occurring.
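The per-line idea could be sketched roughly as below. The pattern here is purely illustrative; the real regex lives in the truncated `replace_` helper and isn't visible in the traceback:

```python
import re

# Hypothetical stand-in for the pattern used by the truncated replace_
# helper; the real pattern is not shown in the traceback.
PATTERN = re.compile(r"'[^']*'")


def sub_per_line(text, repl="'...'"):
    """Apply the substitution line by line rather than over the whole
    multi-megabyte statement at once, bounding the size of any single
    re.sub() call and its intermediate allocations."""
    return "\n".join(PATTERN.sub(repl, line) for line in text.split("\n"))
```

This only helps if the pattern never needs to match across line boundaries; if the pathological behaviour is backtracking rather than allocation size, the fix would be in the pattern itself.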
Another possibility is that the oops updater isn't very efficient with memory, and this OOPS report simply shows that up. How much memory do we typically use per OOPS, and do we free it all after each OOPS?
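One way to start answering that question is `tracemalloc` from the standard library. A minimal sketch, where `parse` is a hypothetical callable standing in for the per-OOPS parsing step:

```python
import tracemalloc


def measure_oops_memory(parse, raw_report):
    """Rough per-OOPS memory check: peak allocation while parsing one
    report, plus what is still held afterwards (i.e. not freed)."""
    tracemalloc.start()
    parse(raw_report)
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current, peak
```

If `peak` is a large multiple of the report size, the parser is making many intermediate copies; if `current` stays high after each report, memory isn't being released between OOPSes.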