record import matching can fail with "shared memory" PostgreSQL errors
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Evergreen | Triaged | Undecided | Unassigned |
Bug Description
Attempting to load a large number of bib records into a Vandelay import queue can stall and fail, with no records getting staged into the import queue. The following PostgreSQL errors are associated with this failure:
[4272-1] WARNING: out of shared memory
[4272-2] CONTEXT: SQL statement "DROP TABLE _vandelay_
[4272-3]     PL/pgSQL function vandelay.
[4272-4]     PL/pgSQL function vandelay.
[4273-1] ERROR: out of shared memory
[4273-2] HINT: You might need to increase max_locks_
[4273-3] CONTEXT: SQL statement "DROP TABLE _vandelay_
[4273-4]     PL/pgSQL function vandelay.
[4273-5]     PL/pgSQL function vandelay.
[4273-6] STATEMENT: INSERT INTO vandelay.
My diagnosis is that one or more locks are consumed each time the _vandelay_tmp_jrows and _vandelay_tmp_qrows temporary tables are created or dropped in the course of record matching by the vandelay matching functions.
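The lock consumption is easy to reproduce outside of Vandelay. Here is a minimal sketch (plain psql, with an assumed table name _demo_tmp; this is not Evergreen code) showing how repeatedly creating and dropping a temporary table inside one transaction accumulates entries in the shared lock table until it overflows:

```sql
-- Locks taken by CREATE/DROP are held until COMMIT, so each iteration adds
-- new entries to the shared lock table (every CREATE makes a new relation).
BEGIN;

DO $$
BEGIN
    FOR i IN 1..10000 LOOP
        EXECUTE 'CREATE TEMP TABLE _demo_tmp (id INT)';
        EXECUTE 'DROP TABLE _demo_tmp';
    END LOOP;
END
$$;

-- With stock defaults (max_locks_per_transaction = 64, max_connections = 100)
-- the loop eventually fails with "ERROR: out of shared memory".
COMMIT;
```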
Here is the part of the PostgreSQL documentation that describes the settings that determine how many locks can be tracked:
"The shared lock table tracks locks on max_locks_
Possible solutions:
- use a mechanism other than temporary tables to pass the jrow and qrow data around (e.g., rewrite the affected functions in PL/Perl and use the %_SHARED hash; see the sketch after this list)
- decrease the number of tables touched in each transaction by committing after processing each record in the spool, and add application-side logic to handle failures of individual records
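For the first option, here is a minimal PL/Perl sketch of stashing and later fetching row data through the per-session %_SHARED hash instead of a temporary table. The function names are hypothetical, not the actual Vandelay code; a real rewrite would fold this into the existing matching functions:

```sql
CREATE OR REPLACE FUNCTION demo_stash_jrow(grp TEXT, val TEXT) RETURNS VOID AS $$
    my ($grp, $val) = @_;
    push @{ $_SHARED{jrows}{$grp} }, $val;   # accumulate rows in session memory
    return;
$$ LANGUAGE plperl;

CREATE OR REPLACE FUNCTION demo_fetch_jrows(grp TEXT) RETURNS SETOF TEXT AS $$
    my ($grp) = @_;
    return $_SHARED{jrows}{$grp} || [];      # hand the rows back later in the session
$$ LANGUAGE plperl;

CREATE OR REPLACE FUNCTION demo_clear_jrows(grp TEXT) RETURNS VOID AS $$
    delete $_SHARED{jrows}{$_[0]};           # no DROP TABLE, so no extra locks
    return;
$$ LANGUAGE plperl;
```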
Evergreen: 2.5.1
PostgreSQL: 9.3
tags: added: vandelay
tags: added: cataloging
description: updated
description: updated
Changed in evergreen: status: New → Triaged
tags: added: cat-importexport; removed: cataloging, vandelay
Another option would be using permanent unlogged tables rather than temporary tables.
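A minimal sketch of that idea, assuming an illustrative table name and columns rather than the real Evergreen schema: create the staging table once and TRUNCATE it between uses, so matching reuses a single relation instead of taking locks on a freshly created table every time.

```sql
-- Illustrative only: table name and columns are assumptions, not the
-- actual Evergreen schema.
CREATE UNLOGGED TABLE IF NOT EXISTS vandelay_jrows_stage (
    q   BIGINT,   -- import queue id
    rec BIGINT    -- candidate matching record id
);

-- Reuse the same relation on every pass instead of DROP/CREATE, so the
-- transaction holds locks on one table rather than on thousands of
-- short-lived ones.
TRUNCATE vandelay_jrows_stage;
```

Since a permanent table is visible to all sessions (unlike a temp table), a real implementation would also need a per-session or per-queue discriminator column, or some other way to keep concurrent imports from stepping on each other.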