huge memory usage on big files

Bug #1005478 reported by nocturo
This bug affects 1 person
Affects: Duplicity
Status: New
Importance: Medium
Assigned to: Unassigned
Milestone: none

Bug Description

Hello,

I'm running the following:
duplicity 0.6.19
Python 2.4.3
CentOS 5.8 x64

this is all on local backend.

Synopsis:
When backing up LVM space containing a huge 140 GB file (an image file), duplicity uses a huge amount of memory.

root 2781 0.4 80.7 2500696 1269952 ? D May26 13:29 /usr/bin/python /usr/bin/duplicity --archive-dir /backup/node/ --name vm2842 --no-encryption --verbosity 4 --full-if-older-than 4W --volsize 100 --allow-source-mismatch --exclude-globbing-filelist /etc/duply/vm2842/exclude /mnt/lvm/vm2842 file:///backup/node/vm2842

free -m output:
                    total   used   free  shared  buffers  cached
Mem:                 1536   1526      9       0        0      14
-/+ buffers/cache:           1511     24
Swap:                6143   2462   3680

I've attached pmap reference as well. It's stuck at this file for a while:
-rw------- 1 x x 138G May 23 08:45 /mnt/lvm/vm2842/home/x/imgs/msm-org-production-sparse-small.img

Revision history for this message
papukaija (papukaija) wrote :

Thank you for taking the time to report this bug and helping to make Ubuntu better. Could you tell me whether you see this issue with smaller files as well? I assume your system becomes unresponsive too. Can you also tell me whether the memory usage suddenly starts to grow very quickly? Thanks in advance.

Revision history for this message
nocturo (nocturo) wrote :

This issue only happens with big files: usually after about 50 GB it has used all of the 1.5 GB of RAM I have available. It doesn't happen with smaller files, only when duplicity is processing a single huge file.

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote : Re: [Bug 1005478] Re: huge memory usage on big files

Would it be possible to mount the LVM volume and back up the individual files? In the long run, that would be more efficient, since the deltas would be easier to spot and manage.
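If the huge file is itself a filesystem image, a sketch of that approach (assuming the image holds a single mountable filesystem rather than a partitioned disk; /mnt/img and the vm2842-img target are hypothetical, not taken from the report):

# Loop-mount the image read-only so its contents appear as ordinary files
mkdir -p /mnt/img
mount -o loop,ro /mnt/lvm/vm2842/home/x/imgs/msm-org-production-sparse-small.img /mnt/img

# Back up the individual files inside the image instead of one 138 GB blob
duplicity --no-encryption --volsize 100 /mnt/img file:///backup/node/vm2842-img

umount /mnt/img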

Revision history for this message
nocturo (nocturo) wrote :

That's exactly how it works: I'm using lvm_snapshot.sh with duply as a pre/post script and back up from the snapshot. The problem is when the LVM filesystem contains a huge file; duplicity consumes all of the RAM, goes into swap, and still doesn't complete the backup. I exclude those files as a workaround, and I opened this bug report to see whether the memory usage can somehow be improved.
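For reference, that workaround amounts to an entry of this shape in the exclude filelist (/etc/duply/vm2842/exclude from the command above; the glob pattern itself is illustrative):

# Skip the huge raw images so duplicity never builds signatures for them
- /mnt/lvm/vm2842/home/x/imgs/*.img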

Revision history for this message
Fjodor (sune-molgaard) wrote :

This still seems to be an issue on 0.7.06.

A 64 GB file in particular: read speeds between 200 KB/s and a few MB/s, and RAM usage after 30 minutes (with no backup files created yet), as reported by htop: VIRT 1.7 GB, RES 1.5 GB, both steadily rising.

Surely, something must be possible to do...

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

On huge files a great number of sigs are generated and held in memory until the file is complete. To mitigate this, use "--max-blocksize=20480" or higher.
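For a sense of scale (the 2048-byte figure below is duplicity's documented default block-size cap, so treat this as a back-of-envelope estimate): a 140 GB file at 2 KB per block means roughly 70 million signature blocks held in memory, while a 20 KB block size cuts that to about 7 million. Applied to the command from the report:

/usr/bin/duplicity --archive-dir /backup/node/ --name vm2842 \
  --no-encryption --verbosity 4 --full-if-older-than 4W --volsize 100 \
  --allow-source-mismatch --max-blocksize=20480 \
  --exclude-globbing-filelist /etc/duply/vm2842/exclude \
  /mnt/lvm/vm2842 file:///backup/node/vm2842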

Changed in duplicity:
status: New → In Progress
importance: Undecided → Medium
assignee: nobody → Kenneth Loafman (kenneth-loafman)
milestone: none → 0.8.00
Revision history for this message
Fjodor (sune-molgaard) wrote :

The last few entries at https://bugs.launchpad.net/duplicity/+bug/582962 might be of relevance here...

Changed in duplicity:
milestone: 0.8.00 → 0.8.01
Changed in duplicity:
milestone: 0.8.01 → none
Changed in duplicity:
status: In Progress → New
assignee: Kenneth Loafman (kenneth-loafman) → nobody