Comment 6 for bug 2012678

Sébastien Lamy (lamyseba-b) wrote :

Thank you for your answer.

I insisted on the difference from xpdf-reader because it seemed to me that the same library could not produce results so different in terms of performance when dealing with "big" files.

Qpdfview is a great tool with a nice UI, and it fits my everyday use perfectly on "normal" documents (mainly text, or images with not-so-high resolution that I do not need to zoom). On this kind of everyday document it is indeed a fast and lightweight reader, convenient for low-resource computers like mine.

Xpdfreader's UI is much less friendly and lacks features (like page thumbnails). Scrolling with the mouse wheel is a pain (very small steps, so it is very slow to move down even one page), and you cannot grab the document to scroll.

It seemed to me that being lightweight was indeed a goal for qpdfview: it is often reported as a lightweight reader in external reviews. But on the official page and the GitHub page this is not written as an explicit goal, so maybe I was mistaken. Anyway, this is why I thought the problem was interesting to report here. And I mentioned Xpdfreader to say "others already do it, they are open source, so maybe you can take advantage of their codebase and libraries". I understand this is more difficult than it may seem at first look from the outside.

As for Xpdfreader, I know I stand no chance reporting "your UI is just a disaster, could you change it to be like qpdfview, please?". But true, I can switch to xpdf when dealing with big documents, even if browsing them will be quite a pain with that "no grab" interface.

I took the time to launch qpdfview alone on my desktop, disabled the "prefetch" option (I did not find the "keep obsolete pixmaps" option), increased the cache size to 256 MB, opened sample_map2 and waited until rendering was done at 500% in non-tiling mode. It took 15 minutes (!) to finally display the document, with a peak RAM usage of 1.1 GB and 750 MB still used when rendering was done. After the 15-minute wait, the full document page seemed to be available for scrolling; there were no blank parts left to be recomputed. During loading, CPU usage was not at its peak, so it is clearly a memory usage/management problem.

The problem of redrawing on focus gain/loss was indeed directly linked to cache size: 256 MB was enough to get rid of the redraw problem, while 64 MB was too small.

To my surprise, tiling mode at 500% was more efficient (at 100% scale or less it was a lot slower than non-tiling). More efficient means it took about 5-6 seconds to display the part of the document inside the window. Scrolling to a part that is still blank takes 5 to 15 seconds to show, which is already a problematic delay.
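A back-of-the-envelope calculation suggests why the 64 MB cache was too small. Assuming qpdfview keeps uncompressed 32-bit RGBA pixmaps (an assumption on my part; the page size below is A4 at a 72 dpi base, sample_map2's real pages may well be larger):

```python
def pixmap_bytes(width_pt, height_pt, zoom):
    """Uncompressed RGBA pixmap size for a page rendered at `zoom`."""
    px_w = int(width_pt * zoom)
    px_h = int(height_pt * zoom)
    return px_w * px_h * 4  # 4 bytes per RGBA pixel

a4_at_500 = pixmap_bytes(595, 842, 5.0)  # A4 page at 500 %
print(a4_at_500 / 2**20)  # ≈ 47.8 MiB for a single page
```

So even a plain A4 page at 500% nearly fills a 64 MB cache by itself, which would explain the constant re-rendering on focus changes, and a large map page would overflow it many times over.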

So here are my suggestions. I'm not sure I'm writing anything relevant, and I understand that optimizing resource usage may be a lot of work that nobody may want to do if usability on small configurations is not a goal.

* Why does the cache need to hold the full document page (or even more pages if prefetch is enabled)? At least for redrawing on focus gain/loss, it would be enough to keep just the part of the document that is displayed on screen; the full page is not necessary for this kind of redraw.
* You say xpdf draws directly. Is it possible to do the same thing, putting the image in the cache after displaying rather than before?
* Is there no way to have constant RAM usage, by making a kind of "hybrid" tiling mode? I mean:
  - when the part of the document shown on screen is split into many tiles (this happens at small zoom scales), do not use tiling mode, because it is a lot slower. Moreover, in this configuration tiling mode seems to eat more RAM than non-tiling.
  - when one tile is about the same size as the part shown on screen (this happens at big zoom scales), enter tiling mode. In this configuration tiling mode seems to eat a lot less RAM than non-tiling.
* Once a document page is fully or partially rendered at a big scale, could this "picture" of the page be used as a base when zooming out to a smaller scale or re-zooming at the same scale? If the page was only partially rendered, it could then be completed if necessary by computing the tiles that were not drawn. That way, zooming in and out at a scale that stays smaller than or equal to the rendered one would be as fluid as zooming a simple picture in an image viewer. It seems a lot lighter and faster to downscale an existing picture than to recompute the PDF rendering at the wanted zoom level.
* The zoom level in the drop-down menu has a limited set of choices, but zooming with the mouse wheel or the UI buttons has no limits and may take any value (each step multiplies by a factor set in the preferences). Maybe we could have a checkbox in the preferences so that wheel/button zoom is linked to the same values as the drop-down choices, with special values like "fit page" placed in between at the scale they correspond to. There would then be zoom limits and a limited number of possible zoom values. With a limited set of zoom values, caching zoomed pictures would make more sense and ease browsing.
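To make the "hybrid" tiling suggestion more concrete, here is a minimal sketch of the decision it describes. Everything here is my own illustration (the tile size, the threshold of 4 visible tiles, and the function name are all assumptions, not qpdfview's actual code or API):

```python
# Hypothetical sketch of the "hybrid" tiling heuristic suggested above:
# tile only when the rendered page dwarfs the viewport (big zoom) and the
# visible area spans few tiles; otherwise render the whole page at once.
TILE_SIZE = 1024  # px, assumed tile edge length

def use_tiling(page_w_px, page_h_px, view_w_px, view_h_px):
    """Decide between tiled and whole-page rendering for the current view."""
    tiles_across = -(-view_w_px // TILE_SIZE)  # ceiling division
    tiles_down = -(-view_h_px // TILE_SIZE)
    page_overflows = page_w_px > view_w_px and page_h_px > view_h_px
    # few visible tiles AND page larger than the window -> tiling pays off
    return page_overflows and tiles_across * tiles_down <= 4

# At 100 % a map page may fit the window: render the whole page.
print(use_tiling(1190, 1684, 1920, 1080))   # False
# At 500 % the page dwarfs the window: switch to tiling.
print(use_tiling(5950, 8420, 1920, 1080))   # True
```

The point is only that the switch can be driven by the ratio between the rendered page size and the viewport, so RAM usage stays roughly bounded by the viewport in both regimes.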
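The last suggestion, snapping wheel/button zoom to the drop-down's discrete steps, could look like this sketch (the step values and function name are mine, not qpdfview's):

```python
# Snap continuous wheel/button zoom to a fixed list of steps, as suggested
# above. With a finite set of zoom values, a rendered pixmap can be cached
# per step and reused instead of being recomputed for arbitrary factors.
ZOOM_STEPS = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0, 3.0, 4.0, 5.0]

def snap_zoom(current, direction):
    """Return the next discrete zoom step above (+1) or below (-1) `current`."""
    if direction > 0:  # zoom in
        return min((z for z in ZOOM_STEPS if z > current), default=ZOOM_STEPS[-1])
    return max((z for z in ZOOM_STEPS if z < current), default=ZOOM_STEPS[0])

print(snap_zoom(1.0, +1))  # 1.25
print(snap_zoom(1.0, -1))  # 0.75
print(snap_zoom(5.0, +1))  # 5.0 (upper limit reached)
```

Special values like "fit page" would need to be computed from the window size and merged into the step list at their corresponding scale, but the snapping logic stays the same.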

Regards