Cockpit Memory Usage display

Per the caption and the top output below, Cockpit seems to be displaying virtual memory usage rather than resident memory for the backuppc user. What utility does this choice have in evaluating actual memory usage?

top - 10:33:35 up 3 days, 34 min,  1 user,  load average: 0.09, 0.03, 0.00
Tasks: 252 total,   1 running, 251 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   5864.0 total,    953.4 free,    905.3 used,   4311.9 buff/cache
MiB Swap:  12288.0 total,  12287.2 free,      0.8 used.   4958.6 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                
    838 backuppc  20   0   23644  13824   5120 S   0.0   0.2   0:01.21 BackupPC               
   1091 backuppc  20   0   28292   6956   2560 S   0.0   0.1   0:00.02 /usr/sbin/httpd        
   1092 backuppc  20   0 2429628  21192   5760 S   0.0   0.4   2:40.45 /usr/sbin/httpd        
   1093 backuppc  20   0 2232956  20240   6528 S   0.0   0.3   2:18.96 /usr/sbin/httpd        
   1136 backuppc  20   0 2232956  20228   6400 S   0.0   0.3   2:18.81 /usr/sbin/httpd        
  64804 backuppc  20   0 2232956  19132   5760 S   0.0   0.3   0:54.02 /usr/sbin/httpd    

As you can see in the resident memory column, backuppc usage is less than 100 KiB.

Since nobody else responded, and I was curious about the large difference, I asked Google Gemini to explain, and it told me the following (copy-and-paste, quoting):

It’s common for the virtual memory (VIRT) reported for BackupPC processes to be in the gigabytes, but this high number is often misleading regarding the actual physical memory (RAM) being used.

The high Virtual Memory Size (VIRT) is typically due to how Linux reports memory usage and how BackupPC operates, especially when using the rsync transfer method.


Why Virtual Memory is High

The very large VIRT number is usually a reflection of the total address space the process has reserved, which includes:

  1. Memory-Mapped Files and Libraries: Linux often uses memory-mapping to handle files and shared libraries. The entire size of these mapped files and shared libraries (like the Perl interpreter and its modules) contributes to the VIRT size, even if only a small portion is currently loaded into physical RAM or actively being used. The size of the backup pool indexes can be a significant factor here.

  2. Kernel Over-Commitment: By default, Linux can “over-commit” memory, meaning it grants a process’s request for a large block of virtual memory without immediately allocating the corresponding physical RAM. This requested but unaccessed memory contributes to VIRT.

  3. Rsync File List: When using rsync (the default and most recommended method), the BackupPC process needs to build an in-memory table of all the filenames for comparison. If you have millions of files across many clients, this list can be very large, directly increasing the actual memory footprint (Resident Set Size, or RSS). However, even for large file counts, the RSS/RAM usage is usually much lower than the VIRT.

Using ps_mem you'll also get different results for RAM usage; its numbers don't seem to match RES either.
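For context on why ps_mem disagrees with both columns: as I understand it, ps_mem bases its figures on PSS (proportional set size), which charges each process a fair share of any shared pages, so it is neither VIRT nor RES. A rough per-user PSS total can be sketched from /proc/&lt;pid&gt;/smaps_rollup (Linux 4.14+; reading other users' entries needs root, and "backuppc" here is just the user from this thread):

```shell
# Sum the PSS (proportional set size) of every process owned by a user.
# PSS splits each shared page among the processes mapping it, so the
# total sits between "all private RAM" and the inflated RES sum you get
# when shared libraries are counted once per process.
pss_total() {
    total=0
    for pid in $(pgrep -u "$1"); do
        # smaps_rollup needs Linux 4.14+; skip PIDs we cannot read
        pss=$(awk '/^Pss:/ {print $2}' "/proc/$pid/smaps_rollup" 2>/dev/null)
        total=$((total + ${pss:-0}))
    done
    echo "${total} KiB"
}

pss_total backuppc
```

This is only an approximation of what ps_mem reports, but it shows why a third tool can legitimately print a third number.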

Obviously the Cockpit devs chose VIRT for a reason, just as the author of ps_mem chose to display things the way they did. Maybe they feel that's the best value to display; others may argue otherwise.

Note that when BackupPC is actually doing something, the RES value could also be far higher than when it's idle and not doing anything.

“man top” provided a reasonable explanation of what I was seeing, which I learned prior to posting this topic, and the responses here do not contradict that understanding. I just find the Cockpit-displayed value a useless measure for my usage.

I don’t find it useless: from a performance-analysis standpoint, if the numbers tell you that in a system with 4.92 GB of RAM one program is requesting 3.77 GB (77%), then if the system must utilize swap (now or in the future) it will run slower.

It is interesting that 77% also strikes me as an example of the “Pareto Principle” (aka the 80/20 rule).

Tony

But I haven’t seen any backuppc RES usage greater than 1 GiB during its most active data evaluation and transfer.

I went back to your original post, and there was a unit-of-measure typo in your statement: it should be MiB, not KiB.

Specifically, the RES values from the output are:

13824+6956+21192+20240+20228+19132=101,572 KiB = 99.19 MiB
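The same sum can be pulled straight from ps instead of adding by hand ("backuppc" here is the user from this thread; substitute whichever user you're checking):

```shell
# Total resident memory (rss, reported in KiB) for every process owned
# by the backuppc user, converted to MiB. With the values above this
# reproduces the ~99 MiB hand calculation.
ps -u backuppc -o rss= | awk '{sum += $1} END {printf "%.2f MiB\n", sum/1024}'
```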

Tony