/contrib/famzah

Enthusiasm never stops



Arduino ESP32 development under Linux

I am using a Windows desktop and I run Linux as a VirtualBox guest. ESP32 development under Windows is super easy to set up – you only need to install the Arduino IDE. Unfortunately, it really bugged me that compilation is so slow. I’m not experienced enough and test a lot on the real hardware, and slow compilation really slows down the entire development process.

The Arduino IDE v2 has improved and supports parallel compilation within a submodule, but it still works slower than expected on my computer and is not ideally parallelized. Additionally, I noticed that some libraries are recompiled every time, which is a huge waste of time (and resources) because the libraries don’t change – only my sketch source code does.

I decided to see how compilation performs on Linux. The Arduino project has a command-line tool to manage, compile and upload code. The tool is still in active development but works flawlessly for me.

Here is how I installed it on my Linux desktop:

wget https://downloads.arduino.cc/arduino-cli/arduino-cli_latest_Linux_64bit.tar.gz
tar -xf arduino-cli_latest_Linux_64bit.tar.gz
mv arduino-cli ~/bin/
rm arduino-cli_latest_Linux_64bit.tar.gz

arduino-cli config init
vi /home/famzah/.arduino15/arduino-cli.yaml
  # board_manager -> additional_urls -> add "https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json"
  # library -> enable_unsafe_install -> set to "true" # so that we can install .zip libraries
arduino-cli core update-index
arduino-cli core install esp32:esp32
arduino-cli board listall # you must see lots of "esp32" boards now

Here is how to compile a sketch and upload it to an ESP32 board:

cd %the_folder_where_you_saved_your_Arduino_sketch%

arduino-cli compile --warnings all -b esp32:esp32:nodemcu-32s WifiRelay.ino --output-dir /tmp/this-sketch-tmp

arduino-cli upload -v --input-dir /tmp/this-sketch-tmp -p /dev/ttyUSB0 -b esp32:esp32:nodemcu-32s

arduino-cli monitor -p /dev/ttyUSB0 -c baudrate=115200

I have created a small script to automate this task; it also uses a dynamically created temporary folder, in order to avoid race conditions and stale artefacts. The sketch folder on my Windows host is shared (read-only) with my Linux guest. I still write the sketch source code on my Windows machine using the Arduino IDE.
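For reference, here is a minimal sketch of such a wrapper script. The sketch folder path is an assumption (a VirtualBox shared folder), and the board and serial port are the ones from the examples above – adjust everything to your setup:

#!/bin/bash
set -euo pipefail

SKETCH_DIR="/media/sf_arduino/WifiRelay"   # hypothetical path to the sketch folder shared from the Windows host
FQBN="esp32:esp32:nodemcu-32s"
PORT="/dev/ttyUSB0"

# dedicated temporary build folder to avoid race conditions and stale artefacts
BUILD_DIR="$(mktemp -d /tmp/arduino-build.XXXXXX)"
trap 'rm -rf "$BUILD_DIR"' EXIT

arduino-cli compile --warnings all -b "$FQBN" --output-dir "$BUILD_DIR" "$SKETCH_DIR"
arduino-cli upload -v --input-dir "$BUILD_DIR" -p "$PORT" -b "$FQBN"
arduino-cli monitor -p "$PORT" -c baudrate=115200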

The ESP32 board is connected to my Windows host but I need to communicate with it from my Linux guest. This is done easily in VirtualBox using USB filters. A USB filter allows you to do a direct pass-through of the ESP32 serial port. It also works if you plug in the ESP32 board dynamically while the Linux machine is already running.
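If you prefer the command line over the VirtualBox GUI, a USB filter can be added roughly like this. The VM name and the vendor/product IDs are assumptions – “VBoxManage list usbhost” shows the real values for your USB-to-serial chip:

# example for a CP210x USB-to-serial adapter and a VM named "linux-dev"
VBoxManage usbfilter add 0 --target "linux-dev" --name "ESP32 serial" --vendorid 10c4 --productid ea60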



Custom key fetch script for cryptsetup @ systemd

Disk encryption in a Linux environment is a common setup and is easily done, as demonstrated by this tutorial, for example. However, the easy part vanishes quickly if you don’t want to keep the encryption key on the same machine and you don’t want to enter it manually on every server reboot.

How do we auto-mount (unattended boot) an encrypted disk but protect the data if the server gets stolen physically? One possible solution is to store the encryption key in a remote network location and fetch it ephemerally (if you are still allowed to) when the server starts. Cloudflare did this for their servers in Russia due to the military conflict.

A custom script to fetch the encryption key, use it and then discard it sounds like a good approach. Such a script is broadly called a “keyscript”. But this is not supported by systemd in the standard “/etc/crypttab” file which describes the encrypted block devices that are set up during system boot. Luckily, there is another way to get this done by using the following feature of “/etc/crypttab”:

If the specified key file path refers to an AF_UNIX stream socket in the file system, the key is acquired by connecting to the socket and reading it from the connection. This allows the implementation of a service to provide key information dynamically, at the moment when it is needed.

With “systemd” you can easily build a service which responds to requests on a Unix socket (or any other socket type, as described in the man page). The socket is controlled and supervised by “systemd” and the mechanism is called “socket-based activation”. You can either execute a new process for each socket request, or have a single program process all requests. In this case I’m using the first approach because it’s very simple to implement and because the load on this service is negligible.

Here is what the socket service definition looks like. It’s stored in a file named “/etc/systemd/system/fetch-luks-key-volume1.socket”:

[Unit]

Description=Socket activator for service "fetch-luks-key-volume1"
After=local-fs.target

# recommended by "man systemd.socket"
CollectMode=inactive-or-failed

[Socket]

ListenStream=/run/luks-key-volume1.sock
SocketUser=root
SocketGroup=root
SocketMode=0600
RemoveOnStop=true

# execute a new Service process for each request
Accept=yes

[Install]

WantedBy=sockets.target

A companion “systemd” service unit needs to be configured with the same name as the socket unit. This is where the custom logic to fetch the key is executed. Because “Accept=yes” makes “systemd” spawn a separate service instance for each connection (encoding the connection details in the instance name), the unit must be a template and its name must be suffixed with “@”. The whole file name is “/etc/systemd/system/fetch-luks-key-volume1@.service” and it contains the following:

[Unit]

Description=Remotely fetch LUKS key for "volume1"

After=network-online.target
Wants=network-online.target

[Service]

Type=simple
RuntimeMaxSec=10
ExecStart=curl --max-time 5 -sS https://my-restricted.secure/key.location
StandardOutput=socket
StandardError=journal
# ignore the LUKS request packet which specifies the volume (man crypttab)
StandardInput=null

The new files are activated in “systemd” in the following way:

systemctl daemon-reload
systemctl enable fetch-luks-key-volume1.socket
systemctl start fetch-luks-key-volume1.socket

There is no need to enable the “service” unit because it’s activated by the socket when needed and is then immediately terminated upon completion.

Here is a command-line test of the new system:

# ls -la /run/luks-key-volume1.sock
srw------- 1 root root 0 Mar  4 18:09 /run/luks-key-volume1.sock

# nc -U /run/luks-key-volume1.sock|md5sum
4f7bac5cf51037495e323e338100ad46  -

# journalctl -n 100
Mar 04 18:09:38 home-server systemd[1]: Reloading.
Mar 04 18:09:45 home-server systemd[1]: Starting Socket activator for service "fetch-luks-key-volume1"...
Mar 04 18:09:45 home-server systemd[1]: Listening on Socket activator for service "fetch-luks-key-volume1".
Mar 04 18:10:05 home-server systemd[1]: Started Remotely fetch LUKS key for "volume1" (PID 2371/UID 0).
Mar 04 18:10:05 home-server systemd[1]: fetch-luks-key-volume1@0-2371-0.service: Deactivated successfully.

You can use the newly created Unix socket in “/etc/crypttab” like this:

# <target name>  <source device>           <key file>                 <options>
backup-decrypted /dev/vg0/backup-encrypted /run/luks-key-volume1.sock luks,headless=true,nofail,keyfile-timeout=10,_netdev

Disclaimer: This “always on” remote key protection works only if you can disable the remote key quickly enough. If someone breaks into your home and steals your NAS server, you probably have more than enough time to disable the remote key, which is accessible only from the IP address of your home network. But if you are targeted by highly skilled attackers who can physically breach your server, then they could boot your Linux server into rescue mode (or read the hard disk directly) while they are still on your premises, find the URL where you keep your remote key and then fetch the key to use it later to decrypt what they managed to steal. The Mandos system tries to narrow the attack surface by facilitating keep-alive checks and auto-locking of the key server.

If your hardware supports UEFI Secure Boot and TPM 2.0, you can greatly improve the security of your encryption keys and encrypted data. Generally speaking, UEFI Secure Boot ensures a fully verified boot chain (boot loader, initrd, running kernel), and only a verified system boot state can request the encryption keys from the TPM hardware device. This verified boot state is something you control, and you can disable the Linux “rescue mode” or other ways of getting to the root file system without supplying a password. Here are two articles (1, 2) where this is further discussed.
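As an illustration only (this is not part of my setup), on a machine with a TPM 2.0 chip you could bind an additional LUKS key slot to the measured boot state using “systemd-cryptenroll”. The PCR selection below is an assumption:

# enroll a TPM2-bound key slot, sealed against the Secure Boot state (PCR 7)
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/vg0/backup-encrypted
# then reference it in "/etc/crypttab" with the option "tpm2-device=auto" instead of a key file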

Last but not least, if a highly-skilled attacker has enough time and physical access to your hardware, they can perform many different Evil maid attacks, install hardware backdoors on your keyboard, for example, or even read the encryption keys directly from your running RAM. Additionally, a system could also be compromised via the network, by various social engineering attacks, etc. You need to assess the likelihood of each attack against your data and decide which defense strategy is practical.


Update: This setup failed to boot after a regular OS upgrade, probably due to incorrect ordering of the services. I didn’t have enough time to debug it, so I created the file “/root/mount-home-backup” which does the mount manually:

#!/bin/bash
set -u

echo "Executing mount-home-backup"

set -x

# "backup\x2ddecrypted" is the systemd-escaped name of the "backup-decrypted" crypttab target
systemctl start systemd-cryptsetup@backup\\x2ddecrypted.service
mount /home/backup

Then I marked all definitions in “/etc/crypttab” and “/etc/fstab” with the option “noauto”, which tells the system scripts not to try to mount these file systems at boot.
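The entries look roughly like this; the “/etc/fstab” line is an assumption about my file system type and mount options:

# /etc/crypttab
backup-decrypted /dev/vg0/backup-encrypted /run/luks-key-volume1.sock luks,headless=true,noauto,keyfile-timeout=10,_netdev

# /etc/fstab
/dev/mapper/backup-decrypted /home/backup ext4 defaults,noauto 0 0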

Finally I created the following service in “/etc/systemd/system/mount-home-backup.service”:

[Unit]

Description=Late mount for /home/backup
After=network-online.target fetch-luks-key-volume1.socket

[Service]

Type=oneshot
RemainAfterExit=yes

StandardOutput=journal
StandardError=journal

ExecStart=/root/mount-home-backup

[Install]

WantedBy=multi-user.target

This new service needs to be activated, too:

systemctl daemon-reload
systemctl enable mount-home-backup.service



Proxy SSH traffic using Cloudflare Tunnels

Long story short, Cloudflare Tunnel started as a network service which lets you expose a web server with a private IP address to the public Internet. This way you don’t have to punch a hole in your firewall infrastructure in order to have inbound access to the server. There are additional benefits, like the fact that nobody knows the real IP address of your server, nobody can DDoS it directly with malicious traffic, etc.

Today I was pleasantly surprised to discover that Cloudflare Tunnels can be used for SSH traffic as well. It’s true that most machines with an SSH server have public IP addresses. But think about the time when you want to access the Linux laptop or workstation of a relative, so that you can remotely control their desktop in order to help them out. Modern Linux distros all provide remote desktop functionality, but the question is how to get direct network access to the Linux workstation.

If you can connect via SSH to a remote machine without a public IP address, then you can also set up SSH port forwarding in order to reach their local remote desktop service (see the example further below).

Here is what you have to execute at the remote machine to which you want to connect:

$ wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64
$ chmod +x cloudflared-linux-amd64 
$ ./cloudflared-linux-amd64 tunnel --url ssh://localhost:22

2023-03-04T20:51:16Z INF Thank you for trying Cloudflare Tunnel. Doing so, without a Cloudflare account, is a quick way to experiment and try it out. However, be aware that these account-less Tunnels have no uptime guarantee. If you intend to use Tunnels in production you should use a pre-created named tunnel by following: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps
2023-03-04T20:51:16Z INF Requesting new quick Tunnel on trycloudflare.com...
2023-03-04T20:51:20Z INF +--------------------------------------------------------------------------------------------+
2023-03-04T20:51:20Z INF |  Your quick Tunnel has been created! Visit it at (it may take some time to be reachable):  |
2023-03-04T20:51:20Z INF |  https://statistics-feel-icon-applies.trycloudflare.com                                    |
2023-03-04T20:51:20Z INF +--------------------------------------------------------------------------------------------+

When you have the URL “statistics-feel-icon-applies.trycloudflare.com” (which changes with every quick Cloudflare tunnel execution), you have to do the following on your machine (documentation is here):

$ wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64
$ chmod +x cloudflared-linux-amd64 

$ vi ~/.ssh/config # and then add the following
Host home.server
	ProxyCommand /home/famzah/cloudflared-linux-amd64 access ssh --hostname statistics-feel-icon-applies.trycloudflare.com

$ ssh root@home.server 'id ; hostname' # try the connection

uid=0(root) gid=0(root) groups=0(root)
home-server
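
With the SSH connection working, remote desktop access is just a port forward away. Here is a minimal sketch, assuming the relative’s desktop is shared via VNC on its default port 5900:

$ ssh -L 5900:localhost:5900 root@home.server # forward local port 5900 to the remote VNC server

You can then point a VNC client at localhost:5900 on your own machine.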

The quick Cloudflare Tunnels are free and don’t require that you have an account with Cloudflare. What a great ad-hoc replacement for a VPN! On Linux, this network infrastructure lets you replace TeamViewer, AnyDesk, etc. with a free, secure remote desktop solution.



How to reliably get the system time zone on Linux?

If you want to get the time zone abbreviation, use the following command:

date +%Z

If you want the full time zone name, use the following command instead:

timedatectl show | grep ^Timezone= | perl -pe 's/^Timezone=(\S+)$/$1/'
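
On newer systemd versions the same result can be obtained without the Perl filter; this assumes that your “timedatectl” supports the “show” verb with “--value”:

timedatectl show -p Timezone --value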

There are other methods for getting the time zone. But they depend either on the environment variable $TZ (which may not be set), or on the statically configured “/etc/timezone” file which might be out of sync with the system time zone file “/etc/localtime”.
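On most modern distros “/etc/localtime” is a symlink into “/usr/share/zoneinfo”, so the full name can also be read straight from the source of truth (assuming it really is a symlink and not a plain copy of the zone file):

readlink /etc/localtime | sed 's|.*/zoneinfo/||' # prints e.g. "Europe/Sofia"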

It’s important to note that the Linux system utilities reference the “/etc/localtime” file (not “/etc/timezone”) when working with the system time. Here is proof:

$ strace -e trace=file date
execve("/bin/date", ["date"], 0x7ffda462e360 /* 67 vars */) = 0
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/etc/localtime", O_RDONLY|O_CLOEXEC) = 3
Sat 04 Feb 2023 10:33:35 AM EET
+++ exited with 0 +++

The “/etc/timezone” file is a static helper that is set up when “/etc/localtime” is configured. Typically, “/etc/localtime” and “/etc/timezone” are in sync so it shouldn’t matter which one you query. However, I prefer to use the source of truth.




Can your local NFS connections get broken by your external Internet connection?

Long story short: Yes! A flaky Internet connection to the outside world can make your local NFS client-server connections unusable. Even when they run on a dedicated storage network using dedicated switches and cables. This is a tale of dependencies, wrong assumptions, desperate restart of services, the butterfly effect and learning something new.

The company I work for operates 1000+ production servers in three data centers around the globe. This all started after a planned, trivial mass-restart of some internal services which are used by the online Control Panel. A couple of minutes after the restarts, the support team alerted me that the Backup section of the Control Panel was not working. I acted as a typical System Administrator and just checked whether the NFS backups were accessible from an SSH console. They were. So I concluded that it most probably wasn’t anything with the NFS service but a problem caused by the restart of the internal services which keep the Control Panel running.

So we called a system developer to look into this. In the meantime, my tests showed that the issue was limited to only one of our data centers. This raised an eyebrow, but with no further debug info and with everything working from the SSH console, I had to wait for input from the system development team. About an hour later they came up with a super simple reproducer:

perl -e 'use Path::Tiny; print path("/nfs/backup/somefile")->slurp;'

strace showed that this hung on “flock(LOCK_SH)”. OMG! So it was a problem with the System Administrators’ systems after all. My previous test was to simply browse and read the files, and it didn’t occur to me to try file locking. I didn’t even know that the Control Panel used it. It turned out to be a (weird) default of Path::Tiny. A couple of minutes later I simplified the reproducer even more, to just the following:

flock --shared /nfs/backup/somefile true

This also hung on “flock(LOCK_SH)”. Only in the USA data center. The backup servers were complaining about the following:

statd: server rpc.statd not responding, timed out
lockd: cannot monitor %server-XXX-of-ours%

The NFS clients were reporting:

lockd: server %backup-IP% not responding, still trying
xs_tcp_setup_socket: connect returned unhandled error -107

Right! So it’s “rpc.statd” which just died! On both of our backup servers, simultaneously? Hmm… I raised the eyebrow even more. All servers had weeks of uptime and there were no changes at the time when the incident started. Nothing suspicious caused by activity from any of our teams. Nevertheless, it doesn’t hurt to restart the NFS services. So I did it — restarted the backup NFS services (twice), restarted the client NFS services on one of the production servers, unmounted and mounted the NFS directories. Nothing. Finally, I rebooted the backup servers because there was a “[lockd]” kernel process hung in “D” state. After all, it is possible that two backup servers with the same uptime hit the same kernel bug at the same time…

The restart of the server machines fixed it! Phew! Yet another unresolved mystery fixed by a restart. Wait! Three minutes later the joy was gone because the Control Panel Backup section became sluggish again. The production machine where I was testing was only intermittently able to use NFS locking.

2h30m had elapsed already. Now it finally occurred to me that I needed to pay closer attention to what the “rpc.statd” process was doing. To my surprise, strace showed that the process was waiting for 5+ seconds for some… DNS queries! It was trying to resolve “a.b.c.x.in-addr.arpa” and was timing out. The requests were going to the local DNS cache server. The global DNS resolvers 8.8.8.8 and 1.1.1.1 were working properly and immediately returned “NXDOMAIN” for this DNS query. So I configured them on the backup servers and the NFS connections became much more stable. Still not perfect, though.

The situation started to clear up. The NFS client was connecting to the NFS server. The server then tried to resolve the client’s private IP address to a hostname but was failing and this DNS failure was taking too many seconds. The reverse DNS zone for this private IPv4 network is served by the DNS servers “blackhole-1.iana.org” and “blackhole-2.iana.org”. Unfortunately, our upstream Internet provider was experiencing a problem and the connection to those DNS servers was failing with “Time to live exceeded” because of a network loop.

But why was the NFS locking still a bit sluggish after I fixed the NFS servers? It turned out that “rpc.statd” on the NFS clients also does a DNS resolve of the NFS server’s IP address.

30 minutes later I blacklisted the whole “x.in-addr.arpa” DNS zone for the private IPv4 network in all our local DNS resolvers and now they were replying with SERVFAIL immediately. The NFS locking started to work fast again and the Online Control panels were responding as expected.
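
The exact syntax depends on your resolver. With Unbound, for example, declaring a local zone for the private reverse zone produces an immediate authoritative negative answer. The 10.0.0.0/8 network and the NXDOMAIN response type below are assumptions – use your own private range, and the response code may differ from the SERVFAIL that our resolvers ended up returning:

# /etc/unbound/unbound.conf.d/private-reverse.conf (hypothetical file name)
server:
    local-zone: "10.in-addr.arpa." always_nxdomain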

Case closed. In three hours. It could have been done much faster – if only I knew NFS better and knew our NFS usage pattern, and if I hadn’t jumped to the wrong assumptions. I’m still happy that I got to the root cause and have the confidence that the service is completely fixed for our customers.



Open many interactive sessions in Konsole using a script

For 99.999% of my mass-execute tasks on many servers I use MPSSH.py which opens non-interactive SSH shells in parallel. But still there is one tedious weekly task that needs to be done using an interactive SSH shell.

In order to make the task semi-automated and to avoid typo errors, I open the Konsole sessions (tabs) from a list of servers. Here is how to do it:

for srv in $(cat server-list.txt); do konsole --new-tab --hold -e bash -i -c "ssh root@$srv" ; done

Once I have all sessions opened, I use Edit -> Copy Input to in Konsole, so that the input from the first “master” session (tab) is sent simultaneously to all sessions (tabs).

The "--hold" option ensures that no session ends without you noticing it. For example, if you start many sessions without "--hold" and one of them terminates because its SSH connection closed unexpectedly, that session (tab) would simply be closed and disappear. Having "--hold" keeps the terminated session (tab) open so that you can see the exit messages, etc. Finally, you have to close each terminated session (tab) with File -> Close Session or the shortcut Ctrl + Shift + W.



Force Exim to process outgoing queue quickly

There are times when a lot of messages queue up in Exim. For example, it could be due to an intermittent network problem.

I always struggled to force Exim to process the outgoing queue with lots of parallel connections to the remote SMTP servers. Note that my use-case is rather special: all mails are delivered to the same recipient domain.

There are suggestions to use "queue_run_max = 30" or "remote_max_parallel = 30" to increase the maximum parallel outgoing SMTP connections. When I execute "exim -qf" or "exim -Rf domain" to process the mail queue immediately, the parallel SMTP connections are still capped to just about 5.

Today I found a way to control the parallelism for SMTP deliveries when we want to process the queue manually (forced):

exiqgrep -ir example.com|xargs -P 30 -n 10 exim -M

This effectively launches 30 parallel SMTP connections and a queue with thousands of messages gets processed in a few minutes. If you want to process all messages regardless of their domain name, use only "exiqgrep -i". The command “exiqgrep” has other filtering capabilities to help you with the selection of messages to process.
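
For completeness, the same trick without the domain filter looks like this:

exiqgrep -i | xargs -P 30 -n 10 exim -M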

Note that if the messages have queued up because the remote MX server was marked as unreachable, you will have to clear the “retry” database before you start to process the queue manually. Additionally, I also clear the database about messages waiting for remote SMTP hosts:

exim_tidydb -t 1s /var/spool/exim4 retry
exim_tidydb -t 1s /var/spool/exim4 wait-remote_smtp

The man page of “exim_tidydb” explains in detail what this command does.



MQTT QoS level between publisher and subscribers

Quality of Service (QoS) in MQTT is well explained by the HiveMQ Team. With the exception of one subtle detail: the QoS of a message is never “upgraded”, so if the original publisher sent a message with QoS 0, a QoS 2 subscriber will still receive that message as QoS 0. That’s what Dominik from the HiveMQ Team explained in a comment, and it was also reiterated by his colleague Dasha in another comment.

This applies for regular messages, as well as for the Last Will and Testament (LWT).
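
A quick way to observe this, assuming a local Mosquitto broker and its command-line clients:

# terminal 1: subscribe with QoS 2 and enable debug output
mosquitto_sub -h localhost -t 'test/qos' -q 2 -v -d

# terminal 2: publish with QoS 0
mosquitto_pub -h localhost -t 'test/qos' -q 0 -m 'hello'

The subscriber’s debug output shows the PUBLISH arriving with QoS 0, not QoS 2.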

Another discussion about this explains the same thing but points to an interesting feature of the Mosquitto MQTT broker:

upgrade_outgoing_qos [ true | false ]

The MQTT specification requires that the QoS of a message delivered to a subscriber is never upgraded to match the QoS of the subscription.

Enabling this option changes this behavior. If "upgrade_outgoing_qos" is set "true", messages sent to a subscriber will always match the QoS of its subscription. This is a non-standard option not provided for by the spec.

Defaults to "false".



Properly stop a Firebase app in Node.js

Normally, the Node.js process will exit when there is no work scheduled (docs). You shouldn’t call process.exit() unless it is necessary to terminate the Node.js process immediately due to an error condition. Calling process.exit() doesn’t let pending events complete, which may lead to unpredictable results, as demonstrated in the docs.

Now that we know how to naturally terminate a Node.js application, how do we achieve it if we are using the Firebase JavaScript SDK?

First you need to cancel any asynchronous listeners. For example, if you subscribed to data changes, you need to unsubscribe:

let func = firebase.database().ref(userDB).on("value", myHandler);
...
firebase.database().ref(userDB).off("value", func);

Some people suggest that you also call firebase.database().goOffline() in the final stage.

Additionally, as described in these bug reports (#1 and #2), if you used firebase.auth() you need to call firebase.auth().signOut().

And finally, you also need to destroy the Firebase application by calling app.delete().

This has worked for me using Node.js version 10 and Firebase JS SDK version 8.



Debug the usage of anonymously shared memory regions

PHP-FPM keeps a shared Opcache memory between the parent process and all its child processes in a pool. The idea is to compile source code once and then reuse it in all child processes directly as byte code. This is efficient but as a System administrator I recently stumbled across a problem – how to find out the real memory usage by the Opcache in the operating system?

I thought a simple “ps” listing would reveal the memory usage, expecting it to be accounted to the parent process since the parent created the anonymously shared mmap() region. Linux doesn’t work this way. In order to debug this easily, I created a simple program which does the following:

  • The parent process creates a shared anonymous memory region using mmap() with a size of 2000 MB. The parent process does not use the memory region in any way. It doesn’t change any data in it.
  • Two child processes are fork()’ed and then:
    • The first child process writes 500 MB of data at the beginning of the shared memory region passed by the parent process.
    • The second child process writes 1000 MB of data at the beginning of the shared memory region passed by the parent process.

Here is what the process list looks like:

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
famzah 9256 0.0 0.0 2052508 816 pts/15 S+ 18:00 0:00 ./a.out # parent
famzah 9257 10.7 10.1 2052508 512008 pts/15 S+ 18:00 0:01 \_ ./a.out # child 1
famzah 9258 29.0 20.2 2052508 1023952 pts/15 S+ 18:00 0:03 \_ ./a.out # child 2
famzah@vbox64:~$ free -m
total used free shared buff/cache available
Mem: 4940 549 1943 1012 2447 3097

A quick explanation of this process list snapshot:

  • VSZ (virtual size) of the parent and child processes is 2000 MB because the parent process has allocated 2000 MB of anonymous memory using mmap(). No additional allocations were made by the child processes as they were passed a reference to the anonymously shared memory in the parent process. Therefore the virtual memory footprint of all processes is the same.
  • RSS (resident set size, or simply “the real usage”) is:
    • Almost none for the parent process because it never used any memory. It only “requested” the memory block by mmap().
    • 500 MB for the first child process because it wrote 500 MB of data at the beginning of the shared memory region.
    • 1000 MB for the second child process because it wrote 1000 MB of data at the beginning of the shared memory region.
  • The “free -m” command shows that 1012 MB of anonymously shared memory is being used.

So far things seem kind of logical. We can roughly determine the real usage of the shared memory region by looking at the child processes. However, this is not entirely accurate either: if they write to completely different regions of the anonymous memory, we would need to sum their usage; if they write to the very same region, we need to look at the max() value.

The pmap command doesn’t provide any additional information and shows the same values that we see in the “ps” output:

famzah@vbox64:~$ pmap -XX 9256
9256: ./a.out
Address Perm Offset Device Inode Size KernelPageSize MMUPageSize Rss Pss Shared_Clean Shared_Dirty Private_Clean Private_Dirty Referenced Anonymous LazyFree AnonHugePages ShmemPmdMapped Shared_Hugetlb Private_Hugetlb Swap SwapPss Locked VmFlagsMapping
7f052ea9b000 rw-s 00000000 00:05 733825 2048000 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 rd wr sh mr mw me ms sd zero (deleted)
famzah@vbox64:~$ pmap -XX 9257
9257: ./a.out
Address Perm Offset Device Inode Size KernelPageSize MMUPageSize Rss Pss Shared_Clean Shared_Dirty Private_Clean Private_Dirty Referenced Anonymous LazyFree AnonHugePages ShmemPmdMapped Shared_Hugetlb Private_Hugetlb Swap SwapPss Locked VmFlagsMapping
7f052ea9b000 rw-s 00000000 00:05 733825 2048000 4 4 512000 256000 0 512000 0 0 512000 0 0 0 0 0 0 0 0 0 rd wr sh mr mw me ms sd zero (deleted)
famzah@vbox64:~$ pmap -XX 9258
9258: ./a.out
Address Perm Offset Device Inode Size KernelPageSize MMUPageSize Rss Pss Shared_Clean Shared_Dirty Private_Clean Private_Dirty Referenced Anonymous LazyFree AnonHugePages ShmemPmdMapped Shared_Hugetlb Private_Hugetlb Swap SwapPss Locked VmFlagsMapping
7f052ea9b000 rw-s 00000000 00:05 733825 2048000 4 4 1024000 768000 0 512000 512000 0 1024000 0 0 0 0 0 0 0 0 0 rd wr sh mr mw me ms sd zero (deleted)

Things get even messier when the child processes terminate (and get replaced by new ones which never touched the shared anonymous memory). Here is what the process list looks like:

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
famzah 9256 0.0 0.0 2052508 816 pts/15 S+ 18:00 0:00 ./a.out # parent
famzah@vbox64:~$ free -m
total used free shared buff/cache available
Mem: 4940 549 1943 1012 2447 3097

The RSS (resident set size, or simply “the real usage”) of the parent process continues to show no usage. But the anonymous memory region is actually in use because the child processes wrote data to it. And the region is not automatically free()’d, because the parent process is still alive. The “free -m” command clearly shows that there are 1000 MB of data stored in anonymous shared memory.

How can we reliably find out the memory usage of the anonymous shared region and account it to a given process?

We will look at /proc/[pid]/maps:

A file containing the currently mapped memory regions and their access permissions. See mmap(2) for some further information about memory mappings.

If the pathname field is blank, this is an anonymous mapping as obtained via mmap(2). There is no easy way to coordinate this back to a process’s source, short of running it through gdb(1), strace(1), or similar.

Wikipedia gives the following additional information:

When “/dev/zero” is memory-mapped, e.g., with mmap(), to the virtual address space, it is equivalent to using anonymous memory; i.e. memory not connected to any file.

Now we know how to find out the virtual address of the anonymously memory-mapped region. Here I demonstrate two different ways of obtaining the address:

famzah@vbox64:~$ cat /proc/9256/maps | grep /dev/zero
7f052ea9b000-7f05aba9b000 rw-s 00000000 00:05 733825 /dev/zero (deleted)
famzah@vbox64:~$ ls -la /proc/9256/map_files/ | grep /dev/zero # same region of 7f052ea9b000-7f05aba9b000
lrw------- 1 famzah famzah 64 Nov 11 18:21 7f052ea9b000-7f05aba9b000 -> '/dev/zero (deleted)'

The man page of tmpfs gives further insight:

An internal shared memory filesystem is used for […] shared anonymous mappings (mmap(2) with the MAP_SHARED and MAP_ANONYMOUS flags).

The amount of memory consumed by all tmpfs filesystems is shown in the Shmem field of /proc/meminfo and in the shared field displayed by free(1).

We verify that the memory-mapped region is a “tmpfs” file:

famzah@vbox64:~$ sudo stat -Lf /proc/9256/map_files/7f052ea9b000-7f05aba9b000
File: "/proc/9256/map_files/7f052ea9b000-7f05aba9b000"
ID: 0 Namelen: 255 Type: tmpfs

💚 We can then finally get the real memory usage of this shared anonymous memory block in terms of VSS (virtual memory size) and RSS (resident set size, or simply “the real usage”):

# stat doesn't give us the real usage, only the virtual
famzah@vbox64:~$ sudo stat -L /proc/9256/map_files/7f052ea9b000-7f05aba9b000
File: /proc/9256/map_files/7f052ea9b000-7f05aba9b000
Size: 2097152000 Blocks: 2048000 IO Block: 4096 regular file
Device: 5h/5d Inode: 733825 Links: 0
Access: (0777/-rwxrwxrwx) Uid: ( 1000/ famzah) Gid: ( 1000/ famzah)
# VSS (virtual memory size)
famzah@vbox64:~$ sudo du --apparent-size -BM -D /proc/9256/map_files/7f052ea9b000-7f05aba9b000
2000M /proc/9256/map_files/7f052ea9b000-7f05aba9b000
# RSS (resident set size, or simply "the real usage")
famzah@vbox64:~$ sudo du -BM -D /proc/9256/map_files/7f052ea9b000-7f05aba9b000
1000M /proc/9256/map_files/7f052ea9b000-7f05aba9b000

Since we have access to the memory region as a file, we can even read this memory mapped region:

famzah@vbox64:~$ sudo cat /proc/9256/map_files/7f052ea9b000-7f05aba9b000 | wc -c
2097152000