Using flock() in Bash without invoking a subshell

The flock(1) utility on Linux manages flock(2) advisory locks from within shell scripts or the command line. This lets you synchronize your Bash scripts with all your other applications written in Perl, Python, C, etc.

I’ll focus on the third usage form where flock() is used inside a Bash script. Here is what the man page suggests:

#!/bin/bash

(
flock -s 200

# ... commands executed under lock ...

) 200>/var/lock/mylockfile

Unfortunately, this invokes a subshell which has the following drawbacks:

  • You cannot pass variable values from the subshell back to the main shell script.
  • There is a performance penalty.
  • The syntax coloring in “vim” does not work properly. :)

This motivated my colleague zImage to come up with a usage form which does not invoke a subshell in Bash:

#!/bin/bash

exec 200>/var/lock/mylockfile || exit 1
flock -n 200 || { echo "ERROR: flock() failed." >&2; exit 1; }

# ... commands executed under lock ...

flock -u 200
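
A nice side effect of not using a subshell is that variables assigned while the lock is held remain visible in the rest of the script. Here is a minimal sketch of that (the lock file and the variable are only illustrative):

#!/bin/bash

exec 200>/var/lock/mylockfile || exit 1
flock -n 200 || { echo "ERROR: flock() failed." >&2; exit 1; }

# any variable set here survives, because we never left the main shell
files_count=$(ls /var/tmp | wc -l)

flock -u 200

echo "Counted under lock: $files_count"  # still visible after the unlock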

Nagios: Improve CPU performance with popen_noshell()

Today I’ll share my real-world experience with popen_noshell() on the Nagios monitoring server which we run at work. We are actively monitoring 1166 hosts and 14250 services. The machine has 6 GB RAM and a single Intel Core i7-950 CPU with multi-threading enabled (8 total threads) and a slight overclock. Besides running Nagios, this machine also handles the incoming data from our custom monitoring systems, processes the RRD database storage, and generates the web interface status + charts output. So it’s a pretty busy machine which does a lot of network activity and where the Nagios daemon is just a part of the CPU load. For example, since boot the main "nagios3" process has used only 20% of the CPU. The rest has been used by the fork()’ed Perl scripts (we use a lot of them for the active checks), the Nagios standard network checks, and the Apache/PHP web server handling the incoming data.

Recently the machine started to exhaust its CPU resources. First we overclocked it a bit which gave us 10% more CPU idle time. Then we decided to try to compile Nagios with the popen-noshell library. This gave us another 10% CPU idle and now the machine is working great again.

I’ll focus on the popen-noshell integration and results, since CPU overclocking is a well-known topic. Here is the chart which shows the CPU usage before and after we re-compiled Nagios with the popen-noshell library:

[Chart: Nagios CPU usage before and after re-compiling with the popen-noshell library]

As we can see, the system-CPU usage dropped from 38% to 31%, which is an 18% improvement. The user-CPU usage dropped from 44% to 41%, which is a 7% improvement. Overall, we gained a 12% speed-up for our workload by just re-compiling Nagios with the popen-noshell library (total CPU usage went from 82% down to 72%). I want to stress that the speed-up depends a lot on your workload. If this machine were busy only with Nagios and the active checks were more CPU efficient (i.e. not written in Perl but in C), then the speed-up could have been much higher, since popen_noshell() is about 10 times faster than the standard popen().

Here is a list of the other machine metrics which were also affected by the change:

  • Used memory: 39% => 24% (38% less)
  • Load average: 39 => 46 (18% higher)
  • Fork rate: 8*61 => 8*61 (created processes/second – no change)

Here are the steps you need to perform in order to re-compile the Nagios Debian package with the popen-noshell library integrated:

apt-get install devscripts

apt-get build-dep nagios3-core
# No need to run as "root" from here on
apt-get source nagios3-core

svn checkout http://popen-noshell.googlecode.com/svn/trunk/ popen-noshell

cd nagios3-3.2.1/

# BEGIN: patch Nagios to use popen_noshell_compat()

cp ../popen-noshell/popen_noshell.* base/
vi base/Makefile.in
	OBJS=$(BROKER_O) popen_noshell.o 

vi base/utils.c
	#include "popen_noshell.h"
	
        /* run the command */
        struct popen_noshell_pass_to_pclose pclose_arg;
        fp=(FILE *)popen_noshell_compat(cmd,"r",&pclose_arg);

            /* close the command and get termination status */
            status=pclose_noshell(&pclose_arg);

vi base/checks.c
	# make the same two changes as in base/utils.c above

# END: patch Nagios to use popen_noshell_compat()
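
# A quick way to sanity-check that the patch touched all the right places
# (a hypothetical check, not part of the original instructions):
grep -n 'popen_noshell_compat\|pclose_noshell' base/utils.c base/checks.c
grep -n 'popen_noshell' base/Makefile.in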

EDITOR=vim dch -i
	# 3.2.1-2+squeeze1 -> 3.2.1-2+squeeze1-noshell1
	# you must have a trailing number in the added version name
	# after exit, this renames the original directory name

cd ..
mv nagios3_3.2.1.orig.tar.gz nagios3_3.2.1-2+squeeze1.orig.tar.gz

# the source directory was renamed by "dch"
cd nagios3-3.2.1-2+squeeze1/
DEB_BUILD_OPTIONS=nocheck debuild -us -uc

cd ..
sudo dpkg -i nagios3-core_3.2.1-2+squeeze1-noshell1_i386.deb \
	nagios3-common_3.2.1-2+squeeze1-noshell1_all.deb \
	nagios3-cgi_3.2.1-2+squeeze1-noshell1_i386.deb \
	nagios3-doc_3.2.1-2+squeeze1-noshell1_all.deb \
	nagios3_3.2.1-2+squeeze1-noshell1_i386.deb
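
After the installation you can confirm that the re-built packages, and not the stock Debian ones, are what actually got installed, by checking for the version suffix we added with dch (a simple optional check):

dpkg -s nagios3-core | grep '^Version:'
# expected, given the version chosen above: 3.2.1-2+squeeze1-noshell1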

Locally encrypted secure remote backup over Internet on Linux (iSCSI / TrueCrypt)

Recently I decided to start using Amazon AWS as my backup storage, but my paranoid soul wasn’t satisfied until I figured out how to secure my private data. It’s not that I don’t trust Amazon, but a lot of bad things could happen if I just copied my data to a remote server on Amazon:

  • Amazon staff would have access to my data.
  • A breach in Amazon’s systems would expose my data.
  • A breach in my remote server OS would expose my data.

One of the solutions which I considered was to encrypt my local file-system with eCryptfs but it has some issues with relatively long file names.

Finally I came up with the following working backup solution which I currently use to back up both my Windows and Linux partitions. I share the Windows root directory with the VirtualBox Linux machine and run the backup scripts from there. Here is a short explanation of the properties and features of the backup setup:

  • Locally encrypted — all files which I store on the iSCSI volume are encrypted on my personal desktop, before being sent to the remote server. This ensures that the files cannot be read by anyone else.
  • Secure — besides the local volume encryption, the whole communication is done over an SSH tunnel which secures the Internet point-to-point client-to-server communication.
  • Remote — having a remote backup ensures that even if someone breaks into my house and steals my laptop and my offline backup, I can still recover my data from the remote server. Furthermore, it is more convenient to back up frequently to a remote machine, because we have Internet access everywhere now. Note that remote backups are not a substitute for offline backups.
  • Over Internet — very convenient. Of course, this backup scheme can be used in any TCP/IP network — private LAN, WAN, VPN networks, etc.

The following two articles, included as sections below, provide detailed instructions on how to set up the backup solution: “Locally encrypt an iSCSI volume with TrueCrypt on Linux” and “Secure iSCSI setup via an SSH tunnel on Linux”.

Daily usage example

Here are the commands which I execute in order to make a backup of my laptop. They can be further scripted and automated if a daily or more frequent backup is required; a sample wrapper script is sketched after the command listing below:

IP=23.21.98.10 # the public DNS IP address of the EC2 instance / server

## Execute the following, in order to mount the remote encrypted iSCSI volume:

sudo -E \
  ssh -F /dev/null \
  -o PermitLocalCommand=yes \
  -o LocalCommand="ifconfig tun0 172.18.0.2 pointopoint 172.18.0.1 netmask 255.255.255.0" \
  -o ServerAliveInterval=60 \
  -w 0:0 root@"$IP" \
  'sudo ifconfig tun0 172.18.0.1 pointopoint 172.18.0.2 netmask 255.255.255.0; hostname; echo tun0 ready'

sudo iscsiadm -m node --targetname "iqn.2012-03.net.famzah:storage.backup" --portal "172.18.0.1:3260" --login
sudo truecrypt --filesystem=none -k "" --protect-hidden=no /dev/sdb
sudo mount /dev/mapper/truecrypt1 /mnt

## You can now work on /mnt -- make a backup, copy files, etc.

ls -la /mnt

## Execute the following, in order to unmount the encrypted iSCSI volume:

sync
sudo umount /mnt
sudo truecrypt -d /dev/sdb
sudo iscsiadm -m node --targetname "iqn.2012-03.net.famzah:storage.backup" --portal "172.18.0.1:3260" --logout
# stop the SSH tunnel
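
As mentioned above, these steps can be wrapped into a single script. Here is a minimal sketch of such a wrapper; the rsync source and destination paths are only illustrative assumptions, and the SSH tunnel from the first step is expected to be already running in another terminal:

#!/bin/bash
# backup-to-iscsi.sh -- illustrative sketch; adjust the paths to your setup
set -e

TARGET="iqn.2012-03.net.famzah:storage.backup"
PORTAL="172.18.0.1:3260"
SRC="/home"  # hypothetical directory to back up

sudo iscsiadm -m node --targetname "$TARGET" --portal "$PORTAL" --login
sleep 5  # give the iSCSI block device a moment to appear as /dev/sdb
sudo truecrypt --filesystem=none -k "" --protect-hidden=no /dev/sdb
sudo mount /dev/mapper/truecrypt1 /mnt

sudo mkdir -p /mnt/backup
sudo rsync -a --delete "$SRC/" /mnt/backup/  # the actual backup step

sync
sudo umount /mnt
sudo truecrypt -d /dev/sdb
sudo iscsiadm -m node --targetname "$TARGET" --portal "$PORTAL" --logout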

Disaster recovery plan

Any backup is useless if you cannot restore your data. If your main computer is completely lost, you would need the following in order to access your backed-up data:

In order to be able to log in to the remote server via SSH, you need to set up the following:

vi /etc/ssh/sshd_config # PasswordAuthentication yes
/etc/init.d/ssh restart
passwd root # set a very long password which you CAN remember

Make sure you test that you can log in using an SSH client which does not have your SSH key, and which therefore requires you to enter the root password manually.
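
With the stock OpenSSH client you can simulate a client without your key by disabling public-key authentication for one test login (an optional check using standard ssh options):

# force password authentication even though an SSH key is present locally
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password root@"$IP"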

I do not consider password authentication for the root account to be a security threat here. The backup server is online only while a backup is being made, after which I shut it down in order to save money on Amazon AWS. Furthermore, the backup server gets a new IP address on each EC2 machine start, so an attacker cannot easily continue a brute-force attack, even if they had started one.

Locally encrypt an iSCSI volume with TrueCrypt on Linux

While this article focuses on iSCSI volumes, it also applies to regular directly attached block devices. If you are unsure how to export and attach an iSCSI volume over the Internet, you can review the “Secure iSCSI setup via an SSH tunnel on Linux” article.

Locally encrypting a remote iSCSI volume with TrueCrypt has the following advantages:

  • You don’t need to trust the administrators of the remote machine — they cannot see your files because you are using their storage in a locally encrypted format. Thus your private data is completely safe, as long as your encryption password/key is strong enough.
  • You have the option to temporarily mount the exported iSCSI volume on the remote server, if you are the owner of the remote server and know the encryption password/key. This is handy if you want to make a local copy of a file from the backup volume without storing the encryption password on the remote server.
  • TrueCrypt is cross-platform (Windows / Mac OS X / Linux), fast, free, and open-source.

Download and install TrueCrypt

You need to install TrueCrypt wherever you are going to use it — on the client machine and optionally on the server.

# Download the distribution file from the official page:
#   http://www.truecrypt.org/downloads
# Linux -> Console-only (choose 32-bit or 64-bit depending on your local Linux installation)

tar -zxf truecrypt-7.1a-linux-console-x86.tar.gz # 32-bit in this example
sudo ./truecrypt-7.1a-setup-console-x86

truecrypt --version
#>> TrueCrypt 7.1a

Encrypt an iSCSI volume with TrueCrypt

The instructions below assume that the iSCSI volume is attached under "/dev/sdb". The output of the commands is quoted with "#>>".

# Encrypt the iSCSI volume
sudo truecrypt -t --create /dev/sdb --volume-type=normal --encryption=AES --hash=RIPEMD-160 --filesystem=ext4 --quick -k ""

# Mount the *volume* (there is no file-system, yet)
sudo truecrypt --filesystem=none -k "" --protect-hidden=no /dev/sdb

# Check that a new "dm-0" device with the same size appeared
cat /proc/partitions
#>> major minor  #blocks  name
#>> ...
#>> 8        16  83886080 sdb
#>> 252       0  83885824 dm-0

# Double-check that this is a TrueCrypt volume
ls -la /dev/mapper/truecrypt1
# /dev/mapper/truecrypt1 -> ../dm-0

# Create a file-system.
# This takes about 30 minutes for an 80 GB volume over a 1 Mbit Internet connection.
sudo mkfs.ext4 /dev/mapper/truecrypt1

# You can now mount and use /dev/mapper/truecrypt1 in any mount-point, as 
# this is a regular block device with an ext4 file-system.
# Remember to unmount it when you are done.
sudo mount /dev/mapper/truecrypt1 /mnt
ls -la /mnt
sudo umount /mnt

# Unmount the encrypted *volume*.
# Make sure that you have ALREADY unmounted the file-system!
sync
sudo truecrypt -d /dev/sdb

Mount an encrypted iSCSI volume locally on the remote server

The output of the commands is quoted with “#>>”.

# The local block device is "/dev/xvdf"
cat /proc/partitions 
#>> major minor  #blocks  name
#>> ...
#>>   202    80  83886080 xvdf

#
# MAKE SURE that no iSCSI clients are using the volume now
#

# Mount an encrypted volume (/dev/xvdf).
# The unencrypted volume will be presented under a different device name (/dev/mapper/truecrypt1).
sudo truecrypt --filesystem=none -k "" --protect-hidden=no /dev/xvdf

# Mount the file-system
sudo mount /dev/mapper/truecrypt1 /mnt
# Access the encrypted files
ls -la /mnt
# Unmount the file-system
sudo umount /mnt

# Unmount the encrypted volume (/dev/mapper/truecrypt1 -> /dev/xvdf).
# Make sure that you have ALREADY unmounted the file-system!
sudo truecrypt -d /dev/xvdf

Secure iSCSI setup via an SSH tunnel on Linux

This article will demonstrate how to export a raw block storage device over the Internet in a secure manner. Re-phrased, this means that you can export a hard disk from a remote machine and use it on your local computer as if it were a directly attached disk, thanks to iSCSI. Authentication and a secure transport channel are provided by an SSH tunnel (more info). The setup has been tested on Ubuntu 11.10 Oneiric.

Server provisioning

Amazon AWS made it really simple to deploy a server setup in a minute:

  1. Launch a Micro EC2 instance and then install Ubuntu server by clicking on the links in the Ubuntu EC2StartersGuide, section “Official Ubuntu Cloud Guest Amazon Machine Images (AMIs)”.
  2. Create an EBS volume in the same availability zone. Attach it to the EC2 instance as “/dev/sdf” (seen as “/dev/xvdf” in latest Ubuntu versions).
  3. (optionally) Allocate an Elastic IP address and associate it with the EC2 instance.

Note that you can lower your AWS bill by buying a Reserved Instance slot. Those slots are non-refundable and non-transferable, so shop wisely. You can also stop the EC2 instance when you’re not using it; then you won’t be billed for the instance, only for the allocated EBS volume storage.

You can use any other dedicated or virtual server which you own and can access by IP. An Amazon AWS EC2 instance is given here only as an example.

iSCSI server-side setup

Execute the following on your server (iSCSI target):

IP=23.21.98.10 # the public DNS IP address of the EC2 instance / server

# Log in to the server
ssh ubuntu@$IP
# Update your SSH key in ".ssh/authorized_keys", if needed.
sudo bash
cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/ # so that we can log in directly as root

apt-get update
apt-get upgrade

apt-get install linux-headers-virtual # virtual because we're running an EC2 instance
apt-get install iscsitarget iscsitarget-dkms
perl -pi -e 's/^ISCSITARGET_ENABLE=.*$/ISCSITARGET_ENABLE=true/' /etc/default/iscsitarget

# We won't use any iSCSI authentication because the server is totally firewalled
# and we access it only using an SSH tunnel.
# NOTE: If you don't use Amazon EC2, make sure that you firewall this machine completely,
# leaving only SSH access (TCP port 22).

# update your block device location in "Path", if needed
cat >> /etc/iet/ietd.conf <<EOF
Target iqn.2012-03.net.famzah:storage.backup
   Lun 0 Path=/dev/xvdf,Type=fileio
EOF

/etc/init.d/iscsitarget restart

echo 'PermitTunnel yes' >> /etc/ssh/sshd_config
/etc/init.d/ssh restart
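
# If your server is not an EC2 instance protected by a security group, the
# firewalling note above applies to you. A minimal iptables policy along these
# lines (an illustrative sketch, not part of the original setup; make it
# persistent with your distribution's tools) leaves only SSH reachable:

# allow loopback, already established traffic, and inbound SSH; drop the rest
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# the iSCSI port (TCP 3260) is reachable only through the SSH tunnel interface
iptables -A INPUT -i tun0 -p tcp --dport 3260 -j ACCEPT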

iSCSI client-side setup

Execute the following on your client / desktop machine (iSCSI initiator):

# Install the iSCSI client
sudo apt-get install open-iscsi

How to attach an iSCSI volume on the client

The following commands show how to attach and detach a remote iSCSI volume on the client machine. The output of the commands is quoted with “#>>”.

IP=23.21.98.10 # the public DNS IP address of the EC2 instance / server

# Establish the secure SSH tunnel to the remote server
sudo -E \
  ssh -F /dev/null \
  -o PermitLocalCommand=yes \
  -o LocalCommand="ifconfig tun0 172.18.0.2 pointopoint 172.18.0.1 netmask 255.255.255.0" \
  -o ServerAliveInterval=60 \
  -w 0:0 root@"$IP" \
  'sudo ifconfig tun0 172.18.0.1 pointopoint 172.18.0.2 netmask 255.255.255.0; hostname; echo tun0 ready'

# Make sure that we can reach the remote server via the SSH tunnel
ping 172.18.0.1

# Execute this one-time; it discovers the available iSCSI volumes
sudo iscsiadm -m discovery -t st -p 172.18.0.1
#>> 172.18.0.1:3260,1 iqn.2012-03.net.famzah:storage.backup

# Attach the remote iSCSI volume on the local machine
sudo iscsiadm -m node --targetname "iqn.2012-03.net.famzah:storage.backup" --portal "172.18.0.1:3260" --login
#>> Logging in to [iface: default, target: iqn.2012-03.net.famzah:storage.backup, portal: 172.18.0.1,3260]
#>> Login to [iface: default, target: iqn.2012-03.net.famzah:storage.backup, portal: 172.18.0.1,3260]: successful

# Check the kernel log
dmesg
#>> [ 1237.538172] scsi3 : iSCSI Initiator over TCP/IP
#>> [ 1238.657846] scsi 3:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
#>> [ 1238.662985] sd 3:0:0:0: Attached scsi generic sg2 type 0
#>> [ 1239.578079] sd 3:0:0:0: [sdb] 167772160 512-byte logical blocks: (85.8 GB/80.0 GiB)
#>> [ 1239.751271] sd 3:0:0:0: [sdb] Write Protect is off
#>> [ 1239.751279] sd 3:0:0:0: [sdb] Mode Sense: 77 00 00 08
#>> [ 1240.099649] sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
#>> [ 1241.962729]  sdb: unknown partition table
#>> [ 1243.568470] sd 3:0:0:0: [sdb] Attached SCSI disk

# Double-check that the iSCSI volume is with the expected size (80 GB in our case)
cat /proc/partitions
#>> major minor  #blocks  name
#>> ...
#>> 8       16   83886080 sdb

# The remote iSCSI volume is now available under /dev/sdb on our local machine.
# You can use it as any other locally attached hard disk (block device).

# Detach the iSCSI volume from the local machine
sync
sudo iscsiadm -m node --targetname "iqn.2012-03.net.famzah:storage.backup" --portal "172.18.0.1:3260" --logout
#>> Logging out of session [sid: 1, target: iqn.2012-03.net.famzah:storage.backup, portal: 172.18.0.1,3260]
#>> Logout of [sid: 1, target: iqn.2012-03.net.famzah:storage.backup, portal: 172.18.0.1,3260]: successful

# Check the kernel log
dmesg
#>> [ 1438.942277]  connection1:0: detected conn error (1020)

# Double-check that the iSCSI volume is no longer available on the local machine
cat /proc/partitions
#>> no "sdb"

Once you have the iSCSI block device volume attached on your local computer, you can use it as you need, just like a normal hard disk. It will only be slower, because each I/O operation takes place over the Internet. For example, you can locally encrypt the iSCSI volume with TrueCrypt, in order to prevent the administrators of the remote machine from being able to see your files.



Bind a shell on Linux and reverse-connect to it through a firewall

There are situations when a friend is in need of Linux help, and the only way for you to help them is to log in to their machine and fix the problem yourself, instead of trying to explain all the steps to your friend over the phone.

Such a problem has two sub-problems:

  • The remote machine must accept incoming connections and provide you with shell access. The obvious way to achieve this is an SSH daemon. Many desktop Linux distributions don’t install an SSH server by default though, for security reasons. Setting up an SSH server at this very moment is slow, and could even be impossible, for example if your friend has messed up the packaging system. So we need to find an easy way to bind a network shell on the remote machine.
  • We must be able to connect to the remote machine. Usually desktop machines are protected behind a firewall or NAT, and we cannot connect to them directly. If this is not the case for you, you can skip this step and just connect to the remote machine’s IP address. A common approach to overcome this problem is to have the remote machine connect to a machine of yours which has a reachable public IP address and a running SSH server. Most desktop Linux distributions have an SSH client installed by default. So all you need to do is quickly and temporarily set up an account with password authentication for your friend on your machine. Then let them log in there, which will create a reverse tunnel back to their machine.

Bind a shell

Another useful tool which is usually available on Linux is Netcat, the Swiss-army knife for TCP/IP. In order to bind a shell using the Netcat version available on Ubuntu/Debian, you need to execute the following:

mkfifo /tmp/mypipe
# user shell
cat /tmp/mypipe|/bin/bash 2>&1|nc -l 6000 >/tmp/mypipe

I got this awesome idea from a user comment. I only extended it a bit by adding "2>&1" which redirects the STDERR error messages to the remote network client too.

Once the above has been executed on the remote machine, anyone can connect on TCP port 6000, assuming that there is no firewall. Note that you have to connect via Netcat again. A connection via Telnet adds an additional “\r” at every line end, which confuses Bash. If you need to perform actions as “root” on the remote machine, the shell needs to be executed as “root”:

mkfifo /tmp/mypipe
# root shell
cat /tmp/mypipe|sudo /bin/bash 2>&1|nc -l 6000 >/tmp/mypipe

If you are worried that your friend will mistype something, save the commands to a text file on a web server, and let them download it using “wget” or “curl”. Example:

wget http://www.famzah.net/download/bind-shell.txt
# or
curl http://www.famzah.net/download/bind-shell.txt > bind-shell.txt

chmod +x bind-shell.txt
./bind-shell.txt

Reverse connect using an SSH tunnel

The ssh client has the ability to set up a reverse port forwarding (review “Reversing an ssh connection” for a detailed example). Once you’ve set up an account for your friend, you ask them to connect to your machine:

ssh -R 6000:127.0.0.1:6000 $IP_OF_YOUR_MACHINE

Once your friend has connected to your machine, you can connect to theirs using the reverse SSH tunnel by executing the following:

nc 127.0.0.1 6000

The connection to 127.0.0.1 on TCP port 6000 is actually forwarded by SSH to the remote machine of your friend on their TCP port 6000.
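
Optionally, before attaching to the shell you can verify that the reverse tunnel is actually listening on your machine (a quick sanity check):

# sshd on your machine should be listening on 127.0.0.1, TCP port 6000
ss -tln | grep ':6000'
# or simply probe the port
nc -z 127.0.0.1 6000 && echo "tunnel is up"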

Note that once you disconnect from the “nc” session, the Netcat server on the remote machine exits and needs to be restarted if you want to connect again.
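
If you expect to reconnect several times, your friend can wrap the listener in a loop, so that it is restarted automatically after every disconnect. A minimal sketch, reusing the same commands as above:

mkfifo /tmp/mypipe
while true; do
	# the listener is restarted after each disconnect; press Ctrl+C to stop it for good
	cat /tmp/mypipe|/bin/bash 2>&1|nc -l 6000 >/tmp/mypipe
done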

Boot Linux using Windows 7 boot loader

Windows 7 and Linux live together on the same hard disk in perfect harmony. I had Windows 7 installed first, and a few GBytes of free space at the end of the hard drive which I left unpartitioned. Here is how to install Ubuntu:

  1. Download Ubuntu and burn the ISO on a CD.
  2. Boot from the CD, and install it. Make sure that you choose an empty partition, and also make sure that you choose to install the boot loader on the Linux partition (for example, on "/dev/sda3", and not on the main MBR "/dev/sda").

At this point you have an Ubuntu installation which you cannot boot yet.

Here is how to configure the Windows 7 boot loader to include Ubuntu in the boot choice menu:

  1. Download EasyBCD and install it. EasyBCD is free for non-commercial use and offers a nice GUI to edit the Windows 7 boot loader menu.
  2. Do the following in EasyBCD — Add New Entry -> Operating Systems -> Linux/BSD:
    • Type: GRUB 2
    • Name: Ubuntu
    • Device: (Automatically configured)
  3. Finally, click on “View Settings” in EasyBCD. You should see something similar to the following:

    Entry #2
    Name: Ubuntu
    BCD ID: {1d486d61-64cc-12a5-7d94-af2f5df01535}
    Drive: C:\
    Bootloader Path: \NST\AutoNeoGrub0.mbr

EasyBCD ships the "stage1" boot loader of GRUB2 (\NST\AutoNeoGrub0.mbr), so you don’t have to do anything else. Just reboot your Windows 7, and the boot menu should present a choice between "Windows 7" and "Ubuntu".

A note of caution: It is highly recommended that you do a backup of your whole hard disk before you try to install Ubuntu or modify the boot loader options.

P.S. There is no “boot.ini” in Windows 7. You could modify “boot.ini” in Windows XP to achieve the same result, but this does not apply for Windows 7.



Get default outgoing IP address and interface on Linux

Suppose you have one or more network interfaces, and they have one or more assigned IP addresses (also called aliases). If you need to find out which IP address and interface will be used as the default “source” by your Linux box, execute the following:

ip route get 8.8.8.8

This, of course, assumes that 8.8.8.8 is not directly connected to your networks somehow. Since this is one of the public DNS servers of Google, I think it is safe to assume so.

A sample output of the ip command follows:

8.8.8.8 via 10.0.2.2 dev eth0  src 10.0.2.15
    cache 

The output is pretty much self-explanatory – the route to "8.8.8.8" will originate from device "eth0", the source IP address used will be "10.0.2.15", and the next hop, the (default) gateway, will be "10.0.2.2".

This method is 100% reliable. The man page of “ip” says that “this command gets a single route to a destination and prints its contents exactly as the kernel sees it”.
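
If you need only the source IP address or the interface name in a script, you can parse this output. A small sketch (the parsing is keyword-based because field positions may vary between iproute2 versions):

# print the outgoing interface and the source IP address of the default route
ip route get 8.8.8.8 | awk '
    {
        for (i = 1; i < NF; i++) {
            if ($i == "dev") dev = $(i + 1);
            if ($i == "src") src = $(i + 1);
        }
    }
    END { print "dev=" dev, "src=" src }
'
# expected output for the sample route above: dev=eth0 src=10.0.2.15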



sudo hangs and leaves the executed program as “zombie”

Today I discovered a non-security bug in sudo – the executed program finishes successfully, but sudo hangs forever, waiting for something. The executed program is left in a “zombie” process state. Here is how the process list looks, for example:

root 6368 0.0 0.0 2808 1592 pts/6 Ss 18:39 0:00 | \_ -bash
root 1103 0.0 0.0 2200 1000 pts/6 S+ 21:45 0:00 | \_ ./sudo -u root sleep 5
root 1104 0.0 0.0 0 0 pts/6 Z+ 21:45 0:00 | \_ [sleep] <defunct>

If we try to trace the system calls of the sudo command, here is what we get:

[root@tester2 ~]# strace -fF -p 1103
select(4, [3], [], NULL, NULL

The sudo process waits endlessly in a select() system call which waits for file descriptor #3. So we quickly check what corresponds to file descriptor #3:

[root@tester2 ~]# ls -la /proc/1103/fd
total 0
dr-x------ 2 root root 0 Nov 1 21:45 .
dr-xr-xr-x 5 root root 0 Nov 1 21:45 ..
lrwx------ 1 root root 64 Nov 1 21:45 0 -> /dev/pts/6
lrwx------ 1 root root 64 Nov 1 21:46 1 -> /dev/pts/6
lrwx------ 1 root root 64 Nov 1 21:45 2 -> /dev/pts/6
lrwx------ 1 root root 64 Nov 1 21:46 3 -> socket:[1136261781]

The socket “socket:[1136261781]” is already closed though (it is blinking in red on my console). Thus sudo is waiting in a blocked select() for a change on file descriptor #3, which is already closed and will never change its state. Sudo will therefore wait forever.

In order to understand why this happens, we will look at the source code of sudo. Here is a snippet from “exec.c” – only the relevant code is kept, for clarity:

int
sudo_execve(path, argv, envp, uid, cstat, dowait, bgmode)
{
    int sv[2];

    if (socketpair(PF_UNIX, SOCK_DGRAM, 0, sv) != 0)
        error(1, "cannot create sockets");

    zero_bytes(&sa, sizeof(sa));
    sigemptyset(&sa.sa_mask);

    /* Note: HP-UX select() will not be interrupted if SA_RESTART set */
    sa.sa_flags = SA_INTERRUPT; /* do not restart syscalls */
    sa.sa_handler = handler;
    sigaction(SIGCHLD, &sa, NULL);
    sigaction(SIGHUP, &sa, NULL);
    sigaction(SIGINT, &sa, NULL);
    sigaction(SIGPIPE, &sa, NULL);
    sigaction(SIGQUIT, &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);

    /* Max fd we will be selecting on. */
    maxfd = sv[0];

    child = fork();
    close(sv[1]);

    fdsr = (fd_set *)emalloc2(howmany(maxfd + 1, NFDBITS), sizeof(fd_mask));
    fdsw = (fd_set *)emalloc2(howmany(maxfd + 1, NFDBITS), sizeof(fd_mask));

    for (;;) {

    zero_bytes(fdsw, howmany(maxfd + 1, NFDBITS) * sizeof(fd_mask));
    zero_bytes(fdsr, howmany(maxfd + 1, NFDBITS) * sizeof(fd_mask));

    FD_SET(sv[0], fdsr);

    if (recvsig[SIGCHLD])
        continue;
    nready = select(maxfd + 1, fdsr, fdsw, NULL, NULL);
    if (nready == -1) {
        if (errno == EINTR)
        continue;
        error(1, "select failed");
    }

    }
}

/*
 * Generic handler for signals passed from parent -> child.
 * The recvsig[] array is checked in the main event loop.
 */
void
handler(s)
    int s;
{
    recvsig[s] = TRUE;
}

The race condition happens right before the select() call, just after the “if (recvsig[SIGCHLD])” check. If the parent process gets re-scheduled right after this “if” has been executed, and at this very moment the child process finishes and SIGCHLD is sent to the parent process, sudo gets in trouble. The SIGCHLD handler records in the “recvsig[]” array that the signal was received, and then the parent process calls select(). This select() will never be interrupted, contrary to what the author intended. In 99% of the cases the parent process enters the blocking select() before the child process has ended. The child then sends SIGCHLD, which is recorded by the handler procedure and also interrupts select(), which returns -1 in “nready” with “errno” set to EINTR.

Here is an easy way to reproduce the bug. We add a sleep() of 10 seconds between the “if” and select(), thus simulating that the system was very busy and re-scheduled the parent sudo process right between these two operations. Here is the source diff:

--- sudo-orig/sudo-1.7.4p4/exec.c       Sat Sep  4 00:40:19 2010
+++ sudo-1.7.4p4/exec.c Mon Nov  1 21:48:24 2010
@@ -307,6 +307,10 @@
 
        if (recvsig[SIGCHLD])
            continue;
+       printf("debug: Missed the check for SIGCHLD, the child is still running. SIGCHLD status: %d\n", recvsig[SIGCHLD]);
+       sleep(10); // this will be interrupted by SIGCHLD, because the child exits at some time here (we run "sudo sleep 5")
+       printf("debug: We should have got SIGCHLD by now. SIGCHLD status: %d\n", recvsig[SIGCHLD]);
+       printf("debug: Entering the endless select()...\n");
        nready = select(maxfd + 1, fdsr, fdsw, NULL, NULL);
        if (nready == -1) {
            if (errno == EINTR)

After that we execute sudo, and observe the bug every time:

[root@tester2 sudo-1.7.4p4]# make >/dev/null && ./sudo -u root sleep 5
debug: Missed the check for SIGCHLD, the child is still running. SIGCHLD status: 0
debug: We should have got SIGCHLD by now. SIGCHLD status: 1
debug: Entering the endless select()...
...(this never finishes)...

The sudo author actually tried to avoid this potential race condition where SIGCHLD is received immediately before we call select() – changeset 5334:99adc5ea7f0a. The proper way to fix this is to use a timeout in the select() call:

--- sudo-orig/sudo-1.7.4p4/exec.c       Sat Sep  4 00:40:19 2010
+++ sudo-1.7.4p4/exec.c Mon Nov  1 21:50:26 2010
@@ -307,7 +307,11 @@
 
        if (recvsig[SIGCHLD])
            continue;
-       nready = select(maxfd + 1, fdsr, fdsw, NULL, NULL);
+       struct timeval timeout;
+       timeout.tv_sec = 1; // Linux resets this, so set it everytime
+       timeout.tv_usec = 0;
+       nready = select(maxfd + 1, fdsr, fdsw, NULL, &timeout);
        if (nready == -1) {
            if (errno == EINTR)
                continue;

The select() mechanism and this bug were introduced somewhere between sudo versions 1.7.2 and 1.7.3. At least that is what I managed to see from the Changelog:

2010-06-29  Todd C. Miller  <Todd.Miller@courtesan.com>
	[72fd1f510a08] [SUDO_1_7_3] <1.7>
...
2009-11-15  Todd C. Miller  <Todd.Miller@courtesan.com>
	* script.c, sudo.c, sudo.h, sudoreplay.c, term.c, tgetpass.c:
	Use a socketpair to pass signals from parent to child.
...
2009-07-12  Todd C. Miller  <Todd.Miller@courtesan.com>
	[f5ad45f69f05] [SUDO_1_7_2]

I’ve reported this bug (#447) to the sudo maintainer, and I’m sure he will fix it when time permits. Because we all depend on sudo and love it. :)

Linux Cached/Buffers memory

I won’t try to explain in detail what Linux Cached/Buffers memory is. In a nutshell, it shows how much of your memory is used for the read cache and for the write cache.

Usually when you look at your system memory usage and see that almost all of the unused memory is allocated for Cached/Buffers, you are happy, because this memory is used for the file-system cache, which makes your system run faster.

Today, however, I observed quite an interesting fact – what I said above is still correct, but you don’t know how often these cache entries are actually used by the system. After all, it’s not the cache memory usage (or size) which makes the system run faster, but the cache hit ratio. If the file operations get satisfied by the cache (cache hit), then your system runs faster. If the system needs to make a physical disk I/O (cache miss), then you need to wait a good few milliseconds.

Here are your options to find out whether your file cache is actually being used, or is just sitting there allocated, giving you a false feeling that your system is running faster thanks to the used cache memory:

  1. (hard) In order to actually know the Cache Hit/Miss ratios for block devices, you’ll need to dig deep into the kernel, as I already explained in the “Is there a way to get Cache Hit/Miss ratios for block devices in Linux” article.
  2. (easy) You can clear the Cached/Buffers memory regularly, see how fast and how much the cache memory grows back, and draw some conclusions about the actual Cache Hit/Miss ratios.

The latter is not a perfect solution, but it still gives you a better idea of your file-system cache usage than just watching the total memory used for Cached/Buffers and never actually knowing whether it is used/accessed at all.

Therefore, you can run the following every hour in a cron job:

sync ; echo 3 > /proc/sys/vm/drop_caches
sync ; echo 0 > /proc/sys/vm/drop_caches

The commands are safe (see the reference for “drop_caches“), and you won’t lose any data; only your caches will be dropped. The disadvantage of this approach is that if the caches were indeed very actively used, Linux will need to read the data back from the disk.
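
For completeness, here is how such an hourly cron job could look as an /etc/cron.d entry (the file name is hypothetical; adjust the schedule to your needs):

# /etc/cron.d/drop-caches
# drop the page cache, dentries and inodes every hour, on the hour
0 * * * * root sync; echo 3 > /proc/sys/vm/drop_caches; sync; echo 0 > /proc/sys/vm/drop_caches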

A real-world example

Here is how the Cached/Buffers graph of one of my servers looks over the following few days:

[Chart: Linux memory usage over several days, showing the Cached/Buffers memory]

Pay attention to the marked points of interest. Here is the explanation, and my motivation for writing this article:

  1. (Point A) The beginning of the graph shows my system right after it booted, when I did some small administrative tasks on it. After that, one script runs regularly on the machine, and as we can see, it doesn’t use much file-system cache, as it doesn’t do many file operations.
  2. Then every day at 06:25 the “/etc/cron.daily” scripts are run, and some of them read all files on the file-system. Such a script is the updatedb cron job. Because of the heavy disk activity, the Buffers/Cache usage reaches its maximum, as all possible files and meta data get cached in memory.
  3. (see the “Updatedb cron” markers on the graph) After one hour the scripts finish, and no significant disk activity happens on the system any more.
  4. (Point B) But the Cached/Buffers usage never drops. The file cache doesn’t seem to expire, and therefore gives us the false feeling that it is being used by our system all the time, thus making it faster. But it isn’t!
  5. On Saturday at 15:08 the Cached/Buffers cache was cleared manually with the commands I provided above, which I also installed as an hourly cron job.
  6. Right after the cache was cleared, we see the sad reality – the Cached/Buffers cache was filled with data that nobody needed or accessed, and the high Cached/Buffers memory usage actually didn’t speed up our system. I grieved for a while and accepted the reality, and also understood why so many I/O requests are issued to my EBS storage, even though the cache was so huge.
  7. (Point C) That’s how the actual daily usage pattern of this machine looks. The Cached/Buffers memory cache is heavily underused on my system, as it doesn’t do much I/O work. This wouldn’t have been visible if I didn’t clear the cache every hour.
