Private networking per-process in Linux

This is a follow-up to the article “Private /tmp mount per-process in Linux”. As already stated there, Linux namespaces offer great options for security.

In this article we will demonstrate the use of the “network” namespace, which enables a process to have independent IPv4 and IPv6 stacks, network interfaces, IP routing tables, iptables firewall rules, the /proc/net and /sys/class/net directory trees, sockets, etc.

Here is a diagram to illustrate the concept:
[Diagram: Linux network namespace]

First we start by creating a pair of “veth” network interfaces:

ip link add v-eth1 type veth peer name v-peer1
ip link set v-eth1 up
ip link set v-peer1 up

One of those interfaces will be used as the communication endpoint on the side of the original, default network namespace. We will assign it the IP address “10.200.1.1”:

ifconfig v-eth1 10.200.1.1 netmask 255.255.255.0 up

It is time to enter the new network namespace. Once we have created it, we will move the second interface “v-peer1” into it, then configure the IP address “10.200.1.2” and add a default route through the first interface, which will act as a router:

export MAIN_NS_PID="$$"
unshare -n /bin/bash

#
# We are in a "/bin/bash" session in the NEW network namespace now.
#

ip link set lo up # activate the "loopback" interface

nsenter --net="/proc/$MAIN_NS_PID/ns/net" ip link set v-peer1 netns "$$" # join "v-peer1" into this namespace
ifconfig v-peer1 10.200.1.2 netmask 255.255.255.0 up
route add default gw 10.200.1.1 dev v-peer1
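
# optionally verify that we can reach the other end of the "veth" pair
ping -c 1 10.200.1.1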

#
# Setup is done.
# You can now drop privileges and launch a daemon which will use this confined network namespace.
#

sudo -u www-data /etc/init.d/my-net-daemon start

The original default namespace (our main Linux installation) must be configured to act as a router. Otherwise, the processes inside the new network namespace won’t have any Internet access. Configuring a Linux network router is a straightforward task:

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -P FORWARD DROP
iptables -F FORWARD

iptables -t nat -F
iptables -t nat -A POSTROUTING -s 10.200.1.0/255.255.255.0 -o eth0 -j MASQUERADE

iptables -A FORWARD -i eth0 -o v-eth1 -j ACCEPT
iptables -A FORWARD -o eth0 -i v-eth1 -j ACCEPT
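
Note that the “ip_forward” switch above is not persistent across reboots. A minimal sketch to make it permanent, assuming a Debian-like system where “/etc/sysctl.conf” is read at boot:

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p # re-read and apply the settings immediately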

Finally, you can enable inbound connections to the processes in the confined new network namespace. Let’s assume that you have a daemon listening on TCP port 10105. Here is how you can forward any new incoming connections to the processes inside the new network namespace:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10105 -j DNAT --to-destination 10.200.1.2
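
To quickly test the forwarding rule from another machine, assuming the daemon inside the namespace is already listening on TCP port 10105 and “192.0.2.10” is a hypothetical public IP address of our Linux host:

nc -v 192.0.2.10 10105 # should report an established connection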

Pros: Using separate network namespaces gives us full network isolation and control over a group of processes. Additionally, we can match incoming packets against a process, which is not possible in a standard “iptables” setup, because the “-m owner” match extension works only for locally generated, outgoing packets. These are huge security benefits.

Cons: The technical implication is that the Linux host has to do (a lot) more work because of the DNAT/SNAT operations and their related connection tracking overhead. If you are running a high-traffic server, you should plan and test accordingly. Furthermore, one additional pair of network interfaces is created for each new network namespace. Linux can handle hundreds of network devices, but this is still a factor to consider.

The better security features outweigh the drawbacks in most use-cases though. Last but not least, it is very easy to run a process with a completely detached network, and this costs us nothing on the Linux host.
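
Here is a minimal sketch of the latter; the shell and all its children see only an inactive loopback interface and therefore have no network access at all:

unshare -n /bin/bash
ip link show # lists only "lo", which is DOWN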

Private /tmp mount per-process in Linux

I’ve been playing with Linux namespaces and the results are very satisfying. This process isolation has several benefits:

  • The setup is automatically destroyed when the process and its children exit — easy maintenance.
  • Non-privileged processes cannot alter the setup — great security.
  • The isolated resource type is completely invisible by processes in other namespaces — great security.
  • The setup is inherited by any forked children — great for security and maintenance.

If you review the man page of the “unshare” command or syscall, you will see that currently we can have the following private namespaces:

  • mount namespace — mounting and unmounting filesystems will not affect the rest of the system, except for filesystems which are explicitly marked as shared
  • UTS namespace — setting the hostname or domainname will not affect the rest of the system (see the sketch after this list)
  • IPC namespace — the process will have independent namespace for System V message queues, semaphore sets and shared memory segments
  • network namespace — the process will have independent IPv4 and IPv6 stacks, IP routing tables, iptables firewall rules, the /proc/net and /sys/class/net directory trees, sockets, etc.
  • pid namespace (new) — children will have a distinct set of PID to process mappings from their parent
  • user namespace (new) — the process will have a distinct set of UIDs, GIDs and capabilities
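
As a quick illustration of how cheap these namespaces are to use, here is a minimal sketch with the UTS namespace from the list above; the hostname change is visible only inside the unshare’d session (run as root):

unshare --uts -- /bin/bash -c 'hostname sandboxed; hostname' # prints "sandboxed"
hostname # back in the original namespace; still prints the old hostname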

In this article we will demonstrate the use of the “mount” namespace, which lets us mount a filesystem per-process without affecting the rest of the system. Using such a private mount for “/tmp” has mainly security benefits, but usability ones too.

Here are all the commands which you need, in order to start a process with a private “/tmp” directory:

TARGET_USER='www-data'
TARGET_CMD='/bin/bash' # but it can be any command
NEWTMP="$(mktemp -d)" # securely create a new empty tmp folder

chown "root:$TARGET_USER" "$NEWTMP"
chmod 770 "$NEWTMP"

unshare --mount -- /bin/bash -c "mount -o bind,noexec,nosuid,nodev '$NEWTMP' /tmp && sudo -u '$TARGET_USER' $TARGET_CMD"

A longer version with more explanations follows:

#
# setup operations done as "root"
#

root@vbox:~# TARGET_USER='www-data'
root@vbox:~# TARGET_CMD='/bin/bash' # but it can be any command
root@vbox:~# NEWTMP="$(mktemp -d)" # securely create a new empty tmp folder
root@vbox:~# chown "root:$TARGET_USER" "$NEWTMP"
root@vbox:~# chmod 770 "$NEWTMP"

#
# review the result in the real file-system "/tmp"
#

root@vbox:~# echo $NEWTMP
/tmp/tmp.IyoUhputAW

root@vbox:~# ls -la /tmp
total 60
drwxrwxrwt 12 root   root     12288 Jun  4 13:53 .
drwxr-xr-x 23 root   root      4096 Jan 24 15:31 ..
drwxrwxrwt  2 root   root      4096 Jun  1 22:54 .ICE-unix
drwxrwx---  2 root   www-data  4096 Jun  4 13:53 tmp.IyoUhputAW

root@vbox:~# ls -la "$NEWTMP"
total 16
drwxrwx---  2 root www-data  4096 Jun  4 13:53 .
drwxrwxrwt 12 root root     12288 Jun  4 13:53 ..

#
# start the non-privileged process with a private "/tmp" mount
#

root@vbox:~# unshare --mount -- /bin/bash -c "mount -o bind,noexec,nosuid,nodev '$NEWTMP' /tmp && sudo -u '$TARGET_USER' $TARGET_CMD"

#
# sample operations done inside the non-privileged process
#

www-data@vbox:~$ ls -la / | grep tmp
drwxrwx---   2 root www-data  4096 Jun  4 13:53 tmp

www-data@vbox:~$ touch /tmp/test-www-data-file

www-data@vbox:~$ ls -la /tmp # the process has a private "/tmp" mount
total 8
drwxrwx---  2 root     www-data 4096 Jun  4 13:55 .
drwxr-xr-x 23 root     root     4096 Jan 24 15:31 ..
-rw-r--r--  1 www-data www-data    0 Jun  4 13:55 test-www-data-file

#
# see the result in the real file-system "/tmp"
#

root@vbox:~# ls -la /tmp
total 60
drwxrwxrwt 12 root   root     12288 Jun  4 13:53 .
drwxr-xr-x 23 root   root      4096 Jan 24 15:31 ..
drwxrwxrwt  2 root   root      4096 Jun  1 22:54 .ICE-unix
drwxrwx---  2 root   www-data  4096 Jun  4 13:55 tmp.IyoUhputAW

root@vbox:~# echo "$NEWTMP"
/tmp/tmp.IyoUhputAW

root@vbox:~# ls -la "$NEWTMP"
total 16
drwxrwx---  2 root     www-data  4096 Jun  4 13:55 .
drwxrwxrwt 12 root     root     12288 Jun  4 13:53 ..
-rw-r--r--  1 www-data www-data     0 Jun  4 13:55 test-www-data-file

Note that we are mounting a directory inside another directory using the “bind” mount feature of Linux.
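
In its generic form, such a bind mount simply makes the contents of one directory visible at a second location as well:

mkdir /mnt/tmp-view
mount --bind /tmp /mnt/tmp-view # "/mnt/tmp-view" now shows the contents of "/tmp"
umount /mnt/tmp-view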

ProFTPD inheritance of “.ftpaccess” files

ProFTPD and Apache have a lot in common in their concept for inheriting per-directory settings files: they use “.ftpaccess” and “.htaccess” files, respectively.

There is one substantial difference though. ProFTPD does not always traverse from the current directory up to the very root “/” directory, because ProFTPD calls chroot() when, for example, the “DefaultRoot” directive is set to “~”. Many distributions set this directive to “~” by default.
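
For reference, here is how a minimal “proftpd.conf” excerpt with this directive could look:

# chroot() each user into their home directory upon login
DefaultRoot ~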

Let’s have an example:

  • “/home/$user/.ftpaccess” — a “.ftpaccess” file EXISTS
  • “/home/$user/private/files/” — no “.ftpaccess” file

Let’s also assume that an FTP user has “/home/$user/private/files” as their home directory in the “/etc/passwd” file. When this user logs in, ProFTPD will first chroot() to the user’s home directory. This effectively makes the per-session root directory “/” start at “/home/$user/private/files”. Therefore, the ProFTPD process will have access to the following directories:

  • “/home/$user/private/files/and-any-subdirectories” — accessible as “/and-any-subdirectories”
  • “/home/$user/private/files/” — accessible as “/”
  • “/home/$user/private/” — not accessible
  • “/home/$user/” — not accessible (“/home/$user/.ftpaccess” — EXISTS but not accessible)
  • “/home/” — not accessible
  • “/” — not accessible

As a result, even though the file “/home/$user/.ftpaccess” exists, it won’t take effect for an FTP user whose home directory is “/home/$user/private/files/”.

An strace excerpt follows which shows the exact steps ProFTPD performs when it logs in an FTP user with the “DefaultRoot” setting configured to “~”:

root@mysrv:~# getent passwd nobody
nobody:x:99:99:Nobody:/:/sbin/nologin
root@mysrv:~# getent group 99
nobody:x:99:

root@mysrv:~# getent passwd ftpuser1
ftpuser1:x:5178:4388::/home/ftpuser1/private/public_upload:/dev/null
root@mysrv:~# getent group 4388
ftpuser1:x:4388:

root@mysrv:~# strace -tt -f -p 3442
# the main ProFTPD process (PID=3442) accepts the new connection and forks a child process to handle it
3442  14:36:58.767425 accept(2, 0, NULL) = 7
3442  14:36:58.768078 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xec66f728) = 9389

# the ProFTPD child (PID=9389) becomes "root" temporarily
9389  14:36:58.771897 setresuid32(-1, 0, -1) = 0
9389  14:36:58.772026 setresgid32(-1, 0, -1) = 0

# open "syslog"
# read server's SSL CA, public and private files
# open "/var/log/proftpd/extended.log"
# manage the "/var/run/proftpd.scoreboard" file

# child becomes "nobody"
9389  14:36:58.785372 setresgid32(-1, 99, -1) = 0
9389  14:36:58.785421 setresuid32(-1, 99, -1) = 0

# log the connection and send a greeting message to the connected FTP client
# FTP client sends login information
9389  14:37:00.964572 read(0, "USER ftpuser1\r\n", 4102) = 14
9389  14:37:02.178714 read(0, "PASS SOME-PASS-INPUT\r\n", 4102) = 20

# the ProFTPD child becomes "root" temporarily
9389  14:37:02.192030 setresuid32(-1, 0, -1) = 0
9389  14:37:02.192090 setresgid32(-1, 0, -1) = 0

# open "/etc/shadow" to compare the supplied username and password
# open "/etc/shells" (temporarily switched UID/GID back to "nobody")
# open "/etc/ftpusers"
# open "/var/log/wtmp" to add a login entry there
# open "/var/log/xferlog"

# set any additional user "ftpuser1" groups
9389  14:37:02.201134 setgroups32(1, [4388]) = 0
9389  14:37:02.201301 setgid32(4388)    = 0

# chroot() to user's home directory (we are still with "root" privileges)
9389  14:37:02.208208 chroot("/home/ftpuser1/private/public_upload") = 0

# drop privileges to user's UID/GID
9389  14:37:02.209190 setgid32(4388)    = 0
9389  14:37:02.209253 setresuid32(-1, 5178, -1) = 0

# check for any ".ftpaccess" files
9389  14:37:02.209841 stat64("/.ftpaccess", 0xf640ddd0) = -1 ENOENT (No such file or directory)

Enable Forward Secrecy in Apache 2.4

The Heartbleed bug once again raised a very serious question — what happens if someone steals our SSL private key? The future actions are all clear — we revoke the key, change it on the server, and we are secure again. However, any recorded past SSL-encrypted sessions which the attacker possesses can be decrypted to plain text using the stolen SSL private key.

The article “SSL: Intercepted today, decrypted tomorrow” explains the problem in great detail. It also suggests who may take advantage of your stolen SSL private key.

Fortunately, there is a solution to this attack vector. It is called Forward Secrecy and solves the problem by using a different, ephemeral key for each new SSL session. If an attacker wanted to decrypt all your SSL sessions, they would need to brute-force the key of each individual session. While this attack vector still exists, current computing power is too small to solve such a task in a reasonable time. Note that Forward Secrecy is not new at all — it was invented in 1992, pre-dating the SSL protocol by two years, as stated in the “SSL: Intercepted today, decrypted tomorrow” article.

Many websites and blogs recommend their own cipher list which you need to use, in order to enable Forward Secrecy while not being vulnerable to the BEAST attack. Trusting such long lists of ciphers without understanding all of them makes me feel a bit insecure. Luckily, the OpenSSL vendors have created a special string which we can use to identify the TLS v1.2 specific ciphers — “TLSv1.2”. This makes your Apache configuration very readable and leaves the control with the OpenSSL developers, who can update the cipher list accordingly:

SSLProtocol -ALL -SSLv2 +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
SSLHonorCipherOrder on
SSLCipherSuite TLSv1.2:RC4:HIGH:!aNULL:!eNULL:!MD5
SSLCompression off

TraceEnable Off

The effect of this configuration is that for TLS v1.2 connections the server will prefer the strong TLS v1.2 ciphers. For all other connections, like SSLv3 or TLS v1.0, the RC4 ciphers will be used, which are required to protect against the BEAST attack. Note that TLS v1.2 does not suffer from the BEAST attack.
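
If you are curious which ciphers the “TLSv1.2” string actually selects, you can expand the whole cipher specification with your local OpenSSL binary (the output depends on the installed OpenSSL version):

openssl ciphers -v 'TLSv1.2:RC4:HIGH:!aNULL:!eNULL:!MD5'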

This configuration won’t give you Forward Secrecy for Internet Explorer. The explanation here says:
“Internet Explorer, in all versions, does not support the ECDHE and RC4 combination (which has the benefit of supporting Forward Secrecy and being resistant to BEAST). But IE has long patched the BEAST vulnerability and so we shouldn’t worry about it.”

It’s worth mentioning that using those strong TLS v1.2 ciphers may increase the CPU time required to establish a new SSL connection by a factor of three.



Cron job custom timezone

The default cron job daemon on Debian and Ubuntu does not support per-user timezones (see the crontab(5) man page).

Here is a solution which runs hourly cron tasks in the timezone you specify. For example, if you want to run “test.sh” at 5 AM and 7 PM in the Europe/Sofia timezone, you need to create the following “cron.hourly” script:

#!/bin/bash
/path/to/run-at.sh 5,19 Europe/Sofia $(( 15*60 )) /var/run/run-at-test.state /path/to/test.sh
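
Before the first run, pre-create the state file and make the scripts executable; otherwise, as the usage text below explains, you will get a warning (the paths are the hypothetical ones from the example above):

touch /var/run/run-at-test.state # the STATE_FILE must pre-exist
chmod +x /path/to/run-at.sh /etc/cron.hourly/your-script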

Here is the source code of “run-at.sh”:

#!/bin/bash
set -u

# XXX
# We work exclusively with global variables.
# Functions are used just to separate logic and for self-documenting.

function display_usage() {
	if [ "$1" -ge 5 ]; then
		return; # enough parameters
	fi

	cat >&2 <<EOF
Usage: $0 HOURS TZ WARN_TIME STATE_FILE COMMAND [ARGS...]
Runs COMMAND every day at the specified HOURS.

The execution is accounted and considered successful
only if COMMAND exits with 0. If there was no (successful)
execution within the WARN_TIME hours specified period,
a warning is being issued.

The STATE_FILE must pre-exist, so make sure that you
create it before the first run, or you will get a
warning.

This script is intended to be run in "cron.hourly".

Arguments:
 - HOURS -- comma-separated list; example: 5,12,19
 - TZ -- time zone; example: Europe/Sofia
 - WARN_TIME -- minutes; example: 360
 - STATE_FILE -- full path to a writable file
 - COMMAND and ARGS to be executed on HOURS

Example:
 $0 5,19 Europe/Sofia \\
   \$(( 15*60 )) /root/.run-at-test.state \\
   date -d now +%c
EOF
	exit 1
}

function check_state_file_oldness() {
	# find file with mtime less than $WARN_TIME minutes
	if [ "$(find "$STATE_FILE" -type f -mmin -"$WARN_TIME")" != "$STATE_FILE" ]; then
		# file not found -> this will be indicated by 'find' on STDERR too
		# file is too old ("-mmin" condition not met)
		echo "WARNING: No successful run in the last $WARN_TIME minutes" >&2
	fi
}

function split_hours() {
	HOURS="${HOURS//,/ }" # replace "," with " "
	# http://blog.famzah.net/2013/02/17/
	#   bash-split-a-string-into-columns-by-white-space-without-invoking-sub-shells/
	HOURS=( $HOURS ) # now an ARRAY
}

function validate_tz() {
	# naive check; tested on Debian
	if [ ! -e "/usr/share/zoneinfo/$WANT_TZ" ]; then
		echo "ERROR: TZ seems to be invalid." >&2
		exit 1
	fi
}

function get_now_hour_in_tz() {
	NOW_HOUR="$(TZ="$WANT_TZ" date +%H)" # get current hour in 24h-format using the $WANT_TZ
}

function check_if_we_should_run_or_exit() {
	RUN=0
	for h in "${HOURS[@]}" ; do
		if [ "$h" -eq "$NOW_HOUR" ]; then
			RUN=1
			break
		fi
	done

	if [ "$RUN" -eq 0 ]; then
		exit 0
	fi
}

function execute_command_and_get_exit_code() {
	"$@" # execute the command
	EC="$?"
}

function update_state_file_mtime() {
	if [ "$EC" -eq 0 ]; then
		touch "$STATE_FILE"
	fi
}

#### ### ### ###

display_usage "$#"

# parse_argv

HOURS="$1" ; shift
WANT_TZ="$1" ; shift
WARN_TIME="$1" ; shift
STATE_FILE="$1" ; shift
# the rest in "$@" is the command to be executed

check_state_file_oldness

split_hours
validate_tz
get_now_hour_in_tz
check_if_we_should_run_or_exit

execute_command_and_get_exit_code "$@"
update_state_file_mtime

Using flock() in Bash without invoking a subshell

The flock(1) utility on Linux manages flock(2) advisory locks from within shell scripts or the command line. This lets you synchronize your Bash scripts with all your other applications written in Perl, Python, C, etc.

I’ll focus on the third usage form where flock() is used inside a Bash script. Here is what the man page suggests:

#!/bin/bash

(
flock -s 200

# ... commands executed under lock ...

) 200>/var/lock/mylockfile

Unfortunately, this invokes a subshell which has the following drawbacks:

  • You cannot pass values from the subshell back to variables in the main shell script.
  • There is a performance penalty.
  • The syntax coloring in “vim” does not work properly. :)

This motivated my colleague zImage to come up with a usage form which does not invoke a subshell in Bash:

#!/bin/bash

exec 200>/var/lock/mylockfile || exit 1
flock -n 200 || { echo "ERROR: flock() failed." >&2; exit 1; }

# ... commands executed under lock ...

flock -u 200
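
A small sketch to demonstrate the main benefit over the subshell version: variables assigned while the lock is held remain visible in the rest of the script, because no subshell is involved:

#!/bin/bash

exec 200>/var/lock/mylockfile || exit 1
flock -n 200 || { echo "ERROR: flock() failed." >&2; exit 1; }

RESULT='value computed under lock' # a plain assignment in the main shell

flock -u 200

echo "$RESULT" # the value is still accessible here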

Nagios: Improve CPU performance with popen_noshell()

Today I’ll share my real-world experience with popen_noshell() on the Nagios monitoring server which we run at work. We are actively monitoring 1166 hosts and 14250 services. The machine has 6 GB RAM and a single Intel Core i7-950 CPU with multi-threading enabled (8 total threads) and a slight overclock. Besides running Nagios, this machine also handles the incoming data from our custom monitoring systems, processes the RRD database storage, and generates the web interface status + charts output. So it’s a pretty busy machine which does a lot of network activity and where the Nagios daemon is just a part of the CPU load. For example, since boot, the main “nagios3” process has used only 20% of the CPU. The rest has been used by the fork()’ed Perl scripts (we use a lot of them for the active checks), the Nagios standard network checks, and the Apache/PHP web server handling the incoming data.

Recently the machine started to exhaust its CPU resources. First we overclocked it a bit which gave us 10% more CPU idle time. Then we decided to try to compile Nagios with the popen-noshell library. This gave us another 10% CPU idle and now the machine is working great again.

I’ll focus on the popen-noshell integration and results, since CPU overclocking is a well-known topic. Here is the chart which shows the CPU usage before and after we re-compiled Nagios with the popen-noshell library:

[Chart: Nagios popen-noshell benchmark results]

As we can see, the system CPU usage dropped from 38% to 31%, which is an 18% improvement. The user CPU usage dropped from 44% to 41%, which is a 7% improvement. Overall, we gained a 12% speed-up for our workload by just re-compiling Nagios with the popen-noshell library. I must stress that the speed-up depends a lot on your workload. If this machine were busy only with Nagios, and the active checks were more CPU-efficient (i.e. written in C rather than Perl), then the speed-up could have been much higher, since popen_noshell() is about 10 times faster than the standard popen().

A list with the other machine metrics which were also affected by the workload change:

  • Used memory: 39% => 24% (38% less)
  • Load average: 39 => 46 (18% higher)
  • Fork rate: 8*61 => 8*61 (created processes/second – no change)

Here are the steps that you need to perform, in order to re-compile the Nagios Debian package by integrating it with the popen-noshell library:

apt-get install devscripts

apt-get build-dep nagios3-core
# No need to run as "root" from here on
apt-get source nagios3-core

svn checkout http://popen-noshell.googlecode.com/svn/trunk/ popen-noshell

cd nagios3-3.2.1/

# BEGIN: patch Nagios to use popen_noshell_compat()

cp ../popen-noshell/popen_noshell.* base/
vi base/Makefile.in
	OBJS=$(BROKER_O) popen_noshell.o 

vi base/utils.c
	#include "popen_noshell.h"
	
        /* run the command */
        struct popen_noshell_pass_to_pclose pclose_arg;
        fp=(FILE *)popen_noshell_compat(cmd,"r",&pclose_arg);

            /* close the command and get termination status */
            status=pclose_noshell(&pclose_arg);

vi base/checks.c
	2x the same as above

# END: patch Nagios to use popen_noshell_compat()

EDITOR=vim dch -i
	# 3.2.1-2+squeeze1 -> 3.2.1-2+squeeze1-noshell1
	# you must have a trailing number in the added version name
	# after exit, this renames the original directory name

cd ..
mv nagios3_3.2.1.orig.tar.gz nagios3_3.2.1-2+squeeze1.orig.tar.gz

# the source directory was renamed by "dch"
cd nagios3-3.2.1-2+squeeze1/
DEB_BUILD_OPTIONS=nocheck debuild -us -uc

cd ..
sudo dpkg -i nagios3-core_3.2.1-2+squeeze1-noshell1_i386.deb \
	nagios3-common_3.2.1-2+squeeze1-noshell1_all.deb \
	nagios3-cgi_3.2.1-2+squeeze1-noshell1_i386.deb \
	nagios3-doc_3.2.1-2+squeeze1-noshell1_all.deb \
	nagios3_3.2.1-2+squeeze1-noshell1_i386.deb

An “xargs” alternative to GNU Parallel

I wanted to use GNU Parallel on my Ubuntu system, in order to process some data in parallel. It turned out that there was no official package for Ubuntu at the time. As of Ubuntu Quantal, this has been corrected and the package is now in the official repository.

Reading a bit more brought me to the astonishing fact that “xargs” can run commands in parallel. The “xargs” utility is something I use every day and this parallelism feature made it even more useful.

Let’s try it by running the following:

famzah@vbox:~$ echo 10 20 30 40 50 60 | xargs -n 1 -P 4 sleep

The use of “-n 1” is vital if you want to pass only one command-line argument from the list to each parallel process.

Here is the result:

# right after we launched "xargs"
famzah@vbox:~$ ps f -o pid,command
  PID COMMAND
 5068 /bin/bash
 7007  \_ xargs -n 1 -P 4 sleep
 7008      \_ sleep 10
 7009      \_ sleep 20
 7010      \_ sleep 30
 7011      \_ sleep 40

# 10 seconds later (the first "sleep" has just exited)
famzah@vbox:~$ ps f -o pid,command
  PID COMMAND
 5068 /bin/bash
 7007  \_ xargs -n 1 -P 4 sleep
 7009      \_ sleep 20
 7010      \_ sleep 30
 7011      \_ sleep 40
 7017      \_ sleep 50

# 20 seconds later (the second and third "sleep" commands have exited)
# we now have only 3 simultaneous processes (no more arguments to process)
famzah@vbox:~$ ps f -o pid,command
  PID COMMAND
 5068 /bin/bash
 7007  \_ xargs -n 1 -P 4 sleep
 7011      \_ sleep 40
 7017      \_ sleep 50
 7023      \_ sleep 60

It’s worth mentioning that if “xargs” fails to execute the binary, it prematurely terminates the whole parallel processing queue, which leaves some of the stdin arguments unprocessed:

famzah@vbox:~$ echo 10 20 30 40 50 60 | xargs -n 1 -P 4 badexec-name
xargs: badexec-namexargs: badexec-name: No such file or directory: No such file or directory

xargs: badexec-namexargs: badexec-name: No such file or directory
: No such file or directory

The output is scrambled because all parallel processes write to the screen with no locking synchronization. This seems to be a known issue. The point is that we could expect that “xargs” would try to execute “badexec-name” for every command-line argument (total of six attempts in our example). It turns out that “xargs” bails out the same way even if we don’t use the “-P” option:

# standard usage of "xargs"
famzah@vbox:~$ echo 10 20 30 40 50 60 | xargs -n 1 badexec-name
xargs: badexec-name: No such file or directory

Not a very cool behavior. I’ve reported this as a bug to the GNU community. If you review the responses to the bug report, you will find out that this is actually an intended feature. :)

If the provided command to “xargs” is a valid one but it fails during the execution, there are no surprises and “xargs” continues with the next command-line argument by executing a new command:

famzah@vbox:~$ echo 10 20 30 40 50 60 | xargs -n 1 -P 4 rm
rm: rm: cannot remove `10'cannot remove `40': No such file or directory
: No such file or directory
rm: cannot remove `20': No such file or directory
rm: cannot remove `30': No such file or directory
rm: cannot remove `60': No such file or directory
rm: cannot remove `50': No such file or directory

The output here is scrambled too because all parallel processes write to the screen with no locking synchronization. We see however that all command-line arguments from “10” to “60” were processed by executing a command for each of them.

Bash: Split a string into columns by white-space without invoking a subshell

The classical approach is:

RESULT="$(echo "$LINE"| awk '{print $1}')" # executes in a subshell 

Processing thousands of lines this way, however, fork()s thousands of processes, which affects performance and makes your script CPU-hungry.

Here is the effective solution which I found with my colleagues at work:

COLS=( $LINE ); # parses columns without executing a subshell
RESULT="${COLS[0]}"; # returns first column (0-based indexes)

Here is an example:

LINE="col0 col1  col2     col3  col4      " # white-space including tab chars
COLS=( $LINE ); # parses columns without executing a subshell

echo "${COLS[0]}"; # prints "col0"
echo "${COLS[1]}"; # prints "col1"
echo "${COLS[2]}"; # prints "col2"
echo "${COLS[3]}"; # prints "col3"
echo "${COLS[4]}"; # prints "col4"

If you want to split not by white-space but by any other character, you can temporarily change the IFS variable which determines how Bash recognizes fields and word boundaries.
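
For example, here is a minimal sketch which splits by a comma; the modified IFS is limited to the single “read” command, so the rest of the script is not affected:

LINE="col0,col1,col2"
IFS=',' read -r -a COLS <<< "$LINE" # IFS is changed only for this "read" invocation
echo "${COLS[1]}" # prints "col1"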

Re-compile a Debian kernel as a .deb package

Here is my success story on how to re-compile a Debian/Ubuntu kernel, in order to enable or tune kernel features which are not available as kernel modules:

# Install required software for the kernel compilation
apt-get install fakeroot build-essential devscripts
apt-get build-dep linux-image-$(uname -r) # make sure you have the appropriate "deb-src" in "sources.list"
apt-get install libncurses5-dev # required for "make menuconfig"
apt-get install ccache # to re-compile the kernel faster (http://wiki.debian.org/OverridingDSDT)

# Prepare some environment variables for our architecture, for later use
ARCH=$(uname -r|cut -d- -f3)
CPUCNT=$(( $(cat /proc/cpuinfo |egrep ^processor |wc -l) * 2))

# Get the kernel sources
rm -rf /root/krebuild && mkdir /root/krebuild
cd /root/krebuild
apt-get source linux-image-$(uname -r)
cd linux-$(uname -r|cut -d- -f1|cut -d. -f1-2)* # cd linux-3.2.20

# http://kernel-handbook.alioth.debian.org/ch-common-tasks.html # 4.2.5 Building packages for one flavour
# The target in this command has the general form of target_arch_featureset_flavour. Replace the featureset with none if you do not want any of the extra featuresets.

# Prepare a Debian kernel to compile
fakeroot make -f debian/rules.gen setup_${ARCH}_none_${ARCH} >/dev/null
cd debian/build/build_${ARCH}_none_${ARCH}
make menuconfig # make any kernel config changes now
cd ../../..

# No debug info => faster kernel build
perl -pi -e 's/debug-info:\s+true/debug-info: false/' debian/config/$ARCH/defines
echo binary-arch_${ARCH}_none_${ARCH}
vi debian/rules.gen # find the Make target and change DEBUG and DEBUG_INFO to False/n respectively

# Bugfix: http://lists.debian.org/debian-user/2008/02/msg01455.html
vi debian/bin/buildcheck.py +51 # add "return 0" right after "def __call__(self, out):"

# Compile the kernel
time DEBIAN_KERNEL_USE_CCACHE=true DEBIAN_KERNEL_JOBS=$CPUCNT \
	fakeroot make -j$CPUCNT -f debian/rules.gen binary-arch_${ARCH}_none_${ARCH} > compile-progress.log

# If needed, the linux-headers-version-common binary package (http://kernel-handbook.alioth.debian.org/ch-common-tasks.html -> 4.2.5)
#fakeroot make -j$CPUCNT -f debian/rules.gen binary-arch_${ARCH}_none_real

# Install the newly compiled kernel
cd ..
dpkg -i linux-image-*.deb
#dpkg -i linux-headers-*.deb # only if you need them and/or have them installed already