OpenSSH ciphers performance benchmark (update 2015)

It’s been five years since the last OpenSSH ciphers performance benchmark. There are two fundamentally new things to consider, which also gave me the incentive to redo the tests:

  • Since OpenSSH version 6.7 the default set of ciphers and MACs has been altered to remove unsafe algorithms. In particular, CBC ciphers and arcfour* are disabled by default. This has been adopted in Debian “Jessie”.
  • Modern CPUs have hardware acceleration for AES encryption.

I tested five different platforms with CPUs both with and without AES hardware acceleration, running different OpenSSL versions, and hosted in different environments including dedicated servers, OpenVZ and AWS.
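Whether a given CPU has AES-NI, and whether the installed OpenSSL actually uses it, can be checked quickly before running the benchmark. This is only a small sanity-check sketch, not part of the benchmark itself; the “-evp” variant goes through the EVP interface, which is the one that can use AES-NI:

grep -m1 -o aes /proc/cpuinfo    # prints "aes" if the CPU advertises AES-NI
openssl speed -evp aes-128-cbc   # EVP interface -- uses AES-NI if available
openssl speed aes-128-cbc        # legacy interface -- no AES-NI; compare the two results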

Since the processing power of each platform is different, I had to choose a criterion to normalize the results, in order to be able to compare them. This was a rather tricky decision, and I hope my conclusion is right. I chose to normalize against the “arcfour*”, “blowfish-cbc”, and “3des-cbc” speeds, because I doubt that their implementations have changed over time. They should run equally fast on each platform because they don’t benefit from AES acceleration, nor has anyone bothered to make them faster, since these ciphers have been considered obsolete for a long time.

A summary chart with the results follows:
[Chart: OpenSSH ciphers performance benchmark, 2015]

You can download the raw data as an Excel file. Here is the command which was run on each server:

# uses "/root/tmp/dd.txt" as a temporary file!
for cipher in aes128-cbc aes128-ctr aes128-gcm@openssh.com aes192-cbc aes192-ctr aes256-cbc aes256-ctr aes256-gcm@openssh.com arcfour arcfour128 arcfour256 blowfish-cbc cast128-cbc chacha20-poly1305@openssh.com 3des-cbc ; do
	for i in 1 2 3 ; do
		echo
		echo "Cipher: $cipher (try $i)"
		
		dd if=/dev/zero bs=4M count=1024 2>/root/tmp/dd.txt | pv --size 4G | time -p ssh -c "$cipher" root@localhost 'cat > /dev/null'
		grep -v records /root/tmp/dd.txt
	done
done
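Note that on servers already running OpenSSH 6.7 or newer, the legacy ciphers are disabled by default and the test will fail for them unless they are re-enabled temporarily. A sketch for “/etc/ssh/sshd_config” follows (restart “sshd” afterwards, and revert once the benchmark is done; do not leave this enabled in production):

Ciphers 3des-cbc,blowfish-cbc,cast128-cbc,arcfour,arcfour128,arcfour256,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com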

We can draw the following conclusions:

  • Servers with a newer CPU that has AES hardware acceleration enjoy (1) much faster AES encryption using the recommended OpenSSH ciphers, and (2) some AES ciphers that are now even twice as fast as the old speed champion, “arcfour”. I could get these great speeds only with OpenSSL 1.0.1f or newer, but this may need more testing.
  • Servers with a CPU without AES hardware acceleration still get AES encryption that is twice as fast with the newest OpenSSH 6.7 using OpenSSL 1.0.1k, as tested on Debian “Jessie”. Maybe something was optimized in the library.

Test results may vary (a lot) depending on your hardware platform, Linux kernel, OpenSSH and OpenSSL versions.

“iperf” and “iftop” accuracy

While working on my latest pet project which involved 10 GigE transfers, I noticed a significant difference between the results shown by “iperf” and “iftop“. A fellow blogger also noticed this discrepancy. In order to get to the bottom of this, I did some additional tests using different MTU sizes, and observing the output of “iperf”, “iftop”, “iptraf”, and the raw Linux network device counters as seen by “ifconfig”.

The tests results are summarized in an online spreadsheet: https://goo.gl/MvJC8K
[Spreadsheet preview: iperf vs. iftop vs. iptraf vs. raw stats]

Some notes about each application:

  • iperf – this tool measures TCP performance, as per its documentation; therefore it counts the useful payload in a TCP/IP transfer; this is layer 4 in the OSI model
  • iftop – this tool counts all IP packets, as per its documentation; my tests show that it also operates at layer 4, just like “iperf”, because ARP traffic (which sits below the IP layer) is not counted at all; the fact that “iftop” tracks connections and ports also suggests that it operates at layer 4
  • iptraf – this tool seems too old by now, and its results were off by a factor of 4 to 5
  • ifconfig – shows the most low-level statistics, namely bytes that passed as RX or TX through the network device; the most trusted source of performance data

We notice that both “iperf” and “iftop” measure the useful payload data that we can transfer per second. Since all OSI layers add some overhead, let’s take a look at what theory says about bandwidth efficiency in Ethernet (a derivation sketch follows the list):

  • with a standard MTU frame of 1500 bytes, we get 94.93% efficiency (5.07% overhead)
  • with a jumbo MTU frame of 9000 bytes, we get 99.14% efficiency (0.86% overhead)
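For reference, here is how those percentages are derived, assuming IPv4 and TCP headers without options (20 + 20 = 40 bytes of per-packet overhead inside the frame) and the fixed per-frame Ethernet overhead of 38 bytes (7+1 bytes preamble/SFD + 14 bytes header + 4 bytes FCS + 12 bytes inter-frame gap):

for mtu in 1500 9000; do
	awk -v mtu="$mtu" 'BEGIN { printf "MTU %4d: %.2f%% efficiency\n", mtu, 100 * (mtu - 40) / (mtu + 38) }'
done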

Those numbers correspond very closely with the results shown by “iperf”.

It’s only “iftop” which differs a lot. Analysis of its source code reveals the reason for this and how we must interpret the displayed results:

/* ui.c */

void ui_print() {
...
    mvaddstr(y, COLS - 8 * HISTORY_DIVISIONS - 8, "rates:");

    draw_totals(&totals);
}

void draw_totals(host_pair_line* totals) {
    for(j = 0; j < HISTORY_DIVISIONS; j++) {
        readable_size((totals->sent[j] + totals->recv[j]) , buf, 10, 1024, options.bandwidth_in_bytes);
...
}

/* ui_common.c */

/*
 * Format a data size in human-readable format
 */
void readable_size(float n, char* buf, int bsize, int ksize, int bytes) {
    float size = 1;
...
    while(1) {
      size *= ksize;
...
        snprintf(buf, bsize, " %4.2f%s", n / size, bytes ? unit_bytes[i] : unit_bits[i]);

The authors of “iftop” decided to display rates in Gibibits (multiples of 1024) instead of the more common Gigabits (multiples of 1000). This makes the discrepancy of “iftop” larger as the transfer rate gets higher; at Gigabit rates the difference is about 7%.

Once the “iftop” values are converted from Gibibits to Gigabits, they also match the results of “iperf” and the raw Linux network device counters.
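The conversion itself is trivial. For example (the 9.25 reading is just an illustrative value):

echo '9.25 * 1024^3 / 1000^3' | bc -l   # Gibit/s as displayed by "iftop" -> Gbit/s; prints ~9.93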

Linux md-RAID scalability on a 10 Gigabit network

The question for today is – does Linux md-RAID scale to 10 Gbit/s?

I wanted to build a proof of concept for a scalable, highly available, fault tolerant, distributed block storage, which utilizes commodity hardware, runs on a 10 Gigabit Ethernet network, and uses well-tested open-source technologies. This is a simplified version of Ceph. The only single point of failure in this cluster is the client itself, which is inevitable in any solution.

Here is an overview diagram of the setup:
[Diagram: overview of the md-RAID over iSCSI setup on a 10 Gigabit network]

My test lab is hosted on AWS:

  • 3x “c4.8xlarge” storage servers
    • each of them has 5x 50 GB General Purpose (SSD) EBS volumes attached, which provide up to 160 MiB/s and 3000 IOPS for extended periods of time; practical tests showed 100 MB/s sustained sequential read/write performance per volume
    • each EBS volume is managed via LVM and there is one logical volume with size 15 GB
    • each 15 GB logical volume is being exported by iSCSI to the client machine
  • 1x “c4.8xlarge” client machine
    • the client machine initiates an iSCSI connection to each single 15 GB logical volume, and thus has 15 identical iSCSI block devices (3 storage servers x 5 block devices = 15 block devices)
    • to achieve a 3x replication factor, the block devices from each storage server are grouped into 5x mdadm software RAID-1 (mirror) devices; each RAID-1 device “md1” to “md5” contains three disks, one from each storage server, so that if one or two of the storage servers fail, this won’t affect the operation of the RAID-1 device as a whole
    • all RAID-1 devices “md1” to “md5” are grouped into a single RAID-0 (stripe), in order to utilize the full bandwidth of all devices in a single block device, namely the “md99” RAID-0 device, which also combines the capacity of all “md1” to “md5” devices and equals 75 GB
  • 10 Gigabit network in a VPC using Jumbo frames
  • the storage servers and the client machine were limited on boot to 4 CPUs and 2 GB RAM, in order to minimize the effect of the Linux disk cache
  • only sequential and random reading were benchmarked
  • Linux md RAID-1 (mirror) does not read from all underlying disks by default, so I had to create a RAID-1E (mirror) configuration; more info here and here; the “mdadm create” options follow: --level=10 --raid-devices=3 --layout=o3 (a command sketch follows below)
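For reference, this is roughly how the md devices were assembled on the client machine. The device names are illustrative (they depend on the iSCSI login order), so treat this as a sketch rather than the exact commands:

# one RAID-1E array per group of iSCSI disks -- one disk from each of the three storage servers
mdadm --create /dev/md1 --level=10 --raid-devices=3 --layout=o3 /dev/sdb /dev/sdg /dev/sdl
# ... repeat for /dev/md2 .. /dev/md5 with the remaining disks ...

# stripe the five mirrored arrays into the final 75 GB block device
mdadm --create /dev/md99 --level=0 --raid-devices=5 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5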

The “cp” command may corrupt your files on Debian Wheezy

We recently had two files corrupted on Debian Wheezy (the current “stable” release). The first one contained some garbage instead of the real data, the other contained only zero characters. Only a small part of each file, about 3 KB, was corrupted. This affects both “ext3” and “ext4” file systems.

It turns out to be a free-memory read bug in cp from coreutils-[8.11..8.19], reported to GNU in Oct/2012. It was also reported to Debian almost a year ago, in Apr/2014, with severity “grave”.

Today we test whether the bug has been fixed, using the PoC given in the original GNU bug report:

$ perl -e 'for (1..3333) { sysseek (*STDOUT, 4096, 1)' -e '&& syswrite (*STDOUT, "a" x 1024) or die "$!"}' > j

$ valgrind cp j j2

==13175== Memcheck, a memory error detector
==13175== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==13175== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==13175== Command: cp j j2
==13175== 
==13175== Invalid read of size 4
==13175==    at 0x8051229: ??? (in /bin/cp)
==13175==    by 0x153FFF: ???
==13175==  Address 0x424ed0c is 1,356 bytes inside a block of size 1,440 free'd
==13175==    at 0x40283EE: realloc (vg_replace_malloc.c:632)
==13175==    by 0x805820B: ??? (in /bin/cp)
==13175==    by 0x153FFF: ???

...

==15843== ERROR SUMMARY: 15 errors from 9 contexts (suppressed: 25 from 6)

It turns out that the bug is not fixed in Debian. Unfortunately, upgrading to the “coreutils” package from Jessie, where this bug is not present, is not an option. The “coreutils” package from Jessie depends on a newer “libc6” and furthermore would introduce too many (untested) changes to the core utilities.

Here is how to rebuild the “coreutils” package by applying the “cp” data corruption patch:

root@machine1:~# cowbuilder --login

COW-machine1:~# apt-get update
COW-machine1:~# apt-get upgrade

COW-machine1:~# mkdir /root/coreutils
COW-machine1:~# cd /root/coreutils

COW-machine1:~/coreutils# apt-get source coreutils
COW-machine1:~/coreutils# apt-get build-dep coreutils

COW-machine1:~/coreutils# cd coreutils-8.13
COW-machine1:~/coreutils/coreutils-8.13# wget 'http://git.savannah.gnu.org/cgit/coreutils.git/patch/?id=64aef5fb9afecc023a6e719da161dbbf450908b8' -O cp-avoid_data_corrupting_free_memory_read.patch

COW-machine1:~/coreutils/coreutils-8.13# patch -p1 < cp-avoid_data_corrupting_free_memory_read.patch
COW-machine1:~/coreutils/coreutils-8.13# DEBFULLNAME='Admin Team' DEBEMAIL='box@example.com' dch --local '~patched' 'Local build with cp data corruption patch'
COW-machine1:~/coreutils/coreutils-8.13# dpkg-buildpackage -b -rfakeroot

root@machine1:~# cp /var/cache/pbuilder/build/cow.1385/root/coreutils/coreutils_8.13-3.5~patched1_i386.deb /root/tmp/

Finally, you need to install the “.deb” file on your system and prevent APT from auto-upgrading it. You’d need to recompile it every time Debian “stable” releases a mainstream update for “coreutils”. This doesn’t happen very often. Furthermore, we hope that Debian will react to the bug report and fix the bug in their source tree for Wheezy “stable”.
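A sketch of the install-and-hold step (the “.deb” path is the one produced by the build above):

root@machine1:~# dpkg -i /root/tmp/coreutils_8.13-3.5~patched1_i386.deb
root@machine1:~# echo 'coreutils hold' | dpkg --set-selections
root@machine1:~# dpkg --get-selections coreutils   # should now show "hold"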

MemAvailable metric for Linux kernels before 3.14 in /proc/meminfo

A great new metric has been introduced in “/proc/meminfo” in the Linux 3.14 kernel — MemAvailable:

An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone.

The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.

I recommend that you read the kernel commit description for further details.

Since many people are still using Linux kernels before 3.14, I’ve backported this kernel patch to Perl. You can download the sources from GitHub: https://github.com/famzah/linux-memavailable-procfs
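For the curious, here is roughly the estimate that such an emulation computes, sketched in shell. It follows the userspace approach (summing the per-zone “low” watermarks from “/proc/zoneinfo”) as I understand it, and is not the exact kernel code; all values are in kB:

# sum of the per-zone "low" watermarks, converted from pages to kB
page_kb=$(( $(getconf PAGESIZE) / 1024 ))
wmark_low=$(awk -v p="$page_kb" '$1 == "low" { sum += $2 } END { print sum * p }' /proc/zoneinfo)

memfree=$(awk '$1 == "MemFree:" { print $2 }' /proc/meminfo)
active_file=$(awk '$1 == "Active(file):" { print $2 }' /proc/meminfo)
inactive_file=$(awk '$1 == "Inactive(file):" { print $2 }' /proc/meminfo)
sreclaimable=$(awk '$1 == "SReclaimable:" { print $2 }' /proc/meminfo)

# the page cache and the reclaimable slab are counted as available,
# minus a safety margin of half of each (capped at the low watermark)
pagecache=$(( active_file + inactive_file ))
pagecache=$(( pagecache - (pagecache / 2 < wmark_low ? pagecache / 2 : wmark_low) ))
slab=$(( sreclaimable - (sreclaimable / 2 < wmark_low ? sreclaimable / 2 : wmark_low) ))

echo "MemAvailable (estimate): $(( memfree - wmark_low + pagecache + slab )) kB"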

Many system administrators rely on the “free” tool to get a quick overview of the system’s memory usage. Unfortunately, the latest “procps” package still doesn’t interpret the “MemAvailable” metric, and even if it did, we don’t have it in Linux kernels before 3.14. Actually, the developers of the “procps-ng” package (which is the Debian, Fedora and openSUSE fork of “procps”) have reacted and done the same thing as me. For kernels before 3.14 they emulate the metric in the same way, and for kernels 3.14 and newer they display the native metric from “/proc/meminfo”. This makes my Perl port more or less redundant.

This is the reason I wrote a quick replacement of the “free” tool in Perl. A few examples of it follow.

Typical memory usage overview, in MBytes:

famzah@vbox:~$ ./free.pl -m
             total       used       free  anonymous     kernel     caches     others
Mem:          2488       1228       1259        608         24        580         15
  -/+ caches              648       1840
Swap:         1565          0       1565

Typical memory usage overview, in percentage:

famzah@vbox:~$ ./free.pl -mp
             total       used       free  anonymous     kernel     caches     others
Mem:          2488        49%        51%        24%         1%        23%         1%
  -/+ caches              26%        74%
Swap:         1565         0%       100%

Extended memory usage overview, in MBytes:

famzah@vbox:~$ ./free.pl -me
             total       used       free  anonymous     kernel     caches     others
Mem:          2488       1228       1260        608         24        580         14
  -/+ caches              647       1840
Swap:         1565          0       1565

Extended memory usage info:
  Buffers                  83
  Cached                  785
  SwapCached                0
  Shmem                   308
  AnonPages               300
  Mapped                  122
  Unevict+Mlocked           5
  Dirty+Writeback           0
  NFS+Bounce                0

Extended memory usage overview, in percentage:

famzah@vbox:~$ ./free.pl -mep
             total       used       free  anonymous     kernel     caches     others
Mem:          2488        49%        51%        24%         1%        23%         1%
  -/+ caches              26%        74%
Swap:         1565         0%       100%

Extended memory usage info:
  Buffers                  3%
  Cached                  32%
  SwapCached               0%
  Shmem                   12%
  AnonPages               12%
  Mapped                   5%
  Unevict+Mlocked          0%
  Dirty+Writeback          0%
  NFS+Bounce               0%

Know your Linux memory usage

I’ve given another try to understand the Linux memory usage statistics which are exported by “/proc/meminfo“. The main reason was to know how much memory is still available for free allocation by user applications, and also to understand the memory usage of the different Linux subsystems like the kernel, file system cache, etc.

The whole analysis was done on 649 machines with different workloads. Most of them run a 64-bit Linux kernel 3.2.59, while the others run 64-bit kernels with versions between 3.2.42 and 3.2.60. This is an LTS Linux kernel series.

Cached memory and tmpfs

One of the most astonishing facts I found out is that the “tmpfs” in-memory file system is counted towards the “Cached” value in “/proc/meminfo”. The “tmpfs” file system is commonly mounted at “/dev/shm”. There is a separate stats key, “Shmem”, which shows the amount of memory allocated to “tmpfs” files, but at the same time this memory is added to the value of “Cached”. This is rather confusing, because we’re used to thinking that both “Cached” and “Buffers” are reclaimable memory which can be freed at any time (or very soon) for use by applications or the kernel. That’s the current behavior of the “free” memory statistics tool, which is widely used by system administrators to get an overview of the memory usage on their Linux systems. The “free” tool is part of the “procps” package which, as of today, is still not fixed and shows the “Cached” memory as free.

Here is a little proof:

famzah@vbox:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          2488        965       1523          0         83        517
-/+ buffers/cache:        363       2124
Swap:         1565          0       1565

famzah@vbox:~$ dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=800
838860800 bytes (839 MB) copied, 0.53357 s, 1.6 GB/s

famzah@vbox:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          2488       1765        723          0         83       1317
-/+ buffers/cache:        364       2124
Swap:         1565          0       1565

The “cached” memory got 800 MB bigger. The free “-/+ buffers/cache” memory is still the same amount (2124 MB), which is wrong because the used memory in “tmpfs” cannot be reclaimed nor given to user applications or the kernel.

The reason that “tmpfs” is added to the “cached” memory usage is because “tmpfs” is implemented in the page cache which is accounted in the “cached” stats.
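A quick way to see how much of “Cached” is genuinely reclaimable page cache and how much is “tmpfs”/shared memory (a sketch; values are in kB):

awk '$1 == "Cached:" { c = $2 } $1 == "Shmem:" { s = $2 }
     END { printf "Cached: %d kB, of which tmpfs/Shmem: %d kB, regular page cache: %d kB\n", c, s, c - s }' /proc/meminfo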

Active, Inactive, and Slab

The following is true for all machines that I tested on (a quick verification snippet follows the list):

  • Active = Active(anon) + Active(file)
  • Inactive = Inactive(anon) + Inactive(file)
  • Slab = SReclaimable + SUnreclaim
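Each printed difference below should be 0, or very close to it, since the counters are not sampled atomically (values in kB):

awk -F'[: ]+' '{ m[$1] = $2 } END {
	print "Active - (anon + file):            ", m["Active"]   - m["Active(anon)"]   - m["Active(file)"];
	print "Inactive - (anon + file):          ", m["Inactive"] - m["Inactive(anon)"] - m["Inactive(file)"];
	print "Slab - (SReclaimable + SUnreclaim):", m["Slab"]     - m["SReclaimable"]   - m["SUnreclaim"];
}' /proc/meminfo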

Total memory usage

I tried to sum up the usage of the different fields in “/proc/meminfo” up to the maximum amount of available memory on each machine. The following two formulas both represent the whole memory usage:

  • MemTotal = MemFree + (Buffers + Cached + SwapCached) + AnonPages + (Slab + PageTables + KernelStack)
  • MemTotal = MemFree + (Active + Inactive) + (Slab + PageTables + KernelStack)

Both those formulas can be expanded to include the sub-components as well (a quick live check follows the list):

  • MemTotal = MemFree + (Buffers + Cached + SwapCached) + AnonPages + ((SReclaimable + SUnreclaim) + PageTables + KernelStack)
  • MemTotal = MemFree + ((“Active(anon)” + “Active(file)”) + (“Inactive(anon)” + “Inactive(file)”)) + ((SReclaimable + SUnreclaim) + PageTables + KernelStack)
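Here is a sketch that checks the first formula on a live system; the small residual it prints is expected, as discussed below (values in kB):

awk -F'[: ]+' '{ m[$1] = $2 } END {
	acc = m["MemFree"] + m["Buffers"] + m["Cached"] + m["SwapCached"] + m["AnonPages"] \
	    + m["Slab"] + m["PageTables"] + m["KernelStack"];
	printf "MemTotal: %d kB, accounted: %d kB, residual: %.2f%%\n",
	       m["MemTotal"], acc, 100 * (m["MemTotal"] - acc) / m["MemTotal"];
}' /proc/meminfo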

The formulas give an error of up to -3% on only 6 machines. For the rest of the machines, the error is less than +/- 1.5%. The less total memory a machine has, the bigger the error. I suppose this small error comes from the fact that not all “/proc/meminfo” counters are updated atomically at once.

Statistical keys

The following keys in “/proc/meminfo” seem to be accounted for within other super-set keys, and are therefore only for statistical purposes:

  • Mapped
  • Dirty
  • Writeback
  • Unevictable
  • Shmem — accounted in “Cached”
  • VmallocUsed

Private networking per-process in Linux

This is a follow-up of the Private /tmp mount per-process in Linux. As already stated there, Linux namespaces offer great options for security.

In this article we will demonstrate the use of the “network” namespace which enables a process to have independent IPv4 and IPv6 stacks, network interfaces, IP routing tables, iptables firewall rules, the /proc/net and /sys/class/net directory trees, sockets, etc.

Here is a diagram to illustrate the concept:
[Diagram: Linux network namespace setup]

First we start by creating a pair of “veth” network interfaces:

ip link add v-eth1 type veth peer name v-peer1
ip link set v-eth1 up
ip link set v-peer1 up

One of those interfaces will be used as a communication point from the side of the original, default network namespace. We will assign it the IP address “10.200.1.1”:

ifconfig v-eth1 10.200.1.1 netmask 255.255.255.0 up

It is time to enter the new network namespace. Once we have created the new namespace, we will associate the second interface “v-peer1” with it, then configure the IP address “10.200.1.2” and add a default route through the first interface, which will act as a router:

export MAIN_NS_PID="$$"
unshare -n /bin/bash

#
# We are in a "/bin/bash" session in the NEW network namespace now.
#

ip link set lo up # activate the "loopback" interface

nsenter --net="/proc/$MAIN_NS_PID/ns/net" ip link set v-peer1 netns "$$" # join "v-peer1" into this namespace
ifconfig v-peer1 10.200.1.2 netmask 255.255.255.0 up
route add default gw 10.200.1.1 dev v-peer1

#
# Setup is done.
# You can now drop privileges and launch a daemon which will use this confined network namespace.
#

sudo -u www-data /etc/init.d/my-net-daemon start

The original default namespace, our original Linux installation, must be configured to act as a router. Otherwise the processes inside the new network namespace won’t have any Internet access. Configuring a Linux network router is a straightforward task:

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -P FORWARD DROP
iptables -F FORWARD

iptables -t nat -F
iptables -t nat -A POSTROUTING -s 10.200.1.0/255.255.255.0 -o eth0 -j MASQUERADE

iptables -A FORWARD -i eth0 -o v-eth1 -j ACCEPT
iptables -A FORWARD -o eth0 -i v-eth1 -j ACCEPT

Finally, you can enable inbound connections to the processes in the confined new network namespace. Let’s assume that you have a daemon listening on TCP port 10105. Here is how you can forward any new incoming connections to the processes inside the new network namespace:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10105 -j DNAT --to-destination 10.200.1.2
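A few quick verification commands (the host name is just a placeholder):

# from inside the new namespace:
ping -c 1 10.200.1.1   # reach the "router" end of the veth pair
ping -c 1 8.8.8.8      # reach the Internet through the MASQUERADE rule

# from a remote machine, test the DNAT port forwarding:
nc -vz your-linux-host 10105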

Pros: Using separate network namespaces gives us full network isolation and control over a group of processes. Additionally, we can match incoming packets against a process, which is not possible in a standard “iptables” setup, because the “-m owner” match extension works only for locally generated outgoing packets. These are huge security benefits.

Cons: The technical implication is that the Linux host has to do (a lot) more work because of the DNAT/SNAT operations and their related connection-tracking overhead. If you are running a high-traffic server, you should plan and test accordingly. Furthermore, one additional pair of network interfaces is created for each new network namespace. Linux can handle hundreds of network devices, but this is still a factor to be considered.

The better security features outweigh the drawbacks in most use cases, though. Last but not least, it is very easy to run a process with a completely detached network, and this won’t cost us anything on the Linux host.
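For example, the fully detached case is a one-liner; “my-isolated-daemon” below is just a placeholder for whatever program you want to confine:

unshare -n -- sudo -u www-data /usr/bin/my-isolated-daemon   # only an isolated "lo" interface exists inside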
