
C++ vs. Python vs. Perl vs. PHP performance benchmark (2016)

There are newer benchmarks: C++ vs. Python vs. PHP vs. Java vs. Others performance benchmark (2016 Q3)

The benchmarks here do not try to be complete; they show the performance of the languages in only one aspect: loops, dynamic arrays with numbers, and basic math operations.

This is a redo of the tests done in previous years. You are strongly encouraged to read the additional information about the tests in the article.

Here are the benchmark results:

CPU times are in seconds ("User", "System", "Total"). The "Slower than" columns compare each language to C++ and to the previous (faster) row.

Language                 | User   | System | Total  | Slower than C++ | Slower than previous | Language version | Source code
C++ (optimized with -O2) |  0.952 |  0.172 |  1.124 |        -        |          -           | g++ 5.3.1        | link
Java 8 (non-std lib)     |  1.332 |  0.096 |  1.428 |       27%       |         27%          | 1.8.0_72         | link
Python 2.7 + PyPy        |  1.560 |  0.160 |  1.720 |       53%       |         20%          | PyPy 4.0.1       | link
Javascript (nodejs)      |  1.524 |  0.516 |  2.040 |       81%       |         19%          | 4.2.6            | link
C++ (not optimized)      |  2.988 |  0.168 |  3.156 |      181%       |         55%          | g++ 5.3.1        | link
PHP 7.0                  |  6.524 |  0.184 |  6.708 |      497%       |        113%          | 7.0.2            | link
Java 8                   | 14.616 |  0.908 | 15.524 |     1281%       |        131%          | 1.8.0_72         | link
Python 3.5               | 18.656 |  0.348 | 19.004 |     1591%       |         22%          | 3.5.1            | link
Python 2.7               | 20.776 |  0.336 | 21.112 |     1778%       |         11%          | 2.7.11           | link
Perl                     | 25.044 |  0.236 | 25.280 |     2149%       |         20%          | 5.22.1           | link
PHP 5.6                  | 66.444 |  2.340 | 68.784 |     6020%       |        172%          | 5.6.17           | link

The clear winner among the script languages is… PHP 7. 🙂

Yes, that’s not a mistake. Apparently the PHP team did a great job! The rumor that PHP 7 is really fast is confirmed for this particular benchmark test. You can also review the PHP 7 infographic by the Zend Performance Team.

Brief analysis of the results:

  • NodeJS got almost 2x faster.
  • Java 8 seems almost 2x slower.
  • Python shows no significant change in performance. Every new release is a little bit faster, but overall Python is steadily 15x slower than C++.
  • Perl has the same trend as Python and is steadily 22x slower than C++.
  • PHP 5.x is the slowest, with results between 47x and 60x behind C++.
  • PHP 7 made the big surprise. It is about 10x faster than PHP 5.x, and about 3x faster than Python, which is the next fastest scripting language.

The tests were run on a Debian Linux 64-bit machine.

You can download the source codes, an Excel results sheet, and the benchmark batch script at:
https://github.com/famzah/langs-performance

 



Convert human-readable sizes back to raw numbers

Ever needed to convert lots of lines with 1M or 1G to their raw number representation?

Here is a sample:

$ cat sample
26140   132K   1.9G   1.5G     ?K     0K     8K     0K   5% mysqld
26140   132K   1.9G   1.5G     ?K     4K     8K     0K   5% mysqld
26140   132K   1.9G   1.5G     ?K     0K     0K     0K   5% mysqld
26140   132K   1.9G   1.5G     ?K    -8K     0K     0K   5% mysqld
26140   132K   1.9G   1.6G     ?K     0K    20K     0K   5% mysqld
26140   132K   1.9G   1.6G     ?K     0K    56K     0K   5% mysqld
26140   132K   1.9G   1.7G     ?K    -4K     4K     0K   5% mysqld
26140   132K   1.9G   1.7G     ?K     0K    16K     0K   5% mysqld
26140   132K   1.9G   1.8G     ?K     0K     0K     0K   5% mysqld

The following Perl one-liner comes to the rescue:

perl -Mstrict -Mwarnings -n -e 'my %p=( K=>3, M=>6, G=>9, T=>12); s/(\d+(?:\.\d+)?)([KMGT])/$1*10**$p{$2}/ge; print'

In the end you get:

$ cat sample | perl -Mstrict -Mwarnings -n -e 'my %p=( K=>3, M=>6, G=>9, T=>12); s/(\d+(?:\.\d+)?)([KMGT])/$1*10**$p{$2}/ge; print'
26140   132000   1900000000   1500000000     ?K     0     8000     0   5% mysqld
26140   132000   1900000000   1500000000     ?K     4000     8000     0   5% mysqld
26140   132000   1900000000   1500000000     ?K     0     0     0   5% mysqld
26140   132000   1900000000   1500000000     ?K    -8000     0     0   5% mysqld
26140   132000   1900000000   1600000000     ?K     0    20000     0   5% mysqld
26140   132000   1900000000   1600000000     ?K     0    56000     0   5% mysqld
26140   132000   1900000000   1700000000     ?K    -4000     4000     0   5% mysqld
26140   132000   1900000000   1700000000     ?K     0    16000     0   5% mysqld
26140   132000   1900000000   1800000000     ?K     0     0     0   5% mysqld

You can now paste this output into Excel, for example, in order to create a nice chart from it.
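If you prefer a readable script over a one-liner, here is the same logic expanded and commented. Note that the suffixes are treated as decimal powers of ten (K = 10^3, M = 10^6, and so on), not as 1024-based units:

#!/usr/bin/perl
use strict;
use warnings;

# suffix => power of ten (decimal, not 1024-based)
my %p = (K => 3, M => 6, G => 9, T => 12);

while (my $line = <STDIN>) {
	# replace every "<number><suffix>" occurrence with its raw value;
	# the /e modifier evaluates the replacement as a Perl expression
	$line =~ s/(\d+(?:\.\d+)?)([KMGT])/$1 * 10**$p{$2}/ge;
	print $line;
}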



Properly escape arbitrary data for JavaScript in an HTML page

I’ve encountered different techniques which (try to) solve this problem. Some of them escape only the single/double quotes, others sanitize the input by removing unexpected characters, etc. The solution should, however, be more general and thus bulletproof.

We have no doubts about how to escape arbitrary data which we want displayed in an HTML page. We convert all special characters to HTML entities, and most programming languages have a function for that. In PHP that’s the htmlspecialchars() function. No developer writes their own version by substituting the ampersand character with “&amp;”, for example, and so on.

Why re-invent the wheel when dealing with arbitrary data for JavaScript in an HTML page, then? JavaScript expects data to be escaped as JSON — “Since JSON is a subset of JavaScript, it can be used in the language with no muss or fuss”.

The rules of thumb are:

  • When supplying arbitrary data to JavaScript, encode it as JSON. Let json_encode() put the opening and closing quotes.
  • If the JavaScript code is embedded in HTML code, the whole thing needs to be additionally HTML-escaped (converted to HTML entities).

Enough theory, let’s see the source code:

<?php
	$data = 'Any data, including <html tags>, \'"&;(){}'."\nNewline";
?>
<html>
<body>
	<script>
		// JavaScript not in HTML code, because we are inside a <script> block
		js_var1 = <?=json_encode($data)?>;
	</script>

	The input data is: <?=htmlspecialchars($data)?>
	<br><br>
	<a href="#" onclick="alert(<?=htmlspecialchars(json_encode($data))?>)">
		JavaScript in HTML code; supply data directly.
	</a>
	<br><br>
	<a href="#" onclick="alert(js_var1)">
		JavaScript in HTML code; supply data indirectly by using a JavaScript variable.
	</a>
</body>
</html>

The result seems a bit weird, even like broken HTML, when we supply the data directly inside the HTML code:

<a href="#" onclick="alert(&quot;Any data, including &lt;html tags&gt;, '\&quot;&amp;;(){}\nNewline&quot;)">
	JavaScript in HTML code; supply data directly.
</a>

A side note: Make sure that for PHP you stay in UTF-8, because json_encode() requires this, and htmlspecialchars() also interprets encodings.
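If you want to be explicit instead of relying on the defaults, a minimal sketch could look like this (json_last_error_msg() needs PHP 5.5+; adapt the error handling to your application):

<?php
	// reusing $data from the example above

	// be explicit about the encoding when HTML-escaping
	$html_safe = htmlspecialchars($data, ENT_QUOTES, 'UTF-8');

	// on PHP 5.5+, json_encode() returns false for input which is not valid UTF-8
	$js_safe = json_encode($data);
	if ($js_safe === false) {
		die('json_encode() failed: '.json_last_error_msg());
	}
?>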

I’ll be glad to hear your comments or see an example where this method of escaping fails.



MemAvailable metric for Linux kernels before 3.14 in /proc/meminfo

A great new metric has been introduced in “/proc/meminfo” in the Linux 3.14 kernel — MemAvailable:

An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone.

The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.

I recommend that you read the kernel commit description for further details.

Since many people are still using Linux kernels before 3.14, I’ve backported this kernel patch to Perl. You can download the sources from GitHub: https://github.com/famzah/linux-memavailable-procfs
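For reference, the estimate behind the backport boils down to roughly the following (a simplified sketch in Perl; the real implementation also parses the per-zone “low” watermarks from “/proc/zoneinfo”, which are represented here by a single illustrative number):

#!/usr/bin/perl
# Simplified illustration of the MemAvailable estimate. All values are in kB.
use strict;
use warnings;
use List::Util qw(min);

sub mem_available_kb {
	my ($mi, $wmark_low_kb) = @_;	# $mi is a hash ref with the /proc/meminfo values

	my $available = $mi->{'MemFree'} - $wmark_low_kb;

	# the system needs some page cache to function well; assume that
	# only half of it (but not more than the low watermark) must stay
	my $pagecache = $mi->{'Active(file)'} + $mi->{'Inactive(file)'};
	$pagecache -= min($pagecache / 2, $wmark_low_kb);
	$available += $pagecache;

	# part of the reclaimable slab can be freed, too
	my $slab = $mi->{'SReclaimable'};
	$available += $slab - min($slab / 2, $wmark_low_kb);

	return $available > 0 ? int($available) : 0;
}

my %mi;
open(my $fh, '<', '/proc/meminfo') or die "open: $!";
while (<$fh>) {
	$mi{$1} = $2 if /^(\S+):\s+(\d+)\s+kB/;
}
close($fh);

# 16384 kB is just an illustrative watermark value
printf("MemAvailable (estimate): %d kB\n", mem_available_kb(\%mi, 16384));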

Many system administrators rely on the “free” tool to get a quick overview of the system’s memory usage. Unfortunately, the latest “procps” package still doesn’t interpret the “MemAvailable” metric, and even if it did, we don’t have it in Linux kernels before 3.14. Actually, the developers of the “procps-ng” package (which is the Debian, Fedora and openSUSE fork of “procps”) have reacted and done the same thing as me. For kernels before 3.14 they emulate the metric in the same way, and for kernels 3.14 and newer they display the native metric from “/proc/meminfo”. This makes my Perl port more or less redundant.

This is the reason I wrote a quick replacement for the “free” tool in Perl. A few examples of it follow.

Typical memory usage overview, in MBytes:

famzah@vbox:~$ ./free.pl -m
             total       used       free  anonymous     kernel     caches     others
Mem:          2488       1228       1259        608         24        580         15
  -/+ caches              648       1840
Swap:         1565          0       1565

Typical memory usage overview, in percentage:

famzah@vbox:~$ ./free.pl -mp
             total       used       free  anonymous     kernel     caches     others
Mem:          2488        49%        51%        24%         1%        23%         1%
  -/+ caches              26%        74%
Swap:         1565         0%       100%

Extended memory usage overview, in MBytes:

famzah@vbox:~$ ./free.pl -me
             total       used       free  anonymous     kernel     caches     others
Mem:          2488       1228       1260        608         24        580         14
  -/+ caches              647       1840
Swap:         1565          0       1565

Extended memory usage info:
  Buffers                  83
  Cached                  785
  SwapCached                0
  Shmem                   308
  AnonPages               300
  Mapped                  122
  Unevict+Mlocked           5
  Dirty+Writeback           0
  NFS+Bounce                0

Extended memory usage overview, in percentage:

famzah@vbox:~$ ./free.pl -mep
             total       used       free  anonymous     kernel     caches     others
Mem:          2488        49%        51%        24%         1%        23%         1%
  -/+ caches              26%        74%
Swap:         1565         0%       100%

Extended memory usage info:
  Buffers                  3%
  Cached                  32%
  SwapCached               0%
  Shmem                   12%
  AnonPages               12%
  Mapped                   5%
  Unevict+Mlocked          0%
  Dirty+Writeback          0%
  NFS+Bounce               0%




Google App Engine Datastore benchmark

I admire the idea of Google App Engine — a platform as a service where there is “no worrying about DBAs, servers, sharding, and load balancers” and you can “auto scale to 7 billion requests per day”. I wanted to try the App Engine for a pet project where I had to collect, process and query a huge amount of time series. The fact that I needed to do fast queries over tens of thousands of records, however, made me wonder if the App Engine Datastore would be fast enough. Note that in order to reduce the number of entities which are fetched from the database, multiple data entries are consolidated into a single database entity. This, however, imposes another limitation — fetching big entities uses more memory on the running instance.

My language of choice is Java, because its performance for such computations is great. I am using the Objectify interface (version 4.0rc2), which is also one of the recommended APIs for the Datastore.

Unfortunately, my tests show that the App Engine is not suitable for querying such amounts of data. For example, fetching and updating 1000 entries takes 1.5 seconds and additionally uses a lot of memory on the F1 instance. You can review the Excel sheet file below for more detailed results.

Basically each benchmark test performs the following operations and then exits:

  1. Adds a bunch of entries.
  2. Gets those entries from the database and verifies them.
  3. Updates those entries in the database.
  4. Gets the entries again from the database and verifies them.
  5. Deletes the entries.

All Datastore operations are performed in a batch and thus in an asynchronous, parallel way. Furthermore, no indexes are used; the entities are referenced directly by their key, which is the most efficient way to query the Datastore. The tests were performed on two separate days because I wanted to extend some of the tests. This is indicated in the results. A single warmup request was made before the benchmarks, so that the App Engine could pre-load our application.
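To illustrate, here is a rough sketch of what one such batch round-trip looks like with Objectify. The entity class, the field layout and the counts are illustrative, not the actual benchmark code; the real sources are linked at the end of the post:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

import static com.googlecode.objectify.ObjectifyService.ofy;

@Entity
class DataRow {                 // illustrative entity; must be registered via ObjectifyService.register()
	@Id Long id;
	byte[] payload;             // consolidated data entries, stored as a Blob
}

public class DatastoreBenchmarkSketch {
	public static void runOnce(int count, int entitySize) {
		List<Long> ids = new ArrayList<>();
		List<DataRow> rows = new ArrayList<>();
		for (long i = 1; i <= count; i++) {
			DataRow r = new DataRow();
			r.id = i;
			r.payload = new byte[entitySize];
			rows.add(r);
			ids.add(i);
		}

		// 1. add a bunch of entries in one batch "put"
		ofy().save().entities(rows).now();

		// 2. get them back directly by key (no indexes involved) and verify
		Map<Long, DataRow> fetched = ofy().load().type(DataRow.class).ids(ids);

		// 3. update them in one batch
		for (DataRow r : fetched.values()) {
			r.payload[0] = 1;
		}
		ofy().save().entities(fetched.values()).now();

		// 4. get + verify again (same as step 2), then 5. delete them in one batch
		ofy().load().type(DataRow.class).ids(ids);
		ofy().delete().type(DataRow.class).ids(ids).now();
	}
}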

The first observation is that, using the default F1 instance, once we start fetching more than 100 entities, or once we start to add/update/delete more than 1000 entities, we saturate the Datastore -> Objectify -> Java throughput and don't scale any more:
[Chart: App Engine Datastore median time per entity for 1 KB entities @ F1 instance]

The other interesting observation is that the Datastore -> Objectify -> Java throughput depends a lot on the App Engine instance. That's not a surprising fact, because the application needs to serialize data back and forth when communicating with the Datastore, and this requires CPU power. The following two charts show that more CPU power speeds up all operations where serialization is involved — that is, all Datastore operations except Delete, which only sends the keys of the entities to the Datastore, no data:
[Chart: App Engine Datastore times per entity for 1000 x 1 KB entities @ F1 instance]

[Chart: App Engine Datastore times per entity for 1000 x 1 KB entities @ F4 instance]

Unexpectedly, the App Engine and the Datastore still have good and bad days. Their latency, as well as their CPU accounting, can fluctuate a lot. The following chart shows the benchmark results which we got using an F1 instance. If you compare it to the chart above, where a much more expensive F4 instance was used, you'll notice that the 4-times cheaper F1 instance performed almost as fast as the F4 instance:
[Chart: App Engine Datastore times per entity for 1000 x 1 KB entities @ F1 instance (test on another day)]

The source code and the raw results are available for download at http://www.famzah.net/download/gae-datastore-performance/



C++: Use the right size for the right job

Do you want to speed up your C++ application up to 4 times? Read on.

Recently I happened to be on a summer holiday with a friend who is a C++ guru. We discussed my language performance benchmark post, which had just received a lot of attention with regard to C++. It took him a few hours to demonstrate something really important and often underestimated by developers — that we need to allocate only as much memory as needed, and nothing more.

The C++ algorithm which we use on the benchmark blog page uses an array of “int” (which on Linux happens to be 4 bytes == 32 bits). However, if you review the code you'll notice that the “int” value is used as a pure boolean flag for the given array index. So my friend proposed the following optimized version, which stores each boolean value in a single bit (vector<bool> is a bit-packed specialization) and thus doesn't waste the 31 bits that are left unused when an “int” is used instead:

--- primes.cpp	2013-06-17 17:16:09.000000000 +0300
+++ primes-bool.cpp	2013-06-17 18:43:28.000000000 +0300
@@ -12,9 +12,9 @@
 		res.push_back(2);
 		return res;
 	}
-	vector<int> s;
+	vector<bool> s;
 	for (int i = 3; i < n + 1; i += 2) {
-		s.push_back(i);
+		s.push_back(true);
 	}
 	int mroot = sqrt(n);
 	int half = (int)s.size();
@@ -23,9 +23,9 @@
 	while (m <= mroot) {
 		if (s[i]) {
 			int j = (int)((m*m - 3)/2);
-			s[j] = 0;
+			s[j] = false;
 			while (j < half) {
-				s[j] = 0;
+				s[j] = false;
 				j += m;
 			}
 		}
@@ -33,9 +33,9 @@
 		m = 2*i + 3;
 	}
 	res.push_back(2);
-	for (vector<int>::iterator it = s.begin() ; it < s.end(); ++it) {
-		if (*it) {
-			res.push_back(*it);
+	for (size_t it = 0; it < s.size(); ++it) {
+		if (s[it]) {
+			res.push_back(2*it + 3);
 		}
 	}
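For convenience, here is the optimized version as a self-contained program, reconstructed from the patch above (the function name, the main() driver and the includes are mine; the authoritative sources are linked on the benchmark page):

#include <cstdio>
#include <cmath>
#include <vector>

using namespace std;

// Sieve over the odd numbers only; s[i] represents the number 2*i + 3.
vector<int> get_primes(int n) {
	vector<int> res;
	if (n < 2) return res;
	if (n == 2) { res.push_back(2); return res; }

	vector<bool> s;                          // bit-packed: 1 bit per candidate
	for (int i = 3; i < n + 1; i += 2) {
		s.push_back(true);
	}

	int mroot = sqrt(n);
	int half = (int)s.size();
	int i = 0;
	int m = 3;
	while (m <= mroot) {
		if (s[i]) {
			int j = (int)((m*m - 3) / 2);    // index of m*m
			s[j] = false;
			while (j < half) {
				s[j] = false;
				j += m;                      // step m in index space == step 2*m in value space
			}
		}
		i = i + 1;
		m = 2*i + 3;
	}

	res.push_back(2);
	for (size_t it = 0; it < s.size(); ++it) {
		if (s[it]) {
			res.push_back(2*it + 3);
		}
	}
	return res;
}

int main() {
	vector<int> primes = get_primes(10 * 1000 * 1000);
	printf("Found %zu primes up to 10,000,000\n", primes.size());
	return 0;
}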

The end result is that the original C++ implementation finishes in 18.913 CPU seconds, while the memory-optimized one completes in 5.354 seconds (28% of the original time). Impressive!

The bottom line is that even though RAM is very fast and memory allocation is efficient, using only as much memory as is really needed can vastly improve the performance of your C++ application. The same applies to programs developed in any language.



Using flock() in Bash without invoking a subshell

The flock(1) utility on Linux manages flock(2) advisory locks from within shell scripts or the command line. This lets you synchronize your Bash scripts with all your other applications written in Perl, Python, C, etc.

I’ll focus on the third usage form where flock() is used inside a Bash script. Here is what the man page suggests:

#!/bin/bash

(
flock -s 200

# ... commands executed under lock ...

) 200>/var/lock/mylockfile

Unfortunately, this invokes a subshell which has the following drawbacks:

  • Variable assignments made inside the subshell are not visible in the main shell script.
  • There is a performance penalty.
  • The syntax coloring in “vim” does not work properly. 🙂

This motivated my colleague zImage to come up with a usage form which does not invoke a subshell in Bash:

#!/bin/bash

exec {lock_fd}>/var/lock/mylockfile || exit 1
flock -n "$lock_fd" || { echo "ERROR: flock() failed." >&2; exit 1; }

# ... commands executed under lock ...

flock -u "$lock_fd"

Note that you can skip the “flock -u "$lock_fd"” unlock command if it is at the very end of your script. In that case, your lock file will be unlocked once your process terminates.



Google Reader alternative

Google announced that they are shutting down their online web RSS reader on July 1, 2013. What a shame; it was really useful and had a great web design.

After a bit of research, I decided to code one — for fun and education. It's designed to operate in a multi-user way, so if you want to give it a try, go on!

My online implementation is named “xs RSS reader”, short for extra-simple RSS reader:
http://www.famzah.net/xs-rss-reader/

Here is a sample demo screenshot:
[Screenshot: xs RSS reader demo]



Nagios: Improve CPU performance with popen_noshell()

Today I’ll share my real-world experience with popen_noshell() on the Nagios monitoring server which we run at work. We are actively monitoring 1166 hosts and 14250 services. The machine has 6 GB RAM and a single Intel Core i7-950 CPU with multi-threading enabled (8 total threads) and a slight overclock. Besides running Nagios, this machine also handles the incoming data from our custom monitoring systems, processes the RRD database storage, and generates the web interface status + charts output. So it's a pretty busy machine which does a lot of network activity and where the Nagios daemon is just a part of the CPU load. For example, since boot the main “nagios3” process has used only 20% of the CPU. The other part has been used by the fork()'ed Perl scripts (we use a lot of them for the active checks), the standard Nagios network checks, and the Apache/PHP web server handling the incoming data.

Recently the machine started to exhaust its CPU resources. First we overclocked it a bit, which gave us 10% more CPU idle time. Then we decided to try compiling Nagios with the popen-noshell library. This gave us another 10% of CPU idle time, and now the machine is working great again.

I’ll focus on the popen-noshell integration and results, since CPU overclocking is a well-known topic. Here is the chart which shows the CPU usage before and after we re-compiled Nagios with the popen-noshell library:

[Chart: Nagios CPU usage before and after re-compiling with popen-noshell]

As we can see, the system CPU usage dropped from 38% to 31%, which is an 18% improvement. The user CPU usage dropped from 44% to 41%, which is a 7% improvement. Overall, we gained a 12% speed-up for our workload by just re-compiling Nagios with the popen-noshell library. I must stress that the speed-up depends a lot on your workload. If this machine were busy only with Nagios and the active checks were more CPU efficient (i.e. not written in Perl but in C), then the speed-up could have been much higher, since popen_noshell() is about 10 times faster than the standard popen().
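For context, here is roughly what using the library looks like on its own, with the same popen_noshell_compat() and pclose_noshell() calls that the Nagios patch further below relies on (a minimal sketch; compile and link it against the popen-noshell sources):

#include <stdio.h>
#include "popen_noshell.h"

int main(void) {
	struct popen_noshell_pass_to_pclose pclose_arg;
	char buf[256];

	/* run the command directly, without spawning /bin/sh */
	FILE *fp = popen_noshell_compat("uptime", "r", &pclose_arg);
	if (fp == NULL) {
		perror("popen_noshell_compat");
		return 1;
	}

	while (fgets(buf, sizeof(buf), fp) != NULL) {
		fputs(buf, stdout);
	}

	/* close the stream and get the termination status, like pclose() */
	int status = pclose_noshell(&pclose_arg);
	printf("termination status: %d\n", status);

	return 0;
}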

A list with the other machine metrics which were also affected by the workload change:

  • Used memory: 39% => 24% (38% less)
  • Load average: 39 => 46 (18% higher)
  • Forks rates: 8*61 => 8*61 (created processes/second – no change)

Here are the steps that you need to perform, in order to re-compile the Nagios Debian package by integrating it with the popen-noshell library:

apt-get install devscripts

apt-get build-dep nagios3-core
# No need to run as "root" from here on
apt-get source nagios3-core

svn checkout http://popen-noshell.googlecode.com/svn/trunk/ popen-noshell

cd nagios3-3.2.1/

# BEGIN: patch Nagios to use popen_noshell_compat()

cp ../popen-noshell/popen_noshell.* base/
vi base/Makefile.in
	OBJS=$(BROKER_O) popen_noshell.o 

vi base/utils.c
	#include "popen_noshell.h"
	
        /* run the command */
        struct popen_noshell_pass_to_pclose pclose_arg;
        fp=(FILE *)popen_noshell_compat(cmd,"r",&pclose_arg);

            /* close the command and get termination status */
            status=pclose_noshell(&pclose_arg);

vi base/checks.c
	# do the same substitution as in base/utils.c (popen_noshell_compat() and pclose_noshell()) in the two places where popen()/pclose() are used

# END: patch Nagios to use popen_noshell_compat()

EDITOR=vim dch -i
	# 3.2.1-2+squeeze1 -> 3.2.1-2+squeeze1-noshell1
	# you must have a trailing number in the added version name
	# after exit, this renames the original directory name

cd ..
mv nagios3_3.2.1.orig.tar.gz nagios3_3.2.1-2+squeeze1.orig.tar.gz

# the source directory was renamed by "dch"
cd nagios3-3.2.1-2+squeeze1/
DEB_BUILD_OPTIONS=nocheck debuild -us -uc

cd ..
sudo dpkg -i nagios3-core_3.2.1-2+squeeze1-noshell1_i386.deb \
	nagios3-common_3.2.1-2+squeeze1-noshell1_all.deb \
	nagios3-cgi_3.2.1-2+squeeze1-noshell1_i386.deb \
	nagios3-doc_3.2.1-2+squeeze1-noshell1_all.deb \
	nagios3_3.2.1-2+squeeze1-noshell1_i386.deb



Bash: Split a string into columns by white-space without invoking a subshell

The classical approach is:

RESULT="$(echo "$LINE"| awk '{print $1}')" # executes in a subshell 

Processing thousands of lines this way, however, fork()'s thousands of processes, which affects performance and makes your script CPU hungry.

Here is a more efficient way to do it:

LINE="col0 col1  col2     col3  col4      "
COLS=()

for val in $LINE ; do
        COLS+=("$val")
done

echo "${COLS[0]}"; # prints "col0"
echo "${COLS[1]}"; # prints "col1"
echo "${COLS[2]}"; # prints "col2"
echo "${COLS[3]}"; # prints "col3"
echo "${COLS[4]}"; # prints "col4"

If you want to split not by white-space but by any other character, you can temporarily change the IFS variable, which determines how Bash recognizes fields and word boundaries.
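For example, to split on a colon, still without invoking a subshell (the "read" builtin fills a Bash array):

LINE="col0:col1:col2"

# change IFS only for the duration of the "read" builtin
IFS=':' read -r -a COLS <<< "$LINE"

echo "${COLS[0]}"; # prints "col0"
echo "${COLS[1]}"; # prints "col1"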

P.S. For the record, here is the old solution:

#
# OLD CODE
# Update: Aug/2016: I've encountered a bug in Bash where this splitting doesn't work as expected! Please see the comments below.
#

# Here is the effective solution which I found with my colleagues at work:

COLS=( $LINE ); # parses columns without executing a subshell
RESULT="${COLS[0]}"; # returns first column (0-based indexes)

# Here is an example:

LINE="col0 col1  col2     col3  col4      " # white-space including tab chars
COLS=( $LINE ); # parses columns without executing a subshell

echo "${COLS[0]}"; # prints "col0"
echo "${COLS[1]}"; # prints "col1"
echo "${COLS[2]}"; # prints "col2"
echo "${COLS[3]}"; # prints "col3"
echo "${COLS[4]}"; # prints "col4"