
Enthusiasm never stops



Boot Linux using Windows 7 boot loader

Windows 7 and Linux live together on the same hard disk in perfect harmony. I had Windows 7 installed first, and a few GBytes of free space at the end of the hard drive which I left unpartitioned. Here is how to install Ubuntu:

  1. Download Ubuntu and burn the ISO on a CD.
  2. Boot from the CD, and install it. Make sure that you choose an empty partition, and also make sure that you select to install the boot loader on the Linux partition (example: on “/dev/sda3”, and not on the main MBR “/dev/sda”). If you are not sure which partition is the empty one, see the quick check right after this list.
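
Here is that quick check, run from a terminal on the Ubuntu live CD (a sketch; the device name “/dev/sda” is just an example):

# list the partition table; the unallocated space or the empty partition is the Ubuntu install target
sudo fdisk -l /dev/sda

# show which partitions already carry a file system (the Windows ones will show up as NTFS)
sudo blkid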

At this point you have an Ubuntu installation which you cannot boot yet.

Here is how to configure the Windows 7 boot loader to include Ubuntu in the boot choice menu:

  1. Download EasyBCD and install it. EasyBCD is free for non-commercial use and offers a nice GUI to edit the Windows 7 boot loader menu.
  2. Do the following in EasyBCD — Add New Entry -> Operating Systems -> Linux/BSD:
    • Type: GRUB 2
    • Name: Ubuntu
    • Device: (Automatically configured)
  3. Finally, click on “View Settings” in EasyBCD. You should see something similar to the following:

    Entry #2
    Name: Ubuntu
    BCD ID: {1d486d61-64cc-12a5-7d94-af2f5df01535}
    Drive: C:\
    Bootloader Path: \NST\AutoNeoGrub0.mbr

EasyBCD ships the “stage1” boot loader of GRUB2 (\NST\AutoNeoGrub0.mbr), so you don’t have to do anything else. Just reboot your Windows 7, and the boot menu should present a choice between “Windows 7” and “Ubuntu”.

A note of caution: It is highly recommended that you do a backup of your whole hard disk before you try to install Ubuntu or modify the boot loader options.

P.S. There is no “boot.ini” in Windows 7. You could modify “boot.ini” in Windows XP to achieve the same result, but this does not apply to Windows 7.



Get default outgoing IP address and interface on Linux

Suppose you have one or more network interfaces, and they have one or more assigned IP addresses, also called aliases. If you need to find out which IP address and interface will be used as a default “source” by your Linux box, you need to execute the following:

ip route get 8.8.8.8

This, of course, assumes that 8.8.8.8 is not directly connected on your networks somehow. Since this is one of the Public Name Servers of Google, I think it is safe to assume so.

A sample output of the ip command follows:

8.8.8.8 via 10.0.2.2 dev eth0  src 10.0.2.15
    cache 

The output is pretty much self-explanatory — the route to “8.8.8.8” will originate from device “eth0”, the used source IP address will be “10.0.2.15”, and the next hop, the (default) gateway, will be “10.0.2.2”.

This method is 100% reliable. The man page of “ip” says that “this command gets a single route to a destination and prints its contents exactly as the kernel sees it”.
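
If you need the values in a script, you can extract just the source IP address or the interface name from the same command (a small sketch; it relies on the output format shown above):

# print only the default outgoing (source) IP address
ip -o route get 8.8.8.8 | sed -n 's/.*src \([0-9.]*\).*/\1/p'

# print only the default outgoing interface name
ip -o route get 8.8.8.8 | sed -n 's/.*dev \([^ ]*\).*/\1/p'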



Backup Google Sites automatically

I just found out how to make my Google Sites backup script almost non-interactive, so I decided to share. My usage pattern is that I run the script every month in the Linux console, and then the weekly backup of my hard disk additionally backs up the exported information.

Why bother backing up Google Sites?
While Google is very reliable and will probably never fail me here, I want to have an offline backup of my Google Sites pages in case someone steals my Google Account. So I back up. Online and offline, every week.

The backup script uses the wonderful free Java application “Google Sites Liberation”. My script is actually more like a sample Bash usage of this Java tool. You need to download the .jar file and store it in the same directory as the backup script. The source code follows:

#!/bin/bash
set -e
set -u
set -o pipefail

trap 'echo "ERROR: Abnormal exit." >&2' ERR

# config BEGIN

GUSER='username@gmail.com'
WIKI_LIST='wiki1 wiki2 wiki3'
JAR_BIN='google-sites-liberation-1.0.4.jar'
ROOT_BACKUP_DIR='./sites.google.com'

# config END

echo "We are using '$JAR_BIN'. Check for a newer version:"
echo '	http://code.google.com/p/google-sites-liberation/downloads/list'
read

echo "The directory '$ROOT_BACKUP_DIR' will be deleted!!!"
echo 'Press Enter to confirm.'
read

rm -rf "$ROOT_BACKUP_DIR"
mkdir "$ROOT_BACKUP_DIR"

echo -n "Enter the password for '$GUSER': "
read -s -r -e PASS
echo ; echo

for wiki in $WIKI_LIST ; do
	BACKUP_DIR="$ROOT_BACKUP_DIR/$wiki"
	echo "*** Exporting '$wiki' in '$BACKUP_DIR'..."
	echo "Press Enter to continue."
	read

	mkdir "$BACKUP_DIR"
	java -cp "$JAR_BIN" com.google.sites.liberation.export.Main \
		-w "$wiki" \
		-u "$GUSER" \
		-p "$PASS" \
		-f "$BACKUP_DIR"
	echo
done
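
A typical run then looks like this (a sketch; the script file name is hypothetical, and the .jar must sit in the same directory, as configured above):

chmod +x backup-google-sites.sh   # hypothetical name for the script above
./backup-google-sites.sh          # answer the interactive prompts; the export lands in ./sites.google.com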


Secure chroot() remote file access via SFTP and SSH

If you want to replace the buggy, unencrypted FTP protocol and get rid of the FTP daemon on your system, the SFTP protocol comes to the rescue. Note that SFTP is more than SCP (secure copy): it also provides resuming of interrupted transfers, directory listings, and remote file removal. This makes it more similar to FTPS (FTP over SSL), with the difference that it doesn’t require a separate FTP daemon, because the SSH daemon itself supports SFTP, which simplifies your network setup and lowers maintenance costs.

A standard security feature of FTP servers is that logged-in users are placed in a chroot jail directory, which prevents them from viewing and manipulating any files but their own. Fortunately, the OpenSSH daemon supports chroot() too — see the sshd_config(5) man page.

Now that I’ve convinced you that SFTP is the right way to go for secure file transfers and remote file mounts, let’s see how we can configure it on a Debian or Ubuntu based server. All SFTP users will be kept in a root chroot() directory:

mkdir -m 751 /home/sftp-chroot /home/sftp-chroot/{dev,home}

The man page says that all components of the pathname must be root-owned directories that are not writable by any other user or group. Let’s verify this before going further:

ls -lad / /home /home/sftp-chroot /home/sftp-chroot/home
drwxr-xr-x 23 root root 4096 2011-01-31 17:15 /
drwxr-x--x  9 root root 4096 2011-01-31 18:05 /home
drwxr-x--x  4 root root 4096 2011-01-31 17:38 /home/sftp-chroot
drwxr-x--x  3 root root 4096 2011-02-01 08:46 /home/sftp-chroot/home

SFTP users’ home directories will be placed in “/home/sftp-chroot/home/$SFTPUSER” (after the chroot() the actual directory is “/home/$SFTPUSER”). Note that this still gives full privacy, because users can’t list the root chroot() directory “/” nor the root home directory “/home”, as their permissions are 0751. Furthermore, even if one user guessed the username and home directory of another user, they could not enter it, because the home directories grant no permissions to “others” (they are mode 0770, owned by root and the user’s own group). The commands for managing users’ home directories are at the very end of this article.

In order for the chroot()’ed users and processes to be able to do logging, a listening UNIX socket of syslog must be created inside the root chroot() directory:

cat > /etc/rsyslog.d/sftp-chroot.conf <<'EOF' # the single quotes are important
# the following syslog socket will be used by all chrooted users
$AddUnixListenSocket /home/sftp-chroot/dev/log

# Log internal-sftp activity in a separate file
:programname, isequal, "internal-sftp" -/var/log/sftp.log
:programname, isequal, "internal-sftp" ~
EOF

/etc/init.d/rsyslog restart

We verify that the syslog UNIX socket was created:

ls -la /home/sftp-chroot/dev/log
srw-rw-rw- 1 root root 0 2011-01-31 16:42 /home/sftp-chroot/dev/log

We will also rotate the log files weekly:

cat > /etc/logrotate.d/sftp-chroot <<'EOF'
/var/log/sftp.log {
  weekly
  missingok
  rotate 52
  compress
  delaycompress
  postrotate
    invoke-rc.d rsyslog reload > /dev/null
  endscript
}
EOF

Finally, let’s set up the OpenSSH daemon accordingly:

# disable the standard "sftp" subsystem in the sshd config
perl -pi -e 's/^(\s*Subsystem\s+sftp\s+)/#$1/i' /etc/ssh/sshd_config

# enable the chroot "sftp" subsystem
cat >> /etc/ssh/sshd_config <<'EOF'

Subsystem sftp internal-sftp

Match group sftponly
  ChrootDirectory /home/sftp-chroot
  ForceCommand internal-sftp -l VERBOSE
  AllowTcpForwarding no
  PermitRootLogin no
  X11Forwarding no
  AllowAgentForwarding no
  # Change to yes to enable tunnelled clear text passwords, not recommended
  PasswordAuthentication no
EOF

groupadd sftponly
/etc/init.d/ssh restart
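
Whenever you change sshd_config it is a good idea to validate the syntax before restarting, so that a typo doesn’t lock you out (a minimal check; sshd prints nothing and exits with 0 when the configuration is fine):

# test the sshd configuration file for syntax errors
/usr/sbin/sshd -t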

A few notes about the OpenSSH configuration:

  • This configuration requires OpenSSH 5.2 or later — there is a discussion about this in the reference links below.
  • The SCP protocol is not supported, at least not very easily. If you really need SCP, you’d need to re-create a full binary environment inside the chroot(), in order to provide standard SSH access there. This is not discussed here, nor tested by me. You are encouraged to use SFTP instead. [source article]
  • Some users report that the time-stamps of the messages in the syslog log file were wrong. This is indeed possible, since there is no /etc/timezone file in the root chroot() directory. I haven’t encountered this bug, either because my system is in GMT+0, or because the bug no longer exists. [source article]

Finally, let’s take a look at user management (a sample Bash script is available in the blog comments).

Adding an SFTP user:

export SFTPUSER='testsftp1'
useradd -d "/home/$SFTPUSER" --groups sftponly -s /bin/false -p '!' "$SFTPUSER"
passwd "$SFTPUSER" # only needed if you enabled tunnelled clear text passwords, despite the security implications
mkdir -m 770 "/home/sftp-chroot/home/$SFTPUSER"
chown root:"$SFTPUSER" "/home/sftp-chroot/home/$SFTPUSER"

Enabling public/private key authentication, the more secure choice, for an SFTP user:

export SFTPUSER='testsftp1'
mkdir -m 750 "/home/$SFTPUSER" "/home/$SFTPUSER/.ssh"
chown root:"$SFTPUSER" "/home/$SFTPUSER" "/home/$SFTPUSER/.ssh"
vi "/home/$SFTPUSER/.ssh/authorized_keys" # add the public key
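
You can then test the setup from a client machine (a sketch; the host name and the private key path are examples):

# connect with the key we just authorized; you should land directly in the chroot'ed home directory
sftp -oIdentityFile=/path/to/private_key testsftp1@nas.example.com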

Deleting an SFTP user and its data:

export SFTPUSER='testsftp1'
userdel "$SFTPUSER"
rm -r "/home/sftp-chroot/home/$SFTPUSER"
rm -r "/home/$SFTPUSER" # if you authenticate using public keys

The end result of all these efforts is that when an SFTP user logs in to your system, they see only their home directory and nothing else. This is because after the chroot, sshd(8) changes the working directory to the user’s home directory. Furthermore, the user can neither list nor enter other home directories in the root chroot() directory, because of the directory setup we created.

Here is a sample log from the SFTP syslog file:

Jan 31 17:55:10 m1 internal-sftp[13237]: session opened for local user testsftp1 from [127.0.0.1]
Jan 31 17:55:10 m1 internal-sftp[13237]: received client version 3
Jan 31 17:55:10 m1 internal-sftp[13237]: realpath "."
Jan 31 17:55:12 m1 internal-sftp[13237]: open "/home/testsftp1/fb.php" flags WRITE,CREATE,TRUNCATE mode 0644
Jan 31 17:55:13 m1 internal-sftp[13237]: close "/home/testsftp1/fb.php" bytes read 0 written 735
Jan 31 18:04:14 m1 internal-sftp[13237]: session closed for local user testsftp1 from [127.0.0.1]

References:



sudo hangs and leaves the executed program as “zombie”

Today I discovered a non-security bug in sudo — the executed program finishes successfully, but sudo hangs forever waiting for something. The executed program is left in a “zombie” process state. Here is how the process list looks, for example:

root 6368 0.0 0.0 2808 1592 pts/6 Ss 18:39 0:00 | \_ -bash
root 1103 0.0 0.0 2200 1000 pts/6 S+ 21:45 0:00 | \_ ./sudo -u root sleep 5
root 1104 0.0 0.0 0 0 pts/6 Z+ 21:45 0:00 | \_ [sleep] <defunct>

If we try to trace the system calls of the sudo command, here is what we get:

[root@tester2 ~]# strace -fF -p 1103
select(4, [3], [], NULL, NULL

The sudo process waits endlessly in a select() system call which waits for file descriptor #3. So we quickly check what corresponds to file descriptor #3:

[root@tester2 ~]# ls -la /proc/1103/fd
total 0
dr-x------ 2 root root  0 Nov 1 21:45 .
dr-xr-xr-x 5 root root  0 Nov 1 21:45 ..
lrwx------ 1 root root 64 Nov 1 21:45 0 -> /dev/pts/6
lrwx------ 1 root root 64 Nov 1 21:46 1 -> /dev/pts/6
lrwx------ 1 root root 64 Nov 1 21:45 2 -> /dev/pts/6
lrwx------ 1 root root 64 Nov 1 21:46 3 -> socket:[1136261781]

Socket “socket:[1136261781]” is already closed though (blinking in red on my console). Thus sudo is waiting in a blocked select() for a change on file descriptor #3, which is already closed and will never change its state. Sudo will therefore wait forever.
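
By the way, if you want to check whether one of your own machines is affected, the symptom (a defunct child whose parent is a hung sudo) is easy to spot in the process table (a quick sketch):

# list zombie processes together with their parent PIDs; a hung sudo shows up as the PPID of a defunct child
ps -eo pid,ppid,stat,args | awk '$3 ~ /^Z/'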

In order to understand why this happens, we will look at the source code of sudo. Here is a snippet from “exec.c” — only the relevant code is kept, for clarity:

int
sudo_execve(path, argv, envp, uid, cstat, dowait, bgmode)
{
    int sv[2];

    if (socketpair(PF_UNIX, SOCK_DGRAM, 0, sv) != 0)
        error(1, "cannot create sockets");

    zero_bytes(&sa, sizeof(sa));
    sigemptyset(&sa.sa_mask);

    /* Note: HP-UX select() will not be interrupted if SA_RESTART set */
    sa.sa_flags = SA_INTERRUPT; /* do not restart syscalls */
    sa.sa_handler = handler;
    sigaction(SIGCHLD, &sa, NULL);
    sigaction(SIGHUP, &sa, NULL);
    sigaction(SIGINT, &sa, NULL);
    sigaction(SIGPIPE, &sa, NULL);
    sigaction(SIGQUIT, &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);

    /* Max fd we will be selecting on. */
    maxfd = sv[0];

    child = fork();
    close(sv[1]);

    fdsr = (fd_set *)emalloc2(howmany(maxfd + 1, NFDBITS), sizeof(fd_mask));
    fdsw = (fd_set *)emalloc2(howmany(maxfd + 1, NFDBITS), sizeof(fd_mask));

    for (;;) {

    zero_bytes(fdsw, howmany(maxfd + 1, NFDBITS) * sizeof(fd_mask));
    zero_bytes(fdsr, howmany(maxfd + 1, NFDBITS) * sizeof(fd_mask));

    FD_SET(sv[0], fdsr);

    if (recvsig[SIGCHLD])
        continue;
    nready = select(maxfd + 1, fdsr, fdsw, NULL, NULL);
    if (nready == -1) {
        if (errno == EINTR)
        continue;
        error(1, "select failed");
    }

    }
}

/*
 * Generic handler for signals passed from parent -> child.
 * The recvsig[] array is checked in the main event loop.
 */
void
handler(s)
    int s;
{
    recvsig[s] = TRUE;
}

The race condition happens right before the select() call, just after the “if (recvsig[SIGCHLD])” check. If the parent process gets re-scheduled right after that check has been executed, and at this very moment the child process finishes and SIGCHLD is delivered to the parent process, sudo gets in trouble. The SIGCHLD handler records in the “recvsig[]” array that the signal was received, and then the parent process calls select(). This select() will never be interrupted, contrary to what the author intended. In 99% of the cases the parent process enters the blocking select() before the child process has ended; the child then sends SIGCHLD, which is recorded by the handler and also interrupts select(), which returns -1 in “nready” with “errno” set to EINTR.

Here is an easy way to reproduce the bug. We add a sleep() of 10 seconds between the “if” and select(), thus simulating that the system was very busy and re-scheduled the parent sudo process right between these two operations. Here is the source diff:

--- sudo-orig/sudo-1.7.4p4/exec.c       Sat Sep  4 00:40:19 2010
+++ sudo-1.7.4p4/exec.c Mon Nov  1 21:48:24 2010
@@ -307,6 +307,10 @@
 
        if (recvsig[SIGCHLD])
            continue;
+       printf("debug: Missed the check for SIGCHLD, the child is still running. SIGCHLD status: %d\n", recvsig[SIGCHLD]);
+       sleep(10); // this will be interrupted by SIGCHLD, because the child exits at some point here (we run "sudo sleep 5")
+       printf("debug: We should have got SIGCHLD by now. SIGCHLD status: %d\n", recvsig[SIGCHLD]);
+       printf("debug: Entering the endless select()...\n");
        nready = select(maxfd + 1, fdsr, fdsw, NULL, NULL);
        if (nready == -1) {
            if (errno == EINTR)

After that we execute sudo, and observe the bug every time:

[root@tester2 sudo-1.7.4p4]# make >/dev/null && ./sudo -u root sleep 5
debug: Missed the check for SIGCHLD, the child is still running. SIGCHLD status: 0
debug: We should have got SIGCHLD by now. SIGCHLD status: 1
debug: Entering the endless select()…
…(this never finishes)…

The sudo author actually tried to avoid this potential race condition when SIGCHLD is received immediately before select() is called — see changeset 5334:99adc5ea7f0a. The proper way to fix this is to use a timeout in the select() call:

--- sudo-orig/sudo-1.7.4p4/exec.c       Sat Sep  4 00:40:19 2010
+++ sudo-1.7.4p4/exec.c Mon Nov  1 21:50:26 2010
@@ -307,7 +307,11 @@
 
        if (recvsig[SIGCHLD])
            continue;
-       nready = select(maxfd + 1, fdsr, fdsw, NULL, NULL);
+       struct timeval timeout;
+       timeout.tv_sec = 1; // Linux resets this, so set it every time
+       timeout.tv_usec = 0;
+       nready = select(maxfd + 1, fdsr, fdsw, NULL, &timeout);
        if (nready == -1) {
            if (errno == EINTR)
                continue;

The select() mechanism and this bug were introduced somewhere between sudo versions 1.7.2 and 1.7.3. At least that is what I managed to see from the Changelog:

2010-06-29  Todd C. Miller  <Todd.Miller@courtesan.com>
	[72fd1f510a08] [SUDO_1_7_3] <1.7>
...
2009-11-15  Todd C. Miller  <Todd.Miller@courtesan.com>
	* script.c, sudo.c, sudo.h, sudoreplay.c, term.c, tgetpass.c:
	Use a socketpair to pass signals from parent to child.
...
2009-07-12  Todd C. Miller  <Todd.Miller@courtesan.com>
	[f5ad45f69f05] [SUDO_1_7_2]

I’ve reported this bug (#447) to the sudo maintainer, and I’m sure he will fix it when time permits. Because we all depend on sudo and love it. 🙂



Linux Cached/Buffers memory

I won’t try to explain in detail what the Linux Cached/Buffers memory is. In a nutshell, it shows how much of your memory is used for the read cache and for the write cache.

Usually, when you look at your system memory usage and see that almost all of the otherwise unused memory is allocated for Cached/Buffers, you are happy, because this memory is used for the file-system cache, and thus your system runs faster.

Today, however, I observed quite an interesting fact: what I said above is still correct, but you don’t know how often these cache entries are actually used by the system. After all, it’s not the cache memory usage (or size) which makes the system run faster, but the cache hit ratio. If the file operations are satisfied by the cache (cache hit), then your system runs faster. If the system needs to make a physical disk I/O (cache miss), then you have to wait a good few milliseconds.

Here are your options for finding out whether your file cache is actually being used, or is just sitting there allocated, giving you a false feeling that your system runs faster thanks to the used cache memory:

  1. (hard) In order to actually know the Cache Hit/Miss ratios for block devices, you’ll need to dig deep into the kernel, as I already explained in the “Is there a way to get Cache Hit/Miss ratios for block devices in Linux” article.
  2. (easy) You can clear the Cached/Buffers memory regularly, see how fast and how much the cache memory grows back, and draw some conclusions about the actual Cache Hit/Miss ratios.

The latter is not a perfect solution, but it always gives you a better idea of your file-system cache usage than just watching the total memory used for Cached/Buffers and never actually knowing whether it is used or accessed at all.

Therefore, you can run the following every hour in a cron job:

sync ; echo 3 > /proc/sys/vm/drop_caches
sync ; echo 0 > /proc/sys/vm/drop_caches

The commands are safe (see the reference for “drop_caches”), and you won’t lose any data; only your caches will be zeroed. The disadvantage of this approach is that if the caches were indeed very actively used, Linux would need to read the data back from the disk.
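
Here is one way to schedule it (a sketch; the file name under /etc/cron.d is hypothetical, and the command simply repeats the ones above):

# /etc/cron.d/drop-caches -- clear the caches at the start of every hour
0 * * * *  root  sync; echo 3 > /proc/sys/vm/drop_caches; echo 0 > /proc/sys/vm/drop_caches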

A real-world example

 

Here is how the Cached/Buffers graph of a server of mine looked over the following few days:

[Figure: “Linux memory usage” — the Cached/Buffers graph discussed below]

Pay attention to the marked points of interest. Here is the explanation, and my motivation for writing this article:

  1. (Point A) The beginning of the graph shows my system after it just booted, and I did some small administrative tasks on it. After that, one script runs regularly on the machine, and as we see, it doesn’t use much file-system cache, as it doesn’t do many file operations.
  2. Then, every day at 06:25, the “/etc/cron.daily” scripts are run, and some of them read all files on the file-system. Such a script is the updatedb cron job. Because of the heavy disk activity, the Cached/Buffers usage reaches its maximum, as all possible files and meta data get cached in memory.
  3. (see the “Updatedb cron” markers on the graph) After one hour the scripts finish, and there is no significant disk activity on the system any more.
  4. (Point B) But the Cached/Buffers usage never drops down. The file cache doesn’t seem to expire, and is therefore giving us the false feeling that it is being used by our system all the time, thus making it faster. But it isn’t!
  5. On Sat 15:08 the Cached/Buffers cache is cleared manually by the command I provided above, and I installed it as an hourly crontab too.
  6. As you can see, right after the cache was cleared, we see the sad reality – the Cached/Buffers cache was filled with data that nobody needed or accessed, and the high memory usage by Cached/Buffers actually didn’t speed up our system. I grieve for a while and accept the reality, and also understand why so many I/O requests are issued to my EBS storage, even though the cache was so huge.
  7. (Point C) That’s how the actual daily usage pattern of this machine looks. The Cached/Buffers memory cache is heavily underused on my system, as it doesn’t do much I/O work. This wouldn’t have been visible if I didn’t clear the cache every hour.

References:



PHP non-interactive usage in a cron job

Using a PHP script in a crontab is fairly easy, as stated in the “Using PHP from the command line” documentation… until you start getting the following warning during execution:

No entry for terminal type “unknown”;
using dumb terminal settings.

The script works, but this nasty warning really bothers you.

Here is a sample crontab entry:

* * * * * root sudo -u www-data php -r 'echo "test";'

When executed, it prints the warning on STDERR.

Yes, I know I don’t need “sudo” here, but this was my initial usage pattern when I discovered the problem, and at first I suspected that “sudo” had gone crazy. Well, it wasn’t “sudo” that was to blame, but PHP.

Here is the fixed crontab entry:

* * * * * root sudo -u www-data TERM=dumb php -r 'echo "test";'
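
Alternatively, since cron allows environment variable assignments before the job lines, you can set the variable once at the top of the crontab instead of on every entry (a sketch):

TERM=dumb
* * * * * root sudo -u www-data php -r 'echo "test";'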

The issue was encountered on an Ubuntu 10.04 server. I thought crond usually sets $TERM to something… Anyway, problem solved.



Associate Amazon EC2 Elastic IP in a different region

If you allocate an Elastic IP address in the US-East region (N. Virginia), then, sorry folks, you can’t use this IP address with an EC2 instance which is located in another region, say EU-West (Ireland) for example.
Remapping it to an EC2 instance in another availability zone within the same region is no problem.

The documentation of Amazon EC2 leaves another impression:

Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to any instance in your account.

This is a bit misleading. I had already started dreaming about how I could transfer my EC2 instance across continents with no DNS propagation issues. And I had also started admiring Amazon for how they achieved this technically… Well, they haven’t. 🙂



USB: rejected 1 configuration due to insufficient available bus power

If your USB device is not being recognized, execute the command “dmesg” and check if the following output is there:

usb 1-1.4: rejected 1 configuration due to insufficient available bus power

The “1-1.4” ID may be different for your configuration.
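
To find the right ID for your device, you can cross-check the dmesg output with the device directories that the kernel has created (a quick sketch):

# the rejected device appears both in dmesg and under /sys/bus/usb/devices with the same "1-1.4"-style ID
dmesg | grep -i 'insufficient available bus power'
ls /sys/bus/usb/devices/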

If, and only if, you are absolutely sure that your USB hub and/or hardware configuration has a safe way to actually supply enough power, you can override this barrier and force the device to be activated despite the error message. A possible situation is one where you manually applied 5V external power to your USB device and/or USB hub, like I did on my Bifferboard.

Here is how you can override the power safety mechanism:

echo 1 > /sys/bus/usb/devices/1-1.4/bConfigurationValue

Replace “1-1.4” with your USB device ID. Be careful and have fun!



Secure NAS on Bifferboard running Debian

This NAS solution uses OpenSSH for secure transport over a TCP connection, and NFS to mount the volume on your local computer. The hardware of the NAS server is the low-cost Bifferboard.

I’m using an external hard disk via USB which is partitioned in two parts – /dev/sda1 (1GB) and the rest in /dev/sda2. Once you have installed Debian on Bifferboard, here are the commands which further transform your Bifferboard into a secure NAS:

apt-get update
apt-get -y install nfs-kernel-server

vi /etc/default/nfs-common 
  # update: STATDOPTS='--port 2231'
vi /etc/default/nfs-kernel-server 
  # update: RPCMOUNTDOPTS='-p 2233'

mkdir -m 700 /root/.ssh
  # add your public key for "root" in /root/.ssh/authorized_keys

echo '/mnt/storage 127.0.0.1(rw,no_root_squash,no_subtree_check,insecure,async)' >> /etc/exports
mkdir /mnt/storage
chattr +i /mnt/storage # so that we don't accidentally write there without a mounted volume

cat > /etc/rc.local <<EOF
#!/bin/bash

# allow only SSH access via the network
/sbin/iptables -P FORWARD DROP
/sbin/iptables -P INPUT DROP
/sbin/iptables -A INPUT -i lo -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 22 -j ACCEPT
/sbin/iptables -A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT # TCP initiated by server
/sbin/iptables -A INPUT -p udp -m state --state ESTABLISHED -j ACCEPT # DNS traffic

# mount the storage volume here, so that any errors with it don't interfere with the system startup
/bin/mount /dev/sda2 /mnt/storage
/etc/init.d/nfs-kernel-server restart
EOF

# allow only public key authentication
fgrep -i -v PasswordAuthentication /etc/ssh/sshd_config > /tmp/sshd_config && \
  mv -f /tmp/sshd_config /etc/ssh/sshd_config && \
  echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config

reboot
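
After the reboot you can sanity-check the export and the firewall directly on the Bifferboard (a quick sketch):

# verify that the storage volume is mounted, the NFS export is active and the firewall rules are in place
mount | grep /mnt/storage
exportfs -v
iptables -L INPUT -n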

There are two things you should consider with this setup:

  1. You must trust the “root” user who mounts the directory! They have full shell access to your NAS.
  2. A not-so-strong SSH encryption cipher is used, in order to improve the performance of the SSH transfer.

On the machine which is being backed up, I use the following script which mounts the NAS volume, starts the rsnapshot backup process and finally unmounts the NAS volume:

#!/bin/bash
set -u

HOST='192.168.100.102'
SSHUSER='root'
REMOTEPORT='22'
REMOTEDIR='/mnt/storage'
LOCALDIR='/mnt/storage'
SSHKEY='/home/famzah/.ssh/id_rsa-home-backups'

echo "Mounting NFS volume on $HOST:$REMOTEPORT (SSH-key='$SSHKEY')."
N=0
for port in 2049 2233 ; do
	N=$(($N + 1))
	LPORT=$((61000 + $N))
	ssh -f -i "$SSHKEY" -c arcfour128 -L 127.0.0.1:"$LPORT":127.0.0.1:"$port" -p "$REMOTEPORT" "$SSHUSER@$HOST" sleep 600d
	echo "Forwarding: $HOST: Local port: $LPORT -> Remote port: $port"
done
sudo mount -t nfs -o noatime,nfsvers=2,proto=tcp,intr,rw,bg,port=61001,mountport=61002 "127.0.0.1:$REMOTEDIR" "$LOCALDIR"

echo "Doing backup."
time sudo /usr/bin/rsnapshot weekly

echo "Unmounting NFS volume and closing SSH tunnels."
sudo umount "$LOCALDIR"
for pid in $(ps axuww|grep ssh|grep 6100|grep arcfour|grep -v grep|awk '{print $2}') ; do
	kill "$pid" # possibly dangerous...
done
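
If you want to run the backup unattended, a crontab entry on the backed-up machine could look like this (a sketch; the script path, schedule and log file are examples):

# run the NAS backup every Sunday at 03:00 (hypothetical path to the script above)
0 3 * * 0  root  /usr/local/sbin/nas-backup.sh >> /var/log/nas-backup.log 2>&1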

Update, 29/Sep/2010 – performance tunes:

  • Added “async” in “/etc/exports”.
  • Removed the “rsize=8192,wsize=8192” mount options – they are auto-negotiated by default.
  • Added the “noatime” mount option.
  • Put the SSH username in a variable.
