
NetBSD on a Raspberry Pi

As a long time Linux user I have always had some kind of curiosity about the BSDs, especially NetBSD and its minimalistic approach to system design. For a while I have been thinking that perhaps NetBSD is the perfect operating system for turning a Raspberry Pi into a server.

I have read anti-BSD rants like this “BSD, the truth“, and I have also appreciated pkgsrc for Mac OS X. I felt I needed to get my own opinion. It is easy to have a romantic idea about “Old Real UNIX”, but my limited experience with IRIX and Solaris is not that positive. And BSD is another beast.

For the Raspberry Pi (Version 1, Model B) it is supposed to be possible to run both (stable) NetBSD 6.1.5 and (beta) NetBSD 7.0. It seemed, after all, that the beta 7.0 was the way to go.

At first it was fine

I followed the official instructions and installed NetBSD 7.0. I (first) used the (800MB) rpi.img. I set up my user:

# useradd zo0ok
...
# mkdir /home
# mkdir /home/zo0ok
# chown zo0ok:users /home/zo0ok
# usermod -G wheel zo0ok

Then it was time to configure pkgsrc and start installing packages.

The Disk Problem
I did a quick check to see how much space was available before installing stuff. To my surprise:

# df -h
Filesystem         Size       Used      Avail %Cap Mounted on
/dev/ld0a          650M       623M      -5.4M 100% /
/dev/ld0e           56M        14M        42M  24% /boot
kernfs             1.0K       1.0K         0B 100% /kern
ptyfs              1.0K       1.0K         0B 100% /dev/pts
procfs             8.0K       8.0K         0B 100% /proc
tmpfs              112M       8.0K       112M   0% /var/shm

It seemed like the filesystem had not been (automatically) expanded as it should have been according to the instructions above. So I followed the manual instructions to resize my root partition, with no success whatsoever.

So I ran disklabel to see if NetBSD recognized my 8GB SD-card…

# /sbin/disklabel ld0
# /dev/rld0c:
type: SCSI
disk: STORAGE DEVICE
label: fictitious
flags: removable
bytes/sector: 512
sectors/track: 32
tracks/cylinder: 64
sectors/cylinder: 2048
cylinders: 862
total sectors: 1766560
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0 

8 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a:   1381536    385024     4.2BSD      0     0     0  # (Cyl.    188 -    862*)
 b:    262144    122880       swap                     # (Cyl.     60 -    187)
 c:   1766560         0     unused      0     0        # (Cyl.      0 -    862*)
 d:   1766560         0     unused      0     0        # (Cyl.      0 -    862*)
 e:    114688      8192      MSDOS                     # (Cyl.      4 -     59)

Clearly, NetBSD thought this SD-card was 900MB rather than 8GB, and this is why it failed to automatically resize it.

The sysinst install
I was in any case not very comfortable with getting a preinstalled/preconfigured 800MB system with swap and everything, so I formatted the 8GB SD card with my digital camera (just to be sure the partition table did not contain anything weird), downloaded the (6MB) rpi_inst.img and wrote it to the SD card.

The NetBSD installation started properly, and I was looking forward to installing over SSH. According to the instructions I was supposed to start DHCP somehow. DHCP seemed to be on (the RPi got an IP address) but SSH was off, so I installed using the keyboard instead.

Almost immediately I was informed that NetBSD failed to recognise the “disk geometry” properly. I tried the SD card in Linux, which reluctantly reported that it had 30 tracks and 166 sectors (or the other way around – it does not matter, and it sounds like nonsense). I gave this information to the NetBSD sysinst program, and now the SD card appeared to be 7.5GB.

Then followed a long and confusing period when I tried to come up with any partition scheme that NetBSD would accept. The right procedure turned out to be:

  1. Choose entire disk
  2. Confirm to delete the (required) 56MB dos partition
  3. Partition, pretending to be unaware of the need of a dos partition
  4. Magically, in the end, it added the dos partition

I am clearly stupid. There are no words for how confused I am about the a:, c: and e: partitions (which seem to reuse the DOS drive-letter naming, but for other purposes), the empty space, the disklabels, and the BSD partitions inside a (non-existent) primary partition.

Anyway, just after I had given up, I gave it one final try and convinced sysinst to install. Then came a phase of choosing download paths, which clearly was non-trivial since I was installing a beta, and I am fine with that.

Installation went on. In the end came a nice menu where I could configure things. I liked it! (I wish I knew how to start it again later.) It managed to get my network settings from DHCP (except the gateway), but it failed to configure and test the network itself (despite having downloaded everything over the network just a few minutes earlier). I configured a few other things, restarted, the network was working and I was happy… for a while.

I configured pkgsrc, and it seems ALL other systems where pkgsrc exists have been blessed with the pkgin tool, except NetBSD, where you are supposed to do all the work yourself. Well, I added PKG_PATH to the .shrc (of my user, not root) and enjoyed pkg_add.
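
For reference, this is roughly what that looks like – the exact repository path depends on your NetBSD release and ARM ABI, so treat the URL below as an illustrative guess rather than a verified one:

# In ~/.shrc (adjust architecture and version to your system)
PKG_PATH="ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/earmv6hf/7.0/All"
export PKG_PATH

# Binary packages then install directly from the repository:
$ pkg_add -v mc
$ pkg_info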

(not) Compiling NodeJS
I want to install node.js on my NetBSD Raspberry Pi. It is not in pkgsrc (which it is for Mac OS X, but whatever) so I had to build it myself. I am used to building node.js and I was looking forward to fixing all the broken dependencies – if I had ever gotten there.

I downloaded the source and started unpacking it… it is about 10000 files and 100MB of data. My SD card (a SanDisk Ultra, class 10) is not super fast; dd-ing the image to it earlier wrote at a speed of 3MB/s. The unpacking speed of node.js: roughly 1 file per second. I realised I needed a (fast) USB drive or a faster SD card, so I (literally) went out to town, bought a fast USB drive (I did not find the SD card I wanted) and a few other things. When I came back more than 8000 files had been extracted and less than 2000 remained. I started reading about how to partition and format a USB drive for NetBSD, and at some point I inserted it in the Raspberry Pi. A little later I noticed my ssh sessions were dead, and the RPi had restarted. It turned out that reality was worse than the claims in “BSD, the truth”:

[…] the kernels of the BSDs are also very fault intolerant.

The best example of this is the issue with removing USBs. The problem appears when USBs are removed without unmounting them first. The result is a kernel panic. The astounding aspect of this is that this problem has been exhibited by all the major BSD variants Free, Open, Net and DragonflyBSD ever since USB support was implemented in them 5 to 6 years ago and has never ever been fixed. FreeBSD mailing lists even ban people who dare mention about it. In Linux, such things never and happen and bugs as serious as this gets fixed before a release is made.

Fact is, NetBSD 7.0 Beta for RPi, crashes, immediately, when I insert a USB drive.

This actually did not make me give up. I really restarted the system with the USB drive inserted, with the intention of treating my USB drive as a fixed disk and not inserting/removing it unless I shut the RPi down first. This was when I did give up: deleting the 16GB dos partition and creating a NetBSD filesystem was just too difficult for me. Admittedly, my patience was running out.

Conclusion
In theory, NetBSD would be a beautiful fit for the Raspberry Pi. The ARMv6 is not supported by standard Debian. Raspbian comes with a little “too much” for my taste (it is not a real problem), and it does not have the feeling of “Debian stable”, but more of some “unofficial Debian test” (sorry Raspbian people – I really appreciate your work!).

I have wondered why NOOBS does not come with NetBSD… but I think I know now. And sometimes I am surprised that Linux seems to work better than Mac OS X; perhaps now I know why.

Perhaps I will get a faster SD-card and give NetBSD another try. But my romantic idea that NetBSD would be perfect for the RPI was just plain wrong. Installing NetBSD today made me remember Slackware on a Compaq laptop in 1998.

Perhaps I will give Arch a try. Or put OpenWRT on the RPi.

Faking a good goto in JavaScript

There are cases where gotos are good (most possible uses of gotos are not). I needed to write JavaScript functions (running in NodeJS) where I wanted to call the callback function just once, at the end (to make things as clear as possible). In C that would be (this is a simplified example):

void withgoto(int x, void(*callback)(int) ) {
  int r;

  if ( (r = test1(x)) )
    goto done;

  if ( (r = test2(x)) )
    goto done;
 
  if ( (r = test3(x)) )
    goto done;
 
  r = 0;
done:
  (*callback)(r);
}

I think that looks nice! I mean the way goto controls the flow, not the syntax for function pointers.

JavaScript: multiple callbacks
The most obvious way to me to write this in JavaScript was:

var with4callbacks = function(x, callback) {
  var r

  if ( r = test1(x) ) {
    callback(r)
    return
  }

  if ( r = test2(x) ) {
    callback(r)
    return
  }

  if ( r = test3(x) ) {
    callback(r)
    return
  }
  r = 0
  callback(r)
}

This works perfectly, of course. But it is not nice to have callback in several places. It is annoying (bloated) to always write return after callback. And in other cases it can be a little unclear whether callback is called zero times, or more than once… which is basically catastrophic. What options are there?

JavaScript: abusing exceptions
My first idea was to abuse the throw/catch-construction:

var withexceptions = function(x, callback) {
  var r

  try {
    if ( r = test1(x) )
      throw null

    if ( r = test2(x) )
      throw null

    if ( r = test3(x) )
      throw null

    r = 0
  } catch(e) {
  }
  callback(r)
}

This works just perfectly. In a more real world case you would probably put some code in the catch block. Is it good style? Maybe not.

JavaScript: an internal function
With an internal (is that what it is called?) function, a return does the job:

var withinternalfunc = function(x, callback) {
  var r
  var f

  f = function() {
    if ( r = test1(x) )
      return

    if ( r = test2(x) )
      return

    if ( r = test3(x) )
      return

    r = 0
  }
  f()
  callback(r)
}

Well, this looks like JavaScript, but it is not super clear.

JavaScript: an external function
You can also do it with an external function (risking that you need to pass plenty of parameters to it, but in my simple example that is not an issue):

var externalfunc = function(x) {
  var r
  if ( r = test1(x) )
    return r

  if ( r = test2(x) )
    return r

  if ( r = test3(x) )
    return r

  return 0
}

var withexternalfunc = function(x, callback) {
  callback(externalfunc(x))
}

Do you think the readability is improved compared to the goto code? I don’t think so.

JavaScript: Break out of Block
Finally (and I got help coming up with this one), it is possible to do:

var withbreakblock = function(x, callback) {
  var r
  var f

myblock:
  {
    if ( r = test1(x) )
      break myblock

    if ( r = test2(x) )
      break myblock

    if ( r = test3(x) )
      break myblock

    r = 0
  }
  callback(r)
}

Well, that is as close to the goto construction as I get with JavaScript. Pretty nice!

JavaScript: Multiple if(done)
Using a done-variable and multiple if statements is also possible:

var with3ifs = function(x, callback) {
  var r
  var done = false

  if ( r = test1(x) )
    done = true

  if ( !done ) {
    if ( r = test2(x) )
      done = true
  }

  if ( !done ) {
    if ( r = test3(x) )
      done = true
  }

  if ( !done ) {
    r = 0
  } 
  callback(r)
}

Hardly pretty, I think. The longer the code gets (the more sequential ifs there are), the higher the penalty for the ifs.

Performance
Which one I choose may depend on performance, if the difference is big. They should all be fast, but:

  • It is quite unclear what the cost of throwing (an exception) is
  • The internal function, is it recompiled and what is the cost?

I measured performance as (millions of) calls to the function per second. The test functions are rather cheap, and x is an integer in this case.
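
For reference, this is a sketch of the kind of measurement loop I mean – test1/test2/test3 are placeholders here and the real code differed in details:

// Benchmark sketch (assumes one of the variants above, e.g. with4callbacks, is in scope)
var test1 = function(x) { return x % 13 === 1 ? 1 : 0 }
var test2 = function(x) { return x % 13 === 2 ? 2 : 0 }
var test3 = function(x) { return x % 13 === 3 ? 3 : 0 }

var sink = 0
var callback = function(r) { sink += r }   // keep the result so the call is not optimized away

var N = 1000000
var start = Date.now()
for (var i = 0; i < N; i++)
  with4callbacks(i, callback)
var ms = Date.now() - start
console.log((N / ms / 1000).toFixed(1) + ' Mops')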

I did three test runs:

  1. The fall through case (r=0) is relatively rare (~8%)
  2. The fall through case is very common (~92%)
  3. The fall through case is extremely common (>99.99%)

In real applications the fallthrough case (no error found in the input data) may well be the most common one. The benchmark environment is:

  • Mac Book Air Core i5@1.4GHz
  • C Compiler: Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
  • C Flags: -O2
  • Node version: v0.10.35 (installed from pkgsrc.org, x86_64 version)

Performance was rather consistent over several runs (1,000,000 calls each):

Fallthrough Rate           ~8%      ~92%     >99.99%
-----------------------------------------------------
     C: withgoto           66.7     76.9     83.3  Mops
NodeJS: with4callbacks     14.9     14.7     16.4  Mops
NodeJS: withexceptions      3.67     8.77    10.3  Mops
NodeJS: withinternalfunc    8.33     8.54     9.09 Mops
NodeJS: withexternalfunc   14.5     14.9     15.6  Mops
NodeJS: withbreakblock     14.9     15.4     17.5  Mops
NodeJS: with3ifs           15.2     15.6     16.9  Mops

The C code was translated row by row into the JavaScript code. The performance difference is between C/Clang and NodeJS, not thanks to the goto construction itself, of course.

On Recursion
In JavaScript it is quite natural to use recursion when you deal with callbacks. So I decided to run the same benchmarks using recursion instead of a loop. Each recursion step involves three calls ( function() -> callback() -> next() ). With this setup the maximum recursion depth was about 3×5300 (perhaps close to 16535?). That may sound like a lot, but it is not enough to produce any benchmarks. Do I need to mention that C delivered 1,000,000 recursive calls at exactly the same performance as the loop?
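
For clarity, this is roughly the structure of the recursive variant (a sketch, not the exact benchmark code):

// Each step is function() -> callback() -> next(), all on the same stack
var runRecursive = function(i, n, done) {
  if ( i >= n )
    return done()
  with4callbacks(i, function(r) {     // callback()
    runRecursive(i + 1, n, done)      // next()
  })
}

// around 5300 iterations (three stack frames each) was the limit on my setup
runRecursive(0, 5000, function() { console.log('done') })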

Conclusion
For real code, 3.7 million exceptions per second sounds pretty far-fetched. Unless you are in a tight loop (which you probably are not when you deal with callbacks), all solutions will perform well. However, the break out of a block is clearly the most elegant way and also the most efficient, second only to the real goto, of course. I suspect the generally higher performance in the (very) high fallthrough case is because branch prediction gets more successful.

Any better ideas?

Storage and filesystem performance test

I have lately been curious about performance for low-end storage and asked myself questions like:

  1. Raspberry Pi or Banana Pi? Is the SATA of the Banana Pi a deal breaker? Especially now when the Raspberry Pi has 4 cores, and I don’t mind if one of them is mostly occupied with USB I/O overhead.
  2. For a Chromebook or a Mac Book Air where internal storage is fairly limited (or very expensive), how practical is it to use USB storage?
  3. Building OpenWRT buildroot requires a case sensitive filesystem (disqualifying the standard Mac OS X filesystem) – is it feasible to use a USB device?
  4. The journalling feature of HFS+ and ext4 is probably a good idea. How does it affect performance?
  5. For USB drives and Memory cards, what filesystems are better?
  6. Theoretical maximum throughput is usually not that interesting. I am more interested in actual performance (time to accomplish tasks), and I believe this is often limited more by latency and overhead than by throughput. Is it so?

Building OpenWRT on Mac Book Air
I tried building OpenWRT on a USB drive (with case sensitive HFS+), and it turned out to be very slow. I did some structured testing by checking out the code, putting it in a tarball, and repeating:

   $ cd /external/disk
1  $ time cp ~/openwrt.tar . ; time sync
2  $ time tar -xf ~/openwrt.tar ; time sync   (total 17k files)
   $ make menuconfig                           (not benchmarked)
3  $ time make tools/install                  (+38k files, +715MB)

I did this on the internal SSD (this first step of the OpenWRT buildroot did not depend on case sensitivity), on an old external rotating 2.5″ USB hard drive, and on a cheap USB drive. I tried a few different filesystem combinations:

$ diskutil eraseVolume hfsx  NAME /dev/diskXsY   (non journaled case sensitive)
$ diskutil eraseVolume jhfsx NAME /dev/diskXsY   (journaled case sensitive)
$ diskutil eraseVolume ExFAT NAME /dev/diskXsY   (Microsoft ExFAT)

The results were (usually just a single run):

Drive and Interface                 Filesystem       time cp  time tar     time make
Internal 128GB SSD                  Journalled HFS+  -        5.4s         16m13s
2.5″ 160GB USB2                     HFS+             3.1s     7.0s         17m44s
2.5″ 160GB USB2                     Journalled HFS+  3.1s     7.1s         17m00s
Kingston DTSE9H 8GB USB Drive USB2  HFS+             20-30s   1m40s-2m20s  1h
Kingston DTSE9H 8GB USB Drive USB2  ExFAT            28.5s    15m52s       N/A

Findings:

  • Timings on USB drives were quite inconsistent over several runs (while internal SSD and hard drive were consistent).
  • The hard drive is clearly not the limiting factor in this scenario, when comparing the internal SSD to the external 2.5″ USB drive. Perhaps a restart between “tar xf” and “make” would have cleared the buffer caches and the internal SSD would have come out better.
  • When it comes to USB drives: WOW, you get what you pay for! Turns out the Kingston is among the slowest USB drives that money can buy.
  • ExFAT? I don’t think so!
  • For HFS+ and OS X, journalling is not much of a problem

Building OpenWRT in Linux
I decided to repeat the tests on a Linux (Ubuntu x64) machine, this time building using two CPUs (make -j 2) to stress the storage a little more. The results were:

Drive and Interface                  Filesystem  real time             user time  system time
Internal SSD                         ext4        9m40s                 11m53s     3m40s
2.5″ 160GB USB2                      ext2        8m53s                 11m54s     3m38s
2.5″ 160GB USB2 (just after reboot)  ext2        9m24s                 11m56s     3m31s
Kingston DTSE9H 8GB USB Drive USB2   ext2        11m36s +3m48s (sync)  11m57s     3m44s

Findings:

  • Linux block device layer almost eliminates the performance differences of the underlying storage.
  • The worse real time for the SSD is probably because of other processes taking CPU cycles

My idea was to test connecting the 160GB drive directly via SATA, but given the results I saw no point in doing so.

Conclusions
The results were not exactly what I expected. Clearly the I/O load during a build is too low to affect performance in a significant way (except for Mac OS X with a slow USB drive). Anyway, USB2 itself has not proved to be the weak link in my tests.

Build OpenWRT Toolchain on Mac OS X

A very quick guide to building the OpenWRT buildroot or toolchain on Mac OS X (10.10).

1. Install Xcode
Install Xcode from App Store (it is free).

2. Install pkgsrc
I have used fink, macports and homebrew, but now that I have tried pkgsrc I don’t think I will consider any of the others for a while. Install pkgsrc the standard way. Note: there is an x86_64 version for Mac OS X – it is probably what you want – just replace i386 with x86_64 in the download link.

Using pkgsrc, install these packages required by OpenWRT:

$ sudo pkgin install getopt coreutils gawk gtar findutils

3. Case sensitive filesystem
Your root filesystem on your Mac is probably case insensitive, and that is supposed to cause problems when building OpenWRT. Get yourself a USB disk or make a disk image and format it as case sensitive HFS+. If you do it from the command line you can avoid making it journaled:

$ diskutil eraseVolume hfsx OpenWRTdisk /dev/disk3s2
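
If you go the disk image route instead of a USB disk, something like this should do (a sketch – the size, file name and volume name are up to you):

$ hdiutil create -size 10g -type SPARSE -fs "Case-sensitive HFS+" \
      -volname OpenWRTdisk OpenWRTdisk.sparseimage
$ hdiutil attach OpenWRTdisk.sparseimage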

4. Building
This assumes you want the toolchain from the current stable release (14.07):

$ git clone git://git.openwrt.org/14.07/openwrt.git
$ cd openwrt
$ scripts/feeds update -a
$ scripts/feeds install -a
$ make menuconfig

In menuconfig I made just two changes: 1) setting my target platform, 2) asking for the toolchain to be built. Make all the settings you want. I then ran:

$ make toolchain/install

You now find your toolchain in staging_dir :)
If you had instead run just “make”, the entire OpenWRT firmware would have been built.
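
To actually use the toolchain, put its bin directory on your PATH. The exact directory name under staging_dir depends on the target you selected, so the path below is only an example (from a MIPS/ar71xx build) – check what your build actually produced:

$ ls staging_dir/
$ export PATH=$PWD/staging_dir/toolchain-mips_34kc_gcc-4.8-linaro_uClibc-0.9.33.2/bin:$PATH
$ mips-openwrt-linux-gcc --version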

Working OpenVPN configuration

I am posting my working OpenVPN server configuration, and client configuration for Linux, Android and iOS. First a little background.

I have an OpenWRT (14.07) router running OpenVPN server. This router has a public IP address and thanks to dyn.com/dns it can be resolved using a domain name (ROUTER.PUBLIC in all configuration examples below).

My router LAN address is 192.168.8.1, the LAN network is 192.168.8.*, and the OpenVPN network is 192.168.9.* (in this range OpenVPN clients will be given an address for their vpn/tun device). I run OpenVPN on TCP port 1143.

What I want to achieve is
1) to access local services (like ownCloud and ssh) of computers on the LAN
2) to access the internet as if I were at home, when my internet access is somehow restricted

The Server
Essentially, this OpenWRT OpenVPN Setup Guide is very good. Follow it. I am not going to repeat everything, just post my working configurations.

root@breidablick:/etc/config# cat openvpn 

config openvpn 'myvpn'
	option enabled '1'
	option dev 'tun'
	option proto 'tcp'
	option status '/tmp/openvpn.clients'
	option log '/tmp/openvpn.log'
	option verb '3'
	option ca '/etc/openvpn/ca.crt'
	option cert '/etc/openvpn/my-server.crt'
	option key '/etc/openvpn/my-server.key'
	option server '192.168.9.0 255.255.255.0'
	option port '1143'
	option keepalive '10 120'
	option dh '/etc/openvpn/dh2048.pem'
	option push 'redirect-gateway def1'
	option push 'dhcp-option DNS 192.168.8.1'
	option push 'route 192.168.8.0 255.255.255.0'

It is a little unclear if the last three options really work for all clients. I also have:

root@breidablick:/etc/config# cat network 
.
.
.
config interface 'vpn0'
	option ifname 'tun0'
	option proto 'none'

and

root@breidablick:/etc/config# cat firewall 
.
.
.
config zone
	option name 'vpn'
	option input 'ACCEPT'
	option forward 'ACCEPT'
	option output 'ACCEPT'
	list network 'vpn0'
.
.
.
config forwarding
	option src 'lan'
	option dest 'vpn'

config forwarding
	option src 'vpn'
	option dest 'wan'
.
.
.
# may not be needed depending on your lan policys (2 next)
config rule
	option name 'Allow-lan-vpn'
	option src 'lan'
	option dest 'vpn'
	option target ACCEPT
	option family 'ipv4'

config rule
	option name 'Allow-vpn-lan'
	option src 'vpn'
	option dest 'lan'
	option target ACCEPT
	option family 'ipv4'
.
.
.
# may not be needed depending on your wan policy
config rule
	option name 'Allow-OpenVPN-from-Internet'
	option src 'wan'
	option proto 'tcp'
	option dest_port '1143'
	option target 'ACCEPT'
	option family 'ipv4'

iOS client
You need to install the OpenVPN client for iOS from the App Store. The client configuration is prepared on your computer and synced to iOS using iTunes (brilliant or braindead?). This is my working configuration:

client
dev tun
ca ca.crt
cert iphone.crt
key iphone.key
remote ROUTER.PUBLIC 1143 tcp-client
route 0.0.0.0 0.0.0.0 vpn_gateway
dhcp-option DNS 192.168.8.1
redirect-gateway def1

This route and redirect-gateway configuration makes all traffic go via VPN. Omit those lines if you want direct internet access.

Android client
For Android, you also need to install an OpenVPN client from the store. My client is “OpenVPN for Android” by Arne Schwabe. This client has a GUI that allows you to configure everything (but you need to get the certificate files onto your Android device somehow). You can view the entire generated config in the GUI; mine looks like this (omitting GUI- and Android-specific stuff, and the certificates):

ifconfig-nowarn
client
verb 4
connect-retry-max 5
connect-retry 5
resolv-retry 60
dev tun
remote ROUTER.PUBLIC 1143 tcp-client
route 0.0.0.0 0.0.0.0 vpn_gateway
dhcp-option DNS 192.168.8.1
remote-cert-tls server
management-query-proxy

Linux client
I also connect linux computers occationally. The configuration is:

client
remote ROUTER.PUBLIC 1143
ca ca.crt
cert linux.crt
key linux.key
dev tun
proto tcp
nobind
auth-nocache
script-security 2
persist-key
persist-tun
user nobody
group nogroup
verb 5
# redirect-gateway local def1
log log.txt

Here the redirect-gateway line is commented out, so internet traffic does not go via the VPN.

Certificates
The easy-rsa package and instructions in the OpenWRT guide above are excellent. You should have different certificates for different clients. One certificate can only be used for one connection at a time.
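
For reference, with easy-rsa (version 2, which is what OpenWRT 14.07 ships) the certificate generation is roughly the following (a sketch – the guide above has the authoritative steps):

# Run once to create the CA and the server material
$ build-ca
$ build-dh                    # generates dh2048.pem, takes a while on a router
$ build-key-server my-server
# Then one key/certificate per client
$ build-key iphone
$ build-key linux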

Better configuration?
I don’t claim this is the optimal or best way to configure OpenVPN – but it works for me. You may prefer UDP over TCP, and my reasons for running TCP are perhaps not valid for you. You may want different encryption or data compression options, different logging options, and so on.

Nodejs v0.12.0 on (unsupported) PowerPC G4

Nodejs cannot be built for a G4 processor (PowerPC 7455, as found in pre-Intel Apple hardware) because of a few missing CPU instructions. IBM has made a Power/PowerPC port of V8 (the JavaScript engine of Nodejs), but it does not work with the G4.

However, there is a quite simple workaround that can probably work for other unsupported platforms (PowerPC G3) as well, although it failed for ARMv5.

The solution is to emulate a supported (i386) CPU using Qemu. Qemu is capable of emulating an entire computer (qemu-system-i386) or just a single program/process (qemu-i386). The latter is what I do.

I am running Debian 7 on my G4 computer, which comes with an old version of Qemu – old enough to not support the system call ‘futex’ (system call 240). My suggestion is to simply use Debian backports to install a much more recent version of qemu.

# Add to /etc/apt/sources.list
deb http://http.debian.net/debian wheezy-backports main

# Then run
$ sudo apt-get update
$ sudo apt-get -t wheezy-backports install qemu-user

Now you can use the command qemu-i386 to run i386 binaries. Download the i386 binary Linux version of nodejs and extract it somewhere. I extracted mine in /opt and made a symlink to /opt/node for convenience.
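
That step is roughly the following (the URL is the historical v0.12.0 linux-x86 tarball and may have moved since):

$ cd /opt
$ sudo wget http://nodejs.org/dist/v0.12.0/node-v0.12.0-linux-x86.tar.gz
$ sudo tar -xzf node-v0.12.0-linux-x86.tar.gz
$ sudo ln -s node-v0.12.0-linux-x86 node

Now: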

zo0ok@sleipnir:~$ qemu-i386 /opt/node/bin/node 
/lib/ld-linux.so.2: No such file or directory

Unless you want to build your own statically linked nodejs binary, you need to get a few libraries from an i386 linux machine. I put these in /opt/node/bin/lib:

zo0ok@sleipnir:/opt/node/bin/lib$ ls -l
total 3320
-rw-r--r-- 1 zo0ok zo0ok  134380 mar  3 21:02 ld-linux.so.2
-rw-r--r-- 1 zo0ok zo0ok 1754876 mar  3 21:13 libc.so.6
-rw-r--r-- 1 zo0ok zo0ok   13856 mar  3 21:06 libdl.so.2
-rw-r--r-- 1 zo0ok zo0ok  113588 mar  3 21:12 libgcc_s.so.1
-rw-r--r-- 1 zo0ok zo0ok  280108 mar  3 21:11 libm.so.6
-rw-r--r-- 1 zo0ok zo0ok  134614 mar  3 21:12 libpthread.so.0
-rw-r--r-- 1 zo0ok zo0ok   30696 mar  3 21:05 librt.so.1
-rw-r--r-- 1 zo0ok zo0ok  922096 mar  3 21:08 libstdc++.so.6

For your convenience, I packed them for you:
https://dl.dropboxusercontent.com/u/9061436/code/linux-i386-lib.tgz
These are from Xubuntu 14.04.1 i386. The original symlinks are eliminated and the files come from different lib-folders. I packed exactly what you need to run the precompiled node-v0.12.0 binary.

Now you should be able to actually run nodejs:

zo0ok@sleipnir:~$ qemu-i386 -L /opt/node/bin/ /opt/node/bin/node --version
v0.12.0

To make it 100% convenient I created /usr/local/bin/nodejs:

zo0ok@sleipnir:~$ cat /usr/local/bin/nodejs 
#!/bin/sh
qemu-i386 -L /opt/node/bin /opt/node/bin/node "$@"

Don’t forget to make it executable (chmod +x).

Performance is not amazing, but good enough for my purposes. It takes a few seconds to start nodejs, but when running it seems quite fast. I may post benchmarks in the future.

Nodejs v0.12.0 on Debian ARMv5/QNAP

I have written before about building NodeJS for ARMv5 (a QNAP TS-109 running Debian). Since nodejs 0.12.0 just came out, of course I wanted to build this version – but that did not go so well.

Just standard ./configure and make gave me this error after a while.

In file included from ../deps/v8/src/base/atomicops.h:146:0,
                 from ../deps/v8/src/base/once.h:55,
                 from ../deps/v8/src/base/lazy-instance.h:72,
                 from ../deps/v8/src/base/platform/mutex.h:8,
                 from ../deps/v8/src/base/platform/platform.h:29,
                 from ../deps/v8/src/assert-scope.h:9,
                 from ../deps/v8/src/v8.h:33,
                 from ../deps/v8/src/accessors.cc:5:
../deps/v8/src/base/atomicops_internals_arm_gcc.h:258:4: error: #error "Your CPU's ARM architecture is not supported yet"

This was quite expected though, since earlier versions (v0.10.25 was the last one I built) did not build that easily either. So I forced the armv5t architecture and tried again:

export CFLAGS='-march=armv5t'
export CXXFLAGS='-march=armv5t'
make
...
Segmentation fault
make[1]: *** [/home/kvaser/nodejs/node-v0.12.0/out/Release/obj.target/v8_snapshot/geni/snapshot.cc] Error 139
make[1]: Leaving directory `/home/kvaser/ndejs/node-v0.12.0/out'
make: *** [node] Error 2

It took almost 7 hours to get here. I stopped compiling and started reading instead.
It seems:

  • V8 is not supported on ARMv5 anymore (last supported version was 3.17.6 I think)
  • Building V8 as a shared library is not very easy
  • Even if I manage to build 3.17.6 as a shared library, there is no guarantee
    it would work with nodejs v0.12.0
  • Just replacing the v8 directory of v0.12.0 with an older version of v8 and hope everything just builds and runs perfectly seems… unlikely (but I have not tried and failed, yet)
  • The Raspberry Pi, with its ARMv6 CPU, is supposed to work with v0.12.0, but a little hack is required at this time (RPi 2 with ARMv7 seems safe)

The good thing is that nodejs (v0.10.29) can be installed in Debian 7 (wheezy) using backports. This is a rather nice and consistent way to install software not already in Debian Stable.

It is, after all, not strange that V8 is not maintained for an architecture that has no FPU. JavaScript uses 64-bit floats for all numbers, including integers.

Qemu Failed too
I tried running nodejs in Qemu (which works for a PowerPC G4), but this failed:

kvaser@kvaser:/opt/node-v0.12.0-linux-x86/bin$ qemu-i386 -L . ./node 
./node: error while loading shared libraries: rt.so.1: ncanoot  penrshaoed cbjeit f: No such file or directory

This is the actual result – not a copy-paste mistake… so I believe something (byte order?) is seriously wrong.

Bad OS X performance due to bad blocks

An unfortunate iMac suffered from file system corruption a while ago. It was reinstalled and worked fine for a while, but performance degraded and after some weeks the system was unusable. Startup was slow, and once up, it spent most of its time spinning the colorful wheel.

I realised the problem was that the hard drive (a good old rotating disk) had bad blocks, but this was not obvious to discover or fix from within Mac OS X.

However, an Ubuntu live DVD (or USB I suppose) works perfectly with a Mac, and there the badblocks command proved useful. I did:

# badblocks -b 4096 -c 4096 -n -s /dev/sda

You probably want to make a backup of your system before doing this. Also, be aware that this command will take a long time (about 9h on my 500GB drive). The command tests both reading and writing to the hard drive. It restores the data, so for a working drive it should be non-destructive. I work with 16MB chunks (-b 4096 sets a 4096-byte block size and -c 4096 tests 4096 blocks at a time) because reading and writing the default 512 bytes at a time is slower.

On my first run, about 250 bad blocks were discovered.
On a second run, 0 bad blocks were discovered.

The theory here is that the hard drive should learn about its bad blocks and map around them. The computer is now reinstalled and it works just fine. I don’t know if it is a matter of days or weeks until the drive breaks completely, or if it will work fine for years. I will update this article in the future.

Finally, if you have a solid state drive (SSD)… I don’t know. I guess you can run this a lot on a rotating drive without issues, but I would expect it to shorten the life of an SSD (but if it has bad blocks causing you problems, what are your options?). For a USB drive or SD card… I doubt it is a good idea.

Conclusion
To be done…

Read OpenWRT reject log (with fwreject)

I configured the firewall on my OpenWRT router to reject outgoing traffic (LAN to WAN) by default, and then explicitly allow protocols and ports as needed. By configuring the firewall to log rejected packets I could identify what legitimate traffic was blocked, and open up the firewall accordingly. However, the default logging to the syslog is not particularly easy to read (neither from the command line nor in a web browser). Also, the log is mostly full of other log lines, the log lives a very short time (just a few minutes) so as not to waste memory on the router, and the log lines contain information that is not needed.
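
For reference, the logging itself is enabled per zone in /etc/config/firewall; a sketch of the relevant part (zone names and policies will differ on your router):

config zone
	option name 'lan'
	option input 'ACCEPT'
	option output 'ACCEPT'
	option forward 'REJECT'
	option log '1'                # log packets rejected/dropped in this zone
	option log_limit '10/minute'  # rate limit for the log entries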

I understand there are powerful products for gathering logs on central log servers and analyzing them there. I did not want that, but rather a simple web interface directly on the router.

I asked for a simple tool on the OpenWRT forum, with no result.

So, I wrote my own tool, fwreject, and published documentation and binaries on DropBox.

Xubuntu on Unsupported MacBook

Last week I wrote about installing Mac OS X Mavericks on my MacBook 2007 (MacBook2,1). That went fine… but… for a computer I mostly use in my lap in the living room, the lack of decent internet video performance (like YouTube) is disappointing (it was not good before I upgraded to 10.9 either).

So, I decided to install Xubuntu on it. First the conclusions:

  1. Xubuntu runs nicely on the MacBook2,1.
  2. Video works fine, much better than on Mac OS, and suspend/sleep, audio and WiFi also seem perfect. I have not tried the webcam.
  3. I ended up using Xubuntu 14.04.1, the 32-bit i386 edition.
  4. Booting and partitioning is not trivial.
  5. International Apple Keyboards are always a challenge in Linux.

Now to the details.

Xubuntu version
The 32-bit EFI and 64-bit CPU that cause problems for current versions of Mac OS are also an issue for Xubuntu. I downloaded and burnt DVD ISOs to try different versions. The 64-bit Xubuntu does not boot easily but the 32-bit versions are just fine. For a computer with 2.5GB RAM like mine, the practical disadvantages of running in 32-bit mode instead of 64-bit are insignificant.

A nice thing with Xubuntu is the live mode; you can start the DVD and test the full system before deciding to install. Of course performance when starting applications suffers. I first installed 14.10; the live system worked perfectly, but I had video problems (the screen was black after the system had completely started) after installation, and decided to try 14.04.1 instead, which worked just fine. Since 14.04 is a long-term-support release it might just be the better choice anyway.

There used to be x64-Mac images that fixed the 32-bit-EFI/64-bit-kernel problem, but they are not available anymore.

Finally, I think it is quite safe to assume that you will be fine with Ubuntu, Kubuntu or Lubuntu if you prefer them to Xubuntu.

Keyboard issues
I have a Swedish keyboard on my MacBook, and the AltGr key (just named Alt on the Mac) does not work out of the box. This causes problems typing, in particular, the following characters: @$|[]\{}~.

I found it best to just use the Generic 105-key PC keyboard and the standard Swedish layout. After that a little xmodmap hack is required.

Put the following in a file called .Xmodmap in your home directory:

keycode 64 = Mode_switch
keycode 11 = 2 quotedbl at at
keycode 13 = 4 dollar 4 dollar
keycode 16 = 7 slash bar backslash
keycode 17 = 8 parenleft bracketleft braceleft
keycode 18 = 9 parenright bracketright braceright
keycode 35 = dead_diaeresis dead_circumflex dead_tilde dead_caron

The first row maps the left Alt key of my keyboard to something called Mode_switch. The other rows define what happens when pressing the keys 2, 4, 7, 8, 9 and the diaeresis key.

The following information from “man xmodmap” was useful in finding the above solution:
Up to eight keysyms may be attached to a key, however the last four are not used in any major X server implementation. The first keysym is used when no modifier key is pressed in conjunction with this key, the second with Shift, the third when the Mode_switch key is used with this key and the fourth when both the Mode_switch and Shift keys are used.

The internet is full of sources telling you to use ISO_Level3_Shift. It did not work for me, and the above manpage told me exactly what I needed to know.

There are also sources giving other names than .Xmodmap (like .xmodmaprc or .xmodmap), which also do not work.

Before you are ready to write your .Xmodmap file, you can test the mappings one by one:

xmodmap -e "keycode 64 = Mode_switch"
xmodmap -e "keycode 11 = 2 quotedbl at at"
xmodmap -e "keycode 13 = 4 dollar 4 dollar"
xmodmap -e "keycode 16 = 7 slash bar backslash"
xmodmap -e "keycode 17 = 8 parenleft bracketleft braceleft"
xmodmap -e "keycode 18 = 9 parenright bracketright braceright"
xmodmap -e "keycode  35 = dead_diaeresis dead_circumflex dead_tilde dead_caron"

The command xev is very useful to find out what keycode corresponds to a physical key on your keyboard.

Partitioning – The hard way
From the beginning, before ever playing with Xubuntu on the computer, I had the following partitions:

1: EFI (small, hidden in Mac OS)
2: Mac OS 10.9 System
3: Mac OS 10.7 System
4: Apple boot (small, hidden in Mac OS)

When I first installed Xubuntu I deleted partition 3 and replaced it with three partitions:

3: biosboot (small, required by EFI)
5: Linux SWAP (4GB)
6: Linux /

That was ok. But when I later deleted those partitions from Mac OS X, because I thought that was safer, the Apple boot partition (#4) disappeared. If it was this thing then perhaps it is ok. Mac OS still boots.

I always choose manual partitioning, and to install the Linux Bootloader (GRUB) on the Linux root partition (/dev/sda6). I have no idea what happens if it is installed on another partition, and particularly not on /dev/sda itself.

rEFInd – The hard way
The recommended way to boot Xubuntu on a Mac is to use rEFInd. Apple’s EFI implementation is not supposed to be very good at booting other systems. So I installed rEFInd (0.8.4) using the install.sh script from Mac OS X. Very easy, and it worked right away. Problems started later.

My first installation of Xubuntu was 14.10, and as mentioned above it had video problems. So I reinstalled 14.04.1 instead of 14.10, with the same partitioning, and everything was fine. Except rEFInd now displayed TWO Linux systems as well as Mac OS to boot. This disturbed me enough to decide to delete all traces of Xubuntu and reinstall.

I ended up in the following situation:

  • I have not managed to get rid of the last Linux-icon in rEFInd.
  • I have ended up with a partly broken rEFInd, it displays the error message:
    Error: Invalid Parameter while scanning the EFI directory
  • rEFInd does not boot Xubuntu.
  • I cannot uninstall rEFInd as described on its site, by removing the directory EFI/refind, because it does not exist (there are just some rEFInd config files in the EFI directory).
  • I read that efibootmgr can be used from Linux to clear parts of NVRAM, but it is not supposed to have much effect on a Mac anyway. And I failed to use efibootmgr on Live-Xubuntu.

The rEFInd errors actually disappeared by themselves after I had used (started) Mac OS a few times.

Partitioning and rEFInd – the Easy way
I think you will be safe if you do:

  1. Make empty space on the disk, after the Mac OS partitions.
  2. Install rEFInd from Mac OS
  3. Install Xubuntu 14.04.1 i386 (32-bit), let Xubuntu install side by side and take care of partitioning and boot devices

This finally worked for me. My partition table is now:

Number  Start   End    Size    File system     Name                  Flags
 1      20,5kB  210MB  210MB   fat32           EFI system partition  boot
 2      210MB   120GB  120GB   hfs+            Customer
 3      120GB   120GB  1049kB                                        bios_grub
 4      120GB   317GB  198GB   ext4
 5      317GB   320GB  2651MB  linux-swap(v1)

Conclusion
Xubuntu on a MacBook mid 2007 (MacBook2,1) rocks. Better than Mavericks. But dual booting and rEFInd are not completely predictable. The good thing is that it is at least not very easy to end up with a completely unbootable computer.