Author Archives: zo0ok

Xubuntu on Unsupported MacBook

Last week I wrote about installing Mac OS X Mavericks on my MacBook 2007 (MacBook2,1). That went fine… but… for a computer I mostly use in my lap, in the living room, the lack of decent internet video performance (YouTube, for example) is disappointing (it was not good before I upgraded to 10.9 either).

So, I decided to install Xubuntu on it. First the conclusions:

  1. Xubuntu runs nicely on the MacBook2,1.
  2. Video works fine, much better than on Mac OS, and suspend/sleep, audio and WiFi all seem to work perfectly. I have not tried the webcam.
  3. I ended up using Xubuntu 14.04.1, the 32-bit i386 edition.
  4. Booting and partitioning is not trivial.
  5. International Apple Keyboards are always a challenge in Linux.

Now to the details.

Xubuntu version
The 32-bit EFI and 64-bit CPU that cause problems for current versions of Mac OS are also an issue for Xubuntu. I downloaded and burnt DVD ISOs to try different versions. The 64-bit Xubuntu does not boot easily, but the 32-bit versions are just fine. For a computer with 2.5GB RAM like mine, the practical disadvantages of running in 32-bit mode instead of 64-bit are insignificant.

A nice thing about Xubuntu is the live mode; you can start the DVD and test the full system before deciding to install. Of course performance when starting applications suffers. I first installed 14.10; the live system worked perfectly, but I had video problems after installation (the screen was black after the system had started completely), so I decided to try 14.04.1 instead, which worked just fine. Since 14.04 is a long-term support release it might just be the better choice anyway.

There used to be x64-Mac images that worked around the 32-bit-EFI/64-bit-kernel problem, but they are not available anymore.

Finally, I think it is quite safe to assume that you will be fine with Ubuntu, Kubuntu or Lubuntu if you prefer them to Xubuntu.

Keyboard issues
I have a Swedish keyboard on my MacBook, and the AltGr key (just named Alt on the Mac) does not work out of the box. This makes it hard to type, in particular, the following characters: @|[]\{}.

I found it best to just use the Generic 105-key PC keyboard and the standard Swedish layout. After that, a little xmodmap hack is required.

Put the following in a file called .Xmodmap in your home directory:

keycode 64 = Mode_switch
keycode 11 = 2 quotedbl at at
keycode 16 = 7 slash bar backslash
keycode 17 = 8 parenleft bracketleft braceleft
keycode 18 = 9 parenright bracketright braceright

The first row maps the left Alt key of my keyboard to something called Mode_switch. The other rows define what happens when pressing the keys 2, 7, 8 and 9.

The following information from “man xmodmap” was useful in finding the above solution:
Up to eight keysyms may be attached to a key, however the last four are not used in any major X server implementation. The first keysym is used when no modifier key is pressed in conjunction with this key, the second with Shift, the third when the Mode_switch key is used with this key and the fourth when both the Mode_switch and Shift keys are used.

The internet is full of sources telling you to use ISO_Level3_Shift. It did not work for me, and the above manpage told me exactly what I needed to know.

There are also sources suggesting other names than .Xmodmap (like .xmodmaprc or .xmodmap); those do not work either.

Before writing your .Xmodmap file you can test the mappings one at a time:

xmodmap -e "keycode 64 = Mode_switch"
xmodmap -e "keycode 11 = 2 quotedbl at at"
xmodmap -e "keycode 16 = 7 slash bar backslash"
xmodmap -e "keycode 17 = 8 parenleft bracketleft braceleft"
xmodmap -e "keycode 18 = 9 parenright bracketright braceright"

The command xev is very useful to find out what keycode corresponds to a physical key on your keyboard.
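Note that, depending on your desktop environment, ~/.Xmodmap may not be loaded automatically at login. A sketch of one way to load it under Xfce, using its autostart mechanism (the file name and Exec line below are my own suggestions, not something Xubuntu requires):

```shell
# Create an autostart entry that loads ~/.Xmodmap at login:
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/xmodmap.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Load Xmodmap
Exec=sh -c 'xmodmap "$HOME/.Xmodmap"'
EOF
# For the current session, just run: xmodmap ~/.Xmodmap
```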

Partitioning – The hard way
From the beginning, before ever playing with Xubuntu on the computer, I had the following partitions:

1: EFI (small, hidden in Mac OS)
2: Mac OS 10.9 System
3: Mac OS 10.7 System
4: Apple boot (small, hidden in Mac OS)

When I first installed Xubuntu I deleted partition 3 and replaced it with three partitions:

3: biosboot (small, required by EFI)
5: Linux SWAP (4GB)
6: Linux /

That was ok. But when I later deleted those partitions from Mac OS X, because I thought that was safer, the Apple boot partition (#4) disappeared too. If it was this thing, then perhaps that is ok. Mac OS still boots.

I always choose manual partitioning, and to install the Linux Bootloader (GRUB) on the Linux root partition (/dev/sda6). I have no idea what happens if it is installed on another partition, and particularly not on /dev/sda itself.

rEFInd – The hard way
The recommended way to boot Xubuntu on a Mac is to use rEFInd. Apple's EFI implementation is reportedly not very good at booting other systems. So I installed rEFInd (0.8.4) using the install.sh script from Mac OS X. Very easy, and it worked right away. Problems started later.

My first installation of Xubuntu was 14.10, and as mentioned above it had video problems. So I reinstalled with 14.04.1 instead, same partitioning, and everything was fine. Except that rEFInd now displayed TWO Linux systems as well as Mac OS to boot. This disturbed me enough that I decided to delete all traces of Xubuntu and reinstall.

I ended up in the following situation:

  • I have not managed to get rid of the last Linux-icon in rEFInd.
  • I have ended up with a partly broken rEFInd, it displays the error message:
    Error: Invalid Parameter while scanning the EFI directory
  • rEFInd does not boot Xubuntu.
  • I cannot uninstall rEFInd as described on its site, by removing the directory EFI/refind, because it does not exist (there are just some rEFInd config files in the EFI directory).
  • I read that efibootmgr can be used from Linux to clear parts of NVRAM, but it is not supposed to have much effect on a Mac anyway. And I failed to use efibootmgr on Live-Xubuntu.

The rEFInd errors actually disappeared by themselves after I had used (started) Mac OS a few times.

Partitioning and rEFInd – the Easy way
I think you will be safe if you do:

  1. Make empty space on the disk, after the Mac OS partitions.
  2. Install rEFInd from Mac OS.
  3. Install Xubuntu 14.04.1 i386 (32-bit); let Xubuntu install side by side and take care of partitioning and boot devices.

This finally worked for me. My partition table is now:

Number  Start   End    Size    File system     Name                  Flags
 1      20,5kB  210MB  210MB   fat32           EFI system partition  boot
 2      210MB   120GB  120GB   hfs+            Customer
 3      120GB   120GB  1049kB                                        bios_grub
 4      120GB   317GB  198GB   ext4
 5      317GB   320GB  2651MB  linux-swap(v1)

Conclusion
Xubuntu on a MacBook mid 2007 (MacBook2,1) rocks. Better than Mavericks. But dual booting and rEFInd are not completely predictable. The good thing is that it is at least not very easy to end up with a completely unbootable computer.

Install Mac OS X 10.9 on unsupported MacBook

I have a MacBook Mid 2007 (more technically named MacBook2,1) that officially cannot be upgraded beyond Mac OS X 10.7 (Lion). It is however possible to install Mac OS X 10.9 (Mavericks) on it with quite good success and not too much effort.

System information with Mavericks

First, here is what does not work:

  1. Sleep mode – not working at all – leave the computer on or shut it down
  2. The built-in web camera – “works”, but not as well as it did in 10.7, I think
  3. YouTube video (etc.) – works occasionally (worse than in 10.7, in my experience)

What you need:

  1. A USB memory stick, 8GB or larger
  2. Mac OS X Mavericks (I had the install/upgrade application that I had
    downloaded myself on another Mac, from the App Store, when I upgraded it
    from 10.8 to 10.9. I always keep these for possible future use.)
  3. SFOTT: I used version 1.4.4, which is currently the latest stable version
  4. Audio/Video drivers from here. Warning: this is one of
    those horrible download pages where you don’t know where to click to get
    the right thing, and what gives you spyware. You should get the file
    mac-mini-mavericks.7z. Discard anything else without opening it.
    The 7z file can be opened with StuffIt Expander, which already comes with
    Mavericks.

Making a bootable USB-drive
You first need to use SFOTT to create your bootable USB drive (called a “key” in SFOTT). You simply double-click SFOTT on a Mac where you have both your Mavericks install app and your USB drive. SFOTT is a self-guiding, menu-driven application. It took me perhaps 15 minutes to make all the settings, but it was self-explanatory and not very difficult. Use the autorun mode to create the drive.

Recovery Scenario
When you install a Mac OS upgrade there is a risk your Mavericks system will not boot. When upgrading from 10.9.0 to 10.9.5 like I did, it will not boot. My impression (after reading different sources) is that this recovery is needed when upgrading from 10.9.0 (or 10.9.1/10.9.2) but not later. Nobody knows about 10.9.6 yet, of course, because it is not out. Minor application upgrades or security updates should not require recovery.

When Mavericks fails to start you need to “re-Patch” using SFOTT. I installed Mavericks on a separate partition, side-by-side with Lion, so when Mavericks failed to start my computer automatically started Lion instead and I could run SFOTT in Lion to re-Patch my Mavericks system.

If you cannot do side-by-side you can start from your SFOTT key (which you still have) and, instead of installing Mavericks, start the Terminal application. Find SFOTT.app on the key, and find SFOTT.sh inside SFOTT.app. Run SFOTT.sh and you can re-Patch your broken Mavericks system. I did the entire procedure on my working Mavericks just to test it, and it seems fine.

There is of course no true guarantee that a future Apple upgrade will not break everything completely.

Installing Mavericks
Installation of Mavericks from the USB-drive is very standard. To start the computer from the USB-drive, hold down the “alt”-key (not Apple-key, not ctrl-key) while starting the computer. Choose SFOTT and proceed normally. After about an hour you should have a clean 10.9.0 Mavericks with network/wifi working. Video will work, but with problems (try Safari, and you will see), and Audio will not work.

Upgrade Mavericks
I used the App Store to upgrade Mavericks to 10.9.5. That works just fine, until Mavericks fails to start (I ended up in my old Lion system on reboot; if you have no other system installed your computer will probably just not start). This is where you need to recover your system using SFOTT.

Fixing Audio and Video
The 7z file I referred to above contains audio and video drivers. You run the application “Kext Utility”, then you drag the contents of the Extensions folder into Kext Utility, and it will install the drivers. There is a folder with “optional wifi drivers”; I have not installed those because wifi has been fine all the time for me.

The MacBook2,1 has Intel GMA950 video, and there are no supported 64-bit drivers for Mavericks. The drivers I suggest you install are supposedly from a public beta of 10.6 (Snow Leopard) that Apple once released. They seem to work quite well for me, though. And not installing them is worse.

I suggest you upgrade to 10.9.5 before fixing Audio and Video. I guess a later Apple-upgrade could break Audio and Video and require you to reinstall drivers.

Problems booting the SFOTT key
I first created the SFOTT key using the SFOTT beta (which is also supposed to work with Yosemite), and I used System Preferences/Startup Disk (in Lion) to start the installation. This failed and my computer just started up in Lion.

I then created the SFOTT key using 1.4.4, and I restarted the computer holding down the alt key. This worked. The key also later worked when I used System Preferences/Startup Disk (in Mavericks) to choose the startup drive.

Driver Problems
There are open source audio drivers called VoodooHDA. I installed them successfully, but the audio volume was low. I tried to fix it with no success. Later I found the drivers I referred to above, and those are the ones I recommend.

I found another download for what was supposed to be the same Video Drivers. But the Kext-utility did not work, and I installed the drivers by copying them directly into /System/Library/Extensions and this gave me a broken unbootable system. I don’t know what went wrong, but I recommend the drivers I linked to.

Video/YouTube Performance
Some videos seem to play perfectly, others don’t. I had problems with 10.7 too.

Background and about SFOTT
There are several Apple computers that can run 10.7, that have a 64-bit processor, but that can not officially run 10.8 or later. There are a few issues:

  1. Video Drivers – and in the case of my MacBook2,1 the unofficial ones mentioned
    above may be good enough
  2. 32 bit EFI. Even though the computer has a 64 bit processor, the EFI, the
    software that runs before the Installer/Operating system, is 32 bit, and not
    capable of starting a 64-bit system.
  3. Mavericks does not believe it can run on this hardware.

As I understand it, SFOTT installs a little program that the 32-bit EFI is capable of starting, and that in turn is capable of starting a 64-bit system. Also, SFOTT patches a few files so Mavericks feels comfortable running on the unsupported hardware.

You can do all of this on your own without SFOTT. SFOTT “just” makes this reasonably easy.

There are plenty of forums, tools and information about running Mac OS X on unsupported hardware (also non-Apple-hardware: a Hackintosh). Those forums of course focus a lot on problems people have.

Yosemite
It is supposed to be possible to install Yosemite in a similar way. SFOTT has a beta release for Yosemite. For my purposes going to Mavericks gave me virtually all advantages of an upgrade (supported version of OS X, able to install latest Xcode, etc).

Conclusion
In the beginning of 2015, it is not that hard to install Mavericks on a MacBook Mid 2007, with a quite good result. I have pointed out the tools and downloads you need and that will work.

Scenarios for GNoSR

I found a beautiful little route for Train Simulator on the Workshop: GNoSR. Unfortunately, since the route is not “Final” it is not possible to upload scenarios for it to the Workshop.

I created a scenario for GNoSR, and perhaps there will be more in the future. The scenario is downloadable as an .rwp file, which is installed with the utilities.exe program in the railworks folder. As always, please report any problems with the scenario, otherwise I cannot fix it.

Scenario 1: Mixed Train to Heith
Drive a mixed train to Heith, stopping at all stations and picking up freight wagons along the way. Duration: 60 minutes. Download: Mixed Train To Heith.

Scenario 2: Petroleum Freight
Drive a heavy freight train with Marine Fuel from Heith to Portbyvie. Duration: 70 minutes. Download: Petroleum Freight.

Dependencies
There should be no additional dependencies or requirements apart from those of GNoSR (Woodhead Line, Western Lines of Scotland and Falmouth Branch). Please let me know if you have problems with this.

Other versions
I consider making other versions of the same scenario, perhaps with the Robinson O4, the Standard 2MT or the 3MT Jinty. But I may not bother if I get no interest whatsoever in the original.

UKTS
It seems the route and scenarios are available on UKTS. I personally find UKTS to be too much work and too many dependencies. My scenario is for the Steam version of the route, and I want people who just use Steam to have some fun with GNoSR.

Scenarios for other routes
Granfield Branch

Very simple REST JSON node.js server

I want to build a modern web application (perhaps using AngularJS) or some mobile application, and I need a working server side to get started. There are of course plenty of options: .NET WebApi, LAMP, MongoDB, NodeJS + Express, and many more. But I want it stupid simple. This is tested on Linux, but everything should apply on Windows too.

I wrote a very simple REST/JSON server for node.js, and this is about it (source code in the end).

How to run it
Presuming you have nodejs installed:

$ node simple-rest-server.js

It now listens to port 11337 on 127.0.0.1 (that is hard coded in the code).

Configure with Apache
The problem with port 11337 is that if you build a web application you will get cross-site problems if the service runs on a different port than the HTML files. If you are running Apache, you can:

# a2enmod proxy
# a2enmod proxy_http

Add to /etc/apache2/sites-enabled/{your default site, or other site}
ProxyPass /nodejs http://localhost:11337
ProxyPassReverse /nodejs http://localhost:11337

# service apache2 restart

You can do this with nginx too, and probably also with IIS.
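For nginx, an equivalent proxy configuration (an untested sketch, with the location name chosen to match the Apache example above) would go inside the server block:

```
location /nodejs/ {
    # strip /nodejs/ and forward /99 etc. to the node server
    proxy_pass http://127.0.0.1:11337/;
}
```

With the trailing slash on proxy_pass, a request for /nodejs/99 reaches the node server as /99, just like with ProxyPass above. Reload nginx afterwards (service nginx reload).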

Use from command line
Assuming you have a json data file (data99.json) you can write to (POST), read from (GET) and delete from (DELETE) the server:

$ curl --data @data99.json http://localhost/nodejs/99
$ curl http://localhost/nodejs/99
$ curl -X DELETE http://localhost/nodejs/99

If you did not configure Apache as a proxy as suggested above, you need to use :port instead of /nodejs. In this case 99 is the document id (a positive number). You can add any number of documents with whatever ids you like (as long as they are positive numbers, and as long as the server does not run out of memory). There is no list function in this very simple server (although it would be very easy to add).

Using from AngularJS
The command line is not so much fun, but AngularJS is. If you inject $http into your controller, the following works:

function myController($scope, $http) {

  // write an object named x with id
  h = $http.post('http://localhost/nodejs/' + id, x)
  h.error(function(r) {
    // your error handling (may use r.error to get error message)
  })
  h.success(function(r) {
    // your success handling
  })

  // read object with id to variable x
  h = $http.get('http://localhost/nodejs/' + id)
  h.error(function(r) {
    // your error handling
  })
  h.success(function(r) {
    x = r.data
  })

  // delete object with id 
  h = $http['delete']('http://localhost/nodejs/' + id)
  h.error(function(r) {
    // your error handling
  })
  h.success(function(r) {
    // your success handling
  })
}

I found that Internet Explorer can have problems with $http.delete, thus $http['delete'] (very pretty).

What the server also does
The server handles GET, POST and DELETE. It validates and error handles its input (correctly, I think). It stores the data to a file, so you can stop/start the server without losing information.

What the server does not do
In case you want to go from prototyping to production, or you want more features, it is rather simple to:

  1. add function to list objects
  2. add different types of objects
  3. let the server also serve files such as .html and .js files
  4. use MongoDB as backend
  5. add security and authentication

The code
The entire code follows (feel free to modify and use for your own purpose):

/*
 * A very simple JSON/REST server
 *
 * http://host:port/{id}       id is a positive number
 *
 * POST   - create/overwrite   $ curl --data @file.json http...
 * GET    - load               $ curl http...
 * DELETE - delete             $ curl -X DELETE http...
 *
 */
glHost    = { ip:'127.0.0.1', port:'11337' }
glHttp    = require('http')
glUrl     = require('url')
glFs      = require('fs')
glServer  = null
glStorage = {}    // empty until db.json has been read (avoids a crash on early requests)

/* Standard request handler - read all posted data before proceeding */
function requestHandler(req, res) {
  var pd = ""
  req.on("data", function(chunk) {
    pd += chunk
  })
  req.on("end", function() {
    requestHandlerWithData(req, res, pd)
  })
}

/* Custom request handler - posted data in a string */
function requestHandlerWithData(req, res, postdata) {
  var in_url  = glUrl.parse(req.url, true)
  var id      = in_url["pathname"].substring(1) //substring removes leading /
  var retcode = 200
  var retdata = null
  var error   = null

  if ( ! /^[1-9][0-9]*$/.test(id) ) {
    error   = "Invalid id=" + id
    retcode = 400
  }

  if ( ! error ) switch ( req.method ) {
  case "GET":
    if ( ! glStorage[id] ) {
      error = "No object stored with id=" + id
      retcode = 404
    } else {
      retdata = glStorage[id]
    }
    break; 
  case "POST":
    try {
      glStorage[id] = JSON.parse(postdata)
      writeStorage()
    } catch(e) {
      error = "Posted data was not valid JSON"
      retcode = 400
    }
    break;
  case "DELETE":
    delete glStorage[id]
    writeStorage()
    break;
  default:
    error   = "Invalid request method=" + req.method
    retcode = 400
    break;
  }

  // writeHead must only be called once per response, so set all headers in one call
  res.writeHead(retcode, {"Server": "nodejs",
                          "Content-Type": "text/javascript;charset=utf-8"})
  res.write(JSON.stringify( { error:error, data:retdata } ))
  res.end()

  console.log("" + req.method + " id=" + id + ", " + retcode +
    ( error ? ( " Error=" + error ) : " Success" ) )
}

function writeStorage() {
  glFs.writeFile("./db.json",JSON.stringify(glStorage),function(err) {
    if (err) {
      console.log("Failed to write to db.json" + err)
    } else {
      console.log("Data written to db.json")
    }
  })
}

glFs.readFile("db.json", function(err, data) {
  if (err) {
    console.log("Failed to read data from db.json, create new empty storage")
    glStorage = new Object()
  } else {
    glStorage = JSON.parse(data)
  }
})
glServer = glHttp.createServer(requestHandler)
glServer.listen(glHost.port, glHost.ip)
console.log("Listening to http://" + glHost.ip + ":" + glHost.port + "/{id}")

Installing Citrix Receiver 13.1 in Ubuntu/Debian

The best thing about Citrix Receiver for Linux is that it exists. Apart from that it kind of sucks. Over the last few days I have tried to install it on Xubuntu 14.10 and Debian 7.7, both 64-bit versions.

The good thing is that for both Debian and Ubuntu the 64-bit deb file is actually installable using “dpkg -i”, if you fix all the dependencies. I did:

1) #dpkg --add-architecture i386
2) #apt-get update
3) #dpkg -i icaclient_13.1.0.285639_amd64.deb
  ... list of failed dependencies...
4) #dpkg -r icaclient
5) #apt-get install [all packages from (3)]
6) #dpkg -i icaclient_13.1.0.285639_amd64.deb

Steps (1) and (2) are only needed on Debian.

selfservice is hard to start from the start menu, and it segfaults when OpenVPN is on (WTF?). So for now, I have given up on it.

npica.so is supposed to make the browser plugin work, but I had not much luck there (I guess it is because I have a 64-bit browser). I deleted the system-wide symbolic links to npica.so (do: find | grep npica.so in the root directory):

#rm /usr/lib/mozilla/plugins/npica.so
#rm /usr/local/lib/netscape/plugins/npica.so

Then I could tell the Citrix portal that I do have the Receiver even though the browser does not recognize it, and when I launch an application I choose to run it with wfica.sh (the good old way).

Keyboard settings can no longer be made in the GUI; you have to edit your ~/.ICAClient/wfclient.ini file. The following makes my Swedish keyboard work:

KeyboardLayout = SWEDISH
KeyboardMappingFile = linux.kbd
KeyboardDescription = Automatic (User Profile)
KeyboardType=(Default)

The problem is, when you fix the file, you need to restart all Citrix-related processes for the new settings to apply. If you feel you got the settings right but no success, just restart your computer. I wasted too much time thinking I had killed all processes, and thinking my wfclient.ini-file was bad, when a simple restart fixed it.

Debian on NUC and boot problems

I got a NUC (D54250WYKH) that I installed Debian 7.7 on.

Advice: First update the NUC “BIOS”.

  1. Download from Intel
  2. Put on USB memory
  3. Put USB memory in NUC
  4. Start NUC, Press F7 to upgrade BIOS

If I had done this first I would have saved some time and some reading about EFI stuff I don’t want to know anyway. A few more conclusions follow.

EFI requires a special little EFI-partition. Debian will set it up automatically for you, unless you are an expert and choose manual partitioning, of course ;) That would also have saved me some time.

(X)Ubuntu 14.10 had no problems even without upgrading BIOS.

The NUC is very nice! In case it is not clear: there is space for both an mSATA drive and a 2.5″ drive in my model. In fact, I think there is also space for an extra small mSATA drive. Unless you are building a gaming computer, I believe a NUC (or similar) is the way to go.

Finally, Debian 7.7 comes with the Linux 3.2 kernel, which has old audio drivers that produce bad audio quality. I learnt about Debian backports and currently run Linux 3.16 with Debian 7.7, and I have perfect audio now.
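For reference, enabling backports on Debian 7 amounts to one line in /etc/apt/sources.list (the mirror URL below is the one commonly used at the time; adjust to taste):

```
deb http://http.debian.net/debian wheezy-backports main
```

After apt-get update, a newer kernel can then be pulled in with apt-get -t wheezy-backports install linux-image-amd64 (the exact image package name depends on your architecture).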

Charging Sony Xperia Z1 Compact

The Sony Xperia Z1 Compact can be charged two ways: via Micro USB or via the special magnetic charging connector.

The phone comes with a normal USB cable that can also be used for synchronization, file transfers and such. But since the phone is waterproof the Micro USB connector is hidden behind a little door, and opening and closing it every time you charge the phone does not really feel optimal.

The official way to charge via the magnetic connector is to buy the DK32 docking station. It is quite pricey, and quite “light” (the magnet is much stronger than the weight of the thing). Docking/undocking does not really feel like opening/closing a German car door, but otherwise it is nice to have the phone docked and charging. It is quite unclear whether the DK32 is compatible with any of the other, very similar, Sony Xperia docking stations.

Other options?

I ordered a USB cable with a magnetic connector (but no docking station) from Deal Extreme. Again, it is quite unclear which models the cable really works with (there are many similar cables, with different phones listed as compatible).

Well, the cable “works”. When attached, it charges the phone just perfectly. Attaching it requires a bit of a precision move, and when attached it is not very stable against rolling off to the front or the back of the phone. But now that I have learnt how to do it, I prefer it to the old USB cable. I am thinking about building/gluing some kind of docking station for it. Note: the +/- connectors are not interchangeable. If I connect it upside down (with the cable coming from above) the phone restarts, and it is perhaps not entirely healthy for it.

I have the original DK32 at work, so the phone is almost always fully charged when I leave work in the afternoon, and I don’t need to charge it until back at work next day.

Using float and double as integer

Traditionally computers work with integer types of different sizes. For scientific applications, media, gaming and other applications, floating point numbers are needed. In old computers floating point numbers were handled in software by special libraries, making them much slower than integers, but nowadays most CPUs have an FPU that can do fast float calculations.

Until recently I was under the impression that integers were still faster than floats, and that floats have precision/rounding issues, making the integer datatype the natural and only sane choice for representing mathematical integers. Then I came to learn two things:

  1. In JavaScript, all numbers are 64-bit floats (double), effectively allowing 52-bit integers when used correctly.
  2. OpenSSL uses the double datatype instead of int in some situations (big numbers) for performance reasons.

Both these applications exploit the fact that if the cost of 64-bit float operations is (thanks to the FPU) roughly equal to the cost of 32-bit integer operations, then a double can be a more powerful representation of big integers than an int. It is also important to understand that (double) floating point numbers only have precision problems when handling decimal fractions (e.g. 0.1) and very big numbers; they handle real-world integers just fine.
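This is easy to demonstrate. The sketch below uses python3 simply as a convenient calculator for doubles; with a 52-bit mantissa, integers are exact up to 2^53:

```shell
# 2^53 - 1 and 2^53 are both exactly representable as doubles:
python3 -c 'print(2.0**53 - 1 == 2.0**53)'   # prints False: still distinct
# ...but 2^53 + 1 rounds back down to 2^53, so integer precision ends here:
python3 -c 'print(2.0**53 + 1 == 2.0**53)'   # prints True: no longer distinct
```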

Apart from this, there could be other possible advantages of using float instead of int:

  • If the FPU can execute instructions somewhat in parallel with the ALU/CPU, using floats when possible could benefit performance.
  • If there are dedicated floating point registers, making use of them could free up integer registers.

Well, I decided to make a test. I have a real world application:

  • written in C
  • that does calculations on integers (mostly in the range 0-1000000)
  • that has automated tests, so I can modify the program and confirm that it still works
  • that has built in performance/time measurement

Since I had used int to represent a real-world measurement (length in mm), I decided nothing is really lost if I use float or double instead of int. The values were small enough that a 32-bit float would probably be sufficiently precise (otherwise my automated tests would complain). While the program is rather computation heavy, it is not extremely calculation-intense, and the only mathematical operations I use are +, -, >, =, <. That is, even if float math were "free" the program would still be heavy, just faster.

In all cases gcc is used with -O2 -ffast-math. The int column shows speed relative to the first line (the Celeron 630MHz is my reference/baseline). The float/double columns show speed relative to the int speed of the same machine. Higher is better.

Machine                               int   float        double       Comment
Eee701 Celeron 630MHz / Lubuntu       1.0   0.93         0.93
AMD Athlon II 3GHz / Xubuntu          5.93  1.02         0.97
PowerBook G4 PPC 867MHz / Debian      1.0   0.94         0.93
Linksys WDR4900 PPC 800MHz / OpenWRT  1.12  0.96 (0.87)  0.41 (0.89)  Values in parentheses using -mcpu=8548
Raspberry Pi ARMv6 700MHz / Raspbian  0.52  0.94         0.93
QNAP TS-109 ARMv5 500MHz / Debian     0.27  0.61         0.52
WRT54GL Mips 200MHz / OpenWRT         0.17  0.20         0.17

A few notes on this:

I have put together quite a few measurements and runs, to eliminate outliers and variance, to produce the figures above.

There was something strange about the results from the PowerBook G4; the performance is not what should be expected. I don’t know if my machine underperforms, or if there is something wrong with the time measurements. Nevertheless, I believe the int vs float comparison is still valid.

The Athlon is much faster than the other machines, giving shorter execution times, and the variance between runs was bigger than for the other machines. The 1.02/0.97 could very well be within the error margin of 1.0.

The QNAP TS-109 ARM CPU does not have an FPU, which explains the lower performance for float/double. Other machines displayed similar float/double performance with “-msoft-float”.

The Linksys WDR4900 has an FPU that is capable of both single/double float precision. But with OpenWRT BB RC3 toolchain, gcc defaults to -mcpu=8540, which falls back to software float for doubles. With -mcpu=8548 the FPU is used also for doubles, but for some reason this lowers the single float performance.

Not tested
The situation could possibly change when the division operator is used, but division should be avoided anyway when it comes to optimization.

All tests were done on Linux and with GCC; it would surprise me much if the results were very different on other platforms.

More tests could be made on more modern hardware, but the precision advantage of double over int is lost on 64-bit machines with native 64-bit long int support.

Conclusion
As a rule of thumb, integers are faster than floats, and replacing integers with floats does not improve performance. Use the datatype that describes your data the best!

Exploiting the 52-bit integer capacity of a double should be considered advanced and platform dependent optimization, and not a good idea in the general case.

Upgrade OpenWRT and reinstalling packages

I just upgraded my OpenWRT router from Barrier Breaker RC2 to RC3. The upgrade guide is excellent, but it only mentions: “You do need to reinstall opkg-packages”… well, it sounds like there should be a smart way to do that.

Before upgrade:

# opkg list-installed > /etc/config/packages.installed

Two things to note: 1) this file will take a few kb of flash, which you must have available, and 2) since the file is in /etc/config it will automatically be restored after sysupgrade.

Now the sysupgrade itself (see the upgrade guide):

# sysupgrade -v /tmp/someimage-sysupgrade.bin

The system will restart, and you should be able to ssh into it, just as before the upgrade. Now reinstalling packages:

# opkg update
# opkg install $( cut -f 1 -d ' ' < /etc/config/packages.installed )
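The cut in the install command just keeps the package name (the first space-separated field) from each line of the saved list. With a fabricated two-line list it works like this:

```shell
# Fabricated example list in opkg list-installed format
printf 'openvpn - 2.3.2-2\nstunnel - 5.02-1\n' > /tmp/packages.installed
cut -f 1 -d ' ' < /tmp/packages.installed
# prints:
# openvpn
# stunnel
```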

You will want to deal with the new default config files: delete them, manually merge them into your old files, or delete your old files and use the new ones. The new files have the "extension" -opkg, and can be found with

# cd /
# find . -name '*-opkg'
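A sketch of my own (not part of the upgrade guide) for reviewing the differences before deleting the -opkg copies:

```shell
# Diff each new default config against the file in use;
# uncomment the rm once you have merged what you want to keep.
cd /
find . -name '*-opkg' | while read -r f; do
    diff -u "${f%-opkg}" "$f"
    # rm "$f"
done
```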

That should be it.

Buying a router for OpenWRT

Update 2015-01-10: about AC routers.

For a while I had been thinking about buying a new wireless router for my home network, and I had already decided I wanted to run OpenWRT on it. I spent (wasted) quite some time reading the OpenWRT list of supported hardware and searching for available routers. With this post, I hope to help you focus on the essentials, to make a good decision quicker.

I presume, if you buy a new router to run OpenWRT, that you want to run the current stable version of OpenWRT (soon Barrier Breaker 14.07), and that you will want to be able to upgrade in the future.

I think it is a good idea to first decide the need for Flash and RAM, and then work from a much shorter list of hardware.

Flash
Most routers available have the following amounts of Flash (storage for kernel, files, configuration).
4Mb: is just enough, barely, to run OpenWRT.
8Mb: is enough for OpenWRT. You will be able to install packages, and even if future versions should be slightly larger, you should be fine.
16Mb: is more than enough for OpenWRT, but if you want to install many packages or put applications on it, then 16Mb gives you much more flexibility than 8Mb.

If you want to store files (backups, a web site, images, whatever), do that on separate USB storage (just make sure the router has USB ports). Too little Flash means you cannot install packages, or you get errors when changing configurations. This is bad, but something you can handle in a controlled way.

RAM
Most routers available have the following amounts of RAM:
16Mb: is too little to run OpenWRT beyond version 10.03.1, except for special cases. Don’t buy!
32Mb: can run OpenWRT. But my new router is making use of more RAM than that (see below), running 14.07 RC2 and a few packages.
64Mb: should be enough for running several extra packages.
128Mb: is possibly going to be more than you need, but RAM never hurts, especially if you install extra packages or make heavy use of your router.

Too little RAM makes OpenWRT crash and restart, in my personal experience. Even if it would kill processes (instead of crashing) in some cases, it is going to be brutal and disruptive – not the kind of service you want. Adding swap on USB storage is perhaps possible, but if you really need it you should probably have gotten another router, or you are using the router for the wrong task.

Flash / RAM conclusion
Chances are you will want 8/64Mb or more when buying a router to run OpenWRT. That will disqualify perhaps 80-90% of all supported routers, making your list shorter and your choice easier.

I really like getting the most out of simple hardware. You may very well have a situation where an 8/32Mb (or even a 4/32) router will be just perfect for you (or your parents or some other friends you are helping out). But if adding packages is important to you, I would not settle for 32Mb RAM.

Chipset / CPU
In the supported hardware table, there are three columns: Targets, Platform, CPU-speed. This is most likely not very relevant information to you. The CPU-speed will be of much less importance than the Flash/RAM when it comes to what you can do with the router. Of course higher CPU-speeds are better, and if you want to compare performance, have a look at the OpenSSL performance page (perhaps the RSA sign/verify columns are most useful for deciding CPU performance, since the numbers are not so big, and since there is probably no hardware support for RSA).
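If you want RSA numbers for a machine you already have, for comparison with that page, openssl can produce them itself (the exact output format varies between OpenSSL versions):

```shell
# Benchmark 2048-bit RSA sign/verify, roughly 10 seconds per operation type
openssl speed rsa2048
```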

Network speed
Unless you have an Internet connection faster than 100Mbit/s, chances are your router will be much faster than your connection, even for a cheap router. Some routers have a 100Mbit switch, some have a Gbit switch – this may make a real world difference to you if you often copy big files between your computers (or if your Internet is faster than 100Mbit, of course).

When it comes to Wireless you will find that most routers support B/G/N (2.4GHz) and many also support A/N (5GHz). Of course dual-band is nicer, but chances are it will not make any real difference to you whatsoever.

AC Routers 2015-01-10
The situation is getting better. You have the TP-Link Archer C5/C7 and the Linksys WRT1900AC to choose from. For the TP-Link, note that version 1 of the C7 does not work (at least the AC part does not work). And the Linksys is still not supported by the official stable OpenWRT version, but there are several options and builds, and since the WiFi sources were finally published the WRT1900AC situation is quickly getting better. I have no personal experience with any AC router, and perhaps they are not the safest choice at this time.

Status
There is a Status column in the supported hardware table. You want it to say a stable version number: 7.09, 8.09, 10.03, 10.03.1, 12.09 or 14.07. Note that old 4/16Mb routers were once supported, but are no longer supported with 12.09 and 14.07, so if it says 0.9 you should probably be careful. If it says “trunk” or “rXXXX” it means that it should work if you use the latest bleeding-edge builds: avoid this for production systems, and avoid this if you don't know how trunk works.

Version
The version column is nasty. Manufacturers release different versions of routers under the same name. The specification may vary a lot between versions, and quite often one is supported and the other is not. Have a look at the Netgear WNDR3700 which is very nice if you manage to get v2 or v4, while v3 does not even run OpenWRT.

Bricking and Fail Safe Mode
It can happen that a firmware installation/upgrade fails and the router is “bricked” (does not start). Different routers have different capabilities when it comes to recovering from a failed installation. Before buying a router, you might want to read about its recovery capabilities. I have never bricked a router with OpenWRT (or any other firmware), but you are more likely to brick it flashing OpenWRT than just running the OEM firmware.

Not getting paid to write this
I suggest you start by looking at the TP-Link routers. They are available in different price/performance segments, they have a good price/performance ratio, they are not hard to find, and TP-Link seems to have a reasonable FOSS strategy/policy, making their routers quickly supported by OpenWRT.

Years ago I liked ASUS routers (the WL-500g Premium v2, I bought several of that one for friends) and of course the Linksys WRT54GL. Buffalo seems to have good models, but I have problems finding the good ones where I live. Dlink is not one of my favourites, and when it comes to OpenWRT I find that the models that I can buy do not run OpenWRT, and the models that run OpenWRT are not available for sale here. And Netgear, I already mentioned the WNDR3700 mess above. Ubiquiti seems to be popular among OpenWRT people.

I bought a very reasonably priced TP-Link WDR4900 with 16Mb flash and 128Mb RAM, and it has an 800MHz PowerPC processor which I believe outperforms most ARM and MIPS based routers available. Note that in China the WDR4900 is a completely different router.

Memory situation on my WDR4900
On 14.07 RC2, I have installed OpenVPN and stunnel (currently no active connections on either of them) as well as uhttpd/Luci. This is the memory situation on my WDR4900. I don't know if the same amount of memory would be used (ignoring buffers and caches) if the same processes were running on an ARM or MIPS router with 32Mb RAM. But I think it is clear that at least 64Mb of RAM is a good idea for OpenWRT.

# top -n 1

Mem: 46420K used, 80108K free, 0K shrd, 1744K buff, 15872K cached
CPU:   0% usr   9% sys   0% nic  90% idle   0% io   0% irq   0% sirq
Load average: 0.06 0.10 0.07 1/41 31845
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
25367     1 root     S     5316   4%   0% /usr/sbin/openvpn --syslog openvpn(my
25524     1 nobody   S     2944   2%   0% stunnel /etc/stunnel/stunnel.conf
 2672     1 root     S     1740   1%   0% /usr/sbin/hostapd -P /var/run/wifi-ph
 2795     1 root     S     1736   1%   0% /usr/sbin/hostapd -P /var/run/wifi-ph
 2645     1 root     S     1544   1%   0% /usr/sbin/uhttpd -f -h /www -r ??????
 2340     1 root     S     1528   1%   0% /sbin/netifd
 3205     1 root     S     1516   1%   0% {dynamic_dns_upd} /bin/sh /usr/lib/dd
 2797     1 root     S     1460   1%   0% /usr/sbin/ntpd -n -p 0.openwrt.pool.n
31709 31690 root     S     1460   1%   0% -ash
 2450  2340 root     S     1456   1%   0% udhcpc -p /var/run/udhcpc-eth0.2.pid
31845 31709 root     R     1456   1%   0% top -n 1
31293  3205 root     S     1448   1%   0% sleep 3600
    1     0 root     S     1408   1%   0% /sbin/procd
31690 24443 root     S     1204   1%   0% /usr/sbin/dropbear -F -P /var/run/dro
 2366     1 root     S     1168   1%   0% /usr/sbin/odhcpd
24443     1 root     S     1136   1%   0% /usr/sbin/dropbear -F -P /var/run/dro
 2306     1 root     S     1028   1%   0% /sbin/logd -S 16
24148     1 nobody   S      976   1%   0% /usr/sbin/dnsmasq -C /var/etc/dnsmasq
 1715     1 root     S      876   1%   0% /sbin/ubusd
 2582  2340 root     S      792   1%   0% odhcp6c -s /lib/netifd/dhcpv6.script