Author Archives: zo0ok

Very simple REST JSON node.js server

I want to build a modern web application (perhaps using AngularJS) or some mobile application, and I need some working server side to get started. There are of course plenty of options: .NET WebApi, LAMP, MongoDB, NodeJS + Express, and many more. But I want it stupid simple. This is tested on Linux, but everything should apply on Windows too.

I wrote a very simple REST/JSON server for node.js, and this is about it (source code in the end).

How to run it
Presuming you have nodejs installed:

$ node simple-rest-server.js

It now listens on port 11337 (that is hard coded in the source).

Configure with Apache
The problem with port 11337 is that if you build a web application, you will get cross-site problems if the service runs on a different port than the HTML files. If you are running Apache, you can:

# a2enmod proxy
# a2enmod proxy_http

Add to /etc/apache2/sites-enabled/{your default site, or other site}:
ProxyPass /nodejs http://localhost:11337
ProxyPassReverse /nodejs http://localhost:11337

# service apache2 restart

You can do this with nginx too, and probably also with IIS.

Use from command line
Assuming you have a json data file (data99.json) you can write to (POST), read from (GET) and delete from (DELETE) the server:

$ curl --data @data99.json http://localhost/nodejs/99
$ curl http://localhost/nodejs/99
$ curl -X DELETE http://localhost/nodejs/99

If you did not configure Apache as a proxy as suggested above, you need to use :port instead of /nodejs. In this case 99 is the document id (a positive number). You can add any number of documents with whatever ids you like (as long as they are positive numbers, and as long as the server does not run out of memory). There is no list function in this very simple server (although it would be very easy to add).
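Since a list function would be easy to add, here is a minimal sketch of what one could look like. This is not part of the server: listIds is a hypothetical helper name, and the storage object passed in stands for the server's glStorage.

```javascript
// Hypothetical helper: return the ids of all stored documents.
// The filter reuses the server's "positive number, no leading zero" id rule.
function listIds(storage) {
  return Object.keys(storage).filter(function(k) {
    return /^[1-9][0-9]*$/.test(k)
  })
}

// Inside requestHandlerWithData one could branch on an empty pathname:
//   if (id === "") { retdata = listIds(glStorage) }

console.log(listIds({ "7": {a:1}, "99": {b:2}, "bad": {} }))  // → [ '7', '99' ]
```

Note that V8 enumerates integer-like keys in ascending numeric order, so the ids come out sorted.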

Using from AngularJS
The command line is not so much fun, but AngularJS is. If you inject $http into your controller, the following works:

function myController($scope, $http) {

  // write an object named x with id
  h = $'http://localhost/nodejs/' + id, x)
  h.error(function(r) {
    // your error handling (may use r.error to get error message)
  })
  h.success(function(r) {
    // your success handling
  })

  // read object with id to variable x
  h = $http.get('http://localhost/nodejs/' + id)
  h.error(function(r) {
    // your error handling
  })
  h.success(function(r) {
    x =
  })

  // delete object with id
  h = $http['delete']('http://localhost/nodejs/' + id)
  h.error(function(r) {
    // your error handling
  })
  h.success(function(r) {
    // your success handling
  })
}

I found that Internet Explorer can have problems with $http.delete, thus $http['delete'] (very pretty).
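The reason the bracket form works is that obj['delete'] and obj.delete are the same property access in JavaScript; old IE parsers simply choked on reserved words used with dot notation. A tiny stand-alone illustration (the api object here is a stand-in, not AngularJS):

```javascript
// 'delete' is a reserved word, but perfectly legal as a property name.
var api = {
  'delete': function(url) { return 'DELETE ' + url }
}

// Bracket notation reaches the same function as dot notation:
console.log(api['delete']('/nodejs/99'))  // DELETE /nodejs/99
console.log(api.delete('/nodejs/99'))     // DELETE /nodejs/99
```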

What the server also does
The server handles GET, POST and DELETE. It validates and error handles its input (correctly, I think). It stores the data to a file, so you can stop/start the server without losing information.
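The input validation amounts to a regular expression match on the id. Here is a small stand-alone sketch of the same rule (the idOk name is mine, not the server's; the pattern is the one the server uses):

```javascript
// Same pattern as the server: positive integers, no leading zeros.
var idOk = function(id) { return /^[1-9][0-9]*$/.test(id) }

console.log(idOk("99"))   // true
console.log(idOk("0"))    // false - zero is not a valid id
console.log(idOk("007"))  // false - leading zeros rejected
console.log(idOk("abc"))  // false - not a number
```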

What the server does not do
In case you want to go from prototyping to production, or you want more features, it is rather simple to:

  1. add function to list objects
  2. add different types of objects
  3. let the server also serve files such as .html and .js files
  4. use MongoDB as backend
  5. add security and authentication

The code
The entire code follows (feel free to modify and use for your own purpose):

/*
 * A very simple JSON/REST server
 * http://host:port/{id}       id is a positive number
 * POST   - create/overwrite   $ curl --data @file.json http...
 * GET    - load               $ curl http...
 * DELETE - delete             $ curl -X DELETE http...
 */
glHost    = { ip:'', port:'11337' }
glHttp    = require('http')
glUrl     = require('url')
glFs      = require('fs')
glServer  = null
glStorage = null

/* Standard request handler - read all posted data before proceeding */
function requestHandler(req, res) {
  var pd = ""
  req.on("data", function(chunk) {
    pd += chunk
  })
  req.on("end", function() {
    requestHandlerWithData(req, res, pd)
  })
}

/* Custom request handler - posted data in a string */
function requestHandlerWithData(req, res, postdata) {
  var in_url  = glUrl.parse(req.url, true)
  var id      = in_url["pathname"].substring(1) // substring removes leading /
  var retcode = 200
  var retdata = null
  var error   = null

  if ( ! /^[1-9][0-9]*$/.test(id) ) {
    error   = "Invalid id=" + id
    retcode = 400
  }

  if ( ! error ) switch ( req.method ) {
  case "GET":
    if ( ! glStorage[id] ) {
      error = "No object stored with id=" + id
      retcode = 404
    } else {
      retdata = glStorage[id]
    }
    break
  case "POST":
    try {
      glStorage[id] = JSON.parse(postdata)
      writeStorage()
    } catch(e) {
      error = "Posted data was not valid JSON"
      retcode = 400
    }
    break
  case "DELETE":
    delete glStorage[id]
    writeStorage()
    break
  default:
    error   = "Invalid request method=" + req.method
    retcode = 400
  }

  res.writeHead(retcode, {
    "Server": "nodejs",
    "Content-Type": "text/javascript;charset=utf-8"
  })
  res.end(JSON.stringify( { error:error, data:retdata } ))

  console.log("" + req.method + " id=" + id + ", " + retcode +
    ( error ? ( " Error=" + error ) : " Success" ) )
}

/* Persist the storage object to disk */
function writeStorage() {
  glFs.writeFile("./db.json", JSON.stringify(glStorage), function(err) {
    if (err) {
      console.log("Failed to write to db.json: " + err)
    } else {
      console.log("Data written to db.json")
    }
  })
}

/* Load stored data (if any), then start the server */
glFs.readFile("db.json", function(err, data) {
  if (err) {
    console.log("Failed to read data from db.json, creating new empty storage")
    glStorage = new Object()
  } else {
    glStorage = JSON.parse(data)
  }
  glServer = glHttp.createServer(requestHandler)
  glServer.listen(glHost.port, glHost.ip)
  console.log("Listening to http://" + glHost.ip + ":" + glHost.port + "/{id}")
})

Installing Citrix Receiver 13.1 in Ubuntu/Debian

The best thing about Citrix Receiver for Linux is that it exists. Apart from that it kind of sucks. Over the last few days I have tried to install it on Xubuntu 14.10 and Debian 7.7, both 64-bit versions.

The good thing is that for both Debian and Ubuntu the 64-bit deb-file is actually installable using “dpkg -i”, if you fix all dependencies. I did:

1) #dpkg --add-architecture i386
2) #apt-get update
3) #dpkg -i icaclient_13.1.0.285639_amd64.deb
  ... list of failed dependencies...
4) #dpkg -r icaclient
5) #apt-get install [all packages from (3)]
6) #dpkg -i icaclient_13.1.0.285639_amd64.deb

Steps (1) and (2) are only needed on Debian.

selfservice is hard to get to start from the start menu. And selfservice gets a segmentation fault when OpenVPN is on (WTF?). So for now, I have given up on it. The browser plugin is supposed to work, but I had not much luck there (I guess it is because I have a 64-bit browser). I deleted the system-wide symbolic links to the plugin (find them with find/grep from the root directory):

#rm /usr/lib/mozilla/plugins/
#rm /usr/local/lib/netscape/plugins/

Then I could tell the Citrix portal that I do have the Receiver even though the browser does not recognize it, and as I launch an application I choose to run it manually (the good old way).

Keyboard settings can no longer be made in the GUI; you have to edit your ~/.ICAClient/wfclient.ini file. The following makes a Swedish keyboard work for me:

KeyboardLayout = SWEDISH
KeyboardMappingFile = linux.kbd
KeyboardDescription = Automatic (User Profile)

The problem is, when you fix the file, you need to restart all Citrix-related processes for the new settings to apply. If you feel you got the settings right but no success, just restart your computer. I wasted too much time thinking I had killed all processes, and thinking my wfclient.ini-file was bad, when a simple restart fixed it.

Debian on NUC and boot problems

I got a NUC (D54250WYKH) that I installed Debian 7.7 on.

Advice: First update the NUC “BIOS”.

  1. Download from Intel
  2. Put on USB memory
  3. Put USB memory in NUC
  4. Start NUC, Press F7 to upgrade BIOS

If I had done this first I would have saved some time and some reading about EFI stuff I don’t want to know anyway. A few more conclusions follow.

EFI requires a special little EFI-partition. Debian will set it up automatically for you, unless you are an expert and choose manual partitioning, of course ;) That would also have saved me some time.

(X)Ubuntu 14.10 had no problems even without upgrading BIOS.

The NUC is very nice! In case it is not clear: there is space for both an mSATA drive and a 2.5″ drive in my model. In fact, I think there is also space for an extra, smaller mSATA drive. Unless you are building a gaming computer, I believe a NUC (or similar) is the way to go.

Finally, Debian 7.7 comes with Linux 3.2 kernel which has old audio drivers that produce bad audio quality. I learnt about Debian backports and currently run Linux 3.16 with Debian 7.7 and I have perfect audio now.

Charging Sony Xperia Z1 Compact

The Sony Xperia Z1 Compact can be charged two ways: via Micro USB or via the special magnetic charging connector.

The phone comes with the normal USB cable, which can also be used for synchronization, file transfers and such. But since the phone is waterproof the Micro USB connector is hidden behind a little door, and opening and closing this every time you charge the phone does not really feel optimal.

The official way to charge via the magnetic connector is to buy the DK32 docking station. It is quite pricey, and quite “light” (the magnet is much stronger than the weight of the thing). Docking/undocking does not really feel like opening/closing a German car door, but otherwise it is nice to have the phone docked and charging. It is quite unclear whether the DK32 is compatible with other, very similar, Sony Xperia docking stations.

Other options?

I ordered a USB-cable with magnetic connector (but no docking station) from Deal Extreme. Again, quite unclear what models the cable really works with (there are many similar cables, with different phones listed as compatible).

Well, the cable “works”. When attached, it charges the phone just perfectly. Attaching it requires a little bit of a precision move, and when attached it is not very stable against rolling off to the front or the back of the phone. But now that I have learnt how to do it, I prefer it to the old USB cable. I am thinking about building/gluing some type of docking station for it. Note: the +/- connectors are not interchangeable. If I connect it upside-down (with the cable coming from above) the phone restarts, and it is perhaps not entirely healthy for it.

I have the original DK32 at work, so the phone is almost always fully charged when I leave work in the afternoon, and I don’t need to charge it until back at work next day.

Using float and double as integer

Traditionally computers work with integer types of different sizes. For scientific applications, media, gaming and other applications floating point numbers are needed. In old computers floating point numbers were handled in software, by special libraries, making them much slower than integers, but nowadays most CPUs have an FPU that can do fast float calculations.

Until recently I was under the impression that integers were still faster than floats, and that floats have precision/rounding issues, making the integer datatype the natural and only sane choice for representing mathematical integers. Then I came to learn two things:

  1. In JavaScript, all numbers are 64bit floats (double), effectively allowing 52bit integers when used correctly.
  2. OpenSSL uses the double datatype instead of int in some situations (big numbers) for performance reasons.

Both these applications exploit the fact that if the cost of 64-bit float operations is (thanks to the FPU) roughly equal to the cost of 32-bit integer operations, then a double can be a more powerful representation of big integers than an int. It is also important to understand that (double) floating point numbers have precision problems only when handling decimal fractions (e.g. 0.1) and very big numbers, but handle real-world integers just fine.
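Point (1) above can be demonstrated directly in JavaScript, where every number is a double. A quick sketch of the precision behavior (plain arithmetic, no assumptions beyond IEEE 754 doubles):

```javascript
// Integers are exact in a double up to 2^53; beyond that, gaps appear.
var limit = Math.pow(2, 53)  // 9007199254740992

console.log(limit - 1 + 1 === limit)      // true: exact below the limit
console.log(limit + 1 === limit)          // true: 2^53 + 1 is not representable
console.log(0.1 + 0.2 === 0.3)            // false: decimal fractions are the problem
console.log(700000 + 300000 === 1000000)  // true: real-world integers are fine
```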

Apart from this, there could be other possible advantages of using float instead of int:

  • If the FPU can execute instructions somewhat in parallel with the ALU/CPU, using floats when possible could benefit performance.
  • If there are dedicated floating point registers, making use of them could free up integer registers.

Well, I decided to make a test. I have a real world application:

  • written in C
  • that does calculations on integers (mostly in the range 0-1000000)
  • that has automated tests, so I can modify the program and confirm that it still works
  • that has built in performance/time measurement

Since I had used int to represent a real-world measurement (length in mm), I decided nothing is really lost if I use float or double instead of int. The values were small enough that a 32-bit float would probably be sufficiently precise (otherwise my automated tests would complain). While the program is rather computation heavy, it is not extremely calculation-intense, and the only mathematical operations I use are +, -, >, =, <. That is, even if float math were "free" the program would still be heavy, just faster.

In all cases gcc is used with -O2 -ffast-math. The int column shows speed relative to the first line (Celeron 630MHz is my reference/baseline). The float/double columns show speed relative to the int speed of the same machine. Higher is better.

Machine                                int   float        double       Comment
Eee701 Celeron 630MHz / Lubuntu        1.0   0.93         0.93
AMD Athlon II 3GHz / Xubuntu           5.93  1.02         0.97
PowerBook G4 PPC 867MHz / Debian       1.0   0.94         0.93
Linksys WDR4900 PPC 800MHz / OpenWRT   1.12  0.96 (0.87)  0.41 (0.89)  values in parentheses using -mcpu=8548
Raspberry Pi ARMv6 700MHz / Raspbian   0.52  0.94         0.93
QNAP TS-109 ARMv5 500MHz / Debian      0.27  0.61         0.52
WRT54GL MIPS 200MHz / OpenWRT          0.17  0.20         0.17

A few notes on this:

I have put together quite many measurements and runs to eliminate outliers and variance, to produce the figures above.

There was something strange about the results from the PowerBook G4, and the performance is not what I would expect. I don't know if my machine underperforms, or if there is something wrong with the time measurements. Nevertheless, I believe the int vs float comparison is still valid.

The Athlon is much faster than the other machines, giving shorter execution times, and the variance between different runs was bigger than for the other machines. The 1.02/0.97 could very well be within the error margin of 1.0.

The QNAP TS-109 ARM CPU does not have an FPU, which explains the lower performance for float/double. Other machines displayed similar float/double performance with “-msoft-float”.

The Linksys WDR4900 has an FPU that is capable of both single/double float precision. But with OpenWRT BB RC3 toolchain, gcc defaults to -mcpu=8540, which falls back to software float for doubles. With -mcpu=8548 the FPU is used also for doubles, but for some reason this lowers the single float performance.

Not tested
The situation could possibly change when the division operator is used, but division should be avoided anyway when it comes to optimization.

All tests are done on Linux and with GCC; it would surprise me much if results were very different on other platforms.

More tests could be made on more modern hardware, but the precision advantage of double over int is lost on 64-bit machines with native 64-bit long int support.

As a rule of thumb, integers are faster than floats, and replacing integers with floats does not improve performance. Use the datatype that describes your data the best!

Exploiting the 52-bit integer capacity of a double should be considered advanced and platform dependent optimization, and not a good idea in the general case.

Upgrade OpenWRT and reinstalling packages

I just upgraded my OpenWRT router from Barrier Breaker RC2 to RC3. The upgrade guide is excellent, but it only mentions: “You do need to reinstall opkg-packages”… well, it sounds like there should be a smart way to do that.

Before upgrade:

# opkg list-installed > /etc/config/packages.installed

Two things to note: 1) this will take a few kB, which you must have available, and 2) since the file is in /etc/config it will be automatically restored after sysupgrade.

Now the sysupgrade itself (see the upgrade guide):

# sysupgrade -v /tmp/someimage-sysupgrade.bin

The system will restart, and you should be able to ssh into it, just as before the upgrade. Now reinstalling packages:

# opkg update
# opkg install $( cut -f 1 -d ' ' < /etc/config/packages.installed )

You will want to delete the new config files (or manually merge config files, or delete your old ones and use the new files). The new files have the "extension" -opkg, and can be found with

# cd /
# find | grep -e -opkg\$

That should be it.

Buying a router for OpenWRT

For a while I was thinking about buying a new wireless router for my home network, and I had already decided I wanted to run OpenWRT on it. I spent (wasted) quite some time reading the OpenWRT list of supported hardware and searching for available routers. With this post, I hope to help you focus on the essentials, to make a good decision quicker.

I presume, if you buy a new router to run OpenWRT, that you want to run the current stable version of OpenWRT (soon Barrier Breaker 14.07), and that you will want to be able to upgrade in the future.

I think it is a good idea to first decide the need for Flash and RAM, and then work from a much shorter list of hardware.

Most routers available have the following amounts of Flash (storage for the kernel, files and configuration):
4MB: is just enough, barely, to run OpenWRT.
8MB: is enough for OpenWRT. You will be able to install packages, and even if future versions should be slightly larger, you should be fine.
16MB: is more than enough for OpenWRT, but if you want to install many packages or put applications on it, then 16MB gives you much more flexibility than 8MB.

If you want to store files (backups, a web site, images, whatever), do that on a separate USB-storage (just make sure the router has USB ports). Too little Flash means you can not install packages, or you get errors when changing configurations. This is bad, but something you can handle in a controlled way.

Most routers available have the following amounts of RAM:
16MB: is too little to run OpenWRT beyond version 10.03.1, except for special cases. Don’t buy!
32MB: can run OpenWRT. But my new router is making use of more RAM than that (see below), running 14.07 RC2 and a few packages.
64MB: should be enough for running several extra packages.
128MB: is possibly going to be more than you need, but RAM never hurts, especially if you install extra packages or make heavy use of your router.

Too little RAM makes OpenWRT crash and restart, in my personal experience. Even if it killed processes (instead of crashing) in some cases, it is going to be brutal and disruptive – not the kind of service you want. Adding swap on a USB-storage is perhaps possible, but if you really need it you should probably have gotten another router, or you are using the router for the wrong task.

Flash / RAM conclusion
Chances are you will want 8/64MB or more when buying a router to run OpenWRT. That will disqualify perhaps 80-90% of all supported routers, making your list shorter and your choice easier.

I really like getting the most out of simple hardware. You may very well have a situation where an 8/32MB (or even a 4/32MB) router will be just perfect for you (or your parents or some other friends you are helping out). But if adding packages is important to you, I would not settle for 32MB RAM.

Chipset / CPU
In the supported hardware table, there are three columns: Targets, Platform, CPU-speed. This is most likely not very relevant information to you. The CPU-speed will be of much less importance than the Flash/RAM when it comes to what you can do with the router. Of course higher CPU-speeds are better, and if you want to compare performance, have a look at the OpenSSL performance page (perhaps the RSA sign/verify columns are most useful for deciding CPU performance, since the numbers are not so big, and since there is probably no hardware support for RSA).

Network speed
Unless you have an Internet connection faster than 100Mbit/s, chances are your router will be much faster than your connection, even for a cheap router. Some routers have a 100Mbit switch, some have a Gbit switch – this may make a real-world difference to you if you often copy big files between your computers (or if your Internet is faster than 100Mbit, of course).

When it comes to Wireless you will find that most routers support B/G/N (2.4Ghz) and many also support A/N (5Ghz). Of course dual-band is nicer, but chances are it will not make any real difference to you whatsoever.

When it comes to AC-compatible routers, you are basically out of luck in August 2014. Avoid the Linksys WRT1900AC (unfortunately!), since it works only with a special version of OpenWRT that Belkin/Linksys built for it (not the Barrier Breaker 14.07 that you want, and that you can download from the official OpenWRT site).
Update 20140821: There is the TP-Link Archer C7 that I have missed. (I am not going to keep an updated list of AC routers here)

There is a Status column in the supported hardware table. You want it to say a stable version number: 7.09, 8.09, 10.03, 10.03.1, 12.09 or 14.07. Note that old 4/16MB routers were supported, but are no longer supported with 12.09 and 14.07, so if it says 0.9 you should probably be careful. If it says “trunk” or “rXXXX” it means that it should work if you use the latest bleeding-edge builds: avoid this for production systems, and avoid it if you don’t know how trunk works.

The version column is nasty. Manufacturers release different versions of routers under the same name. The specifications may vary a lot between versions, and quite often one version is supported and another is not. Have a look at the Netgear WNDR3700, which is very nice if you manage to get a v2 or v4, while the v3 does not even run OpenWRT.

Bricking and Fail Safe Mode
It can happen that a firmware installation/upgrade fails and the router is “bricked” (does not start). Different routers have different capabilities when it comes to recovering from a failed installation. Before buying a router, you might want to read about its recovery capabilities. I have never bricked a router with OpenWRT (or any other firmware), but you are more likely to brick it with OpenWRT than with just the OEM firmware.

Not getting paid to write this
I suggest you start looking at the TP-Link routers. They are available in different price/performance segments, they have a good price/performance ratio, they are not hard to find, and TP-Link seems to have a reasonable FOSS strategy/policy, making their routers quickly supported by OpenWRT.

Years ago I liked ASUS routers (the WL-500g Premium v2, I bought several of that one for friends) and of course the Linksys WRT54GL. Buffalo seems to have good models, but I have problems finding the good ones where I live. Dlink is not one of my favourites, and when it comes to OpenWRT I find that the models that I can buy do not run OpenWRT, and the models that run OpenWRT are not available for sale here. And Netgear, I already mentioned the WNDR3700 mess above. Ubiquiti seems to be popular among OpenWRT people.

I bought a very reasonably priced TP-Link WDR4900 with 16MB flash and 128MB RAM; it has an 800MHz PowerPC processor which I believe outperforms most ARM- and MIPS-based routers available. Note that in China the WDR4900 is a completely different router.

Memory situation on my WDR4900
On 14.07 RC2, I have installed OpenVPN and stunnel (currently no connections on either of them) as well as uhttpd/LuCI. This is the memory situation on my WDR4900. I don't know if the same amount of memory would be used (ignoring buffers and caches) if the same processes were running on an ARM or MIPS router with 32MB RAM. But I think it is clear that at least 64MB of RAM is a good idea for OpenWRT.

# top -n 1

Mem: 46420K used, 80108K free, 0K shrd, 1744K buff, 15872K cached
CPU:   0% usr   9% sys   0% nic  90% idle   0% io   0% irq   0% sirq
Load average: 0.06 0.10 0.07 1/41 31845
25367     1 root     S     5316   4%   0% /usr/sbin/openvpn --syslog openvpn(my
25524     1 nobody   S     2944   2%   0% stunnel /etc/stunnel/stunnel.conf
 2672     1 root     S     1740   1%   0% /usr/sbin/hostapd -P /var/run/wifi-ph
 2795     1 root     S     1736   1%   0% /usr/sbin/hostapd -P /var/run/wifi-ph
 2645     1 root     S     1544   1%   0% /usr/sbin/uhttpd -f -h /www -r ??????
 2340     1 root     S     1528   1%   0% /sbin/netifd
 3205     1 root     S     1516   1%   0% {dynamic_dns_upd} /bin/sh /usr/lib/dd
 2797     1 root     S     1460   1%   0% /usr/sbin/ntpd -n -p 0.openwrt.pool.n
31709 31690 root     S     1460   1%   0% -ash
 2450  2340 root     S     1456   1%   0% udhcpc -p /var/run/
31845 31709 root     R     1456   1%   0% top -n 1
31293  3205 root     S     1448   1%   0% sleep 3600
    1     0 root     S     1408   1%   0% /sbin/procd
31690 24443 root     S     1204   1%   0% /usr/sbin/dropbear -F -P /var/run/dro
 2366     1 root     S     1168   1%   0% /usr/sbin/odhcpd
24443     1 root     S     1136   1%   0% /usr/sbin/dropbear -F -P /var/run/dro
 2306     1 root     S     1028   1%   0% /sbin/logd -S 16
24148     1 nobody   S      976   1%   0% /usr/sbin/dnsmasq -C /var/etc/dnsmasq
 1715     1 root     S      876   1%   0% /sbin/ubusd
 2582  2340 root     S      792   1%   0% odhcp6c -s /lib/netifd/dhcpv6.script

Simple minification of JavaScript and HTML

With frameworks like AngularJS it is possible to write really nice web applications just relying on HTML, JavaScript and REST services. Of course you indent and comment your HTML and JavaScript files, but this data does not need to be served to the user. The web browser just needs the functional parts.

There are many minifiers, uglifiers or obfuscators; programs that remove comments and formatting from your code to make it smaller. Sometimes they also scramble/obfuscate the code with the intention of making it harder for someone to understand (and possibly use or exploit).

Those minifiers can be Windows applications, web pages, web server plugins, and they can be implemented in a wide variety of languages or platforms depending on use. What I wanted was something very simple that I could just include in a simple build script on a Linux system: a command line tool (that does not rely on installing a bunch of Java libraries or PHP packages, and that does not support hundreds of dangerous options).

For JavaScript it was easy: I found JSMin written by a Master, Crockford. JSMin comes as a single C source file – that is easy for me.

For HTML it was trickier. Probably because few people actually write big HTML files directly – most often a web server and a server framework (like PHP) delivers the code. Also, there were many web based HTML minifiers, but those are annoying to automate and depend on. So I actually spent more time looking for something as simple as JSMin than I spent implementing the thing myself. It was tempting to do it in C, but then it would have taken longer to implement than I had already wasted looking for a tool. I chose Python (version 3, so it is incompatible with most people's Python interpreter). Here we go:

import sys

in_pre = False
in_comment = False

def outCommentHandler(line):
  x = line.find('<!--')
  if -1 == x:
    return line,False,''
  else:
    return line[:x],True,line[4+x:]

def inCommentHandler(line):
  x = line.find('-->')
  if -1 == x:
    return True,''
  else:
    return False,line[3+x:]

for line in sys.stdin:
  rem = line.strip()
  if in_pre:
    print(line, end='')        # inside <PRE>: pass lines through unchanged
    if rem.upper() == '</PRE>':
      in_pre = False
  elif rem.upper() == '<PRE>':
    print(rem)
    in_pre = True
  elif '' != rem:
    while '' != rem:
      if in_comment:
        clean = ''
        in_comment,rem = inCommentHandler(rem)
      else:
        clean,in_comment,rem = outCommentHandler(rem)
      if '' != clean:
        print(clean, '')

Both jsmin and the Python script (call it are used the same way:

$ jsmin < code.js > code.min.js
$ python3 < page.html > page.min.html

Neither program is perfect.

I found that JSMin fails with regular expression patterns like this:

var alpha=/^[a-z]+$/
var ALPHA=/^[A-Z]+$/

Adding ; to the end of the lines fixes that problem.
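The underlying issue is a lexical ambiguity in JavaScript: after a line without a terminating ;, a / can be read either as division or as the start of a regular expression literal, so a simple minifier that joins lines can misparse the second pattern. With explicit semicolons the statement boundaries are unambiguous:

```javascript
// Semicolon-terminated form: safe for naive line-joining minifiers.
var alpha=/^[a-z]+$/;
var ALPHA=/^[A-Z]+$/;

console.log(alpha.test("abc"))  // true
console.log(ALPHA.test("ABC"))  // true
console.log(alpha.test("ABC"))  // false
```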
As for the HTML minifier, what it does is simply:

  • Preserves <PRE> as long as the PRE-tags are on their own lines
  • Removes all comments: <!-- A comment -->
  • Removes white space in the beginning (and end) of lines
  • Removes empty lines

This is about the low hanging fruit only, but I think it is good enough for most purposes.

What is the “compression” rate?
For my test code:
HTML was compressed from 59kb to 42kb (about 29% smaller)
JavaScript was compressed from 162kb to 108kb (about 33% smaller)

It is possible to do better with better tools, but this is very simple, and it takes away the obvious waste from the files, with minimal risk of changing behavior. Heavier JavaScript minifiers rename variables and rewrite code.

Using WRT54GL with OpenWRT 14.07 in 2014

The Linksys WRT54GL was a very successful product for its time, not least because sources were available and people could make their own firmwares for it. There are today firmwares like Tomato, DD-WRT and OpenWRT that run on the WRT54GL. My interest is in OpenWRT (as it gives me a full Linux system, not just a firmware, a web GUI and parameters to set). The last OpenWRT version that was good and fully supported on the WRT54GL was 10.03.1. But that firmware is four years old, and not so fresh in 2014. I like to keep my Linux systems updated.

I tried to build (from scratch/sources, using buildroot) a very minimal version of Barrier Breaker (OpenWRT 14.07) Release Candidate 1 for WRT54GL. I removed everything that I could possibly live without, and ended up with an image that used less than 3MB of flash. Still, it was impossible to get WiFi running for more than a few minutes, and the router got very slow. Most likely the 16MB of RAM is just not enough (that is the generally accepted explanation).

Update: Someone seems to have been successful running BB with WLAN.

However, without WiFi, the WRT54GL runs Barrier Breaker just fine. That rules out using it as a WiFi router, but it leaves other options open. It is still a functional switch, that is also a Linux-machine with dropbear (ssh) and it can run software (opkg-packages) like nginx, openvpn, stunnel, uhttpd (cgi capable web server), iptables, and many more.

Image Builder
I have used the Image Builder (also called Image Generator) from BB RC2 to generate my own OpenWRT firmware for my WRT54GL. The process can seem scary at first, but it is quite simple. You can choose exactly the packages you want and build an image with just those packages, making the most of your hardware.
Update 2014-10-09: BB 14.07 final works fine, just as RC2.

After downloading and extracting the Image Builder (on my x64 Ubuntu machine), I ran the following command:

make image PROFILE=Broadcom-None
           PACKAGES="-dnsmasq -ppp -ppp-mod-pppoe -libip6tc -kmod-ipv6
           -odhcp6c -ip6tables -odhcpd uhttpd netcat openvpn-easy-rsa
           openvpn-openssl openssl-util stunnel libwrap"

Note, there should be no line breaks when you run this command.

  • Available profiles are found in target/linux/brcm47xx, and gives a starting point for selecting packages
  • Packages can be added using the PACKAGES option to make (e.g. stunnel)
  • Packages can be removed by putting - in front of their name (e.g. odhcp6c)

I read the massive output which indicated that some packages (netcat, stunnel) failed to install. Also, if there are missing dependencies (libwrap) you will see it in the output.

Not all packages that are available are also included in the Image Builder download, but just download the packages you want and put them in the package folder in the image builder.

Rerun make image until you see no dependency errors, failed packages or other problems.

When make image has finished, have a look in your bin/brcm47xx folder, and you will find your new firmware. The firmware I generated above was less than 3.5MB, leaving several hundred kB for configuration and more packages. Flashing the router the normal way and logging in to the system, I find that memory usage is about 10MB and available disk is about 500kB (openssl-util alone is 477kB on the filesystem and its opkg is 182kB).

As a benchmark I decided to use the easy-rsa package and build certificates. The build-dh step in particular takes a very long time: 30 minutes on this machine. However, on my new TP-Link WDR4900 it took 60 minutes, so obviously this was a silly, unpredictable benchmark.

With some fantasy, curiosity and enthusiasm, you can turn your old WRT54GL into a useful component of your home or work network. It can provide some, or a few, useful services. Often it is wise to separate different functions to different hardware, because it makes your network more stable. And not the least, this is a good way to experiment with OpenWRT, without risk of breaking your production WiFi and broadband router.

Upgrading ownCloud 6.0.3 to 7.0.1

I am running ownCloud on a Debian machine with Apache and MySQL, as I have written about before. OwnCloud has released a new version, 7.0.1, and it is possible to upgrade via the web GUI. I did that; it took a few minutes, and it worked perfectly.

I have written about Performance of ownCloud before. It appears upgrading to 7.0.1 has not changed the performance of the platform at all.