Author Archives: zo0ok

Charging Sony Xperia Z1 Compact

The Sony Xperia Z1 Compact can be charged two ways: via Micro USB or via the special magnetic charging connector.

The phone comes with a normal USB cable, which can also be used for synchronization, file transfers and such. But since the phone is waterproof the Micro USB connector is hidden behind a little door, and opening and closing this every time you charge the phone does not really feel optimal.

The official way to charge via the magnetic connector is to buy the DK32 docking station. It is quite pricey, and quite “light” (the magnet is much stronger than the weight of the thing). Docking/undocking does not really feel like opening/closing a German car door, but otherwise it is nice to have the phone docked and charging. It is quite unclear whether this DK32 is compatible with any of the other very similar Sony Xperia docking stations.

Other options?

I ordered a USB-cable with magnetic connector (but no docking station) from Deal Extreme. Again, quite unclear what models the cable really works with (there are many similar cables, with different phones listed as compatible).

Well, the cable “works”. When attached, it charges the phone just perfectly. Attaching it requires a bit of a precision move, and once attached it is not very stable against rolling off towards the front or the back of the phone. But now that I have learnt how to do it, I prefer it to the old USB cable. I am thinking about building/gluing some type of docking station for it. Note: the +/- connectors are not interchangeable. If I attach it upside-down (with the cable coming from above) the phone restarts, and it is perhaps not entirely healthy for it.

I have the original DK32 at work, so the phone is almost always fully charged when I leave work in the afternoon, and I don’t need to charge it until back at work next day.

Using float and double as integer

Traditionally computers work with integer types of different sizes. For scientific applications, media, gaming and other applications floating point numbers are needed. In old computers floating point numbers were handled in software, by special libraries, making them much slower than integers, but nowadays most CPUs have an FPU that can do fast float calculations.

Until recently I was under the impression that integers were still faster than floats and that floats have precision/rounding issues, making the integer datatype the natural and only sane choice for representing mathematical integers. Then I came to learn two things:

  1. In JavaScript, all numbers are 64-bit floats (double), effectively allowing exact 53-bit integers (the 52-bit mantissa plus an implicit leading bit) when used correctly.
  2. OpenSSL uses the double datatype instead of int in some situations (big numbers) for performance reasons.

Both these applications exploit the fact that if the cost of 64-bit float operations is (thanks to the FPU) roughly equal to the cost of 32-bit integer operations, then a double can be a more powerful representation of big integers than an int. It is also important to understand that (double) floating point numbers have precision problems only with decimal fractions (e.g. 0.1) and with very big numbers; they handle real world integers just fine.

Apart from this, there could be other possible advantages of using float instead of int:

  • If the FPU can execute instructions somewhat in parallel with the ALU/CPU, using floats when possible could benefit performance.
  • If there are dedicated floating point registers, making use of them could free up integer registers.

Well, I decided to make a test. I have a real world application:

  • written in C
  • that does calculations on integers (mostly in the range 0-1000000)
  • that has automated tests, so I can modify the program and confirm that it still works
  • that has built in performance/time measurement

Since I had used int to represent a real-world measurement (length in mm), I decided nothing is really lost if I use float or double instead of int. The values were small enough that a 32-bit float would probably be sufficiently precise (otherwise my automated tests would complain). The program is rather computation heavy, but it is not dominated by arithmetic, and the only mathematical operations I use are +, -, >, =, <. That is, even if float math were “free” the program would still be heavy, just somewhat faster.

In all cases gcc is used with -O2 -ffast-math. The int column shows speed relative to the first line (Celeron 630MHz is my reference/baseline). The float/double columns show speed relative to the int speed of the same machine. Higher is better.

Machine                                int    float        double       Comment
Eee701 Celeron 630MHz / Lubuntu        1.0    0.93         0.93
AMD Athlon II 3GHz / Xubuntu           5.93   1.02         0.97
PowerBook G4 PPC 867MHz / Debian       1.0    0.94         0.93
Linksys WDR4900 PPC 800MHz / OpenWRT   1.12   0.96 (0.87)  0.41 (0.89)  Values in parentheses using -mcpu=8548
Raspberry Pi ARMv6 700MHz / Raspbian   0.52   0.94         0.93
QNAP TS-109 ARMv5 500MHz / Debian      0.27   0.61         0.52
WRT54GL Mips 200MHz / OpenWRT          0.17   0.20         0.17

A few notes on this:

The figures above are based on quite many measurements and runs, to eliminate outliers and variance.

There was something strange about the results from the PowerBook G4, and the performance is not what I would expect. I don't know if my machine underperforms, or if there is something wrong with the time measurements. Nevertheless, I believe the int vs float comparison is still valid.

The Athlon is much faster than the other machines, giving shorter execution times, and the variance between different runs was bigger than for the other machines. The 1.02/0.97 could very well be within the error margin of 1.0.

The QNAP TS-109 ARM CPU does not have an FPU, which explains the lower performance for float/double. Other machines displayed similar float/double performance with “-msoft-float”.

The Linksys WDR4900 has an FPU that is capable of both single/double float precision. But with OpenWRT BB RC3 toolchain, gcc defaults to -mcpu=8540, which falls back to software float for doubles. With -mcpu=8548 the FPU is used also for doubles, but for some reason this lowers the single float performance.

Not tested
The situation could possibly change when the division operator is used, but division should be avoided anyway when it comes to optimization.

All tests were done on Linux and with GCC; it would surprise me much if the results were very different on other platforms.

More tests could be made on more modern hardware, but the precision advantage of double over int is lost on 64-bit machines with native 64-bit long int support.

Conclusion
As a rule of thumb, integers are faster than floats, and replacing integers with floats does not improve performance. Use the datatype that describes your data the best!

Exploiting the 53-bit integer capacity of a double should be considered advanced and platform dependent optimization, and not a good idea in the general case.

Upgrade OpenWRT and reinstalling packages

I just upgraded my OpenWRT router from Barrier Breaker RC2 to RC3. The upgrade guide is excellent, but it only mentions: “You do need to reinstall opkg-packages”… well, it sounds like there should be a smart way to do that.

Before upgrade:

# opkg list-installed > /etc/config/packages.installed

Two things to note: 1) this file will take a few kB, which you must have available, and 2) since the file is in /etc/config it will be automatically restored after sysupgrade.

Now the sysupgrade itself (see the upgrade guide):

# sysupgrade -v /tmp/someimage-sysupgrade.bin

The system will restart, and you should be able to ssh into it, just as before the upgrade. Now reinstalling packages:

# opkg update
# opkg install $( cut -f 1 -d ' ' < /etc/config/packages.installed )

After the upgrade you will want to deal with the new default config files: delete them, manually merge them with your old ones, or delete your old files and use the new ones. The new files have the “extension” -opkg, and can be found with

# cd /
# find | grep -e -opkg\$

That should be it.

Buying a router for OpenWRT

For a while I was thinking about buying a new wireless router for my home network, and I had already decided I wanted to run OpenWRT on it. I spent (wasted) quite some time reading the OpenWRT list of supported hardware and searching for available routers. With this post, I hope to help you focus on the essentials, so you can make a good decision quicker.

I presume, if you buy a new router to run OpenWRT, that you want to run the current stable version of OpenWRT (soon Barrier Breaker 14.07), and that you will want to be able to upgrade in the future.

I think it is a good idea to first decide the need for Flash and RAM, and then work from a much shorter list of hardware.

Flash
Most routers available have the following amounts of Flash (storage for kernel, files, configuration).
4 MB: is just enough, barely, to run OpenWRT.
8 MB: is enough for OpenWRT. You will be able to install packages, and even if future versions should be slightly larger, you should be fine.
16 MB: is more than enough for OpenWRT, but if you want to install many packages or put applications on it, 16 MB gives you much more flexibility than 8 MB.

If you want to store files (backups, a web site, images, whatever), do that on separate USB storage (just make sure the router has USB ports). Too little Flash means you cannot install packages, or you get errors when changing configurations. This is bad, but something you can handle in a controlled way.

RAM
Most routers available have the following amounts of RAM:
16 MB: is too little to run OpenWRT beyond version 10.03.1, except for special cases. Don’t buy!
32 MB: can run OpenWRT. But my new router is using more RAM than that (see below), running 14.07 RC2 and a few packages.
64 MB: should be enough for running several extra packages.
128 MB: is possibly more than you need, but RAM never hurts, especially if you install extra packages or make heavy use of your router.

Too little RAM makes OpenWRT crash and restart, in my personal experience. Even if it would kill processes (instead of crashing) in some cases, it is going to be brutal and disruptive – not the kind of service you want. Adding swap on USB storage is perhaps possible, but if you really need it you should probably have gotten another router, or you are using the router for the wrong task.

Flash / RAM conclusion
Chances are you will want 8/64 MB or more when buying a router to run OpenWRT. That will disqualify perhaps 80-90% of all supported routers, making your list shorter and your choice easier.

I really like getting the most out of simple hardware. You may very well have a situation where an 8/32 MB (or even a 4/32) router will be just perfect for you (or your parents or some other friends you are helping out). But if adding packages is important to you, I would not settle for 32 MB of RAM.

Chipset / CPU
In the supported hardware table, there are three columns: Targets, Platform, CPU-speed. This is most likely not very relevant information to you. The CPU-speed will be of much less importance than the Flash/RAM when it comes to what you can do with the router. Of course higher CPU-speeds are better, and if you want to compare performance, have a look at the OpenSSL performance page (perhaps the RSA sign/verify columns are most useful for deciding CPU performance, since the numbers are not so big, and since there is probably no hardware support for RSA).

Network speed
Unless you have an Internet connection faster than 100 Mbit/s, chances are your router will be much faster than your connection, even for a cheap router. Some routers have a 100 Mbit switch, some have a Gbit switch – this may make a real world difference to you if you often copy big files between your computers (or if your Internet is faster than 100 Mbit/s, of course).

When it comes to wireless you will find that most routers support B/G/N (2.4 GHz) and many also support A/N (5 GHz). Of course dual-band is nicer, but chances are it will not make any real difference to you whatsoever.

When it comes to AC-compatible routers, you are basically out of luck in August 2014. Avoid the Linksys WRT1900AC (unfortunately!), since it works only with a special version of OpenWRT that Belkin/Linksys built for it (not Barrier Breaker 14.07 that you want, and that you can download from OpenWRT.org).
Update 20140821: There is the TP-Link Archer C7 that I have missed. (I am not going to keep an updated list of AC routers here)

Status
There is a Status column in the supported hardware table. You want it to say a stable version number: 7.09, 8.09, 10.03, 10.03.1, 12.09 or 14.07. Note that old 4/16 MB routers were supported once, but are no longer supported in 12.09 and 14.07, so if it says 0.9 you should probably be careful. If it says “trunk” or “rXXXX” it means it should work if you use the latest bleeding-edge builds: avoid this for production systems, and avoid this if you don't know how trunk works.

Version
The version column is nasty. Manufacturers release different versions of routers under the same name. The specification may vary a lot between versions, and quite often one is supported and another is not. Have a look at the Netgear WNDR3700, which is very nice if you manage to get v2 or v4, while v3 does not even run OpenWRT.

Bricking and Fail Safe Mode
It can happen that a firmware installation/upgrade fails and the router is “bricked” (does not start). Different routers have different capabilities when it comes to recovering from a failed installation. Before buying a router, you might want to read about its recovery capabilities. I have never bricked a router with OpenWRT (or any other firmware), but you are more likely to brick it with OpenWRT than by just using the OEM firmware.

Not getting paid to write this
I suggest you start by looking at the TP-Link routers. They are available in different price/performance segments, they have a good price/performance ratio, they are not hard to find, and TP-Link seems to have a reasonable FOSS strategy/policy, making their routers quickly supported by OpenWRT.

Years ago I liked ASUS routers (the WL-500g Premium v2, I bought several of that one for friends) and of course the Linksys WRT54GL. Buffalo seems to have good models, but I have problems finding the good ones where I live. Dlink is not one of my favourites, and when it comes to OpenWRT I find that the models that I can buy do not run OpenWRT, and the models that run OpenWRT are not available for sale here. And Netgear, I already mentioned the WNDR3700 mess above. Ubiquiti seems to be popular among OpenWRT people.

I bought a very reasonably priced TP-Link WDR4900 with 16 MB flash and 128 MB RAM, and it has an 800 MHz PowerPC processor which I believe outperforms most ARM and MIPS based routers available. Note that in China the WDR4900 is a completely different router.

Memory situation on my WDR4900
On 14.07 RC2, I have installed OpenVPN and stunnel (currently no connections on either of them) as well as uhttpd/Luci. This is the memory situation on my WDR4900. I don't know if the same amount of memory would be used (ignoring buffers and caches) if the same processes were running on an ARM or MIPS router with 32 MB RAM. But I think it is clear that at least 64 MB of RAM is a good idea for OpenWRT.

# top -n 1

Mem: 46420K used, 80108K free, 0K shrd, 1744K buff, 15872K cached
CPU:   0% usr   9% sys   0% nic  90% idle   0% io   0% irq   0% sirq
Load average: 0.06 0.10 0.07 1/41 31845
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
25367     1 root     S     5316   4%   0% /usr/sbin/openvpn --syslog openvpn(my
25524     1 nobody   S     2944   2%   0% stunnel /etc/stunnel/stunnel.conf
 2672     1 root     S     1740   1%   0% /usr/sbin/hostapd -P /var/run/wifi-ph
 2795     1 root     S     1736   1%   0% /usr/sbin/hostapd -P /var/run/wifi-ph
 2645     1 root     S     1544   1%   0% /usr/sbin/uhttpd -f -h /www -r ??????
 2340     1 root     S     1528   1%   0% /sbin/netifd
 3205     1 root     S     1516   1%   0% {dynamic_dns_upd} /bin/sh /usr/lib/dd
 2797     1 root     S     1460   1%   0% /usr/sbin/ntpd -n -p 0.openwrt.pool.n
31709 31690 root     S     1460   1%   0% -ash
 2450  2340 root     S     1456   1%   0% udhcpc -p /var/run/udhcpc-eth0.2.pid
31845 31709 root     R     1456   1%   0% top -n 1
31293  3205 root     S     1448   1%   0% sleep 3600
    1     0 root     S     1408   1%   0% /sbin/procd
31690 24443 root     S     1204   1%   0% /usr/sbin/dropbear -F -P /var/run/dro
 2366     1 root     S     1168   1%   0% /usr/sbin/odhcpd
24443     1 root     S     1136   1%   0% /usr/sbin/dropbear -F -P /var/run/dro
 2306     1 root     S     1028   1%   0% /sbin/logd -S 16
24148     1 nobody   S      976   1%   0% /usr/sbin/dnsmasq -C /var/etc/dnsmasq
 1715     1 root     S      876   1%   0% /sbin/ubusd
 2582  2340 root     S      792   1%   0% odhcp6c -s /lib/netifd/dhcpv6.script

Simple minification of JavaScript and HTML

With frameworks like AngularJS it is possible to write really nice web applications just relying on HTML, JavaScript and REST services. Of course you indent and comment your HTML and JavaScript files, but this data does not need to be served to the user. The web browser just needs the functional parts.

There are many minifiers, uglifiers or obfuscators: programs that remove comments and formatting from your code to make it smaller. Sometimes they also scramble/obfuscate the code with the intention of making it harder for someone to understand (and possibly use or exploit).

Those minifiers can be Windows applications, web pages, web server plugins, and they can be implemented in a wide variety of languages or platforms depending on use. What I wanted was something very simple that I could just include in a simple build script on a Linux system: a command line tool (that does not rely on installing a bunch of Java libraries or PHP packages, and that does not support hundreds of dangerous options).

For JavaScript it was easy: I found JSMin written by a Master, Crockford. JSMin comes as a single C source file – that is easy for me.

For HTML it was trickier. Probably because few people actually write big HTML files directly – most often a web server and a server framework (like PHP) deliver the code. Also, there were many web based HTML minifiers, but those are annoying to automate and depend on. So I actually spent more time looking for something as simple as JSMin than I spent implementing the thing myself. It was tempting to do it in C, but that would have taken longer than the time I had already wasted looking for a tool. I chose Python (version 3, so it is incompatible with most people's default Python interpreter). Here we go:

#!/usr/bin/python3
import sys

in_pre = False      # inside a <PRE> block: preserve lines verbatim
in_comment = False  # inside a <!-- --> comment spanning lines

# Not inside a comment: return (text before any comment start,
# whether a comment was opened, remainder after '<!--').
def outCommentHandler(line):
  x = line.find('<!--')
  if -1 == x:
    return line,False,''
  else:
    return line[:x],True,line[4+x:]

# Inside a comment: return (still in comment, remainder after '-->').
def inCommentHandler(line):
  x = line.find('-->')
  if -1 == x:
    return True,''
  else:
    return False,line[3+x:]

for line in sys.stdin:
  rem = line.strip()
  if in_pre:
    if rem.upper() == '</PRE>':
      in_pre = False
      print(rem)
    else:
      print(line, end='')   # keep PRE content as-is (line already ends with \n)
  elif rem.upper() == '<PRE>':
    in_pre = True
    print(rem)
  elif '' != rem:
    while '' != rem:
      if in_comment:
        clean = ''
        in_comment,rem = inCommentHandler(rem)
      else:
        clean,in_comment,rem = outCommentHandler(rem)
      if '' != clean:
        print(clean)

Both jsmin and htmlmin.py are used the same way:

$ jsmin < code.js > code.min.js
$ htmlmin.py < page.html > page.min.html

Neither program is perfect.

JSMin
I found that JSMin fails with regular expression patterns like this:

var alpha=/^[a-z]+$/
var ALPHA=/^[A-Z]+$/

Adding ; to the end of the lines fixes that problem.

htmlmin.py
What it does is simply:

  • Preserves <PRE> as long as the PRE-tags are on their own lines
  • Removes all comments: <!-- A comment -->
  • Removes white space in the beginning (and end) of lines
  • Removes empty lines

This only picks the low-hanging fruit, but I think it is good enough for most purposes.

What is the “compression” rate?
For my test code:
HTML was compressed from 59 kB to 42 kB
JavaScript was compressed from 162 kB to 108 kB

It is possible to do better with better tools, but this is very simple, and it takes away the obvious waste from the files, with minimal risk of changing behavior. Heavier JavaScript minifiers rename variables and rewrite code.

Using WRT54GL with OpenWRT 14.07 in 2014

The Linksys WRT54GL was a very successful product for its time, not least because sources were available and people could make their own firmwares for it. There are today firmwares like Tomato, DD-WRT and OpenWRT that run on the WRT54GL. My interest is in OpenWRT (as it gives me a full Linux system, not just a firmware, a web GUI and parameters to set). The last OpenWRT version that was good and fully supported on the WRT54GL was 10.03.1. But that firmware is four years old, and not so fresh in 2014. I like to keep my Linux systems updated.

I tried to build (from scratch/sources, using buildroot) a very minimal version of Barrier Breaker (OpenWRT 14.07) Release Candidate 1 for the WRT54GL. I removed everything that I could possibly live without, and ended up with an image that used less than 3 MB of flash. Still, it was impossible to get WiFi running for more than a few minutes, and the router got very slow. Most likely the 16 MB of RAM is just not enough (that is the generally accepted explanation).

Update: Someone seems to have been successful running BB with WLAN.

However, without WiFi, the WRT54GL runs Barrier Breaker just fine. That rules out using it as a WiFi router, but it leaves other options open. It is still a functional switch, that is also a Linux-machine with dropbear (ssh) and it can run software (opkg-packages) like nginx, openvpn, stunnel, uhttpd (cgi capable web server), iptables, and many more.

Image Builder
I have used the Image Builder (also called Image Generator) from BB RC2 to generate my own OpenWRT firmware for my WRT54GL. The process can seem scary at first, but it is quite simple. You can choose exactly the packages you want and build an image with just those packages, making the most of your hardware.
Update 2014-10-09: BB 14.07 final works fine, just as RC2.

After downloading and extracting the Image Builder (on my x64 Ubuntu machine), I ran the following command:

make image PROFILE=Broadcom-None
           PACKAGES="-dnsmasq -ppp -ppp-mod-pppoe -libip6tc -kmod-ipv6
           -odhcp6c -ip6tables -odhcpd uhttpd netcat openvpn-easy-rsa
           openvpn-openssl openssl-util stunnel libwrap"

Note, there should be no line breaks when you run this command.

  • Available profiles are found in target/linux/brcm47xx, and give a starting point for selecting packages
  • Packages can be added using the PACKAGES option to make (e.g. stunnel)
  • Packages can be removed by putting - in front of their name (e.g. odhcp6c)

I read the massive output which indicated that some packages (netcat, stunnel) failed to install. Also, if there are missing dependencies (libwrap) you will see it in the output.

Not all available packages are included in the Image Builder download, but just download the packages you want and put them in the package folder of the image builder.

Rerun make image until you see no dependency errors, failed packages or other problems.

When make image has finished, have a look in your bin/brcm47xx folder, and you will find your new firmware. The firmware I generated above was less than 3.5 MB, leaving several hundred kB for configuration and more packages. Flashing the router the normal way and logging in to the system, I find that memory usage is about 10 MB and available disk is about 500 kB (openssl-util alone is 477 kB on the filesystem and its opkg is 182 kB).

As a benchmark I decided to use the easy-rsa package and build certificates. In particular build-dh takes a very long time: 30 minutes on this machine. However, on my new TP-Link WDR4900 it took 60 minutes, so obviously this was a silly, unpredictable benchmark (generating DH parameters involves a random search for suitable primes, so run times vary wildly).

Conclusion
With some fantasy, curiosity and enthusiasm, you can turn your old WRT54GL into a useful component of your home or work network. It can provide a few useful services. Often it is wise to separate different functions onto different hardware, because it makes your network more stable. And not least, this is a good way to experiment with OpenWRT without risk of breaking your production WiFi and broadband router.

Upgrading ownCloud 6.0.3 to 7.0.1

I am running ownCloud on a Debian machine with Apache and Mysql, as I have written about before. OwnCloud has released a new version, 7.0.1, and it is possible to update via the Web GUI. I did that, it took a few minutes, and it worked perfectly.

I have written about Performance of ownCloud before. It appears upgrading to 7.0.1 has not changed the performance of the platform at all.

TP-Link WDR4900, OpenWRT and OpenVPN

My old Linksys WRT54GL has worked fine for a long time. I want OpenWRT and I want some VPN solution, and the last OpenWRT version that runs properly on the WRT54GL is 10.03.1, which is 4 years old by now. That means old versions of all software, and possible security issues.

I gave OpenWRT Barrier Breaker RC1 a chance on my WRT54GL. I even rebuilt OpenWRT from scratch (buildroot and all) to produce a minimal image (no IPv6, among other things). It was well below 3 MB, so I had a whole MB of flash available, but the wireless network just wouldn't run (for more than a few minutes, then the router almost halts completely). 16 MB of RAM is simply too little.

My thoughts about a new router were ultimately inspired by the announcement of the Linksys WRT1900AC. However, the WRT1900AC was supposed to be FOSS-friendly, but now, several months later, the driver sources required by OpenWRT have still not been released. It seems Belkin has done a really ugly thing. So no thanks, Linksys (owned by Belkin).

OpenWRT works on many different routers, but many of them are not very powerful (8 MB flash/32 MB RAM is common). Since I have been struggling with my WRT54GL for years I wanted to get a powerful router this time, one that gives me flexibility. As soon as you want something more powerful than 8/32 MB that is also supported by OpenWRT, you do not have so many choices. And many routers that work are hard to actually buy. To confuse things further, many routers come in different versions with different support (Netgear WNDR3700 being perhaps the worst example).

TP Link WDR4900
I ended up with a router that is classified as Work in Progress by OpenWRT: the TP-Link WDR4900 (European version 1.3). It has 16 MB of flash and 128 MB of RAM, and an 800 MHz PowerPC CPU – who could resist such amazing hardware?

As it was listed as Work in Progress I expected problems, but none so far! I have installed OpenWRT Barrier Breaker RC2 (generic, not the p1020), and I use it as most people would. Both 2.4 GHz and 5 GHz WiFi work fine (although I have read that 40 MHz channel width does not work, so I have not tried that). The router has 10 MB of available flash (compared to just 2 MB for a similar 8 MB router) and 96 MB of available RAM.

OpenVPN
And I have installed OpenVPN. It would be very nice with a VPN solution that works out of the box with iOS and Android, but that means PPTP (which I don't want) or L2TP/IPsec solutions, which all seem very complicated and without really good support anyway. OpenVPN requires a separate app on iOS and Android, but whatever.

I followed the OpenWRT OpenVPN instructions. At first they seem complicated, but it was actually easy and successful. My only issue was that the router would not route VPN traffic to the internet. Turns out a Firewall rule was missing in the instructions (at least for me):

#/etc/config/firewall
config forwarding
	option src 'vpn'
	option dest 'wan'

And the command “build-dh”, which is supposed to take a long time, took just over 60 minutes on this router.

I found that each client should have their own certificates (otherwise they get the same IP, which is bad).

The next practical question was on what port and protocol OpenVPN should run. For me it is more important to always be able to connect than to get the most optimal performance. So I thought TCP/443 should be a safe choice (no one blocks that). Well, it turns out there are firewalls that see the difference between HTTPS/SSL and OpenVPN/SSL and block the latter. Some options could possibly be UDP/123 or UDP/53. OpenVPN over ICMP would be VERY nice.

I tried to configure a port forward on the OpenWRT router, to itself, to make OpenVPN operate on several ports. This seems to work fine with TCP but not with UDP. I don't really know why, but I might find out some day. Alternatively I could run several instances of OpenVPN on different port/protocol combinations, but that would require more configuration (and resources) than just a port forward.

OpenWRT
I first installed BB RC1 (the full 16 MB image) and later upgraded to BB RC2 (the smaller upgrade image). Obviously, the configurations are kept, but all packages are lost – a good thing to know ;) Well, all I need to do is export a list of all installed packages before the upgrade, and then reinstall from that list after the upgrade. Guess I am not the first.

Also, release builds come with Luci (the web GUI) included, but trunk builds do not. I thought RC meant release candidate, implying that Luci would be included, but not so. However, very easy to find information on how to install Luci.

Benchmarking WDR4900
Benchmarks are tricky. I have a little C program that is mostly CPU-dependent, does not use floating point numbers, and outputs computation time (using clock_gettime, so it should be accurate) for the problem given to it. I ran this program on problems small enough to run on my old WRT54GL, and compared the results of the WRT54GL (200 MHz MIPS), the WDR4900 (800 MHz PowerPC) and an Apple PowerBook G4 (866 MHz PowerPC) running Debian. In all cases I compiled with GCC and -Os (optimize for small binary). The 866 MHz G4 is marginally faster than the 800 MHz P1014. The PowerPCs are about 6 times faster than the old 200 MHz MIPS of the WRT54GL.

I have also found that the WDR4900 is about twice as fast as a Raspberry Pi for the above application.

Owncloud client on Raspbian

I found that Raspbian comes with a very old version (1.2 something) of the Owncloud client. I found no more up-to-date prebuilt versions, so I built one myself:

$ sudo apt-get install cmake qt4-dev-tools build-essential
$ sudo apt-get install libneon27 libneon27-dev qtkeychain-dev
$ sudo apt-get install sqlite3 libsqlite3-dev libsqlite3-0
$ tar -xjf mirall-1.6.1rc1.tar.bz2
$ mkdir mirall-build
$ cd mirall-build/
$ cmake ../mirall-1.6.1rc1
$ make

The owncloud client is now in the bin folder.

Note: I took the commands above from my history, so there is a slight risk of a mistake. Also, I might have installed other packages before that I am not aware are required for owncloud. Feel free to give feedback!

It is quite useful to put a Raspberry Pi with a USB drive in someone else's home, and let it synchronize your files. That way, you have an off-site backup for worst case scenarios.

Grub Boot Error

My desktop computer (it is still an ASUS Barebone V3-M3N8200) sometimes gives me the following error when I turn it on:

error: attempt to read or write outside of disk 'hd0'.
Entering rescue mode...
grub rescue> _


My observations:

  • This has happened now and then for a while
  • It seems to happen more often when the computer has been off for a longer period of time (sounds unlikely, I know)
  • Ctrl-Alt-Del: It always boots properly the second time

I have three SATA drives. BIOS boots the first harddrive, where GRUB is installed on the mbr, and where / is the first and only partition, and /boot lives on the / partition.

The drive in question is (from dmesg):

[    1.339215] ata3.00: ATA-8: OCZ-VERTEX PLUS R2, 1.2, max UDMA/133
[    1.339217] ata3.00: 120817072 sectors, multi 1: LBA48 NCQ (depth 31/32)
[    1.339323] ata3.00: configured for UDMA/133
[    1.339466] scsi 2:0:0:0: Direct-Access     ATA      OCZ-VERTEX PLUS  1.2  PQ: 0 ANSI: 5
[    1.339623] sd 2:0:0:0: Attached scsi generic sg1 type 0
[    1.339715] sd 2:0:0:0: [sda] 120817072 512-byte logical blocks: (61.8 GB/57.6 GiB)

That is, a 60GB SSD drive from OCZ (yes, I had another OCZ SSD drive that died).

I cannot explain my occasional boot errors, but I have some theories:

  • The SSD drive is broken/corrupted (but no signs within Ubuntu of anything like it)
  • The drive is somehow not yet initialized when GRUB executes (?)
  • Somehow, more than one hard drive is involved in the boot process, and they are not all initialized at the same time (but this does not seem to be the case)

GSmartControl gives me some suspicious output about my drive… but I do not know how to interpret it:

Error in ATA Error Log structure: checksum error
Error in Self-Test Log structure: checksum error

The (Short) self test completes without errors.

Any ideas or suggestions are welcome! I will update this post if I learn anything new.