Author Archives: zo0ok

Grub Boot Error

My desktop computer (it is still an ASUS Barebone V3-M3N8200) sometimes gives me the following error when I turn it on:

error: attempt to read or write outside of disk 'hd0'.
Entering rescue mode...
grub rescue> _


My observations:

  • This has happened now and then for a while
  • It seems to happen more often when the computer has been off for a longer period of time (sounds unlikely, I know)
  • Ctrl-Alt-Del: It always boots properly the second time

I have three SATA drives. BIOS boots the first hard drive, where GRUB is installed in the MBR, where / is the first and only partition, and where /boot lives on the / partition.

The drive in question is (from dmesg):

[    1.339215] ata3.00: ATA-8: OCZ-VERTEX PLUS R2, 1.2, max UDMA/133
[    1.339217] ata3.00: 120817072 sectors, multi 1: LBA48 NCQ (depth 31/32)
[    1.339323] ata3.00: configured for UDMA/133
[    1.339466] scsi 2:0:0:0: Direct-Access     ATA      OCZ-VERTEX PLUS  1.2  PQ: 0 ANSI: 5
[    1.339623] sd 2:0:0:0: Attached scsi generic sg1 type 0
[    1.339715] sd 2:0:0:0: [sda] 120817072 512-byte logical blocks: (61.8 GB/57.6 GiB)

That is, a 60GB SSD drive from OCZ (yes, I had another OCZ SSD drive that died).

I can not explain my occasional boot errors, but I have some theories:

  • The SSD drive is broken/corrupted (but no signs within Ubuntu of anything like it)
  • The drive is somehow not fully initialized when GRUB executes (?)
  • Somehow, more than one hard drive is involved in the boot process, and they are not all initialized at the same time (but this does not seem to be the case)

GSmartControl gives me some suspicious output about my drive… but I do not know how to interpret it:

Error in ATA Error Log structure: checksum error
Error in Self-Test Log structure: checksum error

The (short) self-test completes without errors.

Any ideas or suggestions are welcome! I will update this post if I learn anything new.

Installing Citrix Receiver on Raspberry Pi (Raspbian)

It is obviously tempting to try to use a Raspberry Pi as a thin client. Often that means acting as a Citrix client, which requires Citrix Receiver (a closed-source program available from Citrix in binary form only).

The problem
Raspbian is very much a normal Debian system. Citrix Receiver usually works nicely on Debian, and Citrix provides ARM binaries. But that would be too easy.

There are two ARM binaries available from Citrix (Receiver 13.0, the current)

  1. ARMEL – for ARM CPUs without hardware floating point support
  2. ARMHF – for ARM CPUs with hardware floating point support AND at least ARMv7

The Raspberry Pi is based on an ARMv6 core WITH floating point support. This means that the ARMHF binary can never run, since ARMv6 lacks instructions required by ARMv7. But it also means that the ARMEL version cannot run on Raspbian (although it could work on a Raspberry Pi with another OS).
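If you want to verify this on your own board, a couple of standard commands show what the hardware and OS report (a quick sketch; run on the Pi itself):

```shell
# What the kernel reports for the CPU architecture: armv6l on a Raspberry Pi 1
uname -m

# What the CPU supports: look for "vfp" (hardware floating point).
# On the Pi the line is called "Features"; on x86 machines it is "flags".
grep -m1 -iE '^(Features|flags)' /proc/cpuinfo
```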

There are different strategies for this problem: fixing the OS, or fixing the Citrix client.

RPi Thin Client Project
There is a nice effort called RPi Thin Client Project. The version from 2013-11-28 runs everything in ARMEL (without Hardware Float). The good thing is that it actually runs Citrix Receiver on a Raspberry Pi, and the performance of the Receiver itself is quite decent. However, it is based on Debian Unstable, and as I started updating and installing packages it did not work very well. Also, the RPi is not very fast even with Hard Float working, and without Hard Float everything except the Citrix client is very slow.

I see there is now a new Hard Float release of the RPi Thin Client project (2014-06-10), but as far as I can see it does not include Citrix Receiver (feel free to correct or update me!).

Unofficial Citrix Client
On a blog hosted by Citrix, there is an unofficial Citrix Receiver available for download. Obviously Muhammad Dawood has access to the real Receiver source code and is allowed to compile it and distribute unofficial binaries. If you follow his instructions (install Raspbian as usual, then install Citrix Receiver with his special setup script) you will get a Hard Float Citrix Receiver on a normal Raspbian system. Very nice! I am running it, it works very well, and I am very much looking forward to an official/supported version.

The setup script modifies /boot/config.txt, mostly to overclock the RPi to maximize performance. My RPi did not accept it and refused to start. The good thing is that you can remove the SD card from the RPi and edit the config file with another computer. I only have ONE active line in my /boot/config.txt:

gpu_mem=128

…and you can probably change/remove that line too. I have an old 1280×1024 LCD display that I connect via an Apple HDMI-to-DVI adapter and a DVI-DVI cable. The display comes up at the correct resolution (although lxrandr cannot run).

Other ideas
I was thinking about decompiling/recompiling one of the official ARM binaries. After reading a bit about it, I gave up without even trying. It probably violates some license agreement too.

Perhaps it could be possible to run some official binary (ARMEL, ARMHF or even x86) using QEMU user mode. Probably the performance would be completely unacceptable.

On the Raspbian pages I read that it is theoretically possible to run ARMEL applications on Raspbian using Linux/Debian multi-arch, but there seem to be some hacks made in Raspbian, so this multi-arch approach is probably unrealistic in practice.

Conclusion
I recommend going with the unofficial/unsupported binaries from Citrix for now. Let's hope Citrix embraces this some day.

Simple Integer Factorization and RSA key sizes

I have been playing with RSA lately. You know, the encryption protocol where you generate two primes, multiply them, give the product away to be used for encryption by others, and keep the primes for yourself since they are needed for decryption (that was very simplified).

I have read that the largest number of this kind ever to have been factorized was 768 bits long. Such a number looks like this (in hex):

6d733229f36b1fe086f39b5ae05d0155e7658480fe7c987fe5da51734b6b8c02
a7b6b22e5da0afc0867d7315405bddd9062da121a3b0b2663397ae8a65085d53
927279c1c330247ff20c2f1bd4b4f95375a74beefac29c519a8193e2614ffc19

For encryption to be safe for the next decades, keys that are 2048 or 4096 bits long are used. Or even longer.

One feature of RSA is that the output of encryption is never smaller than the size of the key (well, again, very simplified). So, imagine you want to encrypt 4-digit PIN codes, one by one, using RSA with a 1024-bit key: each PIN code would become 128 bytes of ciphertext, instead of just a few characters. For almost obvious reasons, you can not make a stream cipher out of RSA or use some other smart mode of operation to work around this problem (please let me know if I am wrong). This makes me want to use RSA with a key small enough to be practical, yet secure enough for my purposes.
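The size blow-up is easy to demonstrate with openssl (a sketch; the key and file names are made up for the example):

```shell
# Generate a 1024-bit RSA key pair
openssl genrsa -out key.pem 1024 2>/dev/null
openssl rsa -in key.pem -pubout -out pub.pem 2>/dev/null

# Encrypt a 4-byte PIN code with the public key
printf '1234' > pin.txt
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in pin.txt -out pin.enc

# The ciphertext is always as large as the modulus: 128 bytes for a 1024-bit key
wc -c < pin.enc
```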

My questions

  1. What key sizes are trivial to break?
  2. What sizes require some qualified effort?
  3. How hard is it really to factorize big integers?

I found a tool called Yafu. It appears to be rather competent; it would require years of effort and advanced skills in math to write a better tool. For integers of 320 bits and larger, Yafu requires GGNFS, a seriously complicated piece of software that is also hard to compile. Luckily there are nice Windows binaries from Jeff Gilchrist. I also downloaded a binary version of Yafu for Windows. The examples below use Cygwin to have access to some Unix tools (bc, tr, openssl).

Generating a Prime product and factorizing it
There is a very nice JavaScript project for RSA. Set the bit size to whatever you want (I use 128 in this example), click generate, and obtain “Modulus”:

Modulus (hex): 81653c1536c42501a815431dac804899

Convert to upper case using tr:

$ echo 81653c1536c42501a815431dac804899 | tr '[:lower:]' '[:upper:]'
81653C1536C42501A815431DAC804899

Then use bc to convert to decimal:

$ bc -q
ibase=16
81653C1536C42501A815431DAC804899
171996052064283111843964589052488861849
quit

Finally, factorize using yafu:

$ echo "factor(171996052064283111843964589052488861849)" | ./yafu-x64.exe


06/14/14 13:24:28 v1.34.5 @ TOR, System/Build Info:
Using GMP-ECM 6.3, Powered by GMP 5.1.1
detected         Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
detected L1 = 32768 bytes, L2 = 6291456 bytes, CL = 64 bytes
measured cpu frequency ~= 3108.870930
using 20 random witnesses for Rabin-Miller PRP checks

===============================================================
======= Welcome to YAFU (Yet Another Factoring Utility) =======
=======             bbuhrow@gmail.com                   =======
=======     Type help at any time, or quit to quit      =======
===============================================================
cached 78498 primes. pmax = 999983


>>
fac: factoring 171996052064283111843964589052488861849
fac: using pretesting plan: normal
fac: no tune info: using qs/gnfs crossover of 95 digits
div: primes less than 10000
fmt: 1000000 iterations
rho: x^2 + 3, starting 1000 iterations on C39
rho: x^2 + 2, starting 1000 iterations on C39
rho: x^2 + 1, starting 1000 iterations on C39
pm1: starting B1 = 150K, B2 = gmp-ecm default on C39
ecm: 30/30 curves on C39, B1=2K, B2=gmp-ecm default

starting SIQS on c39: 171996052064283111843964589052488861849

==== sieving in progress (1 thread):     624 relations needed ====
====           Press ctrl-c to abort and save state           ====
546 rels found: 293 full + 253 from 2026 partial, (41408.50 rels/sec)

SIQS elapsed time = 0.0740 seconds.
Total factoring time = 0.2690 seconds


***factors found***

P20 = 14624642445740394983
P20 = 11760701343804754303

ans = 1

I then wrote a little bash script that does everything:

#!/bin/bash
# Generate an RSA key of the given bit size, extract the modulus,
# convert it to decimal and let yafu try to factor it.

INTSIZE=$1

# Clean up leftovers from previous runs
rm -f key.* *.log siqs.dat

# Keep bc output on a single line
export BC_LINE_LENGTH=9999

openssl genrsa -out key.pem $INTSIZE
openssl rsa -in key.pem -noout -modulus | cut -f 2 -d '=' > key.hex
echo "ibase=16 ; $( cat key.hex )" | bc > key.dec
echo "factor($( cat key.dec ))" | ./yafu-x64.exe -threads 4

I am using yafu with default settings; it is very hard to imagine I could do anything better than the yafu author. For 320 bits and above, the GGNFS library is required. Performance for different sizes of integers to factorize (using the quad-core Intel i5-2400 from the output above):

Bits  Time     Notes
128   0.28s
160   0.33s
192   1.86s
224   8.02s
256   52.6s
288   265s
320   3649s    ~30MB temp files
352   9291s
384   27261s   ~660MB temp files
512   73 days  http://lukenotricks.blogspot.se/2009/08/solo-desktop-factorization-of-rsa-512.html

Well, this means that a normal Windows desktop computer can break 512-bit RSA within days or weeks. Below 512 bits, factoring is not much of a challenge, and using 256-bit RSA for encrypting short messages is (unfortunately, since that was what I ultimately wanted to explore) just ridiculous.

As keys get larger, more temp disk space is required, and somewhere between 512 and 768 bits it gets seriously complex (I claim this, as a 768-bit integer is the largest known ever to have been factorized into two primes). You can read about the General Number Field Sieve to get some background.

Not everyone is capable of extracting the modulus from an encrypted file or network stream, installing Yafu, waiting for a little while, and then using the two obtained primes to actually generate a fake certificate or decrypt anything. So, if you want to encrypt data to prevent your boss or your wife from reading it, you can probably use any key size you like. Or why not use an encrypted zip file or an encrypted MS Word file?

If you have a skilled and motivated enemy who is willing to put some effort into breaking your encryption, I would not use anything close to 512 bits. I assume the police, FRA (in Sweden) or the NSA can break 512-bit RSA within hours or faster when they need to.

I am not giving any sources for my claims here. I am not an expert in factorizing large prime products, and I am certainly not an expert in quantum computers. But as I understand it, 1024 bits should be fine for now, though in 10-20 years even larger keys may make sense, and I don't expect to see quantum computers practically breaking real RSA keys in the next 50 years.

It is fascinating that a 128-bit AES key is completely beyond hope to brute force for any power on earth, while 128-bit RSA keys are worthless.

I now wonder, are there other asymmetric ciphers that are secure with significantly shorter keys than RSA?

Installing Ubuntu on Pentium M with forcepae

If you try to install Ubuntu (or Xubuntu, Lubuntu) 14.04 on a Pentium M computer, you may get the following error:

ERROR: PAE is disabled on this Pentium M


Just restart the computer. When you come to the install menu, hit F6 to get a menu of kernel parameters. None of those parameters is what you want, so hit ESC. You should now be able to type forcepae at the end of the kernel command line.

Now hit Return, and the startup/installation of Ubuntu should proceed normally.

Background
PAE is a CPU feature that has been available on most x86-CPUs since the Pentium Pro days. Since Ubuntu 12.10, it is a required feature. Some Pentium M CPUs have the PAE feature implemented, but the processor does not announce the feature properly. Since Ubuntu 14.04 the above forcepae option is available, to allow Linux to use PAE even if the CPU officially does not support it.
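You can check whether a CPU advertises PAE from any live Linux session; a sketch (on an affected Pentium M the flag will be missing even though forcepae makes PAE work):

```shell
# The pae flag is listed among the CPU capabilities in /proc/cpuinfo
if grep -q '\bpae\b' /proc/cpuinfo; then
    echo "PAE advertised by CPU"
else
    echo "PAE not advertised (forcepae may still work on a Pentium M)"
fi
```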

This mostly affects laptops from perhaps 2000-2005. These laptops are often good computers with a 1400-2000MHz CPU and 512+ MB of RAM. As Windows XP is now officially unsupported by Microsoft, owners of such hardware might want to install an Ubuntu flavour on the computer instead.

There have been ways to make this work with Ubuntu 12.10-13.10. I suggest abandoning those versions and hacks completely and making a fresh install of 14.04.

I have written before about Ubuntu on Pentium M without PAE.

Migrating from Windows XP
I would personally suggest Xubuntu or Lubuntu as a replacement for Windows XP: both should be lightweight enough for your Pentium M computer, and both are easy to use with only a Windows background. Lubuntu is the most Windows-like and the lighter of the two. Xubuntu is a bit heavier (and nicer), and also resembles Mac OS X a bit.

I suggest the “Try Ubuntu without installing” option. You will have an installer available inside Ubuntu anyway, and you can confirm that most things work properly before you wipe the computer.

Testing ownCloud performance

Update 2014-04-28: I have found that downloading files is quite fast. At about 10% CPU load, the server can saturate my 10Mbit/s internet connection when I download my files to another computer over https. When uploading files, top shows mostly waiting time; when downloading, top shows mostly idle time. I suspect the SQL communication/overhead is limiting upload performance, and that ownCloud does a lot of bookkeeping in the database. If it does so for a good reason, and download is reasonably fast, I can live with it. I keep my original article (on upload performance) below anyway, but I find the performance quite acceptable for my real-world application, on my not so powerful hardware.

Update 2014-04-26: Tried FastCGI/mod-fcgid, see below.

Ubuntu announced that they will cancel the Ubuntu One service, and Condoleezza Rice will start working for Dropbox. So, how am I going to share my files among different computers and devices?

ownCloud appears to be a nice option. It is like Dropbox, but I can run it myself, and it works not only for files, but also for contacts/calendars and smartphones.

Buying ownCloud as a service is possible, but as soon as I want to store my pictures (and perhaps some video and music) it gets pretty expensive. If I host it myself, several hundred GB of disk is no problem.

So, I installed ownCloud (6.0.2) on my QNAP TS-109 running Debian (7.4). Horrible performance: it took a minute to log in. OK, the QNAP has a 500MHz ARM CPU, but even worse, just 128MB of RAM and quite slow disk access. What device to put ownCloud on? A new nice QNAP (TS-221) is quite pricy, and a Raspberry Pi accesses both disk and network over its USB bus. I thought of buying a used G4 Mac Mini, as they are really cheap. Then I remembered my old Titanium PowerBook G4 that has been gathering dust for the last year, and I decided to try running ownCloud on it. Perhaps not as a long-term solution, but as a learning/testing machine it could work fine.

ownCloud Server configuration
CPU: G4 866MHz
RAM: 1024MB
HD: 320GB ATA
OS: Debian, 7.4 (PPC) fresh install on entire hard drive
DB: mysql 5.5 (the std version for Debian)
https: apache 2.2 (the std version for Debian)

To improve performance, I enabled APC for PHP, and disabled full text search.
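For reference, enabling APC on Debian 7 meant installing the php-apc package and restarting Apache; the cache can then be tuned with a config fragment like this (the file path and values are my assumptions for PHP 5.4 on wheezy):

```
; /etc/php5/conf.d/apc.ini (illustrative values)
apc.enabled=1
apc.shm_size=64M
```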

Performance measurements
For the performance tests, I tried to transfer 1x100MB, 10x10MB and 100x1MB files. I measured the times with a regular stopwatch, and occasionally I repeated a test when the result was strange. The measurements below are not exactly accurate, but the big picture is there.

Transfers are made from a Windows 7 PC over a Gbit network.

                                    1x100MB   10x10MB   100x1MB
Encryption and checksum on G4 / server
(1): ssl encrypt aes ; sync         7s
(2): md5sum                         1s
File transfer using other methods
(3): ftp/Filezilla                  3s        3s        4s
(4): sftp/Filezilla                 14s       15s       17s
ownCloud
(5): No SSL, no APC                 15s       32s       234s
(6): No SSL, APC                    16s       27s       197s
(7): SSL, APC                       34s       43s       263s
(8): SSL, APC, encryption           46s       69s       438s

Comments on Performance
(1): tells me that the server is capable of encrypting 100MB of data and syncing the output to local disk in 7 seconds (the sync itself takes less than a second).
(2): tells me that the server is capable of checksumming 100MB of data in a second.
(3): tells me that simply sending the files over the network with a proven protocol takes 3-4 seconds, slightly slower for many smaller files.
(4): tells me that sending the files in an encrypted stream with a proven protocol over the network takes about 15 seconds, slightly slower for many smaller files.
(5): shows that the overhead for many files in ownCloud is massive.
(6): shows that APC helps, but not in a very significant way.
(7): shows the extra cost of SSL (transferring over a secure channel).
(8): shows the extra cost of encrypting the files for the user, on the server (using openssl/AES, according to the ownCloud documentation).
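For reference, rows (1) and (2) can be reproduced with commands along these lines (a sketch; the file name, cipher and passphrase are made up):

```shell
# Create a 100MB test file
dd if=/dev/zero of=bigfile bs=1024 count=102400 2>/dev/null

# Row (1): encrypt with AES on the server and sync the output to disk
time sh -c 'openssl enc -aes-256-cbc -pass pass:secret -in bigfile -out bigfile.enc && sync'

# Row (2): checksum the same file
time md5sum bigfile
```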

It makes sense to compare rows (3) and (6), indicating that with no encryption whatsoever, the overhead of ownCloud is 5-50x the actual work. Or: the resources used for actually transferring and storing files are 2-20%, and the remaining 80-98% are “wasted”. Now, ownCloud has some synchronization and error-handling capabilities not found in FTP, but I don't think that justifies this massive overhead.

In the same way it makes sense to compare rows (4) and (7), indicating a waste of 60-94% of resources for using a secure channel (and I believe that SSH uses stronger encryption than TLS).

For average file size smaller than 1MB, the waste will be even bigger.

I suspect the cost is related to executing PHP for each and every file. It could also be the per-file use of the database that is expensive. Somewhere I read that there are “hundreds” of database calls for each web server request handled by ownCloud.

Suggestions
It is of course a bit arrogant to suggest solutions to a problem in an Open Source project that I have never contributed to, without even reading the code. Anyway, here we go:

  • Find a way to upload small directories (<10MB, or <60s transfer) as tarballs or zipfiles. This should of course happen transparently to the user (and only work via the client, not the web). This way, hundreds or thousands of small files could be uploaded in a few seconds instead of a very long time, and the load on the server would decrease a lot.
  • Similar to the first suggestion, allow files to be uploaded in fragments, to allow upload of 2GB+ files on all server platforms (it is ridiculous that an ARM server, like a QNAP, cannot handle 2GB+ files, as I have read in the documentation is the case).
  • Alternatively, allow ownCloud to use ssh/sftp as transfer protocol. It will not work in all situations, but when client and server are allowed to communicate on port 22, and ownCloud is installed on a server with ssh enabled, it could be an option.

I kind of presume that the problem is one-file-per-request and WebDAV limitations. Perhaps it is the database that is the problem? Nevertheless, I think some kind of batch handling of uploads/downloads is the solution in that case too.

LAMP
ownCloud is built on LAMP, and I doubt the performance problems are related to the L and A in LAMP. Also, I don't think the M should be the problem if the database calls are kept at a reasonable level. The problem must be with the P(HP). I understand and appreciate that PHP is simple and productive, and probably 95% of ownCloud can be perfectly written in PHP. But perhaps there are a few things that should be written in something more high-performing (I am talking about C, of course)?

Conclusion
I really like the ambition of ownCloud, and mostly, the software is very nice. The server has many features, and the clients are not only nice, but also available for several platforms.

ownCloud is a quite mature product, at version 6. I wish some effort were put into improving performance. I believe there are possible strategies that would not require very much rewriting, and would not need to break compatibility. I also believe it makes much sense to optimize the ownCloud server code: not only because people may run it on Raspberry Pis, QNAPs or old hardware, but also because it would improve its usefulness on more powerful servers.

2014-04-26: FastCGI / mod-fcgid
I decided to try PHP via FastCGI to see if it could improve performance. Very little difference, so I disabled it and went back to the “recommended” configuration. For details, read on.

I mostly followed this guide (apache2-mod-fastcgi seems to have been replaced by apache2-mod-fcgid lately, and other high-ranking guides were out of date). The following options need to be added to /etc/apache2/apache2.conf:

FcgidFixPathinfo 1               (not in the site definition as suggested in guide)
FcgidMaxRequestLen 536870912     (effectively limits maximum file size)

Broken USB Drive

A friend had problems with a 250GB external Western Digital Passport USB drive. I connected it to Linux, and got:

[ 1038.640149] usb 3-5: new full-speed USB device number 4 using ohci-pci
[ 1038.823970] usb 3-5: device descriptor read/64, error -62
[ 1039.111652] usb 3-5: device descriptor read/64, error -62
[ 1039.391408] usb 3-5: new full-speed USB device number 5 using ohci-pci
[ 1039.575187] usb 3-5: device descriptor read/64, error -62
[ 1039.862954] usb 3-5: device descriptor read/64, error -62
[ 1040.142662] usb 3-5: new full-speed USB device number 6 using ohci-pci
[ 1040.550269] usb 3-5: device not accepting address 6, error -62
[ 1040.726092] usb 3-5: new full-speed USB device number 7 using ohci-pci
[ 1041.133774] usb 3-5: device not accepting address 7, error -62
[ 1041.133806] hub 3-0:1.0: unable to enumerate USB device on port 5

It turned out the USB/SATA controller was broken, but the drive itself was healthy. I took the 2.5″ SATA drive out of the enclosure and connected it to another SATA controller, and all seems fine.

Compile program for OpenWRT

I felt a strong desire to compile something, anything, for my WRT54GL running OpenWrt. As is often the case, in the end it was very simple, but finding the simple solution was not very easy. Ironically, the best instructions were not on the OpenWRT site (but I downloaded the Toolchain from openwrt.org).

The program I wanted to compile was a pure C program that I had written myself: almost clean C89/ANSI code, with a few C99/POSIX dependencies. No autoconf, no makefile.

Solution/Conclusion
I first downloaded the Toolchain for OpenWRT 12.09, brcm47xx, from OpenWRT (even though I run OpenWRT 10.03.1 brcm-2.4). I unpacked it to ~/openwrt/.

Second, set two environment variables:

$ PATH=$PWD/OpenWrt-Toolchain-brcm47xx-for-mipsel-gcc-4.6-linaro_uClibc-0.9.33.2/toolchain-mipsel_gcc-4.6-linaro_uClibc-0.9.33.2/bin:$PATH

$ STAGING_DIR=$PWD/OpenWrt-Toolchain-brcm47xx-for-mipsel-gcc-4.6-linaro_uClibc-0.9.33.2/toolchain-mipsel_gcc-4.6-linaro_uClibc-0.9.33.2
$ export STAGING_DIR

Third, compile

$ mipsel-openwrt-linux-uclibc-gcc program.c

Finally, send it to the router and test:

$ scp a.out root@192.168.0.1:.
$ ssh -l root 192.168.0.1

# ./a.out

That was all I had to do, really. Now some comments on this.

Choosing a compiler/toolchain
OpenWRT gives you three download options that all seem to be relevant if you want to compile stuff:

  • OpenWRT-ImageBuilder…
  • OpenWRT-SDK…
  • OpenWRT-Toolchain…

The Toolchain is a lot smaller than the others. At least for the brcm platform, the SDK is named “for-linux-i486” while the Toolchain is named “for-mipsel”. That confused me, because I was not sure the Toolchain was actually a cross compiler.

To confuse things more, as soon as you start reading about how to compile stuff for OpenWRT, everyone talks about the “Buildroot”. No cheating by downloading the SDK or Toolchain and getting an already-compiled compiler! Real men compile their compilers themselves? No disrespect here… the Buildroot is fantastic technology, and perhaps I will have my own one day, but right now a C compiler is all I want.

Also, it seems the Toolchain that comes with 10.03.1/brcm-2.4 (my platform) is broken (OK, I am not smart enough to make it work; cc1 complains about an unknown parameter). However, in my case the toolchain for 12.09/brcm47xx also worked. I wasted much time on the 10.03.1/brcm-2.4 Toolchain. If my simple steps above don't work quite quickly for you, and you get weird errors from the compiler, download another Toolchain (perhaps even for another platform, perhaps 12.09/brcm47xx that works for me) just to see if you can get that to work (you may not be able to use the compiled binary, but you can at least confirm that you can generate one).

I suppose it is preferable to use the Toolchain for the OpenWRT platform/version you are actually targeting, but newer toolchains for compatible platforms can also work. Perhaps the newest toolchain is always preferable.

Linking and optimizing
Compiler flags affect the size of your binary, and for OpenWRT you typically want a small binary. I guess the “-Os -s” options are the best you can do. The binary itself is static; no dynamic linking. I think that means it only communicates with the rest of the world via Linux system calls, and as long as those have not changed in a non-compatible way, you can compile your program with a different toolchain than the one used to build the OpenWRT image and packages (of course, the binary format must be right too).

The C library
What about standard library compliance? My Toolchain came with “uClibc” (although others should be possible). My program uses two things that are not C89/ANSI-C compliant (I know because Visual Studio complained):

1) snprintf(): Worked fine, I believe this is C99 standard.

2) clock_gettime(): Compiled without errors or warnings, but did absolutely nothing. The timespec struct passed in was not modified at all when the function was called. This should be a POSIX function (not Linux-specific). I guess it is either not implemented in uClibc, or I should use another clockid_t (I used CLOCK_MONOTONIC), or there is a system call behind it that does not work properly when the toolchain differs from the one that built the kernel.

So, generally the compiler and uClibc worked very nicely, but some testing is required.

Build machine
I run the toolchain/cross compiler on an x64 machine running Ubuntu. The toolchain itself seems to be statically linked (ldd tells me) and built for x86 (readelf tells me). So most x86/x64 Linux machines should work just fine, and if you are on BSD you probably know how to run Linux binaries.

Toolchain limitations
For my purposes, the Toolchain was just what I needed. I do not know how to build ipk-package files, and I do not know how to build a complete OpenWRT image. Perhaps the Toolchain is not the right tool for those purposes.

QEMU
If you install QEMU you can test your OpenWRT binary on your x86/x64 Linux machine:

qemu-mipsel -L OpenWrt-Toolchain-brcm47xx-for-mipsel-gcc-4.6-linaro_uClibc-0.9.33.2/toolchain-mipsel_gcc-4.6-linaro_uClibc-0.9.33.2/ a.out

In the same way, I guess it should be quite possible to run the Toolchain on a non-x86 machine as well. I will write a few lines when I have compiled my OpenWRT/MIPS binaries on my QNAP/ARM, running the x86 compiler/toolchain with QEMU.

IPv6 access with 6to4 OpenWRT Backfire

A little while ago I shared some information on getting IPv6 at home, when all you have is a dynamic (but real/public) IP-address and a good old WRT54GL router with OpenWRT Backfire (brcm-2.4 edition).

I have now stabilized my configuration and I will share some details. You are presumed to

  • be comfortable with editing configuration files manually (using vi, or some other editor in OpenWRT)
  • use OpenWRT Backfire 10.03.1 on your router (which can probably be any router capable of running OpenWRT)
  • have some understanding of what you are about to do and why
  • have a public (but not necessarily static) IPv4 address

If you mess up your firewall rules, worst case you can not log in to your router or you expose your entire network to the world. Proceed at your own risk.

At some point you will start trying your IPv6 connectivity. I suggest using test-ipv6.com, ipv6-test.com and ipv6.google.com.

A good start is the OpenWRT IPv6 article (it contains much information, but it is not very well structured). First follow the 6to4/6rd instructions (down to the firewall rule, which is probably fine, but I don't need it).

You also need to enable IPv6 forwarding (which is described in the 6in4 section).
edit /etc/sysctl.conf:

net.ipv6.conf.all.forwarding=1

Then

/etc/init.d/sysctl restart

Now you should start testing what works and what does not. Run ifconfig both on the router and on your local machine (ipconfig on Wintendo). If you have a reasonably new OS, you should now at least have an IPv6 address, even if you can't ping6 or connect to anything.

Note: Your 6to4 addresses should start with 2002: (both router and clients). Addresses starting with fe80: are link-local addresses and completely useless here.
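The 2002: prefix is simply your public IPv4 address written in hex, giving you a /48 of your own. A quick way to compute it (198.51.100.7 is a documentation example address; substitute your own):

```shell
IP=198.51.100.7    # your public IPv4 address (example value)

# Each octet becomes two hex digits: 198.51.100.7 -> 2002:c633:6407::/48
printf '2002:%02x%02x:%02x%02x::/48\n' $(echo "$IP" | tr '.' ' ')
```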

Firewall
You probably have a masquerading firewall configured for IPv4, but if you bother with IPv6 at all you probably don't want to masquerade IPv6 (I don't know if it is even possible).

I wanted my IPv4 to work just as before. And I wanted all my LAN computers to be real IPv6 members, accessible from the IPv6 internet (and protected by the firewall, as needed, of course). That means all replies from the Internet should be let through, but unsolicited incoming traffic from the Internet should be restricted. The most natural thing would be to use connection tracking, but I encountered problems.

This is what my firewall configuration looks like now:
/etc/config/firewall

config 'defaults'
	option 'input' 'DROP'
	option 'output' 'ACCEPT'
	option 'forward' 'DROP'
	option 'syn_flood' '1'
	option 'drop_invalid' '1'
	option 'disable_ipv6' '0'

config 'zone'
	option 'name' 'lan'
	option 'network' 'lan'
	option 'input' 'ACCEPT'
	option 'output' 'ACCEPT'
	option 'forward' 'REJECT'
	option 'mtu_fix' '1'

config 'zone'
	option 'name' 'wan'
	option 'network' 'wan'
	option 'family' 'ipv4'
	option 'masq' '1'
	option 'output' 'ACCEPT'
	option 'forward' 'DROP'
	option 'input' 'DROP'

config 'zone'
	option 'name' 'wan6'
	option 'network' '6rd'
	option 'family' 'ipv6'
#	option 'conntrack' '1' 
	option 'output' 'ACCEPT'
	option 'forward' 'DROP'
	option 'input' 'DROP'

config 'forwarding'
	option 'src' 'lan'
	option 'dest' 'wan'
	option 'family' 'ipv4'

config 'forwarding'
	option 'src' 'lan'
	option 'dest' 'wan6'
	option 'family' 'ipv6'

config 'include'
	option 'path' '/etc/firewall.user'

config 'rule'
	option 'target' 'ACCEPT'
	option '_name' 'IPv6 WRT54GL ICMP'
	option 'src' 'wan6'
	option 'proto' 'icmp'
	option 'family' 'ipv6'

config 'rule'
	option '_name' 'IPv6: Forward ICMP'
	option 'target' 'ACCEPT'
	option 'family' 'ipv6'
	option 'src' 'wan6'
	option 'dest' 'lan'
	option 'proto' 'icmp'

config 'rule'
	option '_name' 'IPv6: WRT54GL "reply" to 1024+'
	option 'target' 'ACCEPT'
	option 'family' 'ipv6'
	option 'src' 'wan6'
	option 'dest_port' '1024-65535'
	option 'proto' 'tcp'

config 'rule'
	option '_name' 'IPv6: Forward "reply" to 1024+'
	option 'target' 'ACCEPT'
	option 'family' 'ipv6'
	option 'src' 'wan6'
	option 'dest' 'lan'
	option 'dest_port' '1024-65535'
	option 'proto' 'tcp'

Some comments on this:

  • I think it makes sense to think about IPv6 Internet as a separate wan6, not as part of wan
  • Incoming traffic is forwarded, as long as it is to unprivileged ports (1024+)
  • ICMP works between everyone
  • The firewall.user script contains nothing of interest for IPv6
  • Masquerade is activated for wan, but conntrack (or masquerade) does not work for wan6
  • I have not needed a rule to allow INPUT protocol 41 to the router itself (the 6to4 traffic over IPv4), perhaps it gets allowed as ESTABLISHED,RELATED
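If that ever changes, a rule like the following in /etc/firewall.user should allow the encapsulated traffic explicitly (untested, since I have not needed it):

```shell
# 6to4 carries IPv6 inside IPv4 as IP protocol 41; allow it to reach
# the router itself (input_rule is OpenWRT's hook chain for custom rules)
iptables -A input_rule -p 41 -j ACCEPT
```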

Bridging and Connection tracking problems
I believe my configuration is working properly. But something is not completely right. Loading the firewall…

root@OpenWrt:~# /etc/init.d/firewall restart
Loading defaults
ip6tables: No chain/target/match by that name.
ip6tables: No chain/target/match by that name.
ip6tables: No chain/target/match by that name.
ip6tables: No chain/target/match by that name.
ip6tables: No chain/target/match by that name.
ip6tables: No chain/target/match by that name.
Loading synflood protection
Adding custom chains
Loading zones
Loading forwardings
Loading redirects
Loading rules
Loading includes
Loading interfaces
ip6tables: No chain/target/match by that name.

At the end of the OpenWRT IPv6 documentation:
Note: firewall v1 (e.g. still in Backfire 10.03.1-rc4 and up to r25353) has no default rules at all and ip6tables configuration needs to be done from scratch. Insert the rules below to make the packet filter function properly.

ip6tables -A FORWARD -i br-lan -j ACCEPT
ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -j REJECT

Well, I should be on a more recent version (10.03.1), but the second line (with conntrack) gives the No chain/target/match by that name error. I don't know why, and I don't know how to fix it.
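One thing that might be worth trying (I have not verified this on brcm-2.4) is the older state match, which predates the conntrack match and is sometimes available on old kernels when conntrack is not:

```shell
# Same policy as the three rules above, but using -m state instead of
# -m conntrack (requires the ip6tables state match to be compiled in)
ip6tables -A FORWARD -i br-lan -j ACCEPT
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -j REJECT
```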

Also, in the same document, under the heading Directly forward ISP’s NDP proxy address to LAN there are instructions for “firewalling on ipv6 even for bridged interfaces”. I believe that this is what I want to do, but the ebtables package/module seems to not be available for WRT54GL/Backfire 10.03.1/brcm-2.4, and it also seems to be known to cause performance problems.

Either:

  1. I messed something up when installing/configuring OpenWRT, and now I don't know how to fix it
  2. Something IPv6-related that I want to do is not fully supported on Backfire/brcm-2.4
  3. I am just trying to do the wrong thing, without understanding it

Other config files
In case it is helpful to anyone (and possibly myself in the future) I post a few of my configuration files.

/etc/sysctl.conf (there are more lines)

net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1

/etc/config/network (the whole file)

config 'switch' 'eth0'
	option 'enable' '1'

config 'switch_vlan' 'eth0_0'
	option 'device' 'eth0'
	option 'vlan' '0'
	option 'ports' '0 1 2 3 5'

config 'switch_vlan' 'eth0_1'
	option 'device' 'eth0'
	option 'vlan' '1'
	option 'ports' '4 5'

config 'interface' 'loopback'
	option 'ifname' 'lo'
	option 'proto' 'static'
	option 'ipaddr' '127.0.0.1'
	option 'netmask' '255.0.0.0'

config 'interface' 'lan'
	option 'type' 'bridge'
	option 'ifname' 'eth0.0'
	option 'proto' 'static'
	option 'netmask' '255.255.255.0'
	option 'ipaddr' '192.168.8.1'

config 'interface' 'wan'
	option 'ifname' 'eth0.1'
	option 'proto' 'dhcp'

config 'interface' '6rd'
	option 'proto' '6to4'
	option 'adv_subnet' '1'
	option 'adv_interface' 'lan'

/etc/config/radvd (all other configs have option ignore 1)

config interface
	option interface	'lan'
	option AdvSendAdvert	1
	option AdvManagedFlag	0
	option AdvOtherConfigFlag 0
	list client		''
	option ignore		0

And a few packages that you should probably have installed in OpenWRT:

6to4
firewall
ip
ip6tables
kmod-ip6tables
kmod-ipv6
radvd
libip6tc

DHCP & DNS
I have not enabled any (IPv6) DHCP – autoconfiguration works fine for me. I have also not configured anything DNS related. My normal DNS resolves IPv6-only hosts fine (e.g. ipv6.google.com).

The day I want to allow incoming traffic to just a few of my local/LAN machines I will have to think about it.

Troubleshooting
The following tools/strategies have proven useful for troubleshooting:

  • ping6 between router and local/LAN machines
  • ping6 to internet hosts (ipv6.google.com)
  • Disable firewall or set policies to ACCEPT
  • Send/receive TCP traffic using ncat (the best nc/netcat version for OpenWRT).
  • Test ping/ncat to/from an IPv6 host on a different network – I installed miredo on my Lubuntu netbook and let it connect to internet via my iPhone. That way it had no shortcut at all to my router and LAN.
  • I find myself having more success when I unplug my router to restart it; a plain software restart does not always bring it up properly.

ncat
In case you are not familiar with ncat:

On the router (start listening):

root@OpenWrt:~# ncat -6 -l -p 9999

On your local computer (send a message):

$ echo 6-TEST | nc 2002:????:????:1::1 9999

On the router (should have got message):

root@OpenWrt:~# ncat -6 -l -p 9999
6-TEST

This is useful in all directions, and on different ports, to confirm that your firewall works as you expect.

USB Drives, dd, performance and No space left

Please note: sudo dd is a very dangerous combination. A little typing error and all your data can be lost!

I like to make copies and backups of disk partitions using dd. USB drives sometimes do not behave very nicely.

In this case I had created a less than 2GB FAT32 partition on a USB memory and made it Lubuntu-bootable, with a 1GB file for saving changes to the live filesystem. The partition table:

It seems I forgot to change the partition type to FAT32 (fdisk shows 83/Linux), but it is formatted with FAT32 and that seems to work fine ;)

$ sudo /sbin/fdisk -l /dev/sdc

Disk /dev/sdc: 4004 MB, 4004511744 bytes
50 heads, 2 sectors/track, 78213 cylinders, total 7821312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f3a78

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *        2048     3700000     1848976+  83  Linux

I wanted to make an image of this USB drive that I can write to other USB drives. That is why I made the partition/filesystem significantly below 2GB, so all 2GB USB drives should work. This is how I created the image:

$ sudo dd if=/dev/sdb of=lubuntu.img bs=512 count=3700000
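If you are unsure what count to use, it can be derived from the end sector of the last partition in the fdisk output (the numbers here are from the listing above):

```shell
# The partition ends at sector 3700000 (from fdisk -l above), so that
# is how many 512-byte sectors the image needs to cover
END_SECTOR=3700000
SECTOR_SIZE=512
echo $(( END_SECTOR * SECTOR_SIZE ))   # size of the image in bytes
```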

So, now I had a 1.85GB file named lubuntu.img, ready to write back to another USB drive. That was when the problems began:

$ sudo dd if=lubuntu.img of=/dev/sdb
dd: writing to ‘/dev/sdb’: No space left on device
2006177+0 records in
2006176+0 records out
1027162112 bytes (1.0 GB) copied, 24.1811 s, 42.5 MB/s

Very fishy! The write speed (42.5MB/s) is obviously too high, and the USB drive is 4GB, not 1GB. I tried with several (identical) USB drives, same problem. This has never happened to me before.

I changed strategy and made an image of just the partition table, and another image of the partition:

$ sudo dd if=/dev/sdb of=lubuntu.sdb bs=512 count=1
$ sudo dd if=/dev/sdb1 of=lubuntu.sdb1

…and restoring to another drive… first the partition table:

$ sudo dd if=lubuntu.sdb of=/dev/sdb

Then remove and re-insert the USB drive, and make sure it does not mount automatically before you proceed with the partition.

$ sudo dd if=lubuntu.sdb1 of=/dev/sdb1

That worked! However, the write speed to USB drives usually slows down as more data is written (in one chunk, somehow). I have noticed this before with other computers and other USB drives. I guess USB drives have some internal mapping table that does not like big files.

Finally, to measure progress of the dd command, send it the signal:

$ sudo kill -USR1 <PID OF dd PROCESS>
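For example (the PID here is hypothetical; note that this is GNU dd behaviour – on some other systems USR1 terminates the process):

```shell
# Find the PID of the running dd process (prints one PID per instance)
pgrep -x dd

# Ask dd to report progress; GNU dd prints records in/out and throughput
# to its stderr without stopping the copy
sudo kill -USR1 12345   # 12345 being the PID printed by pgrep above
```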

Above behaviour noticed on x86 Ubuntu 13.10.

OpenWRT, IPv6, VPN and Replacing WRT54GL

After having relied on the router my Internet provider has supplied me with for years, I decided to take back control over my LAN. There were a few factors that inspired me to put some effort into this:

  1. The announcement of the WRT1900AC could open up the door for a new generation of routers
  2. IPv6 is getting somewhere and I want to be able to play with it to learn – so I want IPv6 at home
  3. I want a VPN solution at home, for different reasons, but one of them is to be able to access the Internet more safely when using public Wifis, and another is to access services when I am abroad
  4. My Wifi at home (supplied by my router from my Internet provider) was not 100% stable

Summary
I ended up keeping my WRT54GL, installing OpenWRT 10.03.1 on it, and configuring it to provide VPN using PPTP and IPv6 using 6to4. I mostly followed documentation on the OpenWRT web page, but there were and are some issues.
Update 2014-04-12: Details about IPv6 using 6to4.

OpenWRT and WRT54GL
The WRT54GL is not supported by the most recent versions of OpenWRT, and the final release with good WRT54GL support was 10.03.1. Everything I write in this article applies to 10.03.1 (the brcm-2.4 edition).

OpenWRT
OpenWRT is very nice. It used to be more hardcore compared to other router firmware. With that I mean that Tomato (and DD-WRT) are 100% Web-GUI-configurable, while OpenWRT was more dependent on the command line. Most things can now be handled using the Web-GUI. But don't attempt to get advanced things (like VPN/PPTP and IPv6) working without using the command line. If you don't feel comfortable with that, just stay with Tomato (which is very nice). This is for OpenWRT 10.03.1 – perhaps more recent versions are more configurable without the command line.

IPv6
For end user needs in 2014, IPv6 is not needed. However, if you decide to play with it anyway, IPv6 is in some ways a simpler protocol than IPv4: not needing NAT (all your clients get to have real IPs) takes away a lot of things that just happen to be complicated with IPv4. However, although NAT was never meant to provide security, it did as a side effect – with IPv6 you need to think about really firewalling incoming traffic to your network. On the upside, things like port forwarding and VPN (to access internal resources) are suddenly not needed.

There is also no need for DHCP (the clients can autoconfigure themselves, and there are so many available addresses on each network that a conflict is very unlikely). But your IPv6 router must advertise the network so the clients know it exists.

IPv6 – How to get it
How can you get IPv6 if your internet provider only provides IPv4? There are different transition mechanisms that you can use (that are designed just to give you IPv6 when you only have IPv4):

  • Teredo needs to be configured on each client computer separately, but requires nothing of the network (except that the firewall does not block the traffic). Teredo is the easiest way to access IPv6, but it gives you no IPv6 network. In Debian you just #apt-get install miredo, that is all.
  • Tunnel Brokers provide you with IPv6 in a VPN fashion, much like there are VPN providers who give you an IP address in another country, or for anonymization purposes. You can set up the tunnel on a single client, or even better on your router. Your IPv4 router does not have to be your IPv6 router, so it is possible to configure for example a Raspberry Pi as an internal IPv6 router behind an (IPv4) NAT. A Tunnel Broker is probably the best and most reliable solution if you have real IPv6 needs. I haven't tried this, but I suggest starting with SixXS (who provide free tunnels)
  • 6to4 is a very elegant idea. However, in practice it does not seem to be a very popular transition mechanism (it is supposedly fading). 6to4 requires that you have a real public IPv4 address (it may be dynamic). This is what I tried, and it works well for me.
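Part of the elegance of 6to4 is that your IPv6 prefix is simply your public IPv4 address embedded in hex after 2002:, so no registration is needed. The address below is an example (from the documentation range), not mine:

```shell
# Derive the 6to4 /48 prefix from a public IPv4 address:
# each of the four octets becomes two hex digits after 2002:
IP=192.0.2.1
printf '2002:%02x%02x:%02x%02x::/48\n' $(echo $IP | tr '.' ' ')
# prints 2002:c000:0201::/48
```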

Note: when you have IPv6 via a transition mechanism, your clients may still prefer to use IPv4 when accessing services that are available on IPv4 (which might be all the services you can possibly want to use). There are services to test IPv6.

IPv6 – 6to4 – OpenWRT 10.03.1 on WRT54GL
I followed these instructions (the 6to4 part). I ended up with Firewall problems: the internal IPv6 worked, but I had problems accessing the rest of the world. I have not really stabilized my firewall scripts yet (they give some errors), but if you are not too paranoid, you can try to ACCEPT IPv6 FORWARD on lan (allowing IPv6 traffic from Internet to your local network) and ACCEPT IPv4 INPUT on lan (allowing all IPv4 traffic from Internet to get to your router).
Update 2014-04-12: Details about IPv6 using 6to4.

VPN/PPTP – OpenWRT 10.03.1 on WRT54GL
First, before you set up a PPTP server and use it, consider the security problems with MS-CHAP-v2! If you are aware of the risk and the threat, the advantages with VPN/PPTP are:

  • No need for certificates
  • Good client support

I followed these instructions. Again, I ended up with firewall problems, but found a solution. Try:

iptables -A input_rule -i ppp+ -j ACCEPT
iptables -A forwarding_rule -i ppp+ -j ACCEPT
iptables -A forwarding_rule -o ppp+ -j ACCEPT
iptables -A output_rule -o ppp+ -j ACCEPT

Now, the confusing part is the IP-addresses of your VPN. Each VPN-connection will get both a local and a remote IP-address. And none of these will probably be on your LAN. And this is ok! There is a “localip” option for pptpd which is no longer supported, and I wasted some time trying to assign IP-numbers. But the above firewall rules fixed everything once I stopped thinking about IP-numbers at all.

Best router for OpenWRT
So, what happened to my WRT1900AC plans? Well, the WRT1900AC is not available yet, and I decided to play with my old WRT54GL to see how far I could get with it, and it turned out that for now it does everything I want it to.

OpenWRT has a long list of supported routers (they even have a buyers guide). I did some research (only reading on the Internet) and it seems that TP-link provides fine routers for OpenWRT, for example WDR3600, WDR4300 (N750) or WR1043ND. TP-link also seems to have a good Open Source policy. The N750 is probably what I would buy today, if I were to replace that WRT54GL.

So, what about that WRT1900AC? With a dual core CPU, 256MB of RAM, eSATA and a USB 3.0 port it is clearly a very capable router. And with 128MB of storage, much more potent firmwares (or OpenWRT versions) are possible. But is it a good idea? Perhaps the router should only be a router, and other services (fileserver, print server, backup, sql, webserver) are better handled by something else (why not a Raspberry Pi), so as not to ever disturb the critical router function? I like OpenWRT for having a normal editable filesystem (compared to Tomato or DD-WRT) and packages instead of everything in one image. But 128MB? Perhaps it would make more sense to just use an SD-card and run Debian?

The WRT1900AC is expensive for being a router, and if it ends up providing no more value/function than the TP N750 mentioned above, what is the point? On the other hand it is not very much money – just expensive for a router. For now I will keep my WRT54GL, but the WRT1900AC is still tempting.