Friday, 2 December 2011

Huawei E585 Review

I've been using a Huawei E585 for several months now, mostly on the train from Robertsbridge to London. This is a cellular 3G to WIFI adaptor (often called a MIFI device). It maintains its own cellular connection and acts as a WIFI base station, meaning it can be used with any WIFI-capable device, e.g. a laptop, phone or internet radio.

Not bad when it works - with a "3" SIM, I can pull 4.3Mbit/s downstream and over 1Mbit/s upstream at good positions.

The problem comes with all the tunnels on the Hastings line - there are two, each a mile long, either side of Sevenoaks, and over half a dozen others. Obviously, I don't expect the device to work when there is no signal.

However, I do expect it to re-acquire cleanly when the signal comes back.

The E585 doesn't. Not reliably, anyway. On the odd day, I can travel from London Charing Cross to Robertsbridge without a problem. On most days, though, the device will lose signal in a tunnel, then get the signal back at the other end - indicated by lots of bars on the display. What it then does is lose its WAN IP - i.e. it loses the IP layer. At this point, it should be making active efforts to get it back. It doesn't. It sits there like a lemon until manually rebooted, whereupon it will come straight back with a strong signal and connection.

This wouldn't be so bad if:

1) I could programmatically reboot the device. So far, all attempts to script the web interface remotely have failed due to the particularly weird session handling it employs.

2) They actually issued some firmware updates occasionally;

3) They actually had technical support. "3" doesn't count: like most mobile operators, you'll get a call-centre agent who asks for a load of irrelevant details, then tells you Linux isn't supported - despite the fact it's a WIFI device!

This is typical of so much of the consumer electronics industry where the motto is "first to market, then make the next one".

Thursday, 13 October 2011

Dell EqualLogic PS6500E 1TB SATA RAID capacities

RAID layouts and capacities.

Disk capacity is 931.51GiB and there are 48 such drives.
  • 37.83TiB with 2 hot spares. Overhead = 90GB per disk.
  • 35.12TiB with 1 hot spare
  • 20.71TiB with 2 hot spares
  • 35.12TiB with 1 hot spare
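As a sanity check on the figures above - assuming the 20.71TiB layout is RAID10 with mirrored pairs, which is my reading and not stated explicitly - the arithmetic can be sketched with awk. The shortfall against the array's reported figure is consistent with per-disk metadata overhead:

```shell
# Raw pool: 48 drives of 931.51 GiB each, expressed in TiB.
raw_tib=$(awk 'BEGIN { printf "%.2f", 48 * 931.51 / 1024 }')
echo "raw: ${raw_tib} TiB"

# RAID10, 2 hot spares: half of the remaining 46 drives hold mirrors.
r10_tib=$(awk 'BEGIN { printf "%.2f", (48 - 2) / 2 * 931.51 / 1024 }')
echo "raid10 estimate: ${r10_tib} TiB"
```

The estimate comes out at 20.92TiB against the array's reported 20.71TiB, a difference of roughly 9-10GB per data disk.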

Wednesday, 12 October 2011

EqualLogic PS6500E speed test with linux hosts

The test rig comprised 2 Linux hosts plus shared switching and storage:
  • 2 x Dell R610: 96GB RAM, 2 x Intel Xeon X5680 (12M cache, 3.33 GHz, 6.40 GT/s; 6 cores, 12 hyperthreads each), 2 x 4-port Intel Pro/1000 NICs (Intel 82576 Gigabit Network Connection, rev 01)
  • 2 x Dell PC6224 stacked gigabit switches
  • 1 x EQL PS6500E with 48 x 1TB SATA 7200RPM drives (ST31000524NS, SATA 3Gb/s, 32MB cache)
  • Wiring: 4 x gig from each host to the PC6224s, split across switches; 8 x gig from both PS6500E controllers, split across switches
  • Switches and hosts configured for an MTU of 9000 to match the PS6500E
Optimise both hosts thus:
# Grow the default and maximum socket buffers to 4MB
echo 4194304 > /proc/sys/net/core/rmem_default
echo 4194304 > /proc/sys/net/core/rmem_max
echo 4194304 > /proc/sys/net/core/wmem_default
echo 4194304 > /proc/sys/net/core/wmem_max
# Drop TCP timestamps; keep SACK and window scaling on
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
echo 1 > /proc/sys/net/ipv4/tcp_sack
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
# Pin the TCP min/default/max buffer sizes to 4MB
echo "4194304 4194304 4194304" > /proc/sys/net/ipv4/tcp_rmem
echo "4194304 4194304 4194304" > /proc/sys/net/ipv4/tcp_wmem
# Ensure a decent transmit queue on the four iSCSI-facing NICs
ifconfig eth8 txqueuelen 1000
ifconfig eth9 txqueuelen 1000
ifconfig eth10 txqueuelen 1000
ifconfig eth11 txqueuelen 1000
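These echo settings do not survive a reboot; the equivalent /etc/sysctl.conf entries (same values, standard sysctl key names) would be:

```
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 4194304
net.core.wmem_max = 4194304
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4194304 4194304 4194304
net.ipv4.tcp_wmem = 4194304 4194304 4194304
```

Apply with sysctl -p; the txqueuelen settings still need setting per interface, e.g. from an init or interfaces script.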

PS6500E has 2 x 10TB volumes on a RAID50; each volume is mounted to one of the hosts using round-robin multipathing, formatted to XFS and mounted:

iscsiadm -m node --logoutall=all
iscsiadm -m discovery -t st -p
iscsiadm -m node --login -T ''
mkfs.xfs -L TEST3 /dev/dm-0
mount -onoatime,logbsize=262144,logbufs=8 /dev/dm-0 /mnt/

We used streamput and streamread, simple homegrown C programs which read and write single large buffers of random data to files (a few large files), run for long enough to ensure the host RAM is saturated with respect to caches (streamput can also do O_DIRECT writes to bypass caching). The buffer for a single IO operation is 1MB of random data, generated once at program start, and is page aligned for optimal DMA and O_DIRECT.

Our basic streamput test comprised:
./streamput -v -t 1200 -l 4096 -w /mnt/<uniquesubdir>

(Write 4GB files with random name into test dir and repeat for a total of 1200 seconds using plain ordinary C file writes with no special flags)

Our basic streamread test comprised:
./streamread -v -t 1200 -w /mnt/<uniquesubdir>

(Read 4GB files with random name from test dir and repeat for a total of 1200 seconds using plain ordinary C file reads with no special flags)
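streamput itself is homegrown and not published here; as a rough, scaled-down stand-in, the same access pattern (large files streamed in 1MB blocks) can be approximated with dd. The real tests wrote 4GB files (count=4096) and used oflag=direct for the O_DIRECT variant:

```shell
# Stream data to a file in 1 MiB writes. Scaled down to 8 MiB here to keep
# the demo quick; the real tests used count=4096 (4 GiB files) on
# /mnt/<uniquesubdir>, adding oflag=direct for the O_DIRECT variant.
dd if=/dev/urandom of=/tmp/stream-test bs=1M count=8 2>/dev/null
size=$(stat -c %s /tmp/stream-test)
echo "wrote ${size} bytes"
```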

Test 1 simple streamed writing
2 x streamput tests on 2 hosts in parallel



Total 179MB/sec

Test 2 reading
2 x streamread tests on 2 hosts in parallel


Total 224.5MB/sec

Reconfigure to RAID10, set up with XFS as above.

Test 3
2 x streamput tests on 2 hosts in parallel



Total 189.7MB/sec

Test 4

echo 3 > /proc/sys/vm/drop_caches
2 x streamread tests on 2 hosts in parallel



Total 86.4 MB/sec

Reconfigure to RAID10, set up with XFS as above.

Test 5
2 x streamread tests on 2 hosts in parallel



Total 180MB/sec

Test 6

echo 3 > /proc/sys/vm/drop_caches
2 x streamread tests on 2 hosts in parallel


Total 202.9MB/sec

Reconfigure to RAID10, set up with XFS as above.

Test 7 during RAID5 reconstruction

echo 3 > /proc/sys/vm/drop_caches
2 x streamread tests on 2 hosts in parallel



Total 184.8MB/sec

Test 8 Array reconstructing

echo 3 > /proc/sys/vm/drop_caches
2 x streamread tests on 2 hosts in parallel



Total 163.3MB/sec

Array (re)construction time:

Time to initialise RAID5: 58% completion in approx 14 hours. At 13:05, 62%; at 15:29, 73%. Estimated full time: 21 hours 49 minutes.
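That estimate can be reproduced by extrapolating from the two later progress samples:

```shell
# Two progress samples: 62% complete at 13:05, 73% complete at 15:29.
est=$(awk 'BEGIN {
    mins = (15 * 60 + 29) - (13 * 60 + 5)   # 144 minutes elapsed
    per_pct = mins / (73 - 62)              # ~13.1 minutes per percent
    total = per_pct * 100
    printf "%dh %dm", int(total / 60), total % 60
}')
echo "estimated full initialisation: ${est}"
```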

fio IOPS tests

Test 9 RAID5 (Raid rebuild complete)

fio --filename=/dev/dm-0 --direct=1 --rw=randwrite --bs=4k --numjobs=64 --runtime=300 --group_reporting --name=raid5

on 1 host


raid5: (groupid=0, jobs=64): err= 0: pid=21151
  write: io=3407MB, bw=11452KB/s, iops=2862, runt=304630msec
    clat (usec): min=220, max=7629K, avg=21083.39, stdev=47412.87
    bw (KB/s) : min=    0, max= 5261, per=2.36%, avg=269.97, stdev=69.20
  cpu          : usr=0.04%, sys=0.10%, ctx=874983, majf=0, minf=1523
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/872151, short=0/0
     lat (usec): 250=2.28%, 500=90.23%, 750=3.62%, 1000=0.47%
     lat (msec): 2=2.27%, 4=0.53%, 10=0.09%, 20=0.06%, 50=0.05%
     lat (msec): 100=0.02%, 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%, >=2000=0.33%

Run status group 0 (all jobs):
  WRITE: io=3407MB, aggrb=11451KB/s, minb=11726KB/s, maxb=11726KB/s, mint=304630msec, maxt=304630msec

Disk stats (read/write):
  dm-0: ios=0/871635, merge=0/0, ticks=0/19261868, in_queue=19261440, util=99.97%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
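As a cross-check on Test 9, the reported bandwidth and the 4KiB block size imply the IOPS figure directly (fio reports 2862; the 1-IOPS gap is rounding in fio's bandwidth figure):

```shell
# fio reported 11452 KB/s aggregate bandwidth at a 4 KiB block size.
iops=$(awk 'BEGIN { printf "%d", 11452 / 4 }')
echo "implied IOPS: ${iops}"
```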

Test 10 RAID5 write (Raid rebuild complete) shorter test time

fio --filename=/dev/dm-0 --direct=1 --rw=randwrite --bs=4k --numjobs=64 --runtime=60 --group_reporting --name=raid5


raid5: (groupid=0, jobs=64): err= 0: pid=21222
  write: io=839936KB, bw=13048KB/s, iops=3261, runt= 64375msec
    clat (usec): min=222, max=7480K, avg=18589.17, stdev=43118.68
    bw (KB/s) : min=    0, max= 7520, per=1.55%, avg=202.70, stdev=13.74
  cpu          : usr=0.11%, sys=0.10%, ctx=211492, majf=0, minf=1523
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/209984, short=0/0
     lat (usec): 250=11.03%, 500=83.24%, 750=2.35%, 1000=0.52%
     lat (msec): 2=1.74%, 4=0.54%, 10=0.09%, 20=0.06%, 50=0.05%
     lat (msec): 100=0.02%, 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.03%, >=2000=0.29%

Run status group 0 (all jobs):
  WRITE: io=839936KB, aggrb=13047KB/s, minb=13360KB/s, maxb=13360KB/s, mint=64375msec, maxt=64375msec

Disk stats (read/write):
  dm-0: ios=0/209453, merge=0/0, ticks=0/3930736, in_queue=3930732, util=99.61%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%

Test 10 RAID5 read

fio --filename=/dev/dm-0 --direct=1 --rw=randread --bs=4k --numjobs=64 --runtime=60 --group_reporting --name=raid5


raid5: (groupid=0, jobs=64): err= 0: pid=21295
  read : io=40636KB, bw=654080B/s, iops=159, runt= 63618msec
    clat (usec): min=527, max=6767K, avg=374501.76, stdev=152589.35
    bw (KB/s) : min=    0, max=  318, per=2.21%, avg=14.08, stdev= 1.64
  cpu          : usr=0.09%, sys=0.01%, ctx=10210, majf=0, minf=1651
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=10159/0, short=0/0
     lat (usec): 750=0.11%, 1000=0.15%
     lat (msec): 2=0.97%, 4=9.81%, 10=55.35%, 20=17.33%, 50=1.62%
     lat (msec): 100=0.83%, 250=2.15%, 500=1.53%, 750=0.88%, 1000=0.64%
     lat (msec): 2000=1.39%, >=2000=7.24%

Run status group 0 (all jobs):
   READ: io=40636KB, aggrb=638KB/s, minb=654KB/s, maxb=654KB/s, mint=63618msec, maxt=63618msec

Disk stats (read/write):
  dm-0: ios=10138/0, merge=0/0, ticks=3905828/0, in_queue=3905812, util=99.62%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%

Test 10 RAID5 read on 2 hosts in parallel

fio --filename=/dev/dm-0 --direct=1 --rw=randread --bs=4k --numjobs=64 --runtime=60 --group_reporting --name=raid5

IOPS=138 and 123, total 261

Test 11 RAID 10 Write (Reconstructing) 1 Host
fio --filename=/dev/dm-0 --direct=1 --rw=randwrite --bs=4k --numjobs=64 --runtime=60 --group_reporting --name=raid10


Test 12 RAID 10 Read (Reconstructing) 1 Host
fio --filename=/dev/dm-0 --direct=1 --rw=randread --bs=4k --numjobs=64 --runtime=60 --group_reporting --name=raid10

(Seems wrong but repeatable!)

Tuesday, 23 August 2011

debirf - Building custom Debian USB keys/CD-ROMs that run in RAM

I needed a way to make a simple custom bootable USB key to run linux on some test servers.
Furthermore, I needed a solid Linux environment that was easy to install additional tools into - such as an iSCSI initiator, bonnie++ and other tools to benchmark a new SAN.

I was also looking for a simple way to run the system from RAM (tmpfs) so I was not dependent on leaving the media in.

debirf is just such a tool. It builds an ISO image of a Debian 6 system which boots into RAM and behaves like a normal system (i.e. you can trivially apt-get extra packages into the running system).

Turns out it's not too hard to tell it to add more packages to the ISO. Of course, when you have an ISOLINUX bootable ISO, it's not too hard to transfer to a USB key, though there is no automated way to do this yet.

It exists as a Debian package and also in Ubuntu, though I found problems running it from Ubuntu 10.10, so I installed a minimal Debian 6 system in a VirtualBox virtual machine.

The rough steps are, as root on the Debian 6 machine:
  • apt-get install debirf syslinux
  • mkdir /root/debirf; cd /root/debirf # You need lots of space here, use a suitable location
  • tar -xzf /usr/share/doc/debirf/example-profiles/rescue.tgz
  • Now edit rescue/debirf.conf and add the following line:

That just makes life slightly easier as you will get a usable isolinux config for using syslinux on a USB key.

Now some more steps:
  • as an example of how to add more packages to the image, add a file rescue/modules/benchmark:
#!/bin/bash -e

# debirf module: benchmark
# remove/install extra packages

# install packages
debirf_exec apt-get --no-install-recommends --assume-yes install bonnie++ iftop fio iperf

  • debirf make rescue
  • debirf makeiso rescue
Hopefully now, you will find an ISO file in rescue/ 

Burn this to a CD-RW, or prep a USB key in the usual way:
  • fdisk /dev/sdX # Add one primary partition, type 0xc, boot flag set
  • syslinux -i /dev/sdX1
  • mount /dev/sdX1 /mnt/usb
  • mount -oloop rescue/<isofilefromdebirf>.iso /mnt/iso
  • cp /mnt/iso/* /mnt/usb/
  • mv /mnt/usb/isolinux.cfg /mnt/usb/syslinux.cfg
  • umount /mnt/iso /mnt/usb
Insert the USB key (or CD-RW) into a computer, boot from it and you should see a syslinux menu offering console over "video" or "serial" (ttyS0 at 115200 baud).

Remove key and use again if required.

You can also just run the "isohybrid" command on the original ISO, then dd the result to a USB key:

  • isohybrid rescue/<isofilefromdebirf>.iso
  • dd if=rescue/<isofilefromdebirf>.iso of=/dev/sdX
Where /dev/sdX is the device node of your USB stick. Make sure you unmount the stick first, as many OSes will auto-mount it upon insertion!

Sunday, 31 July 2011

Email for Kids - and moderating messages

I want to give my two kids (5 & 7) email. If I don't, eventually they'll sort themselves out on gmail or something, and whilst gmail might do a good job of getting rid of willy pill spams, it won't protect your kids against approaches by undesirables or bullying.

So, this assumes you run your own email server (or can):

The basic requirements are:
  • Anyone can email them, but all such emails are trapped in a queue
  • A moderator (i.e. one or more of the child's guardians) will be alerted when there are new emails waiting and can approve or reject each one after checking the content.
  • Some sender addresses can be whitelisted - i.e. they bypass the requirement for moderation. Do this for your own email addresses or for the kid's trusted friends.
Sounds a bit like a mailing list, doesn't it? A lot like it, in fact. To be specific: a mailing list with one member (the kid's physical mailbox address), with posting allowed by non-members and a default policy of "hold for moderation".

Mailman meets the requirements here, so this is a quick howto, assuming the mail system itself is exim (other mail systems should also be possible).

The only other problem is that we should block the kid's physical mailbox address from being able to receive external mail, though it must be able to receive mail from mailman.

Exim config snippets

In the ACLs section, you can put in the bit that disallows external hosts from being able to send directly to the kid's local mailbox:


deny    local_parts = ^.*[@%!/|] : ^\\.

deny hosts = !+relay_from_hosts  
  condition = ${if exists{/etc/exim4/domains/$domain/uservialist}}
  condition = ${lookup{${sg{$local_part}{\N[_\+].+$\N}{}}}lsearch{/etc/exim4/domains/$domain/uservialist}{true}{false}}
  logwrite = Recipient address $local_part@$domain blocked by \
  user protection list /etc/exim4/domains/$domain/uservialist
  message = Recipient globally blocked (handled by list)
accept  hosts           = :

The bit you want is the second deny block (deny hosts = !+relay_from_hosts ...), somewhere near the top of the acl_smtp_rcpt ACL. My setup is multidomain, but if yours is a single-domain setup, you could simplify the condition and logwrite lines by replacing:

/etc/exim4/domains/$domain/uservialist -> /etc/exim4/uservialist

This file contains a list of local usernames of your kids, one per line.

Note the sg{} regex - this is because I run a system with throwaway email addresses (i.e. for any valid local part "local", "local+<anything>" and "local_<anything>" are valid addresses mapping back to "local").

If you don't do this, you could simplify that line to:

condition = ${lookup{$local_part}lsearch{/etc/exim4/domains/$domain/uservialist}{true}{false}}

Routers section:

Standard mailman config (file locations correct for Debian 6, may need to adjust for other OSes):

mailman_router:
        debug_print = "R: mailman_router for $local_part@$domain"
        domains = +local_domains
        require_files = MAILMAN_HOME/lists/$local_part/config.pck
        driver = accept
        local_part_suffix = -bounces : -bounces+* : \
                      -confirm+* : -join : -leave : \
                      -subscribe : -unsubscribe : \
                      -owner : -request : -admin : -loop
        transport = mailman_transport
        group = MAILMAN_GROUP

You will need the following macro definitions in the global section of the exim configuration:
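A typical definition block follows; the values shown are the Debian 6 mailman defaults, so verify them against your installation (the mailman_transport, not shown in this post, also uses MAILMAN_WRAP):

```
MAILMAN_HOME=/var/lib/mailman
MAILMAN_WRAP=MAILMAN_HOME/mail/mailman
MAILMAN_USER=list
MAILMAN_GROUP=list
```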



The only thing left is to configure the mailman list. You'll need to configure the web interface to mailman but the supplied apache.conf snippet in the debian package is OK.

Assuming your kid's real mailbox is the local user kiduser, and you want their public email address to be the list address:

Create a list called "kid". Subscribe the kiduser mailbox address to this list (the only member).
Set yourself and any other guardians' email addresses as the list moderators.

You may turn off the List-* headers and remove both the subject line prefix [kid] and the standard signature that usually gets added to list emails.

"What steps are required for subscription?" should be set to "Require approval"
"Action to take for postings from non-members for which no explicit action is defined." MUST be set to "Hold" - this is the route by which all of the emails come in.

Whitelisting friends:

Add trusted sender addresses to the section: "List of non-member addresses whose postings should be automatically accepted." Do not make such people list members or everyone gets a copy of all emails!!
The beauty is that you can moderate either using the web interface or by email. Mailman will email the list moderators every time a non approved sender emails your kid. Any of the list moderators can approve or reject any of the emails and can choose, on the web interface, to whitelist the sender.

Of course, if you want to monitor your kid's emails, even whitelisted ones, you could add yourself as a list member and you will get a copy of everything.

Tuesday, 28 June 2011

Apache 2.2: PAM authentication and SSL made easy.

The problem

If you run an Apache webserver and need to authenticate web users against system accounts with a central authentication service (LDAP, NIS, Kerberos), you previously had two basic choices:
  1. Use the specific authentication modules, eg auth_kerb or authnz_ldap
  2. Use auth_pam
I don't like option 1 - if you need to change your backend scheme (e.g. augment LDAP with Kerberos, or switch the other way) you now have additional references to LDAP or Kerberos sprinkled everywhere. That is a matter of opinion though - if you do want to do direct authentication from Apache, you may still find elements below of use with adaptation.

It would also be cute to allow HTTP requests, and redirect them to HTTPS rather than just denying them with an SSLRequireSSL statement.

I am greatly in favour of PAM - it was designed to bring authentication into one place and it offers a lot of additional flexibility. I used to use auth_pam but it seems that the module is dead due to Apache 2.2 API changes.

However there is a very nice alternative: authnz_external. authnz_external forms a link between Apache's authentication phase and an external program which is handed the username and password on a pipe. All the program has to do is perform the authentication step and return a code to authnz_external to indicate success or mode of failure. pwauth is one such readily available program but as the program is decoupled from apache's API, it's pretty easy to write your own.

As it stands, pwauth uses pam via the pam service "pwauth" which makes configuration a breeze. What authnz_external does not do is handle group membership but it can be used in conjunction with authz_unixgroup to handle that.

Another problem is that you generally want to force HTTPS/SSL on for authenticated HTTP to protect against password sniffing. I'd like to present my solution which seems flexible and not prone to accidental misconfiguration issues. This is based on a Debian 6 system but it should be applicable to any Apache 2.2 installation and fairly easy to adapt.

Worked example

mkdir /etc/apache2/snippets

Add the following files and contents:

snippets/redirect-https:

# Rewrite non SSL to SSL via 301 perm redirect
RewriteEngine on
# Case 1 redirect port 80 to SSL
RewriteCond %{HTTPS} !=on
RewriteCond %{SERVER_PORT} =80
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [R=301]
# Case 2 redirect port 8080 to SSL
RewriteCond %{HTTPS} !=on
RewriteCond %{SERVER_PORT} =8080
RewriteRule ^ https://%{SERVER_NAME}:8443%{REQUEST_URI} [R=301]

[Case 2 is optional and merely demonstrates how to handle alternative cases]

snippets/authload:

# Set up authnz_external to pwauth
DefineExternalAuth auth_pam pipe /usr/sbin/pwauth

snippets/auth:

# Set up auth and force user onto HTTPS
# Do the force to HTTPS
        Include /etc/apache2/snippets/redirect-https
# Set up auth external (uses pwauth, needs snippets/authload)
        AuthType Basic
        AuthBasicProvider external
        AuthExternal auth_pam
        AuthName "DDH at King's College London"
# Check unix (via NSS) groups
        AuthzUnixgroup on
# Here be magic - needs an env var "SSL_ON" set for all HTTPS connections
        Order Deny,Allow
        Deny from all
        Allow from env=!SSL_ON
# More magic - if non SSL, we allow with no auth, but redirect above then fires
# so no page served.
# Next time round, HTTPS connection fails the Allow test so falls back to Auth checks
        Satisfy any
# All you need is the appropriate "Require" directive after the Include of this snippet,
# because the Require will vary by vhost and/or location.

snippets/enablessl:

# Enable SSL and set SSL_ON environment variable
SSLEngine On
RewriteEngine on
RewriteRule ^ - [E=SSL_ON]

Usage is pretty easy:

In your vhost config:
<VirtualHost *:80>
        Include /etc/apache2/sites-available/yoursite.d/globalconfig
</VirtualHost>
<VirtualHost *:443>
        Include /etc/apache2/snippets/enablessl
        Include /etc/apache2/snippets/authload
        Include /etc/apache2/sites-available/yoursite.d/globalconfig
</VirtualHost>

and /etc/apache2/sites-available/yoursite.d/globalconfig:
DocumentRoot /var/www/
ErrorLog /var/log/apache2/
CustomLog /var/log/apache2/
<Directory /var/www/>
    # Whatever
</Directory>
<Location />
    Include /etc/apache2/snippets/auth
    Require group group1 [... group2 etc]
    # or
    Require user user1 [... user2 etc]
    # and optionally to allow unauthenticated local access:
    Allow from
</Location>


enablessl sets an Apache Environment variable SSL_ON for any HTTPS connection (this is not an OS level environment variable). This variable is likely to make it through to CGI or WSGI scripts.

authload sets up authnz_external (auth_pam here is merely a local identifier and can be anything as long as you change all occurrences of it)

auth is the hard part. If a request arrives here with SSL_ON set, then it relies on the Auth settings logically OR'd with any other Allow statements. If the request arrives without SSL_ON set then we have a problem: we want the redirect rule to fire, but unfortunately Apache applies the Auth and Allow statements first. To get around this, we use the line "Allow from env=!SSL_ON", which bypasses any other Allow and Auth rules and allows the request to proceed. This is counter-intuitive, as we do not actually serve the usual target of this request. Instead, this block is satisfied:
RewriteCond %{HTTPS} !=on
RewriteCond %{SERVER_PORT} =80
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [R=301]
The last statement issues a permanent 301 redirect to the browser to come back to the same URI but with HTTPS on.

The <Location > may be applied to one or more sub URLs if desired.


Don't forget to enable the relevant modules with a2enmod

It's a pretty stable solution, but you must be careful not to have a Satisfy All statement in the same scope, or the association between Auth and Allow will be changed from a logical-OR to a logical-AND, which will break the scheme.

Generally you should be careful with any other Auth, Allow or Rewrite rules. Rewrite rules performing other tasks are fine, but should come after the section:
Include /etc/apache2/snippets/enablessl
Include /etc/apache2/snippets/authload

Allow statements should only come after Include /etc/apache2/snippets/auth

Don't forget to set up /etc/pam.d/pwauth - this is too system specific to cover here. You could start by copying one of the other services configs to it unless your OS has set it up for you.
You may want to have a trimmed down config that avoids trying local passwd/shadow auth and only uses your external service.

Be aware that pwauth is hard coded to disallow UIDs below 500. This is a #define in the code so pretty easy to rebuild if required.

I recommend testing pwauth on the command line with some test accounts to verify that it is doing what you think it should.
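For example (testuser/testpass here are placeholders for a real test account; pwauth reads the username and password as two lines on stdin and reports the outcome only via its exit code):

```shell
# Feed a test username and password to pwauth; exit code 0 means the pair
# authenticated, non-zero indicates a failure mode.
if [ -x /usr/sbin/pwauth ]; then
    printf 'testuser\ntestpass\n' | /usr/sbin/pwauth
    status=$?
    echo "pwauth exit code: ${status}"
else
    status=127    # pwauth not installed on this machine
    echo "pwauth not found at /usr/sbin/pwauth"
fi
```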


A rather special case: mod_auth_kerb. Some bright spark decided that the KrbStripRealm statement didn't belong, and that modification of the supplied "username" (i.e. stripping the @realm... part) should really be handled by another, more general ID-mapping module. I agree with the reasoning, but until such a general mapping module actually exists (not that I could find), it was a bit off in my opinion to remove it, making auth_kerb useless in a great many installations.

If this applies to you, you may find the authnz_external method above useful. What you will lose is the ability to handle GSSAPI authentication from browsers that support it. If that is important to you, people have reported being able to patch the KrbStripRealm option back in.


Use what you want. For the pedants amongst you, the above code snippets are licensed under the BSD licence - do what you like :)


This is born out of my work with the Department of Digital Humanities, King's College London and credit is due in part to a number of blogs and group comments around the internet.

Friday, 3 June 2011

Fancy free fonts for your website

I guess this might be a well known fact but it wasn't for me...

Free web-font services give a quick and almost trivially simple way to jazz up your website with fonts well beyond the browser defaults.

Wednesday, 1 June 2011

The future of website design is gadgets. Or is it?

There's almost no need to build custom websites with complex functionality these days, at least if you are a small company or a person with a personal website.

Indeed, many people who once upon a time might have dared venture out with Tripod or Geocities are quite happy with Facebook. Facebook offers the scenario of publishing something about yourself, popping up a few snaps and interacting with others by way of comments.

After discounting Facebook users and also "proper" companies like Amazon or Sainsburys who need a "real website" with complex functionality, there remains a group of people, including me, who want to maintain a couple of websites with real man's HTML and CSS, but also want a bit of dynamic content such as a front page with news items and readers comments or a calendar of interesting events.

Traditionally, we would have had to have coded such things, usually badly, usually ugly, often unfinished. I have a couple of sites like this - my own website, and one for the village I live in.

Don't get me wrong - I am not a web designer. I am a systems programmer. State of the art design for me is using a couple of Gimp artistic plugins on photos and abusing the not-yet-standard CSS colour gradient properties. On a good day, my HTML and CSS might just all pass the W3C validators, because that appeals to my sense of neatness as a programmer. On a really good day, the pages might look OK in everything from IE8 upwards, Firefox, Chrome, Safari and a text browser.

Thus, I find myself experimenting with the IFRAME and OBJECT HTML tags to embed other peoples' hard work into my sites. Case in point - this blog, hosted by Google. I have two on the village site - one for the front page news items and one for bulletins from the local police. I have a couple of Google calendars too: one for the police again, as it makes sense to put crime reports on a calendar, and one for upcoming village events.

Google calendars are a joy to embed: they adapt themselves to whatever space you give them. The work involved is nothing more than using the Google "embed calendar" feature to set the display attributes, then pasting their generated code snippet into my site as an IFRAME or OBJECT. I set the display size and all is well.

It looks like my page has a calendar or "agenda" list, you can click it and it does what it's supposed to without caring one jot that it is part of a larger scheme.

Things aren't quite so easy with the blog though. That adapts its width nicely to suit the space it's given (especially if one hacks the blog template to achieve a fluid resizing model). But there's one thing blogs all have in common: they get longer. And longer. And then suddenly shorter as some magical archive date is passed.

Now we have the crux of the problem: IFRAMES don't dynamically resize very well. Well, sometimes they do, but not if they are contained in a DIV block that controls their placement on a fluid page layout.

So we have three choices, it seems:
  1. Declare the frame to be a "reasonable size". This works nicely, until the contained content overflows it. Then both the frame and the browser are likely to grow scrollbars and it really isn't a natural experience working two scrollbars at once to follow the content;
  2. Make the frame vastly oversized. This is better in some ways, leaving all the content at the mercy of the main browser scrollbar. But it looks silly when the reader gets to the end to find a screen or more's worth of empty space before the page footer.
  3. Pull some serious JavaScript-Fu. This seems to be the way everyone tries to handle the problem. Essentially it boils down to asking the frame how big it is (repeatedly as it may change as the reader clicks within it) and telling the container blocks to match that size with suitable padding.
Option 3 runs into a serious problem when the embedded content is in a different DNS domain to the container page. Allowing unfettered JavaScript shenanigans between two domains is considered a Bad Idea (TM) for a variety of reasons that could empty your bank account or see all your contacts signed up for a healthy dose of extra SPAM. So the designers made it difficult, on purpose, and with good reason.

There are ways around this, involving putting a little JavaScript "server" on the target site (assuming you can) and having it tell the containing page's JavaScript the rendered size of the frame, so that the containing page can adjust itself. Having experimented with this, I can vouch for the fact that it is complicated and fragile, being easily upset by the semantics of the container blocks, such as DIVs on a two column page layout.

Some people on the forums I visited today suggested other solutions, such as server side handling. For example, rather than embed a blog site, simply process the blog's XML feed and generate your own text for direct inclusion in the page.

That would work well for a number of use cases, mostly where you know you only want the reader to see the last few days' worth of entries. However, you lose the richness of the original site, such as the ease of browsing older archived material or leaving interactive comments.

You could implement that yourself, but at that point you are coming dangerously close to the amount of work it would have taken to write your own personal system from scratch.

But, here's a thought. And it's a crazy one: Wouldn't it be nice to have a page embedding mechanism where it is simple to tell it what you want it to do? You probably either want a fixed size (which may be relative to the browser window, other container or even absolute), or you want it to grow, vertically at least, to suit the content. Possibly, just, you may want to put some constraints on how big or how small it is allowed to go.

Call me naive, but it doesn't sound like a tall order to me, at least not for the browser makers nor the W3C standards body.

I certainly hope they see the need and get on with it, because I believe that gadgets, to borrow a Google term, are the way forward. I can see a future where significant sections of websites could be built quickly and simply out of embedded gadgets and content blocks written or hosted by other sites, while still maintaining the odd benefit of hosting your own site.

The whole idea brings a number of other issues, such as searchability and coherent Google indexing, but that's for another article.


It occurred to me this morning, over coffee, that there may be a sensible compromise solution. Having concluded that a blog site is probably better left as a blog site, without embedding, then what if:
  • We use the XML feed to present a list of recent titles, and perhaps each first paragraph, rendered server-side onto our main website.
  • For each article, we add in a link such as "Read full article".
  • Clicking the link takes the reader to a new browser tab or page which is nothing but the blog site - no embedding tricks.
This might very well be a good compromise solution. It has the advantage of keeping our main website alive with changing content which is good for Google search rankings and also for any Google custom search engines embedded within the website (Google does not, to my knowledge, introspect embedded object/iframe content when spidering a site).
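As a sketch of that first bullet, here is roughly what the server-side feed processing might look like. This is illustrative only: the function names are mine, and a real implementation should use a proper XML parser rather than regular expressions on the feed text:

```javascript
// Pull recent titles and a first-paragraph teaser out of an RSS feed's text.
// NB: regex "parsing" of XML is fragile and only for illustration.
function extractRecentPosts(feedXml, limit) {
  var items = feedXml.match(/<item>[\s\S]*?<\/item>/g) || [];
  return items.slice(0, limit).map(function (item) {
    var title = (item.match(/<title>([\s\S]*?)<\/title>/) || [, ''])[1];
    var link = (item.match(/<link>([\s\S]*?)<\/link>/) || [, ''])[1];
    var desc = (item.match(/<description>([\s\S]*?)<\/description>/) || [, ''])[1];
    // Keep only the first paragraph of the description as the teaser.
    var firstPara = desc.split(/<\/p>|\n\n/)[0].replace(/<[^>]+>/g, '').trim();
    return { title: title, link: link, teaser: firstPara };
  });
}

// Render the list as HTML, with a "Read full article" link per entry that
// opens the real blog site in a new tab - no embedding tricks.
function renderPostList(posts) {
  return posts.map(function (p) {
    return '<h3>' + p.title + '</h3><p>' + p.teaser +
      ' <a href="' + p.link + '" target="_blank">Read full article</a></p>';
  }).join('\n');
}

if (typeof module !== 'undefined') {
  module.exports = { extractRecentPosts, renderPostList };
}
```

The generated HTML goes straight into the main page, so it gets spidered along with the rest of the site, while the link hands the reader off to the blog proper.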

Saturday, 28 May 2011

Can primary school children use Linux?

Long story abbreviated:

I have two children, 5 and 7 years old. Their primary school would like them to have ready access to the school's VLE website. I happened to have two old laptops, both semi-broken and underpowered - but fine with a light OS, sitting on a table plugged into the wall. One laptop is a 6-year-old HP, the other an Asus eeePC.

I decided on Xubuntu (Ubuntu but with the less hungry XFCE desktop environment), version 10.04, a Long Term Support release, maintained for 3 years.

Two base installations later, I then integrated the laptops with my home systems and locked the WIFI to my base station (ie removed NetworkManager).

Then came a cursory configuration of each of their desktops: fonts made a bit bigger for small kids, and the "gnome style" twin task panels cleaned up into a more Windows-like single panel (the aim being to let them feel comfortable moving between my systems and the school computers). I added some media players and Flash so that the school VLE website worked correctly, and defaulted Firefox to the school VLE.

The last jobs included installing the ubuntu-edu-primary metapackage, which adds lots of great educational stuff: a fractions quiz, hangman, a kiddie-friendly paint program and plenty more. I added Google Earth too.

My daughter (7) had a chance to further customise her desktop with verbal guidance and I fixed my son's (5) up by asking him what background colour he'd like.

It is quick and responsive, uncluttered and has a rich environment perfect for kids their age. My daughter is even learning perl programming. They know about logging out, saving files, hibernating and remembering to turn the power off.

They have few problems with the differences between Linux and MS Windows - the "Start" menu is in the same place and does similar things, most apps have a similar menu layout (eg "File/Save", "Edit", "Help"), and the more common keystrokes, such as CTRL-C/V/X (copy/paste/cut) and CTRL-S (save), are the same anyway.

So overall this has proven a great success. Total cost of legitimate software: £0

So you can't run GUI tools on your MySQL server?

GUI tools bring in a lot of dependent packages, which is usually undesirable on a tightly-run linux server. MySQL server is usually configured to listen on a local UNIX domain socket, and by default the MySQL root user is only allowed to connect via this socket. If you have your security right, this socket will have restricted permissions and not allow everyone to connect.

So when you want to run a GUI such as MySQL Administrator as root on your server, how do you manage this?

Fortunately, the answer comes via socat, a more capable descendant of netcat, along with our old friend, SSH tunnels. socat and openSSH are core packages in Debian and Ubuntu, although socat may need sourcing from a third-party repository on some other linux distros.

Here's the magic:

# On the MySQL server, as whichever linux user can access the MySQL socket:
socat tcp-listen:13306,reuseaddr,fork,bind=127.0.0.1 unix:/var/run/mysqld/mysqld.sock
# On your PC (substitute your own login and server name):
ssh -L3306:localhost:13306 you@your-mysql-server

The socat command will need the path adjusted for the location of mysqld.sock (check /etc/mysql/my.cnf - it may be in /var/lib, /var/run or /tmp). socat creates a tcp server on port 13306, accessible from 127.0.0.1 only.

The ssh command needs the host to be your MySQL server, and you may log in with any account that is permitted. What happens now is that ssh creates a tcp server on your PC on port 3306, bound to 127.0.0.1 only and wired through to socat on the MySQL server, which in turn is wired to the MySQL unix domain socket.

So your PC's tcp listener is now effectively wired to the heart of your remote MySQL server - clever, eh?

Now run MySQL Administrator or whatever tool from the comfort of your machine! Remember to connect to 127.0.0.1, standard port 3306. Don't try to use "localhost" as, for some reason, probably due to bad mushrooms, the MySQL developers decided that "localhost" means "use the unix domain socket". Sigh...

Security implications

On the MySQL server, be aware that any other local user may now connect to port 13306 and thus gain root access to your databases (depending on whether root has a password configured). The same applies to your PC on port 3306 - so if your "PC" happens to be a *nix multiuser server with loads of other people logged in, this would classify as a Bad Idea (TM).

Close down your ssh tunnel and socat as soon as you have finished.