Ethernet Interface Naming With Systemd

Systemd has a new way of specifying names for Ethernet interfaces, as documented in systemd.link(5). The Debian systemd package should keep working with the old /etc/udev/rules.d/70-persistent-net.rules file, but I had a problem with this that forced me to learn about systemd.link(5).

Below is a little shell script I wrote to convert a basic 70-persistent-net.rules (that only matches on MAC address) to systemd.link files.

#!/bin/bash
# Convert a basic /etc/udev/rules.d/70-persistent-net.rules (matching only
# on MAC address) into systemd.link(5) files under /etc/systemd/network/.

RULES=/etc/udev/rules.d/70-persistent-net.rules

for n in $(grep ^SUB $RULES | sed -e 's/^.*NAME="//' -e 's/"$//') ; do
  NAME=/etc/systemd/network/10-$n.link
  # match on NAME="..." so that EG eth1 doesn't also match eth10
  LINE=$(grep "NAME=\"$n\"" $RULES)
  # extract the MAC address from the ATTR{address}=="..." match
  MAC=$(echo $LINE | sed -e 's/^.*address}=="//' -e 's/".*$//')
  echo "[Match]" > $NAME
  echo "MACAddress=$MAC" >> $NAME
  echo "[Link]" >> $NAME
  echo "Name=$n" >> $NAME
done
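
As an example of the result, the file generated for a hypothetical interface eth0 (with a made-up MAC address) would look like this:

# /etc/systemd/network/10-eth0.link
[Match]
MACAddress=00:11:22:33:44:55

[Link]
Name=eth0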

Unikernels

At LCA I attended a talk about Unikernels. Here are the reasons why I think that they are a bad idea:

Single Address Space

According to the Unikernel Wikipedia page [1] a significant criterion for a Unikernel system is that it has a single address space. This gives performance benefits as there is no need to change CPU memory mappings when making system calls. But the disadvantage is that any code in the application/kernel can access any other code directly.

In a typical modern OS (Linux, BSD, Windows, etc) every application has a separate address space and there are separate memory regions for code and data. While an application can request the ability to modify its own executable code in some situations (if the OS is configured to allow that) it won’t happen by default. In MS-DOS and in a Unikernel system all code has read/write/execute access to all memory. MS-DOS was the least reliable OS that I ever used. It was unreliable because it performed tasks that were more complex than CP/M but had no memory protection, so any bug in any code was likely to cause a system crash. The crash could be delayed by some time (EG corrupting data structures that are only rarely accessed) which would make it very difficult to fix.

It would be possible to have a Unikernel system with non-modifiable executable areas and non-executable data areas, and it is conceivable that a virtual machine system like Xen could enforce that. But that still wouldn’t solve the problem of all code being able to write to all data.

On a Linux system when an application writes to the wrong address there is a reasonable probability that it will not have write access and you will immediately get a SEGV which is logged and informs the sysadmin of the address of the crash.

When Linux applications have bugs that are difficult to diagnose (EG buffer overruns that happen in production and can’t be reproduced in a test environment) there are a variety of ways of debugging them. Tools such as Valgrind can analyse memory access and tell the developers which code had a bug and what the bug does. It’s theoretically possible to link something like Valgrind into a Unikernel, but the lack of multiple processes would make it difficult to manage.

Debugging

A full Unix environment has a rich array of debugging tools: strace, ltrace, gdb, valgrind, and more. If there are performance problems then there are tools like sysstat, sar, iostat, top, and iotop. I don’t know which of those tools I might need to debug problems at some future time.

I don’t think that any Internet facing service can be expected to be reliable enough that it will never need any sort of debugging.

Service Complexity

It’s very rare for a server to have only a single process performing the essential tasks. It’s not uncommon to have a web server running CGI-BIN scripts or calling shell scripts from PHP code as part of the essential service. Also many Unix daemons are not written to run as a single process, at least threading is required and many daemons require multiple processes.

It’s also very common for the design of a daemon to rely on a cron job to clean up temporary files etc. It is possible to build the functionality of cron into a Unikernel, but that means more potential bugs and more time spent not actually developing the core application.

One could argue that there are design benefits to writing simple servers that don’t require multiple programs. But most programmers aren’t used to doing that and in many cases it would give a less efficient result.

One can also argue that a Finite State Machine design is the best way to deal with many problems that are usually solved by multi-threading or multiple processes. But most programmers are better at writing threaded code so forcing programmers to use a FSM design doesn’t seem like a good idea for security.

Management

Typical server programs rely on cron jobs to rotate log files and on monitoring software to inspect the state of the system for the purposes of graphing performance and flagging potential problems.

It would be possible to compile the functionality of something like the Nagios NRPE into a Unikernel if you want to have your monitoring code running in the kernel. I’ve seen something very similar implemented in the past, the CA Unicenter monitoring system on Solaris used to have a kernel module for monitoring (I don’t know why). My experience was that Unicenter caused many kernel panics and more downtime than all other problems combined. It would not be difficult to write better code than the typical CA employee, but writing code that is good enough to have a monitoring system running in the kernel on a single-threaded system is asking a lot.

One of the claimed benefits of a Unikernel is that it avoids the supposedly risky practice of allowing ssh access. The recent ssh security issue was an attack against the ssh client when it connected to a hostile server. If you had an ssh server only accepting connections from management workstations (a reasonably common configuration for running servers) and only allowed the ssh clients to connect to servers related to work (an uncommon configuration that’s not difficult to implement) then there wouldn’t be any problems in this regard.

I think that I’m a good programmer, but I don’t think that I can write server code that’s likely to be more secure than sshd.

On Designing It Yourself

One thing that everyone who has any experience in security has witnessed is that people who design their own encryption inevitably do it badly. The people who are experts in cryptology don’t design their own custom algorithm because they know that encryption algorithms need significant review before they can be trusted. The people who know how to do it well know that they can’t do it well on their own. The people who know little just go ahead and do it.

I think that the same thing applies to operating systems. I’ve contributed a few patches to the Linux kernel and spent a lot of time working on SE Linux (including maintaining out of tree kernel patches) and know how hard it is to do it properly. Even though I’m a good programmer I know better than to think I could just build my own kernel and expect it to be secure.

I think that the Unikernel people haven’t learned this.

Compatibility and a Linux Community Server

Compatibility/interoperability is a good thing. It’s generally good for systems on the Internet to be capable of communicating with as many systems as possible. Unfortunately it’s not always possible as new features sometimes break compatibility with older systems. Sometimes you have systems that are simply broken, for example all the systems with firewalls that block ICMP so that connections hang when the packet size gets too big. Sometimes to take advantage of new features you have to potentially trigger issues with broken systems.

I recently added support for IPv6 to the Linux Users of Victoria server. I think that adding IPv6 support is a good thing due to the lack of IPv4 addresses even though there are hardly any systems that are unable to access IPv4. One of the benefits of this for club members is that it’s a platform they can use for testing IPv6 connectivity with a friendly sysadmin to help them diagnose problems. I recently notified a member by email that the callback that their mail server used as an anti-spam measure didn’t work with IPv6 and was causing mail to be incorrectly rejected. It’s obviously a benefit for that user to encounter the problem with a small local server rather than with something like Gmail.

In spite of the fact that at least one user had problems and others potentially had problems I think it’s clear that adding IPv6 support was the correct thing to do.

SSL Issues

Ben wrote a good post about SSL security [1] which links to a test suite for SSL servers [2]. I tested the LUV web site and got A-.

This blog post describes how to set up PFS (Perfect Forward Secrecy) [3]; after following its advice I got a score of B!

From the comments on this blog post about RC4 etc [4] it seems that the only way to have PFS and not be vulnerable to other issues is to require TLS 1.2.

So the issue is what systems can’t use TLS 1.2.
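
For reference, if you run Apache then restricting it to TLS 1.2 is a single mod_ssl directive (a sketch only; the exact protocol names accepted by SSLProtocol vary between Apache versions):

# disable all protocols, then enable only TLS 1.2
SSLProtocol -all +TLSv1.2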

TLS 1.2 Support in Browsers

This Wikipedia page has information on SSL support in various web browsers [5]. If we require TLS 1.2 we break support of the following browsers:

The default Android browser before Android 5.0. Admittedly that browser always sucked badly and probably has lots of other security issues and there are alternate browsers. One problem is that many people who install better browsers on Android devices (such as Chrome) will still have their OS configured to use the default browser for URLs opened by other programs (EG email and IM).

Chrome versions before 30 didn’t support it. But version 30 was released in 2013 and Google does a good job of forcing upgrades. A Debian/Wheezy system I run is now displaying warnings from the google-chrome package saying that Wheezy is too old and won’t be supported for long!

Firefox before version 27 didn’t support it (the Wikipedia page is unclear about versions 27-31). 27 was released in 2014. Debian/Wheezy has version 38, Debian/Squeeze has Iceweasel 3.5.16 which doesn’t support it. I think it is reasonable to assume that anyone who’s still using Squeeze is using it for a server given its age and the fact that LTS is based on packages related to being a server.

IE version 11 supports it and runs on Windows 7+ (all supported versions of Windows). IE 10 doesn’t support it and runs on Windows 7 and Windows 8. Are the free upgrades from Windows 7 to Windows 10 going to solve this problem? Do we want to support Windows 7 systems that haven’t been upgraded to the latest IE? Do we want to support versions of Windows that MS doesn’t support?

Windows Mobile doesn’t have enough users to care about.

Opera supports it from version 17. This is noteworthy because Opera used to be good for devices running older versions of Android that aren’t supported by Chrome.

Safari supported it from iOS version 5, I think that’s a solved problem given the way Apple makes it easy for users to upgrade and strongly encourages them to do so.

Log Analysis

For many servers the correct thing to do before even discussing the issue is to look at the logs and see how many people use the various browsers. One problem with that approach on a Linux community site is that the people who visit the site most often will be more likely to use recent Linux browsers but older Windows systems will be more common among people visiting the site for the first time. Another issue is that there isn’t an easy way of determining who is a serious user, unlike for example a shopping site where one could search for log entries about sales.

I did a quick search of the Apache logs and found many entries about browsers that purport to be IE6 and other versions of IE before 11. But most of those log entries were from other countries, while some people from other countries visit the club web site it’s not very common. Most access from outside Australia would be from bots, and the bots probably fake their user agent.

Should We Do It?

Is breaking support for Debian/Squeeze, the built in Android browser on Android <5.0, and Windows 7 and 8 systems that haven’t upgraded IE as a web browsing platform a reasonable trade-off for implementing the best SSL security features?

For the LUV server as a stand-alone issue the answer would be no as the only really secret data there is accessed via ssh. For a general web infrastructure issue it seems that the answer might be yes.

I think that it benefits the community to allow members to test against server configurations that will become more popular in the future. After implementing changes in the server I can advise club members (and general community members) about how to configure their servers for similar results.

Does this outweigh the problems caused by some potential users of ancient systems?

I’m blogging about this because I think that the issues of configuration of community servers have a greater scope than my local LUG. I welcome comments about these issues, as well as about the SSL compatibility issues.

Using LetsEncrypt

Lets Encrypt is a new service to provide free SSL keys [1]. I’ve just set it up on a few servers that I run.

Issues

The first thing to note is that the client is designed to manage your keys for you and treats all the names on a server as belonging to a single certificate. It shouldn’t be THAT difficult to do things in other ways but it would involve extra effort. The next issue that can make things difficult is that it is designed around the web server having a module to negotiate new keys automatically. Automatically negotiating new keys will be really great when we get that all going, but as I didn’t feel like installing a slightly experimental Apache module on my servers I had to stop Apache while I got the keys – and I’ll have to do that again every 3 months as the keys have a short expiry time.

There are some other ways of managing keys, but the web servers I’m using Lets Encrypt with at the moment aren’t that important and a couple of minutes of downtime is acceptable.

When you request multiple keys (DNS names) for one server, to make it work without needless effort you have to get them all in the one operation. That gives you a single key file for all DNS names, which is very convenient for services that don’t support getting the hostname before negotiating SSL. But it could be difficult if you wanted one of the less common configurations, such as having a mail server and a web server on the same IP address but using different keys.

How To Get Keys

deb http://mirror.internode.on.net/pub/debian/ testing main

The letsencrypt client is packaged for Debian in Testing but not in Jessie. Adding the above to the /etc/apt/sources.list file on a Jessie system allows installing it and a few dependencies from Testing. Note that there are problems with doing this: you can’t be certain that all the other apps installed will be compatible with the newer versions of libraries that get installed, and you won’t get security updates.
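
One way to limit how much of Testing gets pulled in is APT pinning. Below is a sketch of an /etc/apt/preferences file that keeps stable as the default and only installs packages from Testing when explicitly requested (EG “apt-get install -t testing letsencrypt”):

# prefer stable; testing packages are only used when asked for
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 100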

letsencrypt certonly --standalone-supported-challenges tls-sni-01

The above command makes the letsencrypt client listen on port 443 to talk to the Lets Encrypt server. It prompts you for server names so if you want to minimise the downtime for your web server you could specify the DNS names on the command-line.
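
As a sketch, specifying the DNS names on the command-line looks like the following; the -d option can be repeated for each name (the names here are hypothetical):

letsencrypt certonly --standalone-supported-challenges tls-sni-01 -d www.example.com -d lists.example.com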

If you run it on a SE Linux system you need to run “setsebool allow_execmem 1” before running it and “setsebool allow_execmem 0” afterwards as it needs execmem access. I don’t think it’s a problem to temporarily allow execmem access for the duration of running this program; if you use KDE then you will be forced to allow such access all the time for the desktop to operate correctly.

How to Install Keys

[ssl:emerg] [pid 9361] AH02564: Failed to configure encrypted (?) private key www.example.com:443:0, check /etc/letsencrypt/live/www.example.com/fullchain.pem

The letsencrypt client suggests using the file fullchain.pem which has the key and the full chain of certificates. When I tried doing that I got errors such as the above in my Apache error.log. So I gave up on that and used the separate files. The only benefit of using the fullchain.pem file is to have a single line in a configuration file instead of 3. Trying to debug issues with fullchain.pem took me a lot longer than copy/paste for the 3 lines.

Under /etc/letsencrypt/live/$NAME there are symlinks to the real files. So when you get new keys the old keys will be stored but the same file names can be used.

SSLCertificateFile "/etc/letsencrypt/live/www.example.com/cert.pem"
SSLCertificateChainFile "/etc/letsencrypt/live/www.example.com/chain.pem"
SSLCertificateKeyFile "/etc/letsencrypt/live/www.example.com/privkey.pem"

The above directives are an example of configuring Apache 2.

smtpd_tls_cert_file = /etc/letsencrypt/live/smtp.example.com/cert.pem
smtpd_tls_key_file = /etc/letsencrypt/live/smtp.example.com/privkey.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/smtp.example.com/chain.pem

Above is an example of Postfix configuration.

ssl_cert = </etc/letsencrypt/live/smtp.example.com/cert.pem
ssl_key = </etc/letsencrypt/live/smtp.example.com/privkey.pem
ssl_ca = </etc/letsencrypt/live/smtp.example.com/chain.pem

Above is an example for Dovecot; it goes in /etc/dovecot/conf.d/10-ssl.conf in a recent Debian version.

Conclusion

At this stage using letsencrypt is a little fiddly so for some commercial use (where getting the latest versions of software in production is difficult) it might be a better option to just pay for keys. However some companies I’ve worked for have had issues with getting approval for purchases which would make letsencrypt a good option to avoid red tape.

When Debian/Stretch is released with letsencrypt I think it will work really well for all uses.

Finding Storage Performance Problems

Here are some basic things to do when debugging storage performance problems on Linux. It’s deliberately not an advanced guide, I might write about more advanced things in a later post.

Disk Errors

When a hard drive is failing it often has to read sectors several times to get the right data, which can dramatically reduce performance. As most hard drives aren’t monitored properly (email or SMS alerts on errors) it’s quite common for the first notification about an impending failure to be user complaints about performance.

View your kernel message log with the dmesg command and look in /var/log/kern.log (or wherever your system is configured to store kernel logs) for messages about disk read errors, bus resetting, and anything else unusual related to the drives.

If you use an advanced filesystem like BTRFS or ZFS there are system commands to get filesystem information about errors. For BTRFS you can run “btrfs device stats MOUNTPOINT” and for ZFS you can run “zpool status“.

Most performance problems aren’t caused by failing drives, but it’s a good idea to eliminate that possibility before you continue your investigation.

One other thing to look out for is a RAID array where one disk is noticeably slower than the others. For example if you have a RAID-5 or RAID-6 array every drive should have almost the same number of reads and writes, if one disk in the array is at 99% performance capacity and the other disks are at 5% then it’s an indication of a failing disk. This can happen even if SMART etc don’t report errors.

Monitoring IO

The iostat program in the Debian sysstat package tells you how much IO is going to each disk. If you have physical hard drives sda, sdb, and sdc you could run the command “iostat -x 10 sda sdb sdc” to tell you how much IO is going to each disk over 10 second periods. You can choose various durations but I find that 10 seconds is long enough to give results that are useful.

By default iostat will give stats on all block devices including LVM volumes, but that usually gives too much data to analyse easily.

The most useful things that iostat tells you are the %util (the percentage utilisation – anything over 90% is a serious problem), the reads per second “r/s“, and the writes per second “w/s“.

The parameters to iostat for block devices can be hard drives, partitions, LVM volumes, encrypted devices, or any other type of block device. After you have discovered which block devices are nearing their maximum load you can discover which of the partitions, RAID arrays, or swap devices on that disk are causing the load in question.
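
As a sketch of that drill-down, assuming the first pass showed sdb to be the busy disk and that it hosts the partitions sdb1 and sdb2 and the RAID array md0 (all names here are hypothetical):

# first pass: whole disks only
iostat -x 10 sda sdb sdc
# second pass: the partitions and RAID device that live on the busy disk
iostat -x 10 sdb sdb1 sdb2 md0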

The iotop program in Debian (package iotop) gives a display that’s similar to that of top but for disk IO. It generally isn’t essential (you can run “ps ax|grep D” to get most of that information), but it is handy. It will tell you which programs are causing IO on a busy filesystem. This can be good when you have a busy system and don’t know why. It isn’t very useful if you have a system that is used for one task, EG a database server that is known to be busy doing database stuff.

It’s generally a good idea to have sysstat and iotop installed on all systems. If a system is experiencing severe performance problems you might not want to wait for new packages to be installed.

In Debian the sysstat package includes the sar utility which can give historical information on system load. One benefit of using sar for diagnosing performance problems is that it shows you the time of day that has the most load which is the easiest time to diagnose performance problems.

Swap Use

Swap use sometimes confuses people. In many cases swap use decreases overall disk use; this is by design in the Linux paging algorithms. So if you have a server that accesses a lot of data it might swap out some unused programs to make more space for disk cache.

When you have multiple virtual machines on one system sharing the same disks it can be difficult to determine the best allocation for RAM. If one VM has some applications allocating a lot of RAM but not using it much then it might be best to give it less RAM and force those applications into swap so that another VM can cache all the data it accesses a lot.

The important thing is not the amount of swap that is allocated but the amount of IO that goes to the swap partition. Any significant amount of disk IO going to a swap device is a serious problem that can be solved by adding more RAM.
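
A quick way of checking this is vmstat: the “si” and “so” columns show the amount of memory swapped in from and out to disk per second. Consistently non-zero si/so values are the problem to look for; a large amount of allocated swap with no swap IO is harmless.

# report memory and swap activity every 10 seconds
vmstat 10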

Reads vs Writes

The ratio of reads to writes depends on the applications and the amount of RAM. Some applications can have most of their reads satisfied from cache. For example an ideal configuration of a mail server will have writes significantly outnumber reads (I’ve seen ratios of 5:1 for writes to reads on real mail servers). Ideally a mail server will cache all new mail for at least an hour, and as the most prolific users check their mail more frequently than that, most mail will be downloaded before it leaves the cache. If you have a mail server with reads outnumbering writes then it needs more RAM. RAM is cheap nowadays so if you don’t want to compete with Gmail it should be cheap to buy enough RAM to cache all recent mail.

The ratio of reads to writes is important because it’s one way of quickly determining if you have enough RAM and adding RAM is often the cheapest way of improving performance.

Unbalanced IO

One common performance problem on systems with multiple disks is having more load going to some disks than to others. This might not be a problem (EG having cron jobs run on disks that are under heavy load while the web server accesses data from lightly loaded disks). But you need to consider whether it’s desirable to have some disks under more load than others.

The simplest solution to this problem is to just have a single RAID array for all data storage. This is also the solution that gives you the maximum available disk space if you use RAID-5 or RAID-6.

A more complex option is to use some SSDs for things that require performance and disks for things that don’t. This can be done with the ZIL and L2ARC features of ZFS or by just creating a filesystem on SSD for the data that is most frequently accessed.

What Did I Miss?

I’m sure that I missed something, please let me know of any other basic things to do – or suggestions for a post on more advanced things.

Sociological Images 2015

3 men 1 woman on lift sign

The above sign was at the Melbourne Docks in December 2014 when I was returning from a cruise. I have no idea why there are 3 men and 1 woman on the sign (and a dock worker was also surprised when I explained why I was photographing it). I wonder whether a sign that had 3 women and 1 man would ever have been installed or not noticed if it was installed.

rules for asking questions at LCA2015

At the start of the first day of LCA 2015 the above was displayed at the keynote as a flow-chart for whether someone should ask a question at a lecture. Given that the first real item in the list is that a question should fit in a tweet I think it was inspired by my blog post about the length of conference questions [1].

Astronomy Miniconf suggestions for delegates

At the introduction to the Astronomy Miniconf the above slide was displayed. In addition to referencing the flow-chart for asking questions it recommends dimming laptop screens (among other things).

sign saying men to the left because women are always right

The above sign was at a restaurant in Auckland in January 2015. I thought that sort of sexist “joke” went out of fashion a few decades ago.

gendered Nerf weaponry

The above photo is from a Melbourne department store in February 2015. Why gender a Nerf gun? That just doesn’t make sense. Also it appeared that the only Nerf crossbow was the purple/pink one; is a crossbow considered feminine nowadays?

Picture of Angela appropriating Native American clothing

The above picture is a screen-shot of one of the “Talking Angela” series of Android games from March. Appropriating the traditional clothing of marginalised groups is a bad thing. People of Native American heritage who want to wear their traditional clothing face discrimination when they do so, when white people play dress-up in clothing that is a parody of Native American style it’s really offensive. The site Racialicious.com has a tag for articles about appropriation [2].

The above was in a library advertising an Ebook reader. In this case they didn’t even have pointlessly gendered products they just had pointlessly gendered adverts for the same product. They also perpetuate the myth that only girls read vampire books and only boys read about space. Also why is the girl lying down to read while the boy is sitting up?

Above is an Advent calendar on sale in a petrol station. Having end of year holiday presents that have nothing to do with religious festivals makes sense. But Advent is a religious observance. I think this would be a better candidate for “war on Christmas” paranoia than a coffee cup of the wrong colour.

The above photo is of boys’ and girls’ pipette suckers. Pointlessly gendered recreational products like Nerf guns are one thing, but I think that doing it to scientific equipment is a bigger problem. Are scientists going to stop work if they can’t find a pipette sucker of the desired gender? Is worrying about this going to distract them from their research (really bad if working with infectious or carcinogenic solutions)? The Integra advertising claims to be doing this to promote breast cancer research, which is also bogus. Here is a Sociological Images article about the problems of using pink to market breast cancer research [3] and the Sociological Images post about pinkwashing (boobies against breast cancer) is also worth reading [4].

As an aside I made a mistake in putting a pipette sucker over the woman’s chest in that picture. The way that Integra portrayed her chest is relevant to analysis of this advert. But unfortunately I didn’t photograph that.

Here is a link to my sociological images post from 2014 [5].

LUV Server Upgrade to Jessie

On Sunday night I started the process of upgrading the LUV server to Debian/Jessie from Debian/Wheezy. My initial plan was to just upgrade Apache first but dependencies required upgrading systemd too.

One problem I’ve encountered in the past is that the Wheezy version of systemd will often hang on an upgrade to a newer version. Generally the solution to this is to run “systemctl daemon-reexec” from another terminal. The problem in this case was that not all the libraries needed for systemd had been installed, so systemd could re-exec itself but immediately aborted. The kernel really doesn’t like it when process 1 aborts repeatedly and apparently an immediate hang is the result. At the time I didn’t know this; all I knew was that my session died and the server stopped responding to pings immediately after I requested a reexec.

The LUV server is hosted at VPAC for free. As their staff have actual work to do they couldn’t spend a lot of time working on the LUV server. They told me that the screen was flickering and suspected a VGA cable. I got to the VPAC server room with the spare LUV server (LUV had been given 3 almost identical Sun servers from Barwon Water) at 16:30. By 17:30 I had fixed the core problem (boot with “init=/bin/bash“, mount the root filesystem rw, finish the upgrade of systemd and its dependencies, and then reboot normally). That got it into a state where the Xen server for Wikimedia Au was working but most LUV functionality wasn’t.

By 23:00 on Monday I had the full list server functionality working for users; this is the main feature that users want when it’s not near a meeting time. I can’t remember whether it was Monday night or Tuesday morning when I got the Drupal site going (the main LUV web site). Last night at midnight I got the last of the Mailman administrative interface going. I admit I could have got it going a bit earlier by putting SE Linux in permissive mode, but I don’t think that the members would have benefited from that (I’ll upload a SE Linux policy package that gets Mailman working on Jessie soon).

Now it’s Wednesday and I’m still fixing some cron jobs. Along the way I noticed some problems with excessive disk space use that I’m fixing now and I’ve also removed some Wikimedia related configuration files that were obsolete and would have prevented anyone from using a wikimedia.org.au address to subscribe to the LUV mailing lists.

Now I believe that everything is working correctly and generally working better than before.

Lessons Learned

While Sunday night wasn’t a bad time to start the upgrade it wasn’t the best. If I had started the upgrade on Monday morning there would have been less down-time. Another possibility might be to do the upgrade while near the VPAC office during business hours, I could have started the upgrade while at a nearby cafe and then visited the server room immediately if something went wrong.

Doing an upgrade on a day when there’s no meeting within a week was a good choice. It wasn’t really a conscious choice, as near a meeting day I’m usually busy with LUV work that needs to be done soon, which precludes tasks like this upgrade. But in future it would be best to consciously plan upgrades for a date when users aren’t going to need the service much.

While the Wheezy systemd bug is unlikely to ever be fixed there are work-arounds that shouldn’t result in a broken server. At the moment it seems that the best option would be to kill -9 the systemctl processes that hang until the packages that systemd depends on are installed. The problem is that the upgrade hangs while the new systemctl tries to tell the old systemd to restart daemons. If we can get past that to the stage where the shared objects are installed then it should be ok.
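
A minimal sketch of that work-around, run from another terminal while the upgrade is hung (this assumes nothing else named systemctl is running on the system):

# kill the hung systemctl so the maintainer script can proceed
pkill -9 -x systemctl
# if the upgrade was interrupted, finish it off
apt-get -f install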

The Apache upgrade from 2.2.x to 2.4.x changed the operation of some access control directives and it took me some time to work out how to fix that. Doing a Google search on the differences between those would have led me to the Apache document about upgrading from 2.2 to 2.4 [1]. That wouldn’t have prevented some down-time of the web sites but would have allowed me to prepare for it and to more quickly fix the problems when they became apparent. Also the rather confusing configuration of the LUV server (supporting many web sites that are no longer used) didn’t help things. I think that removing cruft from an installation before an upgrade would be better than waiting until after things break.
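
For reference, the most commonly hit difference is the access control syntax. The old 2.2 directives:

Order allow,deny
Allow from all

need to be replaced by the 2.4 equivalent:

Require all granted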

Next time I do an upgrade of such a server I’ll write notes about it while I go. That will give a better blog post about it if it becomes newsworthy enough to be blogged about and also more opportunities to learn better ways of doing it.

Sorry for the inconvenience.

Mail Server Training

Today I ran a hands-on training session on configuring a MTA with Postfix and Dovecot for LUV. I gave each student a virtual machine running Debian/Jessie with full Internet access and instructions on how to configure it as a basic mail server. Here is a slightly modified set of instructions that anyone can do on their own system.

Today I learned that documentation that includes passwords on a command-line should have quotes around the password: one student used a semi-colon character in his password which caused some confusion (it’s the command separator character in BASH). I also discovered that trying to just tell users which virtual server to login to is prone to errors; in future I’ll print out a list of user-names and passwords for virtual servers and tear off one for each student so there’s no possibility of 2 users logging in to the same system.

I gave each student a sub-domain of unixapropos.com (a zone that I use for various random sysadmin type things). I have changed the instructions to use example.com, which is the official domain for testing things (or you could use any zone that you control). The test VMs that I set up had a user named “auser” and the documentation assumes this account name. You could change “auser” to something else if you wish.

Below are all the instructions for anyone who wants to try it at home or setup virtual machines and run their own training session.

Basic MTA Configuration

  1. Run “apt-get install postfix” to install Postfix, select “Internet Site” for the type of mail configuration and enter the domain name you selected for the mail name.
  2. The main Postfix configuration file is /etc/postfix/main.cf. Change the myhostname setting to the fully qualified name of the system, something like mta.example.com.
    You can edit /etc/postfix/main.cf with vi (or any other editor) or use the postconf command to change it, eg “postconf -e myhostname=mta.example.com“.
  3. Add “home_mailbox=Maildir/” to the Postfix configuration to make it deliver to a Maildir spool in the user’s home directory.
  4. Restart Postfix to apply the changes.
  5. Run “apt-get install swaks libnet-ssleay-perl” to install swaks (a SMTP test tool).
  6. Test delivery by running the command “swaks -f auser@example.com -t auser@example.com -s localhost“. Note that swaks displays the SMTP data so you can see exactly what happens and if something goes wrong you will see everything about the error.
  7. Inspect /var/log/mail.log to see the messages about the delivery. View the message which is in ~auser/Maildir/new.
  8. When other students get to this stage run the same swaks command but with the -t changed to the address in their domain, check the mail.log to see that the messages were transferred and view the mail with less to see the received lines. If you do this on your own specify a recipient address that’s a regular email address of yours (EG a Gmail account).
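
After those steps the additions to /etc/postfix/main.cf should look something like the following (assuming mta.example.com as the hostname):

myhostname = mta.example.com
home_mailbox = Maildir/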

Basic Pop/IMAP Configuration

  1. Run “apt-get install dovecot-pop3d dovecot-imapd” to install Dovecot POP and IMAP servers.
    Run “netstat -tln” to see the ports that have daemons listening on them, observe that ports 110 and 143 are in use.
  2. Edit /etc/dovecot/conf.d/10-mail.conf and change mail_location to “maildir:~/Maildir“. Then restart Dovecot.
  3. Run the command “nc localhost 110” to connect to POP, then run the following commands to get capabilities, login, and retrieve mail:
    user auser
    pass WHATEVERYOUMADEIT
    capa
    list
    retr 1
    quit
  4. Run the command “nc localhost 143” to connect to IMAP, then run the following commands to list capabilities, login, and logout:
    a capability
    b login auser WHATEVERYOUMADEIT
    c logout
  5. For the above commands make note of the capabilities, we will refer to that later.

Now you have a basically functional mail server on the Internet!

POP/IMAP Over SSL

To avoid password sniffing we need to use SSL. To do it properly requires obtaining a signed key for a DNS address but we can do the technical work with the “snakeoil” certificate that is generated by Debian.

  1. Edit /etc/dovecot/conf.d/10-ssl.conf and change “ssl = no” to “ssl = required“. Then add the following 2 lines:
    ssl_cert = </etc/ssl/certs/ssl-cert-snakeoil.pem
    ssl_key = </etc/ssl/private/ssl-cert-snakeoil.key
    1. Run “netstat -tln” and note that ports 993 and 995 are not in use.
    2. Edit /etc/dovecot/conf.d/10-master.conf and uncomment the following lines:
      port = 993
      ssl = yes
      port = 995
      ssl = yes
    3. Restart Dovecot, run “netstat -tln” and note that ports 993 and 995 are in use.
  2. Run “nc localhost 110” and “nc localhost 143” as before, note that the capabilities have changed to include STLS/STARTTLS respectively.
  3. Run “gnutls-cli --tofu 127.0.0.1 -p 993” to connect to the server via IMAPS and “gnutls-cli --tofu 127.0.0.1 -p 995” to connect via POP3S. The --tofu option means to “Trust On First Use”, it stores the public key in ~/.gnutls and checks it the next time you connect. This allows you to safely use a “snakeoil” certificate if all apps can securely get a copy of the key.

Postfix SSL

  1. Edit /etc/postfix/main.cf and add the following 4 lines:
    smtpd_tls_received_header = yes
    smtpd_tls_loglevel = 1
    smtp_tls_loglevel = 1
    smtp_tls_security_level = may

    Then restart Postfix. This makes Postfix log TLS summary messages to syslog and in the Received header. It also permits Postfix to send with TLS.
  2. Run “nc localhost 25” to connect to your SMTP port and then enter the following commands:
    ehlo test
    quit

    Note that the response to the EHLO command includes 250-STARTTLS, this is because Postfix was configured with the Snakeoil certificate by default.
  3. Run “gnutls-cli --tofu 127.0.0.1 -p 25 -s” and enter the following commands:
    ehlo test
    starttls
    ^D

    After the CTRL-D gnutls-cli will establish a SSL connection.
  4. Run “swaks -tls -f auser@example.com -t auser@example.com -s localhost” to send a message with SSL encryption. Note that swaks doesn’t verify the key.
  5. Try using swaks to send messages to other servers with SSL encryption. Gmail is one example of a mail server that supports SSL which can be used, run “swaks -tls -f auser@example.com -t YOURADDRESS@gmail.com” to send TLS (encapsulated SSL) mail to Gmail via swaks. Also run “swaks -tls -f auser@example.com -t YOURADDRESS@gmail.com -s localhost” to send via your new mail server (which should log that it was a TLS connection from swaks and a TLS connection to Gmail).

SASL

SASL is the system of SMTP authentication for mail relaying. It is needed to permit devices without fixed IP addresses to send mail through a server. The easiest way of configuring Postfix SASL is to have Dovecot provide its authentication data to Postfix. Among other things, if you change Dovecot to authenticate in another way you won’t need to make any matching changes to Postfix.

  1. Run “mkdir -p /var/spool/postfix/var/spool” and “ln -s ../.. /var/spool/postfix/var/spool/postfix“, this allows parts of Postfix to work with the same configuration regardless of whether they are running in a chroot.
  2. Add the following to /etc/postfix/main.cf and restart Postfix:
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_type = dovecot
    smtpd_sasl_path = /var/spool/postfix/private/auth
    broken_sasl_auth_clients = yes
    smtpd_sasl_authenticated_header = yes
  3. Edit /etc/dovecot/conf.d/10-master.conf, uncomment the following lines, and then restart Dovecot:
    unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    }
  4. Edit /etc/postfix/master.cf, uncomment the line for the submission service, and restart Postfix. This makes Postfix listen on port 587 which is allowed through most firewalls.
  5. From another system (IE not the virtual machine you are working on) run “swaks -tls -f auser@example.com -t YOURADDRESS@gmail.com -s YOURSERVER” and note that the message is rejected with “Relay access denied“.
  6. Now run “swaks -tls --auth-user auser --auth-password WHATEVER -f auser@example.com -t YOURREALADDRESS -s YOURSERVER” and observe that the mail is delivered (subject to anti-spam measures at the recipient).
Configuring a MUA

If every part of the previous 3 sections is complete then you should be able to set up your favourite MUA. Use “auser” as the user-name for SMTP and IMAP and mail.example.com as the SMTP/IMAP server and it should just work! Of course your MUA needs to use the same DNS server for this to just work. But another possibility for testing is to have the MUA talk to the server by IP address not by name.

Running a Shell in a Daemon Domain

allow unconfined_t logrotate_t:process transition;
allow logrotate_t { shell_exec_t bin_t }:file entrypoint;
allow logrotate_t unconfined_t:fd use;
allow logrotate_t unconfined_t:process sigchld;

I recently had a problem with SE Linux policy related to logrotate. To test it out I decided to run a shell in the domain logrotate_t to interactively perform some of the operations that logrotate performs when run from cron. I used the above policy to allow unconfined_t (the default domain for a sysadmin shell) to enter the daemon domain.
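
To load rules like the above you have to wrap them in a policy module. Below is a minimal sketch; the module name shellrotate is made up:

# shellrotate.te
module shellrotate 1.0;

require {
  type unconfined_t, logrotate_t, shell_exec_t, bin_t;
  class process { transition sigchld };
  class file entrypoint;
  class fd use;
}

allow unconfined_t logrotate_t:process transition;
allow logrotate_t { shell_exec_t bin_t }:file entrypoint;
allow logrotate_t unconfined_t:fd use;
allow logrotate_t unconfined_t:process sigchld;

Then compile and install it with:

checkmodule -M -m -o shellrotate.mod shellrotate.te
semodule_package -o shellrotate.pp -m shellrotate.mod
semodule -i shellrotate.pp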

Then I used the command “runcon -r system_r -t logrotate_t bash” to run a shell in the domain logrotate_t. The utility runcon will attempt to run a program in any SE Linux context you specify, but to succeed the system has to be in permissive mode or you need policy to permit it. I could have written policy to allow the logrotate_t domain to be in the role unconfined_r but it was easier to just use runcon to change roles.

Then I had a shell in the logrotate_t domain to test out the post-rotate scripts. It turned out that I didn’t really need to do this (I had misread the output of an earlier sesearch command). But this technique can be used for debugging other SE Linux related problems so it seemed worth blogging about.

A Long Term Review of Android Devices

Xperia X10

My first Android device was the Sony Ericsson Xperia X10i [1]. One of the reasons I chose it was for the large 4″ screen. Nowadays the desirable phones (the ones that are marketed as premium products) are all bigger than that (the Galaxy S6 is 5.1″) and even the slightly less expensive phones are bigger. At the moment Aldi is advertising an Android phone with a 4.5″ screen for $129. But at the time there was nothing better in the price range that I was willing to pay.

I devoted a lot of my first review to the default apps for SMS and Email. Shortly after that I realised that the default email app is never going to be adequate (I now use K9 mail) and the SMS app is barely adequate (but I mostly use instant messaging). I’ve got used to the fact that most apps that ship with an Android device are worthless, the camera app and the app to make calls are the only built in apps I regularly use nowadays.

In the bug list from my first review the major issue was lack of Wifi tethering which was fixed by an update to Android 2.3. Unfortunately Android 2.3 ran significantly more slowly which decreased the utility of the phone.

The construction of the phone is very good. Over the last 2 years the 2 Xperia X10 phones I own have been on loan to various relatives, many of whom aren’t really into technology and can’t be expected to take good care of things. But they have not failed in any way. Apart from buying new batteries there has been no hardware failure in either phone. While 2 is a small sample size I haven’t seen any other Android device last nearly as long without problems. Unfortunately I have no reason to believe that Sony has continued to design devices as well.

The Xperia X10 phones crash more often than most Android phones with spontaneous reboots being a daily occurrence. While that is worse than any other Android device I’ve used it’s not much worse.

My second review of the Xperia X10 had a section about ways of reducing battery use [2]. Wow, I’d forgotten how much that sucked! When I was last using the Xperia X10 the Life360 app that my wife and I use to track each other was taking 15% of the battery, on more recent phones the same app takes about 2%. The design of modern phones seems to be significantly more energy efficient for background tasks and the larger brighter displays use more energy instead.

My father is using one of the Xperia phones now, when I give him a better phone to replace it I will have both as emergency Wifi access points. They aren’t useful for much else nowadays.

Samsung Galaxy S

In my first review of the Galaxy S I criticised it for being thin, oddly shaped, and slippery [3]. After using it for a while I found the shape convenient as I could easily determine the bottom of the phone in my pocket and hold it the right way up before looking at it. This is a good feature for a phone that’s small enough to rotate in my pocket – the Samsung Galaxy Note series of phones is large enough to not rotate in a pocket. In retrospect I think that being slippery isn’t a big deal as almost everyone buys a phone case anyway. But it would still be better for use on a desk if the bulge was at the top.

I wrote about my Galaxy S failing [4]. Two of my relatives had problems with those phones too. Including a warranty replacement I’ve seen 4 of those phones in use and only one worked reliably. The one that worked reliably is now being used by my mother, it’s considerably faster than the Xperia X10 because it has more RAM and will probably remain in regular use until it breaks.

CyanogenMod

I tried using CyanogenMod [5]. The phone became defective 9 months later so even though CyanogenMod is great I don’t think I got good value for the amount of time spent installing it. I haven’t tried replacing the OS of an Android phone since then.

I really wish that they would start manufacturing phones that can have the OS replaced as easily as a PC.

Samsung Galaxy S3 and Wireless Charging

The Galaxy S3 was the first phone I owned which competes with phones that are currently on sale [6]. A relative bought one at the same time as me and her phone is running well with no problems. But my S3 had some damage to its USB port which means that the vast majority of USB cables don’t charge it (only Samsung cables can be expected to work).

After I bought the S3 I bought a Qi wireless phone charging device [7]. One of the reasons for buying that is so if a phone gets a broken USB port then I can still use it. It’s ironic that the one phone that had a damaged USB port also failed to work correctly with the Qi card installed.

The Qi charger is gathering dust.

One significant benefit of the S3 (and most Samsung phones) is that it has a SD socket. I installed a 32G SD card in the S3 and now one of my relatives is happily using it as a media player.

Nexus 4

I bought a Nexus 4 [8] for my wife as she needed a better phone but didn’t feel like paying for a Galaxy S3. The Nexus 4 is a nice phone in many ways but the lack of storage is a serious problem. At the moment I’m only keeping it to use with Google Cardboard, I will lend it to my parents soon.

In retrospect I made a mistake buying the Nexus 4. If I had spent a little more money on another Galaxy S3 then I would have had a phone with a longer usage life as well as being able to swap accessories with my wife.

The Nexus 4 seems reasonably solid, the back of the case (which is glass) broke on mine after a significant impact but the phone continues to work well. That’s a tribute to the construction of the phone and also the Ringke Fusion case [9].

Generally the Nexus 4 is a good phone so I don’t regret buying it. I just think that the Galaxy S3 was a better choice.

Galaxy Note 2

I got a Samsung Galaxy Note 2 in mid 2013 [10]. In retrospect it was a mistake to buy the Galaxy S3, the Note series is better suited to my use. If I had known how good it is to have a larger phone I’d have bought the original Galaxy Note when it was first released.

Generally everything is good about the Note 2. While it only has 16G of storage (which isn’t much by today’s standards) it has an SD socket to allow expansion. It’s currently being used by a relative as a small tablet. With a 32G SD card it can fit a lot of movies.

Bluetooth Speakers

I received Bluetooth speakers in late 2013 [11]. I was very impressed by them but ended up not using them for a while. After they gathered dust for about a year I started using them again recently. While nothing has changed regarding my review of the Hive speakers (which I still like a lot) it seems that my need for such things isn’t as great as I thought. One thing that made me start using the Bluetooth speakers again is that my phone case blocks the sound from my latest phone and makes it worse than phone sound usually is.

I bought Bluetooth speakers for some relatives as presents, the relatives seemed to appreciate them but I wonder how much they actually use them.

Nexus 5

The Nexus 5 [12] is a nice phone. When I first reviewed it there were serious problems with overheating when playing Ingress. I haven’t noticed such problems recently so I think that an update to Android might have made it more energy efficient. In that review I was very impressed by the FullHD screen and it made me want a Note 3, at the time I planned to get a Note 3 in the second half of 2014 (which I did).

Galaxy Note 3

Almost a year ago I bought the Samsung Galaxy Note 3 [13]. I’m quite happy with it at the moment but I don’t have enough data for a long term review of it. The only thing to note so far is that in my first review I was unhappy with the USB 3 socket as that made it more difficult to connect a USB cable in the dark. I’ve got used to the socket and I can now reliably plug it in at night with ease.

I wrote about Rivers jeans being the only brand that can fit a Samsung Galaxy Note series phone in the pocket [14]. The pockets of my jeans have just started wearing out and I think that it’s partly due to the fact that I bought an Armourdillo Hybrid case [15] for my Note 3. I’ve had the jeans for over 3 years with no noticeable wear apart from the pockets starting to wear out after 10 months of using the Armourdillo case.

I don’t think that the Armourdillo case is bad, but the fact that it has deep grooves and hard plastic causes it to rub more on material when I take the phone out of my pocket. As I check my phone very frequently this causes some serious wear. This isn’t necessarily a problem given that a phone costs 20 times more than a pair of jeans; if the case was actually needed to save the phone then it would be worth having some jeans wear out. But I don’t think I need more protection than a gel case offers.

Another problem is that the Armourdillo case is very difficult to remove. This isn’t a problem if you don’t need access to your phone, IE if you use a phone like the Nexus 5 that doesn’t permit changing batteries or SD cards. But if you need to change batteries, SD cards, etc then it’s really annoying. My wife seems quite happy with her Armourdillo case but I don’t think it was a good choice for me. I’m considering abandoning it and getting one of the cheap gel cases.

The sound on the Note 3 is awful. I don’t know how much of that is due to a limitation in the speaker and how much is due to the case. It’s quite OK for phone calls but not much good for music.

Tablets

I’m currently on my third tablet. One was too cheap and nasty so I returned it. Another was still cheap and I hardly ever used it. The third is a Galaxy Note 10 which works really well. I guess the lesson is to buy something worthwhile so you can use it. A tablet that’s slower and has less storage than a phone probably isn’t going to get used much.

Phone Longevity

I owned the Xperia X10 for 22 months before getting the Galaxy S3. As that included 9 months of using a Galaxy S I only had 13 months of use out of that phone before lending it to other people.

The Galaxy S3 turned out to be a mistake as I replaced it in only 7 months.

I had the Note 2 for 15 months before getting the Note 3.

I have now had the Note 3 for 11 months and have no plans for a replacement any time soon – this is the longest I’ve owned an Android phone and been totally satisfied with it. Also I only need to use it for another 4 months to set a record for using an Android phone.

The Xperia was “free” as part of a telco contract. The other phones were somewhere between $500 and $600 each when counting the accessories (case, battery, etc) that I bought with them. So in 4 years and 7 months I’ve spent somewhere between $1500 and $1800 on phones plus the cost of the Xperia that was built in to the contract. The Xperia probably cost about the same so I’ll assume that I spent $2000 on phones and accessories. This seems like a lot. However that averages out to about $1.20 per day (and hopefully a lot less if my Note 3 lasts another couple of years). I could justify $1.20 per day for either the amount of paid work I do on Android phones or the amount of recreational activities that I perform (the Galaxy S3 was largely purchased for Ingress).

Conclusion

I think that phone companies will be struggling to maintain sales of high end phones in the future. When I chose the Xperia X10 I knew I was making a compromise, the screen resolution was an obvious limitation on the use of the device (even though it was one of the best devices available). The storage in the Xperia was also a limitation. Now FullHD is the minimum resolution for any sort of high-end device and 32G of storage is small. I think that most people would struggle to observe any improvement over a Nexus 5 or Note 3 at this time. I think that this explains the massive advertising campaign for the Galaxy S6 that is going on at the moment. Samsung can’t sell the S6 based on it being better than previous phones because there’s not much that they can do to make it obviously better. So they try and sell it for the image.